Child protection services to stop using an algorithm trained to detect child abuse

At Dijon University Hospital, researchers are currently testing an algorithm to detect child abuse by identifying pathologies and lesions during hospitalizations of very young children. In the United States, screening tools are already used by child protection services in many states, but they are proving harmful: trained on data such as mental health records, substance abuse, and incarceration history, they disproportionately target Black families. Although Oregon remains convinced that AI can help, the state has just announced that it is abandoning the algorithm it currently uses to determine whether a family investigation is necessary.

When a report of child abuse or neglect is filed, social workers must conduct an investigation to protect the child.
In the United States, as child protection agencies use or consider implementing algorithms, a study by the Associated Press (AP) has highlighted issues of transparency, reliability, and racial disparities in the use of AI, including its potential to amplify existing biases in the child protection system.

The algorithm from Allegheny County, Pennsylvania

The algorithm currently used in Oregon is inspired by the one from Allegheny County, which a team from Carnegie Mellon University studied in research the AP had access to. Allegheny’s algorithm flagged a disproportionate number of Black children for a “mandatory” neglect investigation compared with white children. The independent researchers also observed that social workers disagreed with one third of the risk scores the algorithm produced.

The algorithm was trained to predict the risk of a child being placed in foster care within two years of an investigation, using detailed personal data collected from birth records, health insurance, substance abuse, mental health, prison stays, and probation records, among other public data sets. It then calculates a risk score from 1 to 20: the higher the number, the greater the risk. The neglect the algorithm was trained to detect can cover many criteria, ranging from inadequate housing to poor hygiene. Similar tools can be used in other child protection systems with little or no human intervention, in much the same way that algorithms have been used to make decisions in the US criminal justice system, and can thus amplify existing racial disparities in the child protection system.
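To make the screening mechanism described above more concrete, here is a minimal sketch in Python of how a 1-to-20 risk score could be derived from public-record counts and compared against a “mandatory” screening cutoff. The feature names, weights, and threshold are assumptions chosen purely for illustration; they do not reflect the actual Allegheny County or Oregon tools.

```python
# Hypothetical sketch of a risk-score screening step.
# Feature names, weights, and the screening cutoff are illustrative assumptions,
# not the real Allegheny County model.

from dataclasses import dataclass


@dataclass
class Referral:
    prior_cps_reports: int
    mental_health_records: int
    substance_abuse_records: int
    jail_stays: int


def risk_score(r: Referral) -> int:
    """Map public-record counts to a 1-20 score (higher = greater assumed risk)."""
    raw = (2 * r.prior_cps_reports
           + r.mental_health_records
           + r.substance_abuse_records
           + 3 * r.jail_stays)
    return max(1, min(20, raw))  # clamp to the 1-20 scale described above


MANDATORY_SCREEN_IN = 18  # hypothetical cutoff triggering a mandatory investigation


def recommend(r: Referral) -> str:
    """Return the screening recommendation for a referral."""
    return ("screen in (mandatory)"
            if risk_score(r) >= MANDATORY_SCREEN_IN
            else "worker discretion")


if __name__ == "__main__":
    example = Referral(prior_cps_reports=4, mental_health_records=2,
                       substance_abuse_records=1, jail_stays=3)
    print(risk_score(example), recommend(example))
```

Because inputs of this kind are drawn from systems that already record poor and Black families more heavily, any score built on them inherits that skew, which is the core concern the researchers raised.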

One of the members of the research group said:

“If the tool had acted on its own to screen for a comparable call rate, it would have recommended that two-thirds of black children be screened, compared to about half of all other reported children.”

Oregon’s abandonment of the algorithm

A few weeks after those results, the Oregon Department of Social Services emailed its staff in May last year to say that, following a “thorough analysis”, the agency’s hotline staff would stop using the algorithm at the end of June, in order to reduce disparities among families investigated by child protection services for child abuse and neglect.
Lacey Andresen, director of the agency, said:

“We are committed to continuous improvement of quality and fairness.”

Oregon Democratic Senator Ron Wyden says he is concerned about the growing use of artificial intelligence tools in child protection services.
He said in a statement:

“Making decisions about what should happen to children and families is far too important a task to hand over to untested algorithms. I’m glad the Oregon Department of Social Services is taking the concerns I’ve raised about racial bias seriously and is suspending the use of its screening tool.”
