Animation showing probability of children who may benefit from intervention

Officials in charge of protecting children stopped using a machine learning model designed to help them make decisions in difficult cases.

What’s new: The U.S. state of Oregon halted its use of an algorithm intended to identify children who may benefit from intervention, The Associated Press reported. The state did not disclose the reason for the move, which came roughly one month after a similar algorithm used by Pennsylvania, which had inspired Oregon’s effort, drew criticism for bias.

How it works: Oregon’s Department of Human Services developed the Safety at Screening Tool to help social workers screen reports of at-risk children. Social workers remained empowered to decide whether to act on any given report.

  • The developers trained the algorithm on hundreds of thousands of existing child-welfare reports. The dataset covered more than 180 features, including reports of abuse or neglect, the number of children per report, and whether those children had been involved in previous reports.
  • They trained two models. One, trained on reports that had prompted an investigation, estimated the probability that a child would be removed from their home within two years. The other, trained on reports that hadn’t prompted an investigation, estimated the probability that a child would be involved in a future investigation. At inference, both models examined a report and produced separate scores (a minimal sketch follows this list).
  • The developers acknowledged that bias was inevitable but sought to mitigate it by separately modeling the probabilities of removal from a home and involvement in a future investigation, and by scoring on a scale of 0 to 100 rather than 0 to 20, the scale used in previous work.
  • The department told its employees that it would stop using the tool at the end of June. An official told The Associated Press that a change in the screening process had made the tool unnecessary.
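
To make the dual-model setup concrete, here is a minimal sketch of a screener that returns two separate 0-to-100 scores per report. The model class, feature handling, and function names are illustrative assumptions, not details of Oregon’s actual Safety at Screening Tool.

```python
# Hypothetical sketch of a two-model screening scorer, not Oregon's implementation.
from dataclasses import dataclass

from sklearn.ensemble import GradientBoostingClassifier


@dataclass
class ScreeningScores:
    removal_score: int        # probability of removal within two years, scaled 0-100
    investigation_score: int  # probability of a future investigation, scaled 0-100


def train_models(reports_investigated, removal_labels,
                 reports_not_investigated, future_investigation_labels):
    """Train one model per outcome on the corresponding subset of historical reports."""
    removal_model = GradientBoostingClassifier().fit(
        reports_investigated, removal_labels)
    investigation_model = GradientBoostingClassifier().fit(
        reports_not_investigated, future_investigation_labels)
    return removal_model, investigation_model


def score_report(report_features, removal_model, investigation_model):
    """At inference, produce two separate 0-100 scores for a new report."""
    p_removal = removal_model.predict_proba([report_features])[0, 1]
    p_investigation = investigation_model.predict_proba([report_features])[0, 1]
    return ScreeningScores(
        removal_score=int(round(100 * p_removal)),
        investigation_score=int(round(100 * p_investigation)),
    )
```

In a design like this, the two scores stay separate so screeners can weigh the risk of removal and the risk of a future investigation independently, in line with the developers’ stated approach to mitigating bias.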

Pennsylvania’s problem: Researchers at Carnegie Mellon University found signs of bias in a similar tool used in Pennsylvania. That algorithm, which assesses the probability that a child will enter foster care within two years, is still in use.

  • The researchers found that the algorithm disproportionately flagged cases involving Black children relative to their White counterparts. They also found that social workers, who were authorized to make decisions, displayed significantly less racial disparity than the algorithm (see the sketch after this list).
  • Officials countered that the analysis relied on outdated data and a different pre-processing method.
  • The researchers undertook a second analysis using newer data and the officials’ recommended pre-processing steps. They reached the same conclusion.
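
For readers curious what a disparity comparison of this kind can look like, the sketch below contrasts flag rates for Black and White children under the algorithm with escalation rates under the screeners’ own decisions. The column names and thresholding are illustrative assumptions, not the Carnegie Mellon researchers’ actual analysis.

```python
# Illustrative disparity check, assuming one row per screened report with columns:
#   race                -- "Black" or "White"
#   algorithm_flagged   -- 1 if the model's score crossed the screening threshold
#   screener_escalated  -- 1 if the social worker chose to investigate
import pandas as pd


def flag_rate_disparity(df: pd.DataFrame, decision_col: str) -> float:
    """Ratio of positive-decision rates for Black vs. White children."""
    rates = df.groupby("race")[decision_col].mean()
    return rates["Black"] / rates["White"]


def compare_disparities(df: pd.DataFrame) -> dict:
    """Compare the algorithm's racial disparity with the human screeners'."""
    return {
        "algorithm": flag_rate_disparity(df, "algorithm_flagged"),
        "screeners": flag_rate_disparity(df, "screener_escalated"),
    }
```

A larger gap between the two ratios would suggest the model introduces more racial disparity than the human screeners do, which is the pattern the researchers reported.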

Why it matters: Oregon’s decision to drop its learning algorithm sounds a note of caution for public agencies that hope to take advantage of machine learning. Many states have applied machine learning to ease the burden on social workers as the number of child welfare cases has risen steadily over the past decade. However, the effort to automate risk assessments may come at the expense of minority communities, whose members may bear the brunt of biases in the trained models.

We’re thinking: We’re heartened that independent researchers identified the flaws in such systems and that public officials may have acted on those findings. Our sympathy goes out to children and families who face social and economic hardships, and to officials who are trying to do their best under difficult circumstances. We continue to believe that AI, with robust auditing for bias, can help.
