Predictive Analytics in Child Welfare – Helping Hand, or Racial Bias? (Part 2)

Predictive analytics: does CPS help or hurt families when it makes remote judgments?

Having looked at the concerns many people have raised about predictive analytics violating civil rights when used improperly in the criminal justice system, we're now turning to how predictive analytics could affect fairness in child welfare.

In July of 2015, Los Angeles County announced that it intends to start using predictive analytics in its child welfare system. An algorithm, the details of which are kept secret, will be used to identify children considered to be at risk of abuse and neglect. Factors such as frequent emergency room visits, changing schools more often than is typical, and living with a family member who has a history of drug abuse would all lead to a child being labelled "at risk."

According to Los Angeles County, predictive analytics would help its social workers quickly identify the most at-risk children, allowing them to prioritize responses and deliver tailored services more efficiently. In response, a surprising number of people spoke out against the plan, expressing fears that this "predictive child welfare" software would instead violate human rights and perpetuate racism.

The developers of Los Angeles County's predictive analytics software, named AURA (Approach to Understanding Risk Assessment), said that during development they tested the software on the county's historical data and discovered something interesting: had Los Angeles County started using AURA in 2013, the software would have flagged 76% of the cases that later resulted in the death of a child as "high risk." That in turn would have meant social workers could have intervened sooner and perhaps prevented a child's death.

Is data-driven decision making a good thing in child welfare?

This would lead one to believe that using data to assist in decision making, specifically where it applies to child welfare, would be a good thing. But as it turns out, that isn't the full picture. While AURA did flag 76% of cases that resulted in a death as high risk, it also produced no fewer than 3,829 "false positives." According to Armand Montiel, the Public Affairs Director for Los Angeles County's Department of Children and Family Services, that means 95.6% of the cases flagged were false positives, where no critical incident or child death followed.
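To put those figures in perspective, here is a rough back-of-the-envelope calculation (a minimal sketch in Python, assuming the 3,829 false positives and the 95.6% figure Montiel cited describe the same set of flagged cases; the exact counts from AURA's retrospective test are not public):

```python
# Illustrative calculation only, based on the figures quoted above.
# Assumption: 3,829 false positives make up 95.6% of all cases AURA
# would have flagged in the retrospective test on LA County data.

false_positives = 3829
false_positive_share = 0.956

total_flagged = false_positives / false_positive_share   # roughly 4,000 flagged cases
true_positives = total_flagged - false_positives          # roughly 175 correctly flagged cases
precision = true_positives / total_flagged                # roughly 4% of flags were "hits"

print(f"Estimated total flagged:  {total_flagged:,.0f}")
print(f"Estimated true positives: {true_positives:,.0f}")
print(f"Precision of the flags:   {precision:.1%}")
```

In other words, under those assumptions, for every family AURA would have correctly flagged, roughly twenty other families would have been flagged despite no critical incident ever occurring.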

In addition, a study completed by the federal government as part of its second "National Incidence Study" of child abuse revealed that caseworkers were two to six times more likely to wrongly substantiate a possible abuse case than to wrongly label the same case as unfounded.

Which brings us to the million-dollar question: is it worth potentially saving the lives of countless children at the expense of civil rights? Or is it wrong to violate people's individuality with a potentially racist algorithm, even in order to protect some children from abuse? Which is the higher cause, or better yet, is there a way to strike a balance between the two? What do you think?

Regardless of your stance on this subject, false allegations are a frequent occurrence in CPS investigations. If you've been falsely accused of child abuse, you are going to need an experienced attorney with many years of practical knowledge of how to defend against lies and misinterpretations. The attorneys at the Kronzek Firm have been successfully handling CPS cases and false allegation cases for decades. So contact us immediately; we can help you.

