
Protecting the vulnerable, or automating harm? AI’s double-edged role in spotting abuse

  • Written by Aislinn Conrad, Associate Professor of Social Work, University of Iowa

Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people – including children in foster care[1], adults in nursing homes and students in schools[2]. These tools promise to detect danger in real time and alert authorities before serious harm occurs.

Developers are using natural language processing, for example – a form of AI that interprets written or spoken language – to try to detect patterns of threats, manipulation and control[3] in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling[4], another common AI technique, to calculate which families or individuals are most “at risk” for abuse.
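
To make the idea concrete, here is a deliberately simplified sketch of how such a text classifier could be built with a generic open-source library (scikit-learn here; the article does not name specific tooling). The tiny training set, labels and example message are all invented for illustration; deployed systems are trained on far larger corpora and more sophisticated models.

    # A toy version of text-based risk detection: learn word patterns from
    # labeled messages, then score new messages for concern. All data below
    # is invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "if you leave me you will regret it",        # labeled concerning
        "you are not allowed to see your friends",   # labeled concerning
        "want to grab lunch tomorrow?",              # labeled benign
        "running late, see you at 6",                # labeled benign
    ]
    train_labels = [1, 1, 0, 0]  # 1 = concerning, 0 = benign

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(train_texts, train_labels)

    # Probability that a new message is "concerning"
    print(model.predict_proba(["you'll be sorry if you talk to him"])[:, 1])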

When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have helped social workers[5] prioritize high-risk cases and intervene earlier.

But as a social worker with 15 years of experience researching family violence[6] – and five years on the front lines as a foster-care case manager, child abuse investigator and early childhood coordinator – I’ve seen how well-intentioned systems often fail the very people they are meant to protect.

Now, I am helping to develop iCare[7], an AI-powered surveillance camera that analyzes limb movements – not faces or voices – to detect physical violence. I’m grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?
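
iCare’s internal design is not public beyond the description above, but the general limb-movement approach can be sketched: take per-frame body keypoints from any off-the-shelf pose estimator and flag abnormally fast motion. Everything below – the frame rate, the synthetic joint data and the velocity threshold – is an invented placeholder, not iCare’s actual logic.

    # A hedged sketch of limb-movement analysis: flag frames where any
    # tracked joint moves faster than a cutoff. Real inputs would come
    # from a pose estimator; here they are synthetic.
    import numpy as np

    FPS = 30               # assumed camera frame rate
    SPEED_THRESHOLD = 3.0  # meters/second; illustrative cutoff only

    def flag_fast_motion(keypoints: np.ndarray) -> np.ndarray:
        """keypoints: (frames, joints, 2) array of x,y positions in meters.
        Returns indices of frames with suspiciously fast limb movement."""
        deltas = np.diff(keypoints, axis=0)             # frame-to-frame motion
        speeds = np.linalg.norm(deltas, axis=-1) * FPS  # per-joint speed
        return np.where((speeds > SPEED_THRESHOLD).any(axis=-1))[0] + 1

    # Synthetic clip: 90 frames, 4 joints, still except one sudden strike
    clip = np.zeros((90, 4, 2))
    clip[45, 0] = [0.5, 0.0]       # half-meter wrist jump in a single frame
    print(flag_fast_motion(clip))  # -> [45 46] (the strike and the recoil)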

New tech, old injustice

Many AI tools are trained to “learn” by analyzing historical data[8]. But history is full of inequality, bias and flawed assumptions. So are the people who design, test and fund AI.

That means AI algorithms can wind up replicating systemic forms of discrimination[9], like racism or classism. A 2022 study[10] in Allegheny County, Pennsylvania, found that a predictive risk model used to score families – with the scores given to hotline staff to help them screen calls – would have flagged Black children for investigation 20% more often than white children if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.

Language-based AI can also reinforce bias[11]. For instance, one study[12] showed that natural language processing systems misclassified African American Vernacular English as “aggressive” at a significantly higher rate than Standard American English – up to 62% more often in certain contexts.

Meanwhile, a 2023 study[13] found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.

Language-processing AI isn’t always great at judging what counts as a threat or concern. NickyLloyd/E+ via Getty Images[14]

These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled[15] in child welfare systems – sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families[16] face disproportionately higher rates[17] of reporting, investigation and family separation compared with white families, even after accounting for income and other socioeconomic factors.

Many of these disparities stem from structural racism[18] embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.

Surveillance over support

Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.

In hospitals and elder-care facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors and residents[19]. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns[20] about the balance between protection and privacy.

In a 2022 pilot program in Australia[21], AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months – overwhelming staff and missing at least one real incident. The program’s accuracy did “not achieve a level that would be considered acceptable to staff and management,” according to the independent report.

Surveillance cameras in care homes can help detect abuse, but they raise serious questions about privacy. kazuma seki/iStock via Getty Images Plus[22]

Children are affected, too. In U.S. schools, AI surveillance tools like Gaggle[23], GoGuardian[24] and Securly[25] are marketed as ways to keep students safe. Such programs can be installed on students’ devices to monitor online activity and flag anything concerning.

But they’ve also been shown to flag harmless behaviors – like writing short stories with mild violence, or researching topics related to mental health. As an Associated Press investigation[26] revealed, these systems have also outed LGBTQ+ students[27] to parents or school administrators by monitoring searches or conversations about gender and sexuality.

Other systems use classroom cameras and microphones to detect “aggression.” But they frequently misidentify normal behavior[28] like laughing, coughing or roughhousing – sometimes prompting intervention or discipline.

These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans – data that often reflects social inequalities and biases[29]. As sociologist Virginia Eubanks[30] wrote in “Automating Inequality[31],” AI systems risk scaling up these long-standing harms.

Care, not punishment

I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I’ve developed a framework of four key principles for what I call “trauma-responsive AI.”

  1. Survivor control: People should have a say in how, when and if they’re monitored. Providing users with greater control over their data can enhance trust in AI systems[32] and increase their engagement with support services, such as creating personalized plans to stay safe or access help.

  2. Human oversight: Studies show that combining social workers’ expertise with AI support improves fairness and reduces child maltreatment[33] – as in Allegheny County, where caseworkers used algorithmic risk scores as one factor[34], alongside their professional judgment, to decide which child abuse reports to investigate.

  3. Bias auditing: Governments and developers are increasingly encouraged to test AI systems[35] for racial and economic bias. Open-source tools like IBM’s AI Fairness 360[36], Google’s What-If Tool[37] and Fairlearn[38] assist in detecting and reducing such biases in machine learning models; a minimal audit sketch appears after this list.

  4. Privacy by design: Technology should be built to protect people’s dignity. Open-source tools[39] like Amnesia, Google’s differential privacy library[40] and Microsoft’s SmartNoise[41] help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people’s identities in video or photo data. A conceptual sketch of the noise-adding idea behind these privacy tools also appears after this list.
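
The bias audit in principle 3 can be made concrete with a short sketch. This uses Fairlearn’s real metrics API, but the data is synthetic, loosely echoing the Allegheny County disparity described earlier; an actual audit would use an agency’s recorded decisions and demographics.

    # Minimal bias audit with Fairlearn: compare how often a model flags
    # families across racial groups. All data below is simulated.
    import numpy as np
    from fairlearn.metrics import (MetricFrame, selection_rate,
                                   demographic_parity_difference)

    rng = np.random.default_rng(0)
    n = 10_000
    race = rng.choice(["black", "white"], size=n)
    y_true = rng.integers(0, 2, size=n)  # hypothetical ground truth
    # Simulate a model that flags Black families more often
    y_pred = (rng.random(n) < np.where(race == "black", 0.36, 0.30)).astype(int)

    audit = MetricFrame(metrics=selection_rate, y_true=y_true,
                        y_pred=y_pred, sensitive_features=race)
    print(audit.by_group)  # flag rate per group
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=race))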
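
And the differential-privacy idea behind the tools named in principle 4 reduces, at its core, to adding calibrated noise before releasing a statistic. The following is a conceptual illustration only, not a substitute for those libraries; the epsilon value and records are invented.

    # Laplace mechanism, the core idea of differential privacy: release a
    # statistic plus noise scaled to 1/epsilon so no single record stands out.
    import numpy as np

    def dp_count(records, epsilon: float) -> float:
        """Noisy count of records; counting has sensitivity 1."""
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return len(records) + noise

    # E.g., report how many residents triggered alerts without revealing
    # whether any particular resident did. Smaller epsilon = more privacy.
    alerts = ["resident_a", "resident_b", "resident_c"]
    print(dp_count(alerts, epsilon=0.5))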

Honoring these principles means building systems that respond with care, not punishment.

Some promising models are already emerging. The Coalition Against Stalkerware[42] and its partners advocate to include survivors[43] in all stages of tech development – from needs assessments to user testing and ethical oversight.

Legislation is important, too. On May 5, 2025, for example, Montana’s governor signed a law barring state and local government agencies from using AI to make automated decisions[44] about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.

As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion. But with the right values at the center, it might help us deliver more of it.

References

  1. ^ children in foster care (imprintnews.org)
  2. ^ students in schools (scienceblog.cincinnatichildrens.org)
  3. ^ detect patterns of threats, manipulation and control (www.edgehill.ac.uk)
  4. ^ predictive modeling (mcsilver.nyu.edu)
  5. ^ have assisted social workers (dl.acm.org)
  6. ^ researching family violence (socialwork.uiowa.edu)
  7. ^ iCare (www.axios.com)
  8. ^ learn” by analyzing historical data (theconversation.com)
  9. ^ wind up replicating systemic forms of discrimination (theconversation.com)
  10. ^ A 2022 study (doi.org)
  11. ^ can also reinforce bias (www.brookings.edu)
  12. ^ one study (aclanthology.org)
  13. ^ a 2023 study (doi.org)
  14. ^ NickyLloyd/E+ via Getty Images (www.gettyimages.com)
  15. ^ over-surveilled (doi.org)
  16. ^ Black and Indigenous families (doi.org)
  17. ^ disproportionately higher rates (doi.org)
  18. ^ stem from structural racism (doi.org)
  19. ^ to detect physical aggression between staff, visitors and residents (www.scylla.ai)
  20. ^ serious ethical concerns (doi.org)
  21. ^ pilot program in Australia (www.abc.net.au)
  22. ^ kazuma seki/iStock via Getty Images Plus (www.gettyimages.com)
  23. ^ Gaggle (www.gaggle.net)
  24. ^ GoGuardian (www.goguardian.com)
  25. ^ Securly (www.securly.com)
  26. ^ an Associated Press investigation (apnews.com)
  27. ^ outed LGBTQ+ students (www.theguardian.com)
  28. ^ frequently misidentify normal behavior (www.wired.com)
  29. ^ social inequalities and biases (www.hachettebookgroup.com)
  30. ^ sociologist Virginia Eubanks (www.albany.edu)
  31. ^ Automating Inequality (us.macmillan.com)
  32. ^ enhance trust in AI systems (doi.org)
  33. ^ reduces child maltreatment (doi.org)
  34. ^ used algorithmic risk scores as one factor (doi.org)
  35. ^ to test AI systems (watech.wa.gov)
  36. ^ IBM’s AI Fairness 360 (research.ibm.com)
  37. ^ What-If Tool (pair-code.github.io)
  38. ^ Fairlearn (fairlearn.org)
  39. ^ Open-source tools (amnesia.openaire.eu)
  40. ^ differential privacy library (cloud.google.com)
  41. ^ Microsoft’s SmartNoise (smartnoise.org)
  42. ^ Coalition Against Stalkerware (stopstalkerware.org)
  43. ^ to include survivors (endcyberabuse.org)
  44. ^ using AI to make automated decisions (projects.montanafreepress.org)

Authors: Aislinn Conrad, Associate Professor of Social Work, University of Iowa

Read more https://theconversation.com/protecting-the-vulnerable-or-automating-harm-ais-double-edged-role-in-spotting-abuse-256403
