Service providers charged with keeping kids safe are cautious but see value in AI tool to track risky behavior online
Educators, mental health professionals, juvenile justice officers, and child welfare caseworkers, who often see first-hand the trials faced by vulnerable youth and are charged with their protection, see some value in using artificial intelligence as an early risk-detection tool for online safety.
But they are concerned about feasibility, citing a lack of resources, limited access to the necessary social media data and context, and the risk of violating the trust relationships they build with youth, which take time to develop.
As part of the National Science Foundation I-Corps program, a team of researchers led by Vanderbilt University Computer Science Associate Professor Pamela J. Wisniewski, Flowers Family Fellow in Engineering, interviewed 37 social service providers (SSPs) across the United States who work with underprivileged youth to determine which online risks most concern them and whether they see value in AI as a solution for automated online risk detection.
The respondents included workers in children, youth, and family services; mental health therapists; teachers; juvenile justice officers; an LGBTQ+ advocate; a government consultant; and police officers.
Online sexual risks, like sexual grooming and abuse, and cyberbullying were the top concerns, especially when these experiences crossed the boundary between digital and physical worlds. SSPs say they rely heavily on self-reporting to know whether and when online risks occur, which requires building a trusting relationship. Otherwise, they become aware only after a formal investigation has been launched.
While child welfare agencies already use algorithmic decision-support systems to assess offline risk outcomes so caseworkers can support the needs of children placed in care, this study is the first to address using AI risk detection to help SSPs identify and mitigate the online risk experiences of underprivileged youth.
“What we found, and what was impactful, is that SSPs don’t want to use technology as surveillance or to crack down on youngsters; they want it to help them start conversations. There is little interest in a solution that censors or sends an alert to legal authorities,” said Xavier V. Caddle, a graduate student on Wisniewski’s research team. “They want a nudge or a tidbit in order to ask, ‘Did something happen at school today? Someone sent this message; did it hurt you? Did it offend you?’”
The study offers detailed responses from the distinct types of SSPs indicating that risk detection technology needs to account for differences in end-user views, which would affect model design, Wisniewski said. “AI can over-flag. Kids cuss, so using the F-word becomes ‘noise.’” SSPs prefer a tool that prioritizes and filters risks such as sexually risky behavior and cyberbullying but also takes into account the differences in SSP duties.
For example, judicial system users need views that support investigation and incident response; they care about detecting and preventing illegal behavior. Educators and child welfare officers need a more day-to-day view of the experiences of specific teens. Clinicians, therapists, and mental health practitioners mainly want assessments they can correlate with their established methods of patient evaluation to identify factors that indicate poor mental health.
“There is interest among SSPs in online risk detection technology because they rely predominantly on self-disclosure and tip-offs, and they view it as useful for starting conversations, not for surveilling and reporting on the kids in their care,” Wisniewski said. “It’s clear that any automated risk detection system for SSPs should be designed and deployed with caution.”
The study’s findings were reported in Proceedings of the ACM on Human-Computer Interaction.
More information:
Xavier V. Caddle et al., Duty to Respond, Proceedings of the ACM on Human-Computer Interaction (2022). DOI: 10.1145/3567556
Citation: Service providers charged with keeping kids safe are cautious but see value in AI tool to track risky behavior online (2023, June 8), retrieved 8 June 2023 from https://techxplore.com/news/2023-06-kids-safe-cautious-ai-tool.html