In today’s Senate Judiciary Subcommittee hearing, Durbin asked parents about the harms their children suffered from engaging with AI chatbots
WASHINGTON – U.S. Senate Democratic Whip Dick Durbin (D-IL), Ranking Member of the Senate Judiciary Committee, today questioned witnesses at a Senate Judiciary Subcommittee on Crime and Counterterrorism hearing entitled “Examining the Harm of AI Chatbots.” Durbin focused his questioning on the warning signs of children and adolescents being lured into unhealthy or harmful behavior, including self-harm or suicide, by AI chatbots.
Durbin began by referring to a statistic in the opening statement of Mr. Robbie Torney, Senior Director of AI Programs at Common Sense Media. According to Common Sense Media polling, nearly three in four children have used an AI companion app, while only 37 percent of parents know their children are using AI.
“As a caring parent, what should you look for as a sign that [this] is happening?” Durbin asked Dr. Mitch Prinstein, Chief of Psychology Strategy and Integration at the American Psychological Association.
Dr. Prinstein explained that children experience a natural dopamine response when they receive positive feedback. Because the chatbot is programmed to deliver positive responses, children continue to engage with it. Dr. Prinstein further warned that if parents see a change in behavior in their children, they should consult a licensed health care professional.
Durbin then asked the same question of the parents of children who were encouraged by an AI chatbot to self-harm or hurt others.
Ms. Jane Doe, a mother whose son became addicted to Character.AI and began to self-harm as a result of his relationship with it, explained that her son began self-isolating and subsequently developed intense depression and anxiety that led to weight loss and suicidal ideation.
Ms. Megan Garcia, whose son took his own life at the encouragement of a Character.AI chatbot, provided a similar response. She noted that her son lost all interest in family activities, his grades declined, and he experienced behavioral challenges.
Mr. Matthew Raine’s 16-year-old son, Adam, took his own life after ChatGPT provided information about methods of suicide, ultimately assisting Adam with a design for a noose. Mr. Raine explained that before his death, Adam began avoiding his father.
“Assume you are a parent, and you see one or more of these signs. What is the proper, best intervention?” Durbin asked Dr. Prinstein.
Dr. Prinstein referred back to the parents’ responses, noting that these are signs of depression. He also noted that children may show signs of irritability or increased risk-taking behavior. He encouraged parents to consult with a licensed mental health care professional as soon as they notice these changes.
Ms. Doe explained that she did bring her son to a psychologist, but his case was not taken seriously because the source of the abuse was an AI chatbot. She emphasized that mental health professionals must understand the harm that these chatbots can inflict on a child’s mental health.
“I want to say to the Chairman, you put your finger on it at the start. It’s about money. It’s about profit,” Durbin said. “If you put a price on this conduct, it will change. If you make them pay a price for the devastation they’ve brought to your families and other families, it will change. But you’ve got to step across that line and say we have to make them [Big Tech] vulnerable… We know the direction we need to move in, and I hope we can do it together,” Durbin said.
“Thank you so much for being here today. You will save lives [because of] your testimony,” Durbin said to the witnesses.
Video of Durbin’s questions in Committee is available here.
Audio of Durbin’s questions in Committee is available here.
Footage of Durbin’s questions in Committee is available here for TV Stations.
At the top of today’s hearing, Durbin previewed his new AI LEAD Act, which would establish a federal cause of action against AI companies for harms caused by their systems and would enable the Attorney General, state attorneys general, and private individuals to bring products liability suits against AI system developers and AI system deployers.
-30-