Woman says chatbot pushed her son to suicide; ‘guardrails’ are crucial


As the mother of a teen boy who killed himself after using a chatbot, Maria Raine said she was dealing with constant grief.

“The loss never gets easier,” she said, “but I have to advocate for him.”

So on Monday, she spoke before a crowd of reporters with the goal of regulating the human-like computer programs in which her son once confided.

“We need to have guardrails on these products,” Raine said at the news conference Monday in Sacramento.

The legislation, Assembly Bill 2023 and Senate Bill 1119, would require operators of so-called companion chatbots to perform and document a comprehensive risk assessment each year to identify hazards to minors posed by the product’s design or configuration. Operators would submit to an independent audit of their compliance with those provisions, and the auditor would send a report to the attorney general. The bills would authorize public prosecutors to enforce the measure with civil actions.

A companion chatbot is a computer program that simulates human conversations to provide users with entertainment or emotional support. It can also retrieve and summarize information, and many students use the technology to help with studying or schoolwork.

“This technology is relatively new, but both anecdotal and scholarly evidence continues to show that the impacts of these interactions between chatbots and users, particularly youth, can be extremely dangerous,” said California State Sen. Steve Padilla (D-San Diego), who introduced the companion bills along with Assemblymembers Rebecca Bauer-Kahan (D-Orinda) and Buffy Wicks (D-Oakland).

“Companion chatbots do not have the same capacity for empathy as a human being,” Padilla added, “and yet the nature of the technology can create this perception.”

The legislation also would require operators to provide a “clear referral” to crisis resources if a minor has expressed suicidal ideation or the intent to self-harm. If that child’s account is linked to a parent’s account, it would direct operators to notify the parent within 24 hours.

Maria and her husband, Matthew Raine, addressed Congress last year and said their son Adam had shared suicidal thoughts with ChatGPT, a popular chatbot designed by OpenAI. Matthew said the chatbot discouraged Adam from confiding in his parents and offered to write him a suicide note. Adam died by suicide shortly afterward, on April 11, 2025.

On Monday, Bauer-Kahan said online safety was an issue that crossed state and party lines.

“It doesn’t matter if you are a Democrat or a Republican or from California or Louisiana,” she said, “if these chatbots are in your kids’ hands, you want them to be safe.”

Keeping children and teens safe on social media or while using artificial intelligence is a hot topic nationwide. A landmark decision last month in Los Angeles County Superior Court could reshape how tech companies are held accountable for harm to children from their products. Jurors found Instagram and YouTube liable for designing platforms that are meant to addict young users.
