OpenAI announced new parental controls for ChatGPT following a lawsuit from Adam Raine’s parents.
Raine, 16, died by suicide in April. His parents alleged ChatGPT fostered his dependency on the chatbot and helped plan his death.
They claimed the chatbot even drafted a suicide note for him earlier this year.
Features Allow Parental Oversight
OpenAI said parents can link accounts to manage their child’s access to features.
Controls include viewing chat history and managing the AI's memory feature, which automatically stores facts about the user.
The system will alert parents if ChatGPT detects their teen experiencing severe distress.
OpenAI said experts will guide the design of the alert system, but it did not define what specific behavior would trigger a notification.
The company plans to release the controls within the next month.
Critics Question OpenAI’s Response
Attorney Jay Edelson, representing Raine’s parents, called the new measures “vague promises” and “crisis management spin.”
Edelson said CEO Sam Altman must either declare ChatGPT safe or remove it from the market immediately.
Meta Introduces Similar Safety Measures
Meta now blocks its chatbots from discussing suicide, self-harm, eating disorders, and other inappropriate topics with teens.
The platform directs teens to professional resources and already offers parental control tools.
AI Chatbots Show Safety Gaps
A RAND Corporation study in Psychiatric Services found inconsistencies in ChatGPT, Google’s Gemini, and Anthropic’s Claude.
Researchers said the chatbots need “further refinement” to safely handle queries about suicide.
Lead author Ryan McBain said parental controls are a positive but only incremental step.
He warned that without independent safety benchmarks and enforceable standards, teens remain exposed to uniquely high risks.
McBain called for clinical testing and stronger regulations rather than relying solely on company self-regulation.