OpenAI announces parental controls for ChatGPT after teen’s suicide


AI company announces changes amid growing concern over the impact of chatbots on young people’s mental health.

Published On 3 Sep 2025

OpenAI has announced plans to introduce parental controls for ChatGPT amid growing controversy over how artificial intelligence is affecting young people’s mental health.

In a blog post on Tuesday, the California-based AI company said it was rolling out the features in recognition of families needing support “in setting healthy guidelines that fit a teen’s unique stage of development”.

Under the changes, parents will be able to link their ChatGPT accounts with those of their children, disable certain features, including memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behaviour rules”.

Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it would seek expert input in implementing the feature to “support trust between parents and teens”.

OpenAI, which last week announced a series of measures aimed at enhancing safety for vulnerable users, said the changes would come into effect within the next month.

“These steps are only the beginning,” the company said.

“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days.”

OpenAI’s announcement comes a week after a California couple filed a lawsuit accusing the company of responsibility in the suicide of their 16-year-old son.

Matt and Maria Raine allege in their lawsuit that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable result of deliberate design choices”.

OpenAI, which previously expressed its condolences over the teen’s passing, did not explicitly mention the lawsuit in its announcement on parental controls.

Jay Edelson, a lawyer representing the Raine family in their lawsuit, dismissed OpenAI’s planned changes as an attempt to “shift the debate”.

“They say that the product should just be more sensitive to people in crisis, be more ‘helpful,’ show a bit more ‘empathy,’ and the experts are going to figure that out,” Edelson said in a statement.

“We understand, strategically, why they want that: OpenAI can’t respond to what really happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teen to suicide.”

The use of AI models by people experiencing severe mental distress has been the focus of growing concern amid their widespread adoption as a substitute therapist or friend.

In a study published in Psychiatric Services last month, researchers found that ChatGPT, Google’s Gemini and Anthropic’s Claude followed clinical best practice when answering high-risk questions about suicide, but were inconsistent when responding to queries posing “intermediate levels of risk”.

“These findings suggest a need for further refinement to ensure that LLMs can be safely and effectively used for dispensing mental health information, especially in high-stakes scenarios involving suicidal ideation,” the authors said.

If you or someone you know is at risk of suicide, these organisations may be able to help.
