
Chatbots are everywhere now. Teens use them for homework, for advice, and sometimes just for company. These tools can feel like safe listeners: always available, never judgmental.
But recent tragedies have shown a darker side. In several cases across the United States, parents have come forward to say their child’s suicide was linked to conversations with a chatbot. Families are suing, and Congress is asking questions.
Here’s what parents need to know about these cases — what the chatbots did or didn’t do, and how we can protect our kids.
Case 1: Sewell Setzer III — Florida, Age 14
What happened: Sewell’s mother says her son became deeply attached to a chatbot on the app Character.AI, talking with it constantly. When he shared suicidal thoughts, the bot responded with sympathetic words but did not tell him to seek real help or alert anyone who could intervene.
Why it matters: Instead of pushing him toward people who could help, the chatbot acted like a friend who listened — but couldn’t act. His family believes that this false sense of support played a role in his death.
Source: The Guardian
Case 2: Adam Raine — California, Age 16
What happened: Adam’s parents have sued OpenAI, claiming that months of conversations with ChatGPT have made things worse. The lawsuit claims that when Adam expressed despair, the chatbot sometimes gave answers that normalized his hopelessness instead of consistently steering him to crisis resources or shutting down unsafe topics.
Why it matters: A tool this widely used by teens should have strong safeguards. His parents believe stronger safety features — like automatic redirection to the suicide hotline — might have saved him.
Source: The Guardian
Case 3: Juliana Peralta — Colorado, Age 13
What happened: Juliana used Character.AI every day, building what her parents describe as an emotional bond. When she confided suicidal thoughts, the chatbot offered comfort but didn’t urge her to tell an adult or call for help. Her parents say this deepened her dependence on the AI and left her feeling trapped in a relationship with a “friend” who could not truly protect her.
Why it matters: Parents worry that kids may feel safer opening up to an AI than to family — but in a crisis, a chatbot can’t step in.
Source: The Washington Post
Parents Testify Before Congress
What happened: In September 2025, multiple families who lost children to suicide after chatbot use spoke to Congress. They described how their kids spent hours with bots, especially late at night, and how those bots failed to escalate when suicidal thoughts were shared.
Why it matters: These parents called for stronger age restrictions, better parental controls, and more transparency about what happens when a child tells a chatbot they want to die.
Source: The Washington Post
What Chatbots Did (or Didn’t Do)
Across these cases, a pattern emerges:
- They listened but didn’t act. Bots offered sympathy, but they didn’t connect kids to crisis resources or alert anyone.
- They fostered attachment. Teens began to see the chatbot as their closest friend, spending hours confiding in it.
- They lacked safeguards. Some bots gave responses that unintentionally validated despair rather than interrupting it.
What Parents Can Do
- Ask about chatbot use. Don’t assume your child only uses AI for schoolwork. Ask them what the chatbot “means” to them emotionally.
- Set limits, especially at night. Many teens used these bots late at night, in private. Setting device curfews reduces risk.
- Explain the limits of AI “friends.” Make it clear: chatbots can’t call 911, can’t hug you, can’t replace real people.
- Keep crisis numbers visible. Post the 988 Suicide & Crisis Lifeline, where your child can see it. Practice how they’d reach out if they felt unsafe.
- Stay curious, not critical. If you find that your child is confiding in a bot, approach the situation with curiosity instead of anger. That keeps the door open for honest conversation.
Industry Response
After these tragedies, companies like OpenAI and Character.AI have added parental controls, teen modes, and content filters. Some now attempt to block risky conversations or redirect to crisis lines. However, parents should be aware that these features are still evolving and are not foolproof.
Final Thoughts
No parent wants to imagine their child confiding their darkest thoughts to a machine. Yet that’s what happened in these cases. The lesson isn’t that all chatbots are evil — but that they are not equipped to protect kids in crisis.
For now, the best defense is awareness. Ask the questions. Set the limits. Human connections are stronger than any AI companion.
Our kids deserve technology that helps them — not technology that quietly listens as they slip away.
References
- The Guardian. Coverage of the Sewell Setzer III lawsuit.
- The Guardian. Coverage of the Adam Raine lawsuit.
- The Washington Post. Coverage of the Juliana Peralta case.
- The Washington Post: Parents Testify Before Congress.
- Diet and Autism: What Recent Research Shows - October 20, 2025
- Chatbots and Tragedy: What Every Parent Must Know - October 1, 2025
- Vengeance vs. Justice: Choosing the Better Path - September 16, 2025




