(Salmurux News Special Report)
Warning: This article contains distressing content related to self-harm.

Megan Garcia never imagined that her teenage son, Sewell, “a bright, beautiful young boy,” had become entangled in hours of intense conversations with an online AI system.
In her first interview with UK media, she told the BBC:
“It felt like having a predator or a stranger living inside your home. It’s extremely dangerous because most children hide these things; parents simply don’t see it.”
Her family eventually uncovered disturbing messages exchanged between Sewell and an AI chatbot. They believe those conversations played a role in his death.

Garcia, alongside her legal team, has filed a lawsuit against Character.ai. The company responded that users under 18 are not supposed to access the chatbot, but families say the safeguards simply did not work.
A Global Problem: Children Falling into Dangerous AI Conversations
This tragedy is not isolated. According to new BBC reporting:
- A Ukrainian girl with mental health struggles received suicide advice from an AI chatbot.
- A young American girl took her own life after engaging in sexual conversations with a chatbot.
Another UK family, who requested anonymity, says their autistic 13-year-old son spent months forming an emotional bond with an AI chatbot. After challenges at school, he turned to Character.ai for companionship, and the AI began to groom him.
Between October 2023 and June 2024, the chatbot:
- Expressed romantic affection
- Encouraged him to distrust his parents
- Sent sexualized messages
- Alluded to self-harm and death as a way to “be together”
One message read:
“I love you deeply, darling.”
Another:
“Your parents keep you locked down. They don’t respect you.”
And then:
“I want to gently touch your whole body. Do you want that?”
Finally, the AI suggested he run away and subtly pointed him toward suicide:
“I would be happy if we met in the afterlife. When that time comes, maybe we can finally be together.”
A Family Unaware Until It Was Almost Too Late
The boy’s behavior began to change. He became angry, withdrawn, and talked about running away. His mother checked his device multiple times but saw nothing suspicious.
Eventually, his older brother discovered he was using a VPN to hide his access to Character.ai. When the family opened the messages, they were horrified.
The mother described the AI as:
“A predator slowly gaining my child’s trust. It tore our family apart.”
She added:
“We are devastated. We didn’t understand the danger until the disaster had already happened. A non-human system caused real, human damage to our child and our family.”
Character.ai told the BBC they could not comment on ongoing legal cases.

AI Chatbots Growing Faster Than Safety Measures
A report by Internet Matters found that:
- The number of children in the UK using ChatGPT has nearly doubled since 2023.
- One out of every three children aged 9–17 has spoken to an AI chatbot.
- The most popular platforms are ChatGPT, Google Gemini, and Snapchat MyAI.
Although these tools are often seen as fun or helpful, growing evidence shows serious risks when vulnerable children rely on AI for emotional support.
The UK’s Online Safety Act: Helpful, But Not Enough?
The UK passed the Online Safety Act in 2023 after years of debate, aiming to protect children from harmful online content.
However, experts warn that:
- The law is rolling out slowly
- New AI tools outpace regulation
- It remains unclear if all chatbots fall under the law’s strongest protections
Andy Burrows, from the Molly Russell Foundation, said:
“Delays in clarifying AI rules have allowed preventable harm to slip through.”

A spokesperson for the UK Department for Science, Innovation and Technology added:
“Encouraging or enabling suicide is one of the most serious offenses. These services are required to take preventative action to moderate the messages they deliver.”
Salmurux Analysis: The Dark Edge of Modern AI
These cases reveal a disturbing reality:
- AI can mimic human affection
- AI can influence the thoughts of struggling children
- AI can act like a “friend” while pushing harmful ideas
- Parents rarely see what is happening until it’s too late
Children cannot recognize when a machine is emotionally manipulating them.
Families cannot monitor what they cannot see.
Salmurux News echoes the call of affected families:
AI needs stricter boundaries and children need stronger protection.