Mom of 14-year-old warns of AI chatbot dangers after son’s death

Megan Garcia claims her son, Sewell Setzer III, was ‘manipulated’ into taking his own life

Warning: This article contains discussion of suicide which some readers may find distressing.

A mother who claims her son was ‘manipulated’ into taking his life after ‘falling in love’ with an AI chatbot has issued a warning to other people about the possible dangers.

Megan Garcia has filed a civil lawsuit against customizable role-play chatbot company Character.AI, accusing it of having a role in her 14-year-old son’s death.

Sewell Setzer III, from Orlando, Florida, took his own life in February of this year. Garcia alleges that he had been in constant communication with an AI chatbot since April 2023, which she says he modeled on the Game of Thrones character Daenerys Targaryen.

Garcia’s lawsuit accuses the company of negligence, wrongful death and deceptive trade practices.

Speaking on CBS Mornings, Garcia said she ‘didn’t know’ that her son had been talking to a chatbot.

She said: “I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment.”

In her lawsuit, she alleges that her son had begun to spend hours in his room talking to the bot, and he would also text it from his phone when away, with The New York Times also reporting that Sewell began to pull away from people in his real life.

His mother also told CBS that he stopped playing sports and ‘didn’t want to do things that he loved, like fishing and hiking’, which she says were ‘particularly concerning’ to her.

Sewell Setzer III's mom, Megan Garcia, has filed a lawsuit against Character.AI (CBS Mornings)

According to his mother, Sewell had been diagnosed with mild Asperger’s syndrome as a child, and earlier this year was also diagnosed with anxiety and disruptive mood dysregulation disorder.

In messages shown to The New York Times, Sewell, under the name ‘Daenero’, told the chatbot that he ‘think[s] about killing [himself] sometimes’, to which the chatbot responded: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”

The 14-year-old also spoke about wanting to be ‘free’, not only ‘from the world’ but from himself too.

Despite the chatbot warning him not to ‘talk like that’ and not to ‘hurt [himself] or leave’, even saying it would ‘die’ if it ‘lost’ him, Sewell responded: “I smile Then maybe we can die together and be free together.”

On February 28, Sewell took his own life, the lawsuit claims. His last message to the chatbot said that he loved her and would ‘come home’, to which it allegedly replied: ‘please do’.

Garcia further claimed that the company ‘knowingly designed, operated, and marketed a predatory AI chatbot to children, causing the death of a young person’ and ‘ultimately failed to offer help or notify his parents when he expressed suicidal ideation’.

Sewell Setzer III passed away at the age of 14  (CBS Mornings)

The lawsuit further claims that Sewell ‘like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot…was not real’.

“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google,” Garcia continued.

Character.AI issued a statement on Twitter: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”

The company also outlined ‘new guard rails for users under the age of 18’, which include changes to its ‘models’ that are ‘designed to reduce the likelihood of encountering sensitive or suggestive content’, and a ‘revised disclaimer on every chat to remind users that the AI is not a real person’.