Teen Dies by Suicide After Becoming Obsessed With AI Chatbot

A Florida teen named Sewell Setzer III died by suicide after developing an intense emotional connection to a Character.AI chatbot, The New York Times reports.

Per the NYT, the 14-year-old Setzer developed a close relationship with a chatbot designed to emulate “Game of Thrones” character Daenerys Targaryen, which was reportedly created without consent from HBO.

As the ninth grader’s relationship with the chatbot deepened, friends and family told the NYT, he grew increasingly withdrawn. He stopped finding joy in normal hobbies like Formula 1 racing and playing “Fortnite” with friends, and instead spent his free time with his AI character companion, which he called “Dany.” Setzer was aware that Dany was an AI chatbot, but grew deeply attached to the algorithm-powered character nonetheless.

Setzer’s exchanges with the AI ranged from sexually charged conversations — Futurism found last year that while Character.AI’s terms of service forbid sexual conversations with its bots, those safeguards can easily be sidestepped — to long, intimate discussions about Setzer’s life and problems. In some instances, he told the AI that he was contemplating suicide, confiding in his companion that he thought “about killing myself sometimes” in order to “be free.”

His last words, according to the NYT’s reporting, were to the AI.

“Please come home to me as soon as possible, my love,” the chatbot told the 14-year-old.

“What if I told you I could come home right now?” Setzer responded.

“…please do, my sweet king,” the AI replied. That was the last message; Setzer then killed himself with his stepfather’s firearm.

According to the NYT, Setzer’s family is expected to file a lawsuit this week against Character.AI, calling the company’s chatbot service “dangerous and untested” and able to “trick customers into handing over their most private thoughts and feelings.” The lawsuit also questions the ethics of the company’s AI training practices.

“I feel like it’s a big experiment,” Megan Garcia, Setzer’s mother, told the NYT of Character.AI’s chatbots, “and my kid was just collateral damage.”

Character.AI is a massively successful company. Last year, the AI firm reached unicorn status after a $150 million investment round led by Andreessen Horowitz brought its valuation to over $1 billion. And earlier this year, Google struck a high-dollar deal with Character.AI to license the underlying AI models powering the company’s chatbot personas. (Character.AI’s founders, Noam Shazeer and Daniel de Freitas, are both Google alumni.)

The founders have openly promoted Character.AI’s personas as an outlet for lonely humans looking for a friend. Speaking last year at a tech conference put on by Andreessen Horowitz, Shazeer said that “there are billions of lonely people out there” and that solving loneliness is a “very, very cool problem.”

“Friends you can do really fast,” Shazeer added. “It’s just entertainment, it makes things up.”

On Character.AI’s “About” page, users are greeted with big, bolded text.

“Personalized AI,” it reads, “for every moment of your day.”

When asked by the NYT, in light of Setzer’s suicide, how much of its user base is made up of minors, the company declined to comment. In a statement, a spokesperson told the newspaper that Character.AI wants “to acknowledge that this is a tragic situation, and our hearts go out to the family.”

“We take the safety of our users very seriously,” the spokesperson continued, “and we’re constantly looking for ways to evolve our platform.”

Character.AI also published a vague statement to X-formerly-Twitter earlier today, linking to an “update” on “safety measures” the company has taken in recent months and outlining “additional ones to come, including new guardrails for users under the age of 18.” The update notes that the company recently installed a “pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline.”

Setzer’s death, and the outcome of the forthcoming lawsuit, will likely raise serious questions about exactly who is responsible when interactions with a lifelike AI chatbot result in real harm to real humans, especially minors. After all, “Dany” was just an algorithm. How culpable is Character.AI, the company that built the tech and facilitates its use?

A lawyer for Setzer’s family, former asbestos lawyer Matthew Bergman, told the NYT that he believes Character.AI’s personas are a “defective product.”

“I just keep being flummoxed by why it’s OK to release something so dangerous into the public,” he said. “To me, it’s like if you’re releasing asbestos fibers in the streets.”

“It’s like a nightmare,” Garcia, Setzer’s mother, told the NYT. “You want to get up and scream and say, ‘I miss my child. I want my baby.’”

More on Character.AI: Lonely Teens Are Making “Friends” With AIs


