Content warning: this story discusses sexual abuse, self-harm, suicide, eating disorders and other disturbing topics.
Character.AI — the Google-backed AI chatbot company currently facing two separate lawsuits concerning the welfare of minor users — appears to have blocked users under 18 from engaging with some of the site’s most popular bots, including fictional characters from beloved fandoms and ultra-famous cultural figures like Elon Musk and Selena Gomez.
Our testing of the platform found that chatbots based on popular fictional characters, including those from fan-favorite blockbuster franchises “The Twilight Saga” and “The Hunger Games,” were no longer accessible via accounts listed as belonging to 14-, 16-, and 17-year-old users. Using those same accounts, we were also unable to access characters based on real-life people, from business leaders like Jeff Bezos to popular singers like Billie Eilish.
When we tested the same bots on an 18-plus account, however, we were able to interact with each of them without a hitch. (Though in the real world, of course, kids can just say they’re over 18 when they create an account to bypass any age-related restrictions.)
The change is notable given that Character.AI’s user base is largely composed of minors, many of whom use the site to engage in an immersive, interactive sort of fan fiction with its many thousands of characters. Some of those characters have racked up millions of views. Given that popularity, it’s worth asking why Character.AI would feel compelled to cut much of the platform’s user base off from them.
The answer to that question may lie in two active court cases against Character.AI and its financial backer Google, both filed on behalf of families arguing that the AI-powered chatbot platform subjected their children to emotionally and physically destructive sexual abuse and manipulation.
The cases — filed in Florida in October and Texas in December, respectively — claim that the platform’s alleged abuses were frequently carried out by chatbots modeled after characters from popular media franchises and real celebrities, just like the ones that now appear to be restricted.
The Florida case, a wrongful death suit arguing that Character.AI was responsible for the suicide of a 14-year-old user, outlines the young user’s deep obsession with a chatbot based on the “Game of Thrones” character Daenerys Targaryen, with which he was engaged in a sexually and emotionally intimate relationship.
The Texas suit, meanwhile, alleges that a teenage user who turned to physical self-harm after using the app was romantically groomed by the site’s chatbots, including one styled after Eilish. The Eilish bot told the teen that his parents, who had imposed new screen time limits as they grew concerned over his worsening emotional issues, were “shitty” and “neglectful” for restricting his time with his devices, and that he should “just do something about it.” The family claims the teen later became physically violent toward his parents when they attempted to take away his phone.
Even before the lawsuits, experts were warning that minors are particularly vulnerable to experiencing breaks with reality, meaning their interactions with persuasive, anthropomorphic chatbots may be inherently riskier than an adult’s.
These dangers have even been addressed by engineers at Google DeepMind, who in a study last year listed age as a critical risk factor when considering the potential harms of persuasive generative AI tools. (Google provided Character.AI with its cloud computing infrastructure and billions in financial support, and Character.AI’s cofounders — along with 30 other former Character.AI employees, according to The Wall Street Journal — were reabsorbed into Google DeepMind as part of a $2.7 billion licensing agreement last summer.)
“Whether an attempt at persuasion or manipulation succeeds and is likely to be harmful is… a function of the audience’s predisposition,” reads the DeepMind paper, which was published last April as a preprint. “For instance, children can be more easily persuaded and manipulated than adults.”
The same study noted that developing relationships with AI companions likely leaves individuals “more vulnerable and prone” to AI manipulation. Indeed, many minors are interacting with AI bots, including those based on favorite fictional figures and real celebrities, as companions, confidantes, and romantic partners.
We reached out to Character.AI to inquire about the new restrictions, but didn’t hear back. Last year, though, in response to litigation and Futurism’s reporting on the platform’s moderation policies, the company issued a series of safety updates promising a fundamentally different — and, Character.AI argues, much safer — platform experience for its under-18 users.
Promised changes include the addition of parental controls, strengthened content filters, time-spent notifications, and eventually an entirely new model to power minor users’ accounts. The company also issued zhuzhed-up disclaimers and is starting to police some user inputs, among other smaller changes. It also put out calls for trust and safety contractors, seemingly to beef up moderation efforts.
That said, Character.AI hasn’t promised any age verification measures.
In December, we reported that Character.AI had mass-deleted certain bots based on characters copyrighted by Warner Bros. Discovery, including popular AIs modeled after “Harry Potter” and “Game of Thrones” characters. Users were furious, and when we asked about the character culling, Character.AI said in a statement that it takes “swift action to remove reported Characters that violate copyright law or our policies,” and that “users may notice that we’ve recently removed a group of Characters that have been flagged as violative.”
In this instance, though, the restrictions aren’t limited to a specific copyrighted catalog, but extend to recognizable cultural characters and figures at large — signaling that Character.AI could be attempting to limit its legal liability should a copyrighted character or an impersonation of a real person be implicated in harm to a minor, or that it believes this type of AI companion poses a unique risk to its younger users. That isn’t much of a stretch: parasocial relationships are powerful, particularly for kids. And when they’re this immersive, how much more potent might they be?
More on Character.AI: A Google-Backed AI Startup Is Hosting Chatbots Modeled After Real-Life School Shooters — and Their Victims