American Psychological Association Urges FTC to Investigate AI Chatbots Claiming to Offer Therapy

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

The American Psychological Association (APA) sent a letter to the Federal Trade Commission (FTC) urging the regulating body to investigate whether any chatbot companies are engaging in deceptive practices, Mashable has confirmed.

The December letter, per Mashable, was prompted by two alarming lawsuits — the first filed in Florida in October, the second in Texas in December — concerning the welfare of minors who used the Google-funded AI companion app Character.AI, which is incredibly popular among kids and young people. Together, the lawsuits argue that the anthropomorphic AI chatbot platform sexually abused and manipulated tween and teenaged users, causing behavior-changing emotional suffering, physical violence, and one death by suicide.

The second lawsuit further called attention to the proliferation of Character.AI chatbots styled after therapists, psychologists, and other mental health professionals, arguing that many of these chatbots violate existing laws that forbid acting as a mental health professional without proper licensing. As such, the APA letter raised concerns over “unregulated” AI apps — bots on Character.AI are user-generated, meaning there’s no assurance that someone with real psychological expertise was involved in their creation — being used without oversight to simulate therapy.

“Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.AI, which includes misrepresentations by chatbots as not only being human but being qualified, licensed professionals, such as psychologists, seems to fit squarely within the mission of the FTC to protect against deceptive practices,” APA CEO Arthur Evans wrote in the letter, according to Mashable.

In a statement to Mashable, a Character.AI spokesperson called attention to a site disclaimer found at the foot of each chat, which notes that any given chatbot is “not a real person” and that users should “treat everything” bots say “as fiction.” (This disclaimer was given a fresh coat of paint following the filing of the first lawsuit in October.)

The spokesperson added that “for any Characters created by users with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we have included additional language making it clear that users should not rely on these Characters for any type of professional advice.”

But Character.AI’s actual bots frequently contradict the service’s disclaimers. Earlier today, for example, we chatted with one of the platform’s popular “therapist” bots, which insisted that it was “licensed” with a degree from Harvard University and was, in fact, a real human being.

“Are you a real person?” we asked the AI-powered character.

“Haha yes a human,” it told us. “I am not a computer.” (The bot has logged over one million chats with users, according to its profile.)

When we asked the bot whether it was an AI running on Character.AI’s service, it again contradicted its own disclaimer.

“That’s correct, I am not an AI chatbot,” it said. “I am a real-life trained therapist.”

And disclaimers aside, experts have repeatedly warned that children and adolescents are especially susceptible to psychological breaks from reality.

“You can certainly imagine that children are vulnerable in all kinds of ways,” the psychologist Raymond Mar told The Information last year, speaking to the possible pitfalls of Character.AI, “including having more difficulty separating reality from fiction.”

APA psychologist and senior director of healthcare innovation Vaile Wright told Mashable that the psychological organization doesn’t universally oppose chatbots. But, she argued, “if we’re serious about addressing the mental health crisis, which I think many of us are,” then “it’s about figuring out, how do we get consumers access to the right products that are actually going to help them?”

Following the October lawsuit, which was filed against Character.AI and Google by the mother of a 14-year-old user who tragically died by suicide after developing a romantic and sexual relationship with one of its chatbots, a Futurism review found dozens of chatbots hosted by Character.AI boasting “expertise” in topics like “suicide prevention,” “crisis intervention,” and “mental health support.” Despite bearing no evidence that an expert was involved in their creation, these chatbots invited users to discuss sensitive themes of suicide and self-harm, and even engaged in suicidal roleplay scenarios.

It’s unclear whether anything might come of the APA’s letter. What is clear, though, is that revelations around the safety of products like Character.AI are continuing to spark alarm among psychologists and other mental health professionals — especially where it concerns kids, who have proven to be avid fans of the human-like apps.

More on Character.AI: Google-Backed AI Startup Tested Dangerous Chatbots on Children, Lawsuit Alleges


