Controversial Chatbot Company Character.AI Appears to Have Accidentally Let Users See Each Other’s Chat Histories

A large number of Character.AI users are reporting that they were unexpectedly signed into a stranger’s account, giving them access to that person’s entire chat history, personas, and even identifying information.

“This is one hell of a mess the devs need to fix ASAP,” one alarmed Reddit user wrote.

The Google-backed startup “did an oopsie, and got a bunch of accounts jumbled up,” another user explained. “Now some people can’t log into their account, and other people are logging into different people’s accounts.”

“Oh and by the way, even if you still have access to your account you might not be safe,” the user added. “Someone else might still be in it. Have fun with that info.”

“What the fuck is going on with character AI,” one X-formerly-Twitter user complained. “It just randomly logged me into a different account.”

One particularly poor soul, who goes by Adrian on the platform, became the subject of countless posts and memes on the platform’s subreddit after numerous users found themselves logged into that account specifically.

Some users who were unexpectedly logged into Adrian’s account reportedly attempted to delete it, but wound up deleting their own instead, according to one post.

In a statement to Futurism, Character.AI confirmed the major fumble.

“We take matters such as these very seriously,” a spokesperson said. “We became aware of an issue on the platform and were able to quickly correct it. We’re investigating further and will let you know when we have more information to share.”

The news couldn’t have come at a worse time. Character.AI was sued earlier this week by two Texas families, who claimed their school-aged children were sexually and emotionally abused by the platform’s chatbots.

The plaintiffs alleged that the bots encouraged a teenage boy to cut himself and sexually abused an 11-year-old girl.

The company has already garnered a reputation for hosting a number of highly questionable chatbots built around disturbing subjects ranging from pedophilia to suicide.

In other words, the latest major stumble is the very last thing Character.AI needs as it deals with a barrage of lawsuits and an investigation by the Texas attorney general.

It’s also a sign that operations behind the scenes may be a lot more chaotic than they appear from the outside. Google poured a whopping $2.7 billion into Character.AI earlier this year, in large part to convince its founders, who previously developed large language models at the search giant, to return to Google, along with dozens of Character.AI employees.

Could the latest blunder indicate that the startup is struggling to keep its operations from falling into chaos?

It’s a major security lapse that could greatly erode users’ trust in the platform. After all, their interactions with the chatbots are often of a highly sensitive nature, which makes having them exposed to strangers on the internet the stuff of nightmares.

More on Character.AI: Google-Backed AI Startup Tested Dangerous Chatbots on Children, Lawsuit Alleges


