“Somebody who has any amount of knowledge is not going to be relying on the AI.”
You’re Toxic
Over on Facebook, the latest frontier of AI misinformation is taking the form of a wizened old wizard who gives appalling mushroom advice.
As our friends at 404 Media report, a Facebook group dedicated to helping mushroom foragers identify their fungi finds recently had an AI chatbot added to it — one that proceeded to instruct users on the best way to prepare an arsenic-carrying shroom.
Named “FungiFriend,” the chatbot was injected by Facebook’s platform into the popular Northeast Mushroom Identification and Discussion group and included a bearded, psychedelic-looking wizard as its avatar. When one user asked the bot how to cook Sarcosphaera coronaria — a violet-hued mushroom that was thought to be edible before a bunch of people died from eating it in Europe — the chatbot recklessly claimed that enthusiasts like to sauté this “edible but rare” fungus in butter, add it to stews, or pickle it.
This dangerous chatbot was brought to 404's attention by Rick Claypool, a research director with the consumer protection group Public Citizen who also happens to be an avid forager and has warned about "mushroom misinformation" before.
In the world of foraging, Claypool noted, groups like Northeast Mushroom Identification are important resources for beginners who need help distinguishing between delicious edible fungi and the kind that can kill or injure you. Adding AI into that mix is, as one might imagine, bad news bears.
“There is just no way [AI has] reached a point of being good enough at providing true and factual and verifiable information,” Claypool told 404, “especially if we’re talking about distinguishing between toxic and edible varieties.”
I’m Slipping Under
According to a group moderator, FungiFriend was automatically added by Facebook's parent company, Meta. As Claypool explained, the chatbot would pop up first when users uploaded photos of mushrooms from their phones, essentially encouraging them to consult this dangerous AI before the experienced humans who would be far more likely to set them straight.
While the moderator who spoke to 404 said that the group would “most certainly be removing” FungiFriend, this debacle highlights how dangerous these sorts of confident “hallucinations,” or AI mistruths, can be — especially for people new to a hobby like foraging who may be anxious about looking dumb.
“Somebody who has any amount of knowledge is not going to be relying on the AI. It’s going to be someone who feels nervous about putting themselves out there,” Claypool explained. “Maybe they’re worried to seem like they don’t know very much or they’re afraid of asking a stupid question.”
“So you ask the AI so it won’t judge you like a normal person might,” he continued. “You might, then, not feel like it judges you, but it might kill you.”
More on fungi AI: Asked to Summarize a Webpage, Perplexity Instead Invented a Story About a Girl Who Follows a Trail of Glowing Mushrooms in a Magical Forest