AI technology is becoming so advanced that researchers argue, in a new paper, that we need a better way to verify that a person online is human and not an AI bot.
The researchers, from Ivy League universities and companies including OpenAI and Microsoft, propose a “personhood credential” (PHC) system in a yet-to-be-peer-reviewed paper, a human verification scheme meant to replace existing processes like CAPTCHAs.
But to anybody concerned about privacy and mass surveillance, that’s a hugely imperfect solution that offloads the burden of responsibility onto end users — a common tactic in Silicon Valley.
“A lot of these schemes are based on the idea that society and individuals will have to change their behaviors based on the problems introduced by companies stuffing chatbots and large language models into everything rather than the companies doing more to release products that are safe,” surveillance researcher Chris Gilliard told The Washington Post.
In the paper, the researchers propose the PHC system because they’re concerned that “malicious actors” will leverage AI’s mass scalability and its propensity to convincingly ape human actions online to flood the web with non-human content.
Chief among their concerns: AI’s ability to spit out “human-like content that expresses human-like experiences or points of view”; digital avatars that look, move and sound like real humans; and AI bots’ increasing skill at mimicking “human-like actions across the Internet,” such as “solving CAPTCHAs when challenged.”
That’s why the idea of PHCs is so attractive, the researchers argue. An organization that offers digital services, such as a government, could issue one unique personhood credential to each human end user, who would then verify they’re human via zero-knowledge proofs, a technique borrowed from cryptography that lets a person prove a statement is true, in this case that they hold a valid credential, without revealing the underlying data itself.
End users would store their credentials digitally on their personal devices, which would help preserve anonymity online, according to the researchers.
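The paper doesn’t pin down a specific construction, but to make the “prove it without revealing it” idea concrete, here is a minimal sketch of one classic zero-knowledge technique, the Schnorr identification protocol, in Python. Everything below, including the toy parameters, is an illustrative assumption rather than the PHC proposal’s actual design, and the numbers are far too small to be secure.

```python
import secrets

# Toy group parameters (FAR too small for real use; illustration only).
# g generates a subgroup of prime order q in the integers mod p.
p, q, g = 23, 11, 2

# --- Credential issuance: the user keeps x secret; y is made public. ---
x = secrets.randbelow(q - 1) + 1   # secret key, 1 <= x < q
y = pow(g, x, p)                   # public key bound to the credential

# --- Schnorr identification: prove knowledge of x without sending it. ---
r = secrets.randbelow(q)           # prover's one-time nonce
t = pow(g, r, p)                   # prover -> verifier: commitment
c = secrets.randbelow(q)           # verifier -> prover: random challenge
s = (r + c * x) % q                # prover -> verifier: response

# Verifier checks g^s == t * y^c (mod p); the secret x never travels.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("credential verified without revealing the secret key")
```

The verifier learns only that the prover knows the secret behind the public key, which is the whole point: a service can confirm “this is a credentialed human” without learning who that human is.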
The credentialing system could replace or augment human verification processes on the Internet such as the aforementioned CAPTCHAs and biometrics like fingerprints.
It seems like a great solution on paper, but the researchers concede a PHC system still has pitfalls.
For one thing, it seems inevitable that many people would sell their PHC to AI spammers, giving automated content an air of credibility and undercutting the project’s goals.
Any organization that issues this type of credential could become too powerful, and the whole system would still be vulnerable to attacks from hackers, according to the paper.
“One significant challenge for a PHC ecosystem is how it may concentrate power in a small number of institutions—especially PHC issuers, but also large service providers whose decisions around PHC use will have large repercussions for the ecosystem,” the paper reads.
A credentialing system can also be a source of friction for less internet-savvy users such as elderly people, who are often the target of online scams.
That’s why the researchers argue that governments should investigate the use of PHCs through pilot programs.
But the PHC proposal sidesteps a crucial issue: this kind of system just puts another digital burden on end users, who already have to contend with spam and other gunk in their crowded digital lives. Tech companies are the ones who unleashed this problem, so they should be the ones to solve it.
One step they can take is watermarking the content their AI models produce, or developing a process that can detect the telltale signs that a piece of data was AI-generated. Neither approach is foolproof, but both shift the burden of responsibility back onto the source of the AI bot problem.
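Watermarking here means statistically biasing a model’s output so a detector can flag it later. As a rough illustration of how detection works under one scheme from the research literature, the “green list” approach, and not anything a company has confirmed shipping, here’s a toy word-level detector in Python; the hashing scheme, constants, and function names are all assumptions made for this sketch.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically classify `word` as green or red, seeded by the
    previous word. Ordinary human text lands on green roughly
    GREEN_FRACTION of the time; a watermarking generator would
    deliberately oversample green words."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def watermark_z_score(text: str) -> float:
    """z-score of the observed green-word count against the binomial
    null hypothesis (no watermark). Large positive values suggest the
    text was produced by a cooperating, watermarked model."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    n = len(words) - 1  # number of word-to-word transitions
    greens = sum(is_green(words[i - 1], words[i]) for i in range(1, len(words)))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

sample = "the quick brown fox jumps over the lazy dog"
print(f"z = {watermark_z_score(sample):.2f}")  # near 0 for unwatermarked text
```

A model cooperating with this detector would bias its sampling toward “green” words during generation, pushing the z-score of its output well above what human-written text produces, which is exactly why the technique isn’t foolproof: paraphrasing or light editing erodes the signal.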
And if tech companies absolve themselves of this responsibility altogether, that’s just another black mark on Silicon Valley, which has made a habit of unleashing problems nobody asked for while monetizing their impacts.
That’s similar to how tech companies have gobbled up precious electricity and water to power up AI data centers, while communities — especially in drought-stricken areas — suffer from this allocation of resources.
And the PHC, while shiny and attractive on paper, passes the buck once again.
More on AI: The US Government Just Banned Fake AI-Generated Reviews