Can humans purge the bots without sacrificing our privacy?


There is a popular theory that the internet is “dead.” The claim is that the vast majority of the internet’s content and traffic is now generated by artificial intelligence and bots, talking aimlessly to each other, rendering the online world effectively lifeless – devoid of human creation.

This dystopian view stands in stark contrast to the early internet of the 1990s. Back then, it seemed like an infinite canvas onto which humanity was painting its creativity, humor, and personal experiences. In the 2000s, we filled forums with Lolcats and “O RLY?” owls. We played colorful, cheeky online games at Candystand, Newgrounds, and Addicting Games. We posted videos of ourselves lip-syncing while chair-dancing. The internet seemed vibrantly fun and undeniably human.

But then came the bots, computer programs automated to perform certain online tasks, such as crawling webpages, posing as real people on websites, leaving reviews, boosting or denigrating products, and attempting to break into the accounts of genuine users. The online world transformed from a hopeful place where online connections could cement earthly ones into something cheap, synthetic, and untrustworthy. (How many times have you read online comments or reviews and wondered whether a real person actually wrote them?) Artificial intelligence advances have sent this metamorphosis into overdrive, manufacturing realistic images, videos, voices, and text. Online, you cannot believe your eyes or ears.

“There is a substantial risk that, without further mitigations, deceptive AI-powered activity could overwhelm the internet.”

Adler et al.

So the “dead internet” theory is no longer so implausible. According to the 2024 Imperva Bad Bot Report, the proportion of internet traffic generated by bots reached almost 50% in 2023, up 2% from the year prior. It’s hard to get a handle on the share of essentially fake websites, social media accounts, comments, reviews, and emails being churned out by bots, but it is surely vast.

In August, an international group of technologists, policymakers, and academics – hailing from institutions including Harvard, MIT, Oxford, and Berkeley, and companies such as OpenAI and Microsoft – outlined a plan to halt the bots’ online takeover (or at least create human redoubts). Their solution? Personhood credentials, “digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information.”

Passport, driver’s license, Social Security number, birth certificate: These all-important signifiers of identity have been required for certain services and activities for years. Could an online “personhood credential” (PHC) join them? And what kind of unintended consequences might spill out from it?

An increasingly deceptive online world

“There is a substantial risk that, without further mitigations, deceptive AI-powered activity could overwhelm the internet,” the authors write.

“A future internet without PHCs means anyone you meet online is assumed a bot until proven otherwise,” Dr. Sean McGregor, a co-author on the report and Director of Advanced Testing Research at the Digital Safety Research Institute, told Freethink. “Effectively, either PHCs are widely adopted or people become responsible for doing the personhood verification themselves.”

Two forms of deception that threaten the internet’s future are “sockpuppets” and bot attacks. “Sockpuppets” here can mean bots or AIs purporting to be people. These are behind the Facebook and X profiles that spread deliberate disinformation. They are also the kind of bots used to malign (or boost) certain businesses, perpetrate scams, and try to alter financial markets with misleading information. They also regularly overwhelm US government public comment processes for new laws and regulations. Millions of fake comments were submitted to the Federal Communications Commission’s 2017 public request for comment on net neutrality, for example.

“[Bad] actors’ increased access to sophisticated and inexpensive AI tools may make their attacks far more effective.”

Adler et al.

Bot attacks pose a different threat. Malicious actors create groups of bots that send mass email spam, attempt brute-force break-ins to online customer accounts, and overload internet services with requests in distributed denial-of-service (DDoS) attacks, triggering major outages. DDoS attacks are now regular occurrences: Microsoft’s Azure cloud service was disrupted for eight hours in July, and “hacktivists” recently targeted over 50 organizations in France after the French government arrested Telegram founder Pavel Durov.

Tools like CAPTCHAs and AI content detectors are trying to keep the internet predominantly human. Charging fees, utilizing document and appearance-based verification, and asking for phone numbers and email addresses are additional strategies to limit bot proliferation. However, the authors worry that these methods will grow increasingly obsolete in the looming AI era.

“Although bad actors have perpetrated deceptive attacks for decades, actors’ increased access to sophisticated and inexpensive AI tools may make their attacks far more effective—harder to distinguish and also more prevalent,” they wrote.

Personhood credentials to the rescue?

AIs might be able to look and sound like us in the digital world, but until they take convincing corporeal form with the help of (as yet) sci-fi robotics, they are merely pixels on a screen. So acquiring personhood credentials will necessitate an errand, perhaps something like visiting the department of motor vehicles (DMV).

“To get a personhood credential, you are going to have to show up in person or have a relationship with the government, like a tax ID number,” Tobin South, a graduate student in MIT’s Media Lab and one of the report’s authors, told MIT News. “There is an offline component. You are going to have to do something that only humans can do. AIs can’t turn up at the DMV, for instance.”

The authors envision various organizations issuing personhood credentials (PHCs). These might be state or federal government entities or private providers regulated by governments. “Governments have a history of proving who is human, so they are a great place to start,” McGregor argues.

Individuals could visit any of these issuers in person and provide proof of identification. They might choose to use existing documents like a passport or government-issued ID. Or they could opt for a biometric scan, submitting their palm, iris, or fingerprint for measurement. Supplying this information ensures that only one PHC is given to one person — but, importantly, the person’s identity is not linked to the specific credential that’s given out. An anonymous PHC is then issued, stored digitally on one’s devices, and managed via built-in applications. It could then be used across the internet to prove humanity where necessary. Every few years, a PHC would need to be re-authenticated in order to ensure it still represents a real, unique person.

While a PHC proves your humanity, it would not be identification — users would maintain anonymity.

If the idea of visiting the DMV to gain access to online services hasn’t turned you off, you may still be raising an eyebrow at the privacy implications.

Crucially, the authors insist, no other information ought to be shared or connected to the credential. While a PHC proves your humanity, it would not be identification — users would maintain anonymity, and it could not be used to trace their digital activity across the web or link it back to their real world identity.

The authors say that public key cryptography would likely serve as the foundation for this system of anonymous personhood credentials.

“A PHC issuer could maintain a list of public keys (each related to a valid credential), each of which has a paired secret private key. When a new person successfully enrolls in the PHC system, the issuer lets this person add exactly one public key to the list of valid keys—the private key is known only to the user enrolling,” they described.

Now, how does a user prove they are the holder of a valid PHC key to an online service provider, without revealing the key itself (thus risking its theft)? Here, the authors suggest zero-knowledge proofs. 

“Zero-knowledge proofs are cryptographic protocols that enable a ‘prover’ to convince a ‘verifier’ of a statement’s truth, without revealing any additional information beyond the validity of the statement,” the authors explained. “The user proves to the service provider that the statement ‘I hold a valid PHC’ is true, without revealing which PHC—for instance, by proving ‘I hold a secret private key that pairs with some public key on the issuer’s list.’”
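The report does not prescribe a specific protocol, but the key-list-plus-proof idea the authors describe can be sketched with a classic Schnorr-style identification scheme made non-interactive via the Fiat-Shamir heuristic. The snippet below is a toy illustration only: the group parameters, function names, and overall structure are my own assumptions, and the numbers are far too small for real security.

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof of knowledge of a private key,
# illustrating "I hold a secret private key that pairs with some public
# key on the issuer's list" without revealing the key itself.
# Tiny demo group parameters, NOT cryptographically secure.
P = 2039   # prime modulus (P = 2*Q + 1)
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def keygen():
    """Enrollment: the user generates a keypair; the issuer would add
    the public key y to its list of valid credentials."""
    x = secrets.randbelow(Q - 1) + 1   # private key, known only to the user
    y = pow(G, x, P)                   # public key, added to issuer's list
    return x, y

def challenge(y, r):
    """Fiat-Shamir heuristic: derive the challenge by hashing public
    values, making the protocol non-interactive."""
    digest = hashlib.sha256(f"{G}:{y}:{r}".encode()).digest()
    return int.from_bytes(digest, "big") % Q

def prove(x, y):
    """Prover demonstrates knowledge of x (where y = G^x mod P)
    without revealing x itself."""
    k = secrets.randbelow(Q - 1) + 1   # fresh one-time nonce
    r = pow(G, k, P)                   # commitment
    c = challenge(y, r)
    s = (k + c * x) % Q                # response blends nonce and key
    return r, s

def verify(y, proof):
    """Verifier checks G^s == r * y^c (mod P) using only public values."""
    r, s = proof
    c = challenge(y, r)
    return pow(G, s, P) == (r * pow(y, c, P)) % P

x, y = keygen()
assert verify(y, prove(x, y))   # a genuine credential holder passes
```

The verification works because G^s = G^(k + c·x) = G^k · (G^x)^c = r · y^c, an equation a prover can only satisfy (except with negligible probability) by actually knowing x. A real deployment would layer on unlinkability across services, which this sketch omits.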

Clear benefits, but real difficulties

Some benefits of PHCs are obvious. Sites and online services that require PHCs could practically rid themselves of many kinds of bots, becoming far less susceptible to attacks and making their ecosystems safer for customers. Websites that merely provide an option to verify users’ PHCs could label real humans, allowing users to know who is a confirmed person.

But there are genuine risks and challenges. For one, in a system where PHCs become widely adopted and almost a necessity for using the internet, PHC issuers will hold a lot of power. Imagine a country where a repressive government is the only issuer of PHCs. It could grant or withhold them based on a person’s politics or personal characteristics, or impose other bogus requirements. Free speech might be stifled. In democratic societies, multiple competing issuers of equally valid PHCs would be an essential check on that power.

This raises its own issue. With multiple PHC issuers available, people looking to make a quick buck could acquire multiple PHCs and sell them to unscrupulous actors seeking to make bots that pose as real humans. This could be prevented by having PHC issuers share a universal database of identity information, ensuring no one receives multiple PHCs. However, that presents further risks around privacy and centralization. The authors recommend a middle ground, where each person can obtain only a bounded number of credentials in the overall ecosystem, and issuers share only basic information with each other about how many PHCs a unique person has been issued.

“The technological issues are solved for PHCs, but we need humans to demand the solution.”

Sean McGregor

Still, some people might find it challenging to acquire PHCs, whether due to time constraints or difficulty traveling to an issuer. Others might not want a PHC due to privacy concerns: even if the system the researchers conceived really is foolproof, there is no guarantee that governments or private PHC providers would implement it that way. Hackers are clever, and human errors are common. Whatever the reason, people who end up without a PHC might find their contributions online discounted, or they might be excluded from major services altogether.

PHC issuers themselves could also be subject to cyberattacks, endangering their clients’ credentials.

Of course, for these hypothetical concerns to materialize, PHCs must first see wide adoption, and it’s tough to see this happening without major pushes from politicians, key online service providers, and everyday internet users.

“PHCs are a multi-sided market requiring collective adoption to be useful,” McGregor said. “The technological issues are solved for PHCs, but we need humans to demand the solution.”

Ready or not, PHCs are already here

These challenges haven’t stopped one startup from trying to set up its own PHC. In 2023, Tools for Humanity, founded four years earlier by OpenAI chief executive Sam Altman, Max Novendstern, and Alex Blania, launched World ID, the first mass personhood credential. To obtain one, a human must have their iris scanned by one of the company’s orbs. In exchange, they receive a few coins of the company’s cryptocurrency, WLD. The orb then creates a unique code and links it to a user-created ID held on the World app. The company’s system is blockchain-based and utilizes zero-knowledge proofs to prove a holder’s personhood to service providers.

Tools for Humanity has taken in more than a quarter-billion dollars of investor funds and has signed up more than 6 million users, mostly in South America. The company, however, started running into serious roadblocks this year, as government agencies in countries ranging from Kenya and India to Brazil and Spain ordered it to halt operations out of concern for their citizens’ privacy.

The saga showcases how hesitant governments are to cede identity controls to private companies. But do their concerns outweigh the threats that AI and bots pose to the internet’s denizens? That is a choice people across the internet will increasingly have to make in the years ahead.




