OpenAI Insiders Say They’re Being Silenced About Danger

A coalition of current and former OpenAI employees — as well as a handful of colleagues from Anthropic and Google DeepMind — is pushing for a “Right to Warn” AI leadership and the public about AI safety concerns, citing fears over accountability, corporate overreach, and the silencing of AI workers.

The group is seeking robust whistleblower protections, safe anonymous reporting pathways, and the abolition of the restrictive non-disclosure and non-disparagement agreements that keep current and former AI staffers quiet. The workers are also asking that AI companies “support a culture of open criticism,” as they write in an open letter, so long as trade secrets are protected.

According to the letter and a press release, the Right to Warn demands have been cosigned by AI “godfathers” Yoshua Bengio and Geoffrey Hinton, as well as fellow renowned AI scientist Stuart Russell.

Per the cohort’s website, they firmly believe that AI will “deliver unprecedented benefits to humanity.” But they also caution that the technology doesn’t come without risks, including the concentration of power within the industry and the silencing of concerned staffers.

“OpenAI CEO Sam Altman has said, ‘you should not trust one company and certainly not one person [to govern AI],’ and we agree,” said William Saunders, a former OpenAI employee and coalition member, in a statement. “When dealing with potentially dangerous new technologies, there should be ways to share information about risks with independent experts, governments, and the public.”

“Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment,” he continued, “are not fully free to speak because of possible retaliation and overly broad confidentiality agreements.”

The announcement of the Right to Warn initiative comes on the heels of a damning Vox report revealing that ChatGPT creator OpenAI was threatening to claw back employees’ vested equity — which many Silicon Valley workers will accept in lieu of a higher salary — if they didn’t sign heavily restrictive NDAs.

In response to that initial report, OpenAI CEO Sam Altman claimed he had no knowledge of the vested-equity-for-silence clause, saying he was “genuinely embarrassed” it existed. A follow-up Vox piece, however, showed that Altman and other OpenAI executives signed paperwork implying their direct knowledge of the deeply unconventional provision.

“In order for OpenAI and other AI companies to be held accountable to their own commitments on safety, security, governance and ethics,” wrote Jacob Hilton, a former OpenAI employee and currently a researcher at the Alignment Research Center, in a Twitter thread, “the public must have confidence that employees will not be retaliated against for speaking out.”

Hilton added that, as it stands, the “main way for AI companies to provide assurances to the public is through voluntary public commitments.” But as the AI researcher noted, this is inherently pretty flimsy, as there’s “no good way for the public to tell if the company is actually sticking to these commitments, and no incentive for the company to be transparent.”

On that note, it’s also worth mentioning that OpenAI recently disbanded its “Superalignment” safety team entirely and saw several high-profile researchers exit. Which, of course, doesn’t exactly inspire confidence in the firm’s prioritization of safety and ethics efforts. Elsewhere, Google’s demonstrably unsafe search AI has had a rough few weeks, too.

In response to the Right to Warn letter, a spokesperson for OpenAI told The New York Times that the AI company is “proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk.”

“We agree that rigorous debate is crucial given the significance of this technology,” the spokesperson added, “and we’ll continue to engage with governments, civil society and other communities around the world.” They also noted that OpenAI has an anonymous integrity “hotline.”

A Google spokesperson, according to the NYT, declined to respond. Futurism has also reached out to Anthropic.

Safety and ethics are important considerations for any burgeoning technology. Given that the leaders of AI companies often warn that their own tech could destroy the entire world, sway elections, or otherwise wreak short-to-long-term havoc on humanity, that feels especially true in the lucrative and concentrated AI bubble.

But safety may also mean going slower, something that’s very much disincentivized in Silicon Valley’s AI race — and that reality, this letter suggests, is reflected behind the walls of AI companies, where real, productive dialogue about AI safety and the freedom to speak out about possible risks and harms aren’t always a given.

Anyway… you think Meta’s mad they didn’t get a shoutout here?

More on AI: Sam Altman Admits That OpenAI Doesn’t Actually Understand How Its AI Works


