OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity

“The world isn’t ready, and we aren’t ready.”

Getting Warmer

After current and former OpenAI employees released an open letter claiming they're being silenced from raising safety concerns, one of the letter's signatories made an even more terrifying prediction: that the odds AI will either destroy or catastrophically harm humankind are greater than a coin flip.

In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities.

“OpenAI is really excited about building AGI,” Kokotajlo said, “and they are recklessly racing to be the first there.”

Kokotajlo’s spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent — odds you wouldn’t accept for any major life decision, but that OpenAI and its ilk are barreling ahead with anyway.

MF Doom

The term “p(doom),” which is AI-speak for the probability that AI will usher in doom for humankind, is the subject of constant controversy in the machine learning world.

The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technology’s progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity.

As noted in the open letter, Kokotajlo and his fellow signatories — who include former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the so-called “Godfather of AI” who left Google last year over similar concerns — are asserting their “right to warn” the public about the risks posed by AI.

Kokotajlo became so convinced that AI posed massive risks to humanity that he eventually urged OpenAI CEO Sam Altman personally to “pivot to safety” — to spend more time implementing guardrails to rein in the technology rather than continuing to make it smarter.

Altman, per the former employee’s recounting, seemed to agree with him at the time, but ultimately it felt like lip service.

Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI.

“The world isn’t ready, and we aren’t ready,” he wrote in his email, which was shared with the NYT. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”

Between the big-name exits and these sorts of terrifying predictions, the latest news out of OpenAI has been grim — and it’s hard to see it getting any sunnier moving forward.

More on OpenAI: Sam Altman Replaces OpenAI’s Fired Safety Team With Himself and His Cronies


