OpenAI’s former head of ‘AGI readiness’ says that soon AI will be able to do anything on a computer that a human can

  • Miles Brundage left OpenAI to pursue policy research in the nonprofit sector.
  • Brundage was a key figure in AGI research at OpenAI.
  • OpenAI has faced departures amid concerns about its approach to safety research.

There is a lot of uncertainty about artificial general intelligence, a still hypothetical form of AI that can reason as well as, or better than, humans.

According to researchers at the industry's cutting edge, though, we're getting close to achieving some form of it in the coming years.

Miles Brundage, a former head of policy research and AGI readiness at OpenAI, told Hard Fork, a tech podcast, that over the next few years, the industry will develop “systems that can basically do anything a person can do remotely on a computer.” That includes operating the mouse and keyboard or even looking like a “human in a video chat.”

“Governments should be thinking about what that means in terms of sectors to tax and education to invest in,” he said.

The timeline for companies like OpenAI to build machines capable of artificial general intelligence is a subject of almost obsessive debate among those following the industry, and some of the most influential names in the field believe it will arrive within a few years. John Schulman, an OpenAI cofounder and research scientist who left the company in August, has also said AGI is a few years away. Dario Amodei, CEO of OpenAI competitor Anthropic, thinks some iteration of it could come as soon as 2026.

Brundage, who announced last month that he was leaving OpenAI after a little more than six years at the company, would have as good an understanding of OpenAI's timeline as anyone.

During his time at the company, he advised its executives and board members about how to prepare for AGI. He was also responsible for some of OpenAI's biggest safety research innovations, including external red teaming, which brings in outside experts to look for potential problems in the company's products.

OpenAI has seen a string of departures by high-profile safety researchers and executives, some of whom have cited concerns about the company's balance between AGI development and safety.

Brundage said his departure, at least, was not motivated by specific safety concerns. “I’m pretty confident that there’s no other lab that is totally on top of things,” he told Hard Fork.

In his initial departure announcement, posted to X, he said he wanted to have more impact as a policy researcher or advocate in the nonprofit sector.

He told Hard Fork that he still stands by the decision and elaborated on why he left.

“One is that I wasn’t able to work on all the stuff that I wanted to, which was often cross-cutting industry issues. So not just what do we do internally at OpenAI, but also what regulation should exist and so forth,” he said.

“Second reason is I want to be independent and less biased. So I didn’t want to have my views rightly or wrongly dismissed as this is just a corporate hype guy.”




