OpenAI’s plan to become a for-profit company could encourage the artificial intelligence startup to cut corners on safety, a whistleblower has said.
William Saunders, a former research engineer at OpenAI, told the Guardian he was concerned by reports that the ChatGPT developer was preparing to change its corporate structure and would no longer be controlled by its non-profit board.
Saunders, who flagged his concerns in testimony to the US Senate this month, said he was also concerned by reports that OpenAI’s chief executive, Sam Altman, could hold a stake in the restructured business.
“I’m most concerned about what this means for governance of safety decisions at OpenAI,” he said. “If the non-profit board is no longer in control of these decisions and Sam Altman holds a significant equity stake, this creates more incentive to race and cut corners.”
OpenAI was founded as a non-profit entity and its charter commits the startup to building artificial general intelligence (AGI) – which it describes as “systems that are generally smarter than humans” – that benefits “all of humanity”. However, the potential power of an AGI system has alarmed experts and practitioners including Saunders amid fears that the competitive race to build such technology could lead to safety concerns being overridden.
Saunders said in written testimony to the Senate that he left the company because he “lost faith” that OpenAI would make responsible decisions about AGI. Saunders was a member of staff on OpenAI’s superalignment team, a now-dissolved group tasked with ensuring that powerful AI systems adhere to human values and aims.
Saunders has now said that a switch to a for-profit entity could undermine the aims of OpenAI's existing capped-profit structure, under which a profit-making subsidiary caps returns to investors and employees. All profit generated above that cap flows back to the non-profit for "the benefit of humanity".
Saunders said a purely for-profit entity may not distribute its proceeds back to society if, for instance, it developed technology that made a significant number of jobs obsolete.
“OpenAI was supposed to only allow limited profit for investors and employees, and give the rest to the non-profit,” he said. “Then if OpenAI made AI technology that caused large-scale unemployment, they wouldn’t be just pocketing the profits themselves and would give back to society. Switching to a for-profit suggests this is no longer a priority.”
OpenAI has been contacted for comment. Its charter states that the company is “committed to doing the research required to make AGI safe” and it recently made its safety and security committee an independent entity, which also involved Altman stepping down as one of its members.
OpenAI is reportedly considering restructuring as a public benefit corporation, which has no cap on its profits but is committed to making a positive impact on society. Reuters also reported this week that the non-profit entity would hold a stake in the new business.
OpenAI has declined to comment on the specifics of the restructuring but has said the non-profit organisation would continue to exist. In a statement issued on Thursday, OpenAI’s chair, Bret Taylor, said the board had discussed giving Altman a stake in the business but no specific figures had been discussed. It has been reported that Altman may be given a 7% stake in OpenAI, which is seeking $6.5bn of investment in fundraising that could lead it to be valued at $150bn.
“The board has had discussions about whether it would be beneficial to the company and our mission to have Sam be compensated with equity, but no specific figures have been discussed nor have any decisions been made,” he said.