ChatGPT-maker OpenAI announced this morning that it has begun training its next “frontier model” and has formed a new Safety and Security Committee led by current board members Bret Taylor (OpenAI board chair, co-founder of customer service startup Sierra AI, former Google Maps lead, and former Facebook CTO), Adam D’Angelo (CEO of Quora, maker of the AI model aggregator app Poe), Nicole Seligman (former executive vice president and global general counsel of Sony Corporation), and Sam Altman (current OpenAI CEO and one of its co-founders).
As OpenAI writes in a company blog post:
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”
The 90-day timer starts now
Notably, the company outlines the role this new committee will play in steering the development of the new frontier AI model, stating:
“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
Therefore, it is reasonable to conclude that whatever the new frontier model is — be it called GPT-5 or something else — it won’t be released for at least 90 days, giving the new Safety and Security Committee the time necessary to “evaluate and further develop OpenAI’s processes and safeguards” and to issue its recommendations to the “full Board” of directors.
That means OpenAI’s board should receive the new Safety and Security Committee’s recommendations by no later than August 26, 2024.
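For those keeping score at home, the date arithmetic is easy to verify with a few lines of Python. This is a minimal sketch that assumes the 90-day clock started on the day of the announcement, May 28, 2024:

```python
from datetime import date, timedelta

# Assumption: the committee's 90-day review period began on the
# date of OpenAI's announcement, May 28, 2024.
start = date(2024, 5, 28)
deadline = start + timedelta(days=90)

print(deadline.strftime("%B %d, %Y"))  # August 26, 2024
```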
Why 90 days, as opposed to a longer or shorter window? OpenAI doesn’t really say, but a 90-day review period is a common convention in business for evaluating and providing feedback on processes, and it seems about as good a choice as any: not too short, not too long.
The new safety board isn’t independent
The news was immediately criticized, with some noting that the new committee is composed entirely of OpenAI’s “own executives,” meaning the evaluation of the company’s safety measures will not be independent.
OpenAI’s board of directors, several of whom now sit on the new Safety and Security Committee, has been a source of controversy before.
The prior board of directors of OpenAI’s nonprofit holding company fired Altman as CEO and removed him from all duties at the company on November 17, 2023, less than two weeks before the one-year anniversary of ChatGPT’s launch, saying he was “not consistently candid” with them, only to face a staunch revolt by employees and major pushback by OpenAI investors, including big backer Microsoft.
Altman was ultimately reinstated as OpenAI CEO on November 21, 2023, and the board was reconstituted with Taylor, Larry Summers, and D’Angelo (the lone holdover from the prior board), with Microsoft joining as a non-voting observer. At the time, the new board was criticized for being all-male.
On March 8 of this year, additional members Sue Desmond-Hellmann (former CEO of the Bill & Melinda Gates Foundation), Fidji Simo (CEO and chair of Instacart), and Seligman were named to the board, with Altman rejoining it as well.
A bumpy time for OpenAI (and the AI industry)
OpenAI released its latest AI model — GPT-4o — earlier this month and has been dealing with a bit of a public relations meltdown ever since.
Despite the new model being the first of its kind from OpenAI (and possibly the world) to be trained on multimodal inputs and outputs from the get-go, rather than stringing together several models trained separately on text and media (as with the prior GPT-4), the company drew criticism from actor Scarlett Johansson. She accused it of approaching her about voicing its new assistant and then showcasing a voice that she and others said sounded like hers (specifically, like Samantha, the AI assistant character she voiced in the sci-fi movie Her). OpenAI countered that it had separately commissioned the voice from a different actor and never intended or instructed her to imitate Johansson.
OpenAI’s chief scientist and co-founder Ilya Sutskever also resigned, along with Jan Leike, co-leader of the superalignment team dedicated to safeguarding against future superintelligent AI. Leike criticized OpenAI on his way out the door for prioritizing “shiny products” over safety, and the entire superalignment team was subsequently disbanded.
In addition, OpenAI was criticized for imposing a restrictive non-disparagement clause in separation agreements for outgoing employees, along with a clawback provision for vested equity if that clause or other terms were violated. The company has since said it will not enforce the provision and will release former employees from those terms (such language is also, I might add, not actually unusual, based on my experience working in tech at Xerox and the self-driving startup Argo AI).
However, OpenAI has also found success signing new partners in mainstream media, whose authoritative journalism will be used as training data and surfaced in ChatGPT, and it has seen interest in its Sora video generation model from musicians and filmmakers as Hollywood reportedly eyes AI to streamline and reduce costs in film and TV production.
Meanwhile, rival Google took heat for the answers generated by its new AI Overviews feature in search, suggesting a wider backlash against the entire gen AI field among everyday users.
We shall see whether OpenAI’s new safety committee can assuage some critics and, perhaps more importantly, regulators and potential business partners ahead of the launch of whatever it’s cooking up.