Leaked OpenAI Documents Show Sam Altman Was Clearly Aware of Silencing Former Employees

OpenAI’s credibility — and the credibility of its CEO, Sam Altman — is crumbling.

Last week, amid a surprise string of high-profile executive and safety team departures, Vox revealed that the ChatGPT creator had pressured employees into signing draconian non-disclosure and non-disparagement agreements by threatening to claw back exiting OpenAI employees’ vested equity in the multibillion-dollar AI company.

Clawing back vested equity — in short, the slice of company ownership an employee has earned through months or years of work there — is a highly unusual practice to begin with. That’s especially true in startup-powered Silicon Valley, where tech workers often forgo high salaries in favor of equity agreements, betting they’ll get rich later when a successful startup like OpenAI eventually goes public. For OpenAI to play bizarre contractual take-backsies in exchange for narrative control over former employees would be an awful look for any company — let alone a supposedly “open” venture claiming it’s the best one to build the imagined all-knowing AI that OpenAI’s leaders say will power humanity’s future.

In response to the Vox report, Altman apologetically took to X-formerly-Twitter to admit that yes, “there was a provision about potential equity cancellation in our previous exit docs.” But according to the CEO, though the clause was there, the company never actually clawed anything back. Most importantly, he claimed he had no knowledge of the provision.

“This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI,” Altman continued, in a tone that can only be described as sheepish. “I did not know this was happening and I should have.”

But according to Vox’s latest follow-up, Altman was not, as he claimed in his tail-between-legs tweet, in the dark about the equity clauses.

Documentation reviewed by Vox reveals that several company leaders — including OpenAI chief strategy officer Jason Kwon, who reportedly told staffers following the initial Vox report that OpenAI leadership “caught” the provision “~month ago” — signed documents that plainly outlined the stifling clawback provision. The list of executives includes Altman, whose signature, according to Vox‘s reporting, is on “incorporation documents” for the holding company that manages OpenAI equity; these documents contain “multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees” or, if employees choose not to actually return the equity, “block them from selling it” altogether.

In other words, unless someone forged the CEO’s signature, he gave express permission for these clauses to exist. That leaves only two plausible explanations for his alleged ignorance: either he didn’t fully read the documents he was signing, or he was lying.

Complicating the denials further is how employees were reportedly treated on their way out. According to the Vox report, the equity provisions were no secret to the OpenAI representatives handling departures, who in some cases gave outgoing employees just seven days to decide whether to sign — all the while, in instances reviewed by Vox, emphasizing the possible clawbacks.

“We want to make sure you understand that if you don’t sign, it could impact your equity,” one rep told an outgoing employee, according to Vox. “That’s true for everyone, and we’re just doing things by the book.”

But again, as the report reiterates, this is not “by-the-book” behavior.

“For a company to threaten to claw back already-vested equity is egregious and unusual,” Chambord Benton-Hayes, a California employment law attorney, told Vox.

When Vox asked OpenAI to explain how the provisions could have possibly wound up in documents signed by Altman without Altman actually knowing about them, Kwon non-answered that “we are sorry for the distress this has caused great people who have worked hard for us.”

“We have been working to fix this as quickly as possible,” Kwon — who, again, also signed papers delineating this provision — continued in his statement. “We will work even harder to be better.”

But that’s getting harder and harder to believe. Last year, when Altman was briefly forced out of OpenAI in what was pretty much a corporate coup, those who voted to oust the CEO — many of whom departed after losing said coup — claimed that Altman had been “inconsistently candid” in his communications with the board. Altman and OpenAI are also caught up in a brewing legal storm with actress Scarlett Johansson, who claims that Altman copycatted her voice for OpenAI’s new “Sky” AI assistant after she had explicitly turned Altman and OpenAI down. (Altman chalked Johansson’s allegations up to simple miscommunication.)

Meanwhile, recent departures have ground OpenAI’s “Superalignment” safety team — the ones tasked with making sure a killer AI doesn’t obliterate humankind — into dust. Great stuff.

On its website, OpenAI features a “charter” declaring that “OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity.”

“We will attempt to directly build safe and beneficial AGI,” it continues, “but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

The document then lists a series of “principles,” which the company claims it’ll follow to achieve this mission. The word “transparency” is notably absent.

More on OpenAI: Sam Altman Ignoring Scarlett Johansson’s Lack of Consent Shows Us Exactly What Type of Person He Really Is


