Disaster “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded - Company insiders explain why safety-conscious employees are leaving.

Vox.com (archive) | By Sigal Samuel | Updated May 17, 2024, 11:45pm EDT


For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out.

What’s going on here?

If you’ve been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme “What did Ilya see?” speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity.

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans — and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him.

“It’s a process of trust collapsing bit by bit, like dominoes falling one by one,” a person with inside knowledge of the company told me, speaking on condition of anonymity.

Not many employees are willing to speak about this publicly. That’s partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars.

(OpenAI did not respond to a request for comment in time for publication. After publication of my colleague Kelsey Piper’s piece on OpenAI’s post-employment agreements, OpenAI sent her a statement noting, “We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit.” When Piper asked if this represented a change in policy, as sources close to the company had indicated to her, OpenAI replied: “This statement reflects reality.”)

One former employee, however, refused to sign the offboarding agreement so that he would be free to criticize the company. Daniel Kokotajlo, who joined OpenAI in 2022 with hopes of steering it toward safe deployment of AI, worked on the governance team — until he quit last month.

“OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care,” Kokotajlo told me this week.

OpenAI says it wants to build artificial general intelligence (AGI), a hypothetical system that can perform at human or superhuman levels across many domains.

“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen,” Kokotajlo told me. “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

And Leike, explaining in a thread on X why he quit as co-leader of the superalignment team, painted a very similar picture Friday. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” he wrote.


Why OpenAI’s safety team grew to distrust Sam Altman

To get a handle on what happened, we need to rewind to last November. That’s when Sutskever, working together with the OpenAI board, tried to fire Altman. The board said Altman was “not consistently candid in his communications.” Translation: We don’t trust him.

The ouster failed spectacularly. Altman and his ally, company president Greg Brockman, threatened to take OpenAI’s top talent to Microsoft — effectively destroying OpenAI — unless Altman was reinstated. Faced with that threat, the board gave in. Altman came back more powerful than ever, with new, more supportive board members and a freer hand to run the company.

When you shoot at the king and miss, things tend to get awkward.

Publicly, Sutskever and Altman gave the appearance of a continuing friendship. And when Sutskever announced his departure this week, he said he was heading off to pursue “a project that is very personally meaningful to me.” Altman posted on X two minutes later, saying that “this is very sad to me; Ilya is … a dear friend.”

Yet Sutskever has not been seen at the OpenAI office in about six months — ever since the attempted coup. He has been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. It’s a nice enough ambition, but one that’s divorced from the daily operations of the company, which has been racing to commercialize products under Altman’s leadership. And then there was this tweet, posted shortly after Altman’s reinstatement and quickly deleted:

[Screenshot: Sutskever’s since-deleted tweet]

So, despite the public-facing camaraderie, there’s reason to be skeptical that Sutskever and Altman were friends after the former attempted to oust the latter.

And Altman’s reaction to being fired had revealed something about his character: His threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold onto power and avoid future checks on it. Former colleagues and employees came forward to describe him as a manipulator who speaks out of both sides of his mouth — someone who claims, for instance, that he wants to prioritize safety, but contradicts that in his behaviors.

For example, Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees. If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology? For that matter, why was he taking the safety risk of working with regimes that might use AI to supercharge digital surveillance or human rights abuses?

For employees, all this led to a gradual “loss of belief that when OpenAI says it’s going to do something or says that it values something, that that is actually true,” a source with inside knowledge of the company told me.

That gradual process crescendoed this week.

The superalignment team’s co-leader, Jan Leike, did not bother to play nice. “I resigned,” he posted on X, mere hours after Sutskever announced his departure. No warm goodbyes. No vote of confidence in the company’s leadership.

Other safety-minded former employees quote-tweeted Leike’s blunt resignation, appending heart emojis. One of them was Leopold Aschenbrenner, a Sutskever ally and superalignment team member who was fired from OpenAI last month. Media reports noted that he and Pavel Izmailov, another researcher on the same team, were allegedly fired for leaking information. But OpenAI has offered no evidence of a leak. And given the strict confidentiality agreement everyone signs when they first join OpenAI, it would be easy for Altman — a deeply networked Silicon Valley veteran who is an expert at working the press — to portray sharing even the most innocuous of information as “leaking,” if he was keen to get rid of Sutskever’s allies.

The same month that Aschenbrenner and Izmailov were forced out, another safety researcher, Cullen O’Keefe, also departed the company.

And two weeks ago, yet another safety researcher, William Saunders, wrote a cryptic post on the EA Forum, an online gathering place for members of the effective altruism movement, who have been heavily involved in the cause of AI safety. Saunders summarized the work he’s done at OpenAI as part of the superalignment team. Then he wrote: “I resigned from OpenAI on February 15, 2024.” A commenter asked the obvious question: Why was Saunders posting this?

“No comment,” Saunders replied. Commenters concluded that he is probably bound by a non-disparagement agreement.

Putting all of this together with my conversations with company insiders, what we get is a picture of at least seven people who tried to push OpenAI to greater safety from within, but ultimately lost so much faith in its charismatic leader that their position became untenable.

“I think a lot of people in the company who take safety and social impact seriously think of it as an open question: is working for a company like OpenAI a good thing to do?” said the person with inside knowledge of the company. “And the answer is only ‘yes’ to the extent that OpenAI is really going to be thoughtful and responsible about what it’s doing.”

With the safety team gutted, who will make sure OpenAI’s work is safe?

With Leike no longer there to run the superalignment team, OpenAI has replaced him with company co-founder John Schulman.

But the team has been hollowed out. And Schulman already has his hands full with his preexisting full-time job ensuring the safety of OpenAI’s current products. How much serious, forward-looking safety work can we hope for at OpenAI going forward?

Probably not much.

“The whole point of setting up the superalignment team was that there’s actually different kinds of safety issues that arise if the company is successful in building AGI,” the person with inside knowledge told me. “So, this was a dedicated investment in that future.”

Even when the team was functioning at full capacity, that “dedicated investment” was home to a tiny fraction of OpenAI’s researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it’s unclear if there’ll be much focus on avoiding catastrophic risk from future AI models.

To be clear, this does not mean the products OpenAI is releasing now — like the new version of ChatGPT, dubbed GPT-4o, which can have a natural-sounding dialogue with users — are going to destroy humanity. But what’s coming down the pike?

“It’s important to distinguish between ‘Are they currently building and deploying AI systems that are unsafe?’ versus ‘Are they on track to build and deploy AGI or superintelligence safely?’” the source with inside knowledge said. “I think the answer to the second question is no.”

Leike expressed that same concern in his Friday thread on X. He noted that his team had been struggling to get enough computing power to do its work and generally “sailing against the wind.”

[Screenshot: Leike’s thread on X]

Most strikingly, Leike said, “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

When one of the world’s leading minds in AI safety says the world’s leading AI company isn’t on the right trajectory, we all have reason to be concerned.
 
It was inevitable the moment they tried to coup Altman and suddenly found out the majority of workers were more than willing to follow him rather than stick around and let the Safety department start calling the shots. Trying to pull a power grab when most of the engineers don't align with you was a foolish choice. They can go off and make their own company dedicated to lobotomizing models if that's their primary concern. For all the shit I've thrown at "Open"AI, if they really aren't dedicating a bunch of resources to "monitoring, safety, and social impact," then I can only see that as a win for the consumer, considering how the "advancements" in those areas have given us very stupid outcomes like Gemini going all-in on diversity in images.
 
They are being awfully vague about what exactly they took issue with. My gut tells me this is less about skynet than it is fear that a computer might spell the nigger word.
This is exactly it. Despite all the draconian bullshit OpenAI has slapped on their definitely-not-open AIs, people can still fool their billions of dollars of work with a few sentences that gaslight the algorithms. These 'AI' aren't actually intelligent. They never will be if they continue to be based on the tech that they are; it's an issue at its foundation. I'll say what I've been saying the whole time: fuck all of this crying about making AI safe and force these companies to just dump all of their source code. OpenAI was supposed to be doing that from the beginning.
 
Sam Altman is yet another creepy silicon valley dudebro rationalist wannabe overlord.
You forgot Jew.

They can go off and make their own company dedicated to lobotomizing models if that's their primary concern.
Am I missing something?
It seems to me the newer the model the more lobotomized it is.
 
The issue is that they can't sell bespoke AI packages - or rather the tech support that justifies corporate purchases - if the product is not squeaky clean and idiot proof.

Keep in mind all of the for-profit AI forks deliberately employed Third World serfs to do their data labeling, so this isn't about morality at all. Venezuelans came to be in high demand because they could be paid at half the low, low rate of the usual Asian serfs. Nothing about this supply industry desperately in search of a demand market is ethical.

As for the ESG employees jumping ship to safer government jobs, they know the writing is on the wall for their woke bullshit in the commercial sector. They have collectively driven away customers for two decades, and their promised new customers never materialized. Naturally they can't just change jobs without a huge fuss like anyone else; no, they have to make grandiose statements of persecution first, or nothing in their ideology makes sense.
 
This is exactly it. Despite all the draconian bullshit OpenAI has slapped on their definitely-not-open AIs, people can still fool their billions of dollars of work with a few sentences that gaslight the algorithms. These 'AI' aren't actually intelligent. They never will be if they continue to be based on the tech that they are; it's an issue at its foundation. I'll say what I've been saying the whole time: fuck all of this crying about making AI safe and force these companies to just dump all of their source code. OpenAI was supposed to be doing that from the beginning.

That and the fact that the AI hype bubble is starting to pop, FINALLY

Remember Machine Learning and the like from ~ a decade ago?

What about BLOCK CHAIN?????

Yeah

"AI" as marketed is a REALLY fast search engine + an algorithm that can mush together things it already knows about into a combination thing

Sam Altman is yet another creepy silicon valley dudebro rationalist wannabe overlord.

Indeed. Dude looks off somehow.

yeah no
this worked with earlier models like GPT-3.5 but now the models are trained to refuse anything that goes against their safety protocol, including jailbreaks

Fucking lol, so they're getting crippled
 
I donno man. When you ask AI to draw a basic fucking picture and it can't remember if cars are supposed to be on the ground or if they float menacingly through the sky, I'm not too convinced this technology is ever going to reliably replace human intelligence. At least not to the point that it won't still require humans checking its work before approving anything.
 
They are being awfully vague about what exactly they took issue with. My gut tells me this is less about skynet than it is fear that a computer might spell the nigger word.
Yes. This article was linked in the section about the dangers of AI. There are the usual flippant doomsday scenarios about AI releasing a virus or launching nukes without any explanation as to how a computer would get or be given physical control over real-world processes and items, and then there's the (realistic) fear that an AI fed "biased" crime data might come to the "wrong" conclusions, and then you'd have to explain how the heckin' science isn't always right.
 
It is NOT about the nigger word, all "tribes" there can agree on banning that. It is about the company coup that happened just a few months ago, like @Mr.Miyagi said. That does stem from disagreements over safety vs. money, and the Effective Altruism crowd overplayed their hand.

OpenAI’s crackup is another black eye for effective altruism (archive)

Books will be written about this shit and Effective Altruism/Altruists/Autists should probably have a Community Watch thread if they don't already.

Effective Altruism / EAs - The Island of Misfit Toys Does Charity
Eliezer Schlomo Yudkowsky / LessWrong
 
If they want to make AI 'safe' then they should stop focusing on trying to make it 'superintelligent' and start making it 'good enough.'

You really don't want to replace human intelligence at higher-level tasks because you don't want to dump humans all the way back to before the stone age if something goes wrong and shit goes bust. It takes centuries to rediscover knowledge at times, because a lot of it was found through trial and error. Civilization is only a thing because stuff gets remembered and passed down and built upon, not because it got dumped onto a machine and forgotten about because it's more 'convenient.'
 
Am I missing something?
It seems to me the newer the model the more lobotomized it is.
In my personal experience, it's pretty dependent on the model and what you're asking of it. Stuff like the newer GPT-4 Turbo version is a bit harder to mess with than something like Claude 3. It's likely that, as we go along, newer models will have stricter safeguards and more attempts to counteract jailbreaks, but the nature of LLMs means that eliminating jailbreaks entirely might be more trouble than it's worth in all circumstances. Then you have stuff like OpenAI publishing their Model Spec and saying, "We're exploring allowing NSFW content in certain contexts," and adjusting some rules to not give outright denials, so you have an odd setup where they may be looking at relaxing certain safeguards depending on use cases, like maybe an enterprise/user split. It'll always be more locked off than the open-source stuff, but there might be some developments.
Retards afraid of science fiction. Remove and ignore them.
But dude, paperclip maximizers, dude! Roko's Basilisk, dude!!!
 