OpenAI forced to give safety reassurances as top leaders exit

Illustration: Natalie Peeples/Axios
New high-level exits from OpenAI last week caused the firm's leaders to publicly reaffirm their commitment to safety over the weekend.
Why it matters: The company behind ChatGPT began with the nonprofit aim of responsibly building advanced AI. But it has seen boardroom battles and noisy resignations as it races to keep ahead of competitors.
Driving the news: OpenAI CEO Sam Altman and president Greg Brockman posted a note on X, formerly known as Twitter, on Saturday in response to criticism by departing exec Jan Leike.
- "Figuring out how to make a new technology safe for the first time isn't easy," they wrote, and "the future is going to be harder than the past."
- "We need to keep elevating our safety work to match the stakes of each new model," they said. "As models continue to become much more capable, we expect they'll start being integrated with the world more."
Brockman and Altman countered criticism that OpenAI was rushing new products to market.
- They said that as they deliver more capable AI models, "we're not sure yet when we'll reach our safety bar for releases, and it's ok if that pushes out release timelines."
Catch up quick: Leike and OpenAI co-founder Ilya Sutskever, who'd both led the company's superalignment team — dedicated to foreseeing and preventing long-term disaster stemming from advanced AI — left OpenAI last week.
- Their team has disbanded, though OpenAI says its work is being redistributed across the company.
What they're saying: After leaving, Leike posted a thread criticizing his ex-employer for underinvesting in the team's work:
- "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."
- "Over the past few months my team has been sailing against the wind," Leike added. "Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."
- "Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."
The big picture: "Existential risks" and doomsday fears have long clouded work in AI.
- The worry is that a future AI might first achieve the broad human or superhuman level of capability known as artificial general intelligence, or AGI, and then slip out of human control and cause potentially catastrophic harm.
- Such concerns motivated the creation of OpenAI in 2015. Its founders believed that a nonprofit organization was the most trustworthy structure to build and safeguard AGI, in contrast to the for-profit environment of the AI field's then-leading lab, at Google.
- But with the advance of generative AI's word- and image-spinning abilities, Altman and OpenAI decided that getting the technology into many people's hands so they could get used to it and expose its flaws quickly was the best road to a safe future.
- They also reorganized OpenAI's structure to make it possible for the firm to raise tens of billions of dollars to fund increasingly costly generative AI projects.
The other side: OpenAI's critics suggest that the company has never provided a clear definition of what AGI is or how it will know when it has achieved its goal.
- The fiercest skeptics believe that the AI research community is deluded in thinking that its current path of building exponentially larger generative models will ever lead to something like AGI.
- Other critics have long maintained that unrealistic doomsday scenarios bolster AI researchers' egos — we're building something so incredible it could wipe out humanity! — while the real dangers from AI today are more concrete problems like discrimination, scams and employment disruptions.
What we're watching: Last week's departures could indicate there's a larger divide within OpenAI. But it's at least as likely that they represent the last gasp of dissent over the firm's strategy of speed.
