Sam Altman is starting to look a lot like Mark Zuckerberg

Photo illustration: Sarah Grillo/Axios. Photos: Joel Saget/AFP and Christophe Morin/IP3 via Getty Images
OpenAI CEO Sam Altman — pursuing a "move fast and break things" strategy while weathering PR disasters and issuing a steady stream of apologies — is looking more and more like Facebook's Mark Zuckerberg.
Why it matters: AI's potential dangers loom extra large, but Silicon Valley continues to reward leaders who embrace an "ask forgiveness, not permission" stance.
Driving the news: OpenAI Tuesday said it was establishing a new safety and security committee — and also that it's begun to train its next big language model, GPT-5.
- The two-step is right out of the classic Zuckerberg playbook: making amends for a troubling incident by applying a bureaucratic patch while simultaneously charging forward to the next move-fast project.
Catch up quick: Altman made public apologies last week both for a "breakdown in communication" with Scarlett Johansson over the use of her voice for ChatGPT and for equity clawback provisions in OpenAI employees' exit agreements.
- Those controversies arose as OpenAI saw several high-profile departures from its safety team.
- In both cases, the apologies didn't stop the news cycle. OpenAI provided documents to the Washington Post defending its process in hiring a female actor to speak for ChatGPT.
- The ChatGPT maker confirmed to Bloomberg that it was releasing former employees from the restrictive non-disclosure agreements they'd previously signed, after Altman's signature turned up on documents he claimed not to have known existed.
The big picture: Zuckerberg has been apologizing for Facebook's missteps for almost as long as there has been a Facebook.
- In 2006, he apologized for how the news feed got launched. In 2007, he apologized for an ill-considered ad initiative called Beacon.
- In 2010, the CEO apologized for calling users "dumb" (years before, when he was at Harvard); in 2011, it was for lax privacy settings.
- In 2017, it was for Facebook's role in spreading misinformation during the 2016 election; in 2018, for the Cambridge Analytica scandal. In 2024, at a Senate hearing, he apologized to the families of children harmed by Meta's social media apps.
- And yet each time the company plowed ahead with its policies and plans — and saw its user totals, engagement numbers and profits climb.
Similarly, despite the recent rough patches, Altman's OpenAI has been able to ride out controversies while speedily unveiling dazzling new AI demos and capabilities.
Between the lines: Both CEOs have been criticized for over-optimism about the potential of their companies to do good, and naivete about their potential for harm.
- "Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected," Zuckerberg wrote in a 2012 letter to investors before the Facebook IPO.
- OpenAI launched as a nonprofit in 2015 to build and safeguard artificial general intelligence (AGI) to benefit all of humanity, then switched gears a few years later to raise enormous sums from Microsoft and other funders.
How it works: Startup founders embrace "ask forgiveness, not permission" and "move fast and break things" because, in their world, recklessness often pays off.
- "The one thing that we've learned in the Valley with tech companies is that the first mover who builds the maximum user base of people adopting a certain technology is going to command the market," Subramaniam Vincent, director of journalism and media at Santa Clara University's Markkula Center for Applied Ethics, told Axios.
- "It's called 'rapid iteration,'" Vincent said, and it makes sense when you're building a software product or a feature designed to fill a certain need.
- Rapid iteration was designed as a tool for engineers and middle managers trying to solve specific technical and operational problems before users' needs changed. It wasn't intended for C-suite executives, Vincent argued, because of the stakes involved.
Yes, but: Altman has seemed to learn from Zuckerberg's mistakes, too.
- He has adopted a more forthcoming communication style and regularly addresses the risks inherent in OpenAI's mission.
- In Altman's first testimony before Congress in May 2023, he came off as less robotic and more willing to work with lawmakers than Zuckerberg had in his own congressional appearances.
What we're watching: Some of Facebook's biggest mistakes have led to real harm in the offline world. But missteps in deploying AI could cause even greater nightmares.
