OpenAI speeds ahead as it shifts to for-profit

Illustration: Allie Carl/Axios
OpenAI's developer release on Tuesday of a powerful new speech technology highlights the rapidly changing dynamics within the ChatGPT creator — a shift that some say is putting the release of products above safety concerns.
The big picture: OpenAI is in the midst of a transformation from a nonprofit lab to an increasingly product-focused company seeking to attract and please investors.
Driving the news: OpenAI used a developer event in San Francisco on Tuesday to announce several capabilities, the most provocative of which allows developers to make use of the same real-time speech capabilities recently added to ChatGPT.
- In showing off the product to the press on Monday, OpenAI chose a demo of the capability paired with Twilio's cloud communications platform.
- In the demo, OpenAI showed its Realtime API powering an AI agent to make a call on its own to a fictional candy store and order 400 chocolate-covered strawberries.
Zoom in: The demo highlighted the tantalizing possibilities of AI taking action in the real world — but also prompted a flurry of questions focused on the potential for havoc if the technology were used with malicious intent.
- OpenAI executives seemed surprised that press questions focused less on the capability itself and more on what guardrails were put in place.
- Asked about safeguards, OpenAI said it is not watermarking its AI voices so they can be detected, nor is it mandating how people disclose their use of AI.
Yes, but: OpenAI said it would enforce its terms of service, which prevent spam and fraud, and noted it could add new rules as needed.
- A spokesperson added that its rules also "require developers to make it clear to their users that they are interacting with AI, unless it's obvious from the context."
Zoom out: In the 10 months since the firing and rehiring of Sam Altman, debates about speed and safety within the company have grown louder and more frequent. The product teams seem to be winning most battles.
- And some of these disputes are only now coming to light.
- The Wall Street Journal reported last week that OpenAI released its GPT-4o model earlier this year despite concerns that the model had not received sufficient testing and was too risky to deploy safely.
- Meanwhile, Fortune reported on Tuesday that another fierce debate took place within OpenAI over whether the company's o1 reasoning model (previously code-named Strawberry) was ready for release.
State of play: These debates are taking place amid a massive shift in the structure, personnel and focus at the San Francisco company.
- OpenAI began as a nonprofit research institution. Over time, Microsoft and others have invested in a for-profit subsidiary under the auspices of the nonprofit. In the wake of Altman's ouster and rehiring, Microsoft and others demanded changes to the company's governance.
- Now OpenAI is raising a massive new round of financing. A source told Axios that the company has promised investors that it will further convert its business to a for-profit entity in the next two years — or they can get their money back.
Between the lines: The company has grown tremendously, with more than 1,000 employees having joined over the past year.
- Meanwhile, fewer and fewer of the old guard remain.
- Most of OpenAI's co-founders are gone, and a number of leading voices from the research and safety side have left, including Ilya Sutskever and Jan Leike, who co-led OpenAI's safety work.
- Just this past week Bob McGrew, chief research officer, and Barret Zoph, VP of research, both announced their departures on the same day as CTO Mira Murati.
- Of the company's old board, only Quora CEO Adam D'Angelo remains, along with Altman, who was added back to the board in March.
The other side: OpenAI, for its part, rejects the idea that safety is taking a back seat. "We are deeply committed to our mission and are proud to release the most capable and safest models in the industry, as evidenced by o1," an OpenAI spokesperson told Axios.
- On 4o, OpenAI said in a statement that it "followed a deliberate and empirical safety process."
- "GPT-4o was determined safe to deploy under our Preparedness Framework, rated medium on our cautious scale and did not reach high risk levels," OpenAI said.
- OpenAI tells Axios, "Since its release in May, it has been safely used by hundreds of millions of people, millions of developers, and enterprises worldwide to help in daily life and solve problems ... affirming our confidence in its risk assessment."
- OpenAI safety and security oversight committee board members Zico Kolter and Paul Nakasone said in a statement that their collaboration with OpenAI's safety and security teams has shown them that the company can "safely deliver AI that can solve harder problems."
Editor's note: This story's headline has been corrected to say that OpenAI is shifting to for-profit (not nonprofit) status.
