Axios AI+

June 20, 2025
It was fun watching Caitlin Clark last night — and even more fun to watch the Valkyries win! Today's AI+ is 1,074 words, a 4-minute read.
1 big thing: OpenAI flags bioweapons risk
OpenAI cautioned this week that its upcoming models will cross into a higher tier of risk for aiding the creation of biological weapons, especially in the hands of people who don't really understand what they're doing.
Why it matters: The company, and society at large, need to be prepared for a future in which amateurs can more readily graduate from simple garage weapons to sophisticated biological agents.
Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company's preparedness framework.
- As a result, the company said in a blog post, it is stepping up its testing of such models and adding fresh precautions designed to keep them from aiding in the creation of biological weapons.
- OpenAI didn't put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios, "We are expecting some of the successors of our o3 [reasoning model] to hit that level."
Reality check: OpenAI isn't necessarily saying that its platform will be capable of creating new types of bioweapons.
- Rather, it believes that — without mitigations — models will soon be capable of what it calls "novice uplift," or allowing those without a background in biology to do potentially dangerous things.
- "We're not yet in the world where there's like novel, completely unknown creation of biothreats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."
Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm.
- Heidecke acknowledged that OpenAI and others need systems that are highly accurate at detecting and preventing harmful use.
- "This is not something where 99% or even one in 100,000 performance is sufficient," he said.
- "We basically need, like, near perfection," he added, noting that human monitoring and enforcement systems need to be able to quickly identify any harmful uses that escape automated detection and then take the action necessary to "prevent the harm from materializing."
The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability.
- When it released Claude 4 last month, Anthropic said it was activating fresh precautions due to the potential risk of that model aiding in the spread of biological and nuclear threats.
- Various companies have also been warning that it's time to start preparing for a world in which AI models are capable of meeting or exceeding human capabilities in a wide range of tasks.
What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead.
- OpenAI is also looking to expand its work with the U.S. national labs, and the government more broadly, OpenAI policy chief Chris Lehane told Axios.
- "We're going to explore some additional type of work that we can do in terms of how we potentially use the technology itself to be really effective at being able to combat others who may be trying to misuse it," Lehane said.
- Lehane added that the increased capability of the most powerful models highlights "the importance, at least in my view, for the AI build-out around the world, for the pipes to be really U.S.-led."
2. Meta debuts its first Oakley smart glasses
Meta today announced its first set of Oakley-branded smart glasses, touting upgraded camera and battery performance over its current crop of Ray-Bans.
Why it matters: Smart glasses could be the next big platform for AI interaction.
Between the lines: Meta is expected to face increasing hardware competition in the coming months from Google, Apple and possibly others.
Driving the news: The new glasses, dubbed the Oakley Meta HSTN (pronounced HOW-stuhn), come in a variety of lens and frame color combinations and start at $399, though initial sales will be for a special-edition version that costs $499.
- The improved camera can capture 3K (Ultra HD) video, and Meta says the battery will give users up to eight hours of typical use, recharging to 50% capacity in 20 minutes. The case offers up to 48 hours of additional power.
- Like the Ray-Bans, the glasses can take photos and videos, play audio and connect to Meta's AI assistant.
Meta has a broad partnership with Italian eyewear giant EssilorLuxottica, which controls both the Oakley and Ray-Ban brands.
The big picture: Google has shown prototype Android XR glasses with a small augmented reality display in the lens, aiming for commercial release next year. Apple is also rumored to be developing smart glasses.
- Meta is reportedly working on a version of its Ray-Bans that has a small display, expected to go on sale later this year.
3. Small businesses use AI but don't spend much
Small business leaders are beginning to embrace generative AI, but not enough to pay much for it, per a new survey of 1,000 businesses by U.S. Bank.
By the numbers: 36% of these small business owners say they're already using generative AI, and another 21% say they expect to start doing so over the coming year, the survey found.
Yes, but: Most small-business use right now is in free or low-cost entry-level tiers — meaning AI leaders like OpenAI, Anthropic and Google can't expect a wave of revenue growth from this market just yet.
- 68% of the small business owners surveyed who are using generative AI say they're spending less than $50 a month for AI services.
- The bank's survey defined "small business" as a company with annual revenue of $25 million or less and 99 or fewer employees.
Our thought bubble: The tech industry has been using a "hook them on free samples" tactic since the 1990s advent of the web.
- AI makers are betting that today's free-tier user will graduate to higher-priced levels over time.
4. Training data
- Sources: Mark Zuckerberg is trying to recruit Daniel Gross, co-founder of Ilya Sutskever's Safe Superintelligence, after the startup refused to sell to Meta. (CNBC)
- Exclusive: Google released a playbook for U.S. mayors to build their AI strategies. (Axios Pro)
- For all the talk about "AGI," there is no consensus on what it means. (Financial Times)
- Google is using YouTube to train its video AI models. (CNBC)
5. + This
Photographer Andrew McCarthy captured this incredible image of the International Space Station in the foreground and, behind it, the sun's surface erupting in a flare.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.