Axios AI+

August 26, 2024
In order to examine Samsung's new Galaxy Ring without destroying it, the folks at iFixit used a CT scan.
🚨 Situational awareness: Here's what you need to know about Pavel Durov, the CEO of Telegram, who was arrested in France this weekend over the platform's looser content moderation policies and an unwillingness to cooperate with law enforcement.
Today's AI+ is 1,132 words, a 4-minute read.
1 big thing: Inflection caps its free chatbot amid enterprise pivot
Inflection AI announced rate limits for Pi, its consumer chatbot, and is supporting a new tool it hopes will become an industry standard for exporting chatbot data.
Why it matters: The move comes as Inflection pivots to business after CEO Mustafa Suleyman and other employees left for Microsoft.
Driving the news: Inflection announced today that it will start imposing a rate cap on free usage of Pi.
- CEO Sean White told Axios he believes Inflection can meaningfully lower its GPU usage without impacting too many of its users.
- "There are some folks who are doing many, many messages a minute and doing that for hours at a time," he said. "At least for a free product, those will be the ones that are capped and limited."
Inflection is also working with the Data Transfer Initiative to develop a mechanism for exporting data from chatbots, something both organizations hope could pave the way for an eventual industry standard.
- Starting immediately, Inflection said Pi users will be able to export their conversation history for their own archives, to use with another LLM or with Inflection's own upcoming enterprise product.
Catch up quick: Microsoft announced in March that it had hired Suleyman to run its consumer AI effort, along with Inflection co-founder Karén Simonyan and a number of other employees.
- Inflection said at the time it would "lean into" its custom generative AI model business, with a new focus on enterprise customers.
Yes, but: While the rate limits may not be welcome news for those who have grown accustomed to using Pi as much as they want, it's still a better outcome for Pi enthusiasts than it could have been.
- Until very recently, Inflection was considering phasing out the consumer version of Pi entirely.
- "We were thinking about how we might at least take down certain areas or markets or make different adjustments," White told Axios. "Right now, I don't think we need to."
- The company is smaller than the 50 to 70 people it had at the beginning of the year, White said. "We're not quite up at the size that Inflection was back in January, but we're starting to get closer," he said.
The big picture: Inflection's effort to chart a path forward as a smaller independent company is an important test for a new kind of deal that has become something of a trend: hiring a startup's key talent, licensing its technology, and giving its investors a return on their investment, all without buying the company itself.
- Earlier this month, Google hired two of Character.AI's co-founders, Noam Shazeer and Daniel De Freitas, along with two dozen of its researchers. It also bought a non-exclusive license to Character.AI's technology and guaranteed a healthy return for investors in the startup.
- Similarly, Amazon said in June it was hiring several leaders from AI agent startup Adept and licensing that company's technology. Adept said it would continue operating with a narrower mission.
- Regulators, including the FTC, have said they are looking into these deals to determine whether they've been designed to avoid regulatory scrutiny.
What's next: Inflection is still working out exactly what its proposition to businesses will be. In addition to opening up more access to its API, the company is exploring ways to let businesses run Inflection's technology on premises.
2. Biosecurity experts call for new AI regulations
Biosecurity experts are calling on governments to set new guardrails in an effort to limit the risks posed by advanced AI models applied to biology.
Why it matters: AI models trained on genetic sequences can help scientists design new medicines and vaccines, but can also be used to create new or enhanced pathogens.
Experts from OpenAI and RAND, among other institutions, say today's large language models don't increase the risk of bioweapon creation, in part because there isn't enough data to train them on.
- There's an impression today that "producing biological weapons with all of the information that AI can provide is easy, straightforward," Sonia Ben Ouagrham-Gormley, deputy director of the Biodefense graduate program at George Mason University, said at a Center for a New American Security event on Wednesday.
- But "producing biological weapons is very complex, very complicated."
Yes, but: Many researchers say it's only a matter of time before AI models improve.
- "I think we need to listen to the AI developers about the potential capabilities of the next generation," Anita Cicero, deputy director of the Johns Hopkins Center for Health Security, said at the event.
- She also raised the possibility of automated labs, or labs run remotely through the cloud, using trial and error to engineer a new pathogen, which could lower the expertise needed to conduct experiments.
What they're saying: AI developers committing to evaluate models is "important but cannot stand alone," Cicero and her co-authors wrote in the journal Science.
- They call on governments to evaluate models trained on large amounts of biological data or especially sensitive data before they are released.
- That could address potential risks without hampering academic freedom, they argue.
- The group also wants companies and institutions that synthesize nucleic acids (turning genetic sequence information into physical molecules) to screen their customers and their orders.
Friction point: Some researchers say more work should be done first to understand what AI models can do in laboratory settings.
- Ouagrham-Gormley argued that before AI biological models are regulated, "we absolutely need to better understand the capability itself, and then that will allow us to assess better the risks associated with those [models]."
Between the lines: A lot of work developing AI biological models is done in the private sector.
- Government should focus on "advanced biological models that pose potential high-consequence threats, whether or not they were developed with federal funding assistance," Cicero and her co-authors wrote.
The big picture: It's incumbent on the U.S. and other leaders in the field to set up their own governance systems, Tom Inglesby, director of the Johns Hopkins Center for Health Security, told Axios.
- "Ultimately, we do think that we should be aiming for international harmonization, just like we do for other kinds of safety and security issues around science," he said.
3. Training data
- The Department of Energy could create its own AI testbed to balance growing electricity demand with emissions goals. (Axios)
- Vimeo says it's adding generative AI features that can replicate a creator's storytelling. (Axios)
- Andrew Ng is stepping back from his CEO role at Landing AI, shifting to executive chairman, as he raises a new fund. (TechCrunch)
- Midjourney has opened up its web interface to all, after previously limiting it to those who had already created 10,000+ images via Discord. (ZDNET)
- The marketing consultant responsible for using AI to generate fake critic quotes for the "Megalopolis" movie trailer has been let go. (Deadline)
4. + This
Check out this air ball that somehow still made it into the hoop.
Thanks to Megan Morrone for editing this newsletter and to Caitlin Wolper for copy editing it.
Sign up for Axios AI+




