February 27, 2024

Hi, it's Ryan, with another hello from Mobile World Congress in Barcelona.

Today's AI+ is 1,248 words, a 4.5-minute read.

1 big thing: Microsoft tries to move past Sam Altman

Illustration: Annelise Capossela/Axios

Microsoft announced a deal with France's Mistral AI and launched its "AI access principles" Monday, signaling years of major investments to come and a determination to spread its AI bets as antitrust scrutiny grows.

Why it matters: Microsoft's new principles codify how the company will pursue a "broad array of AI partnerships" beyond its $13 billion investment in OpenAI, the maker of ChatGPT.

  • Brad Smith, the company's president, tells Axios that Microsoft can't win at AI alone, heralding a "big change": a pivot toward supporting open source developers.
  • Smith also tells Axios that big tech's money will be essential to provide the "fundamental infrastructure for AI to advance."

Driving the news: Smith announced a "strategic partnership" with Mistral, a French open source AI model developer, that will make Mistral's models available to Microsoft's Azure cloud customers.

  • Microsoft will also take a small stake in Mistral — about $16 million, per Bloomberg.

The intrigue: Microsoft is keen to move the AI focus beyond Sam Altman.

  • "I think Sam is brilliant," Smith said, "but just as the printing press quickly evolved beyond Gutenberg, AI is quickly evolving beyond any single individual."

By the numbers: Microsoft now offers 1,500 open source models out of around 1,600 "models as a service" available through its Azure cloud.

  • "We're on a path to spend somewhere in the ballpark of $50 billion this year (on AI), compared to the Chips Act, which is $52 billion over five years," Smith says, while rejecting the idea that the world needs $7 trillion of AI infrastructure in coming years, a reported fundraising target of Altman's.

Between the lines: Microsoft wants regulators to view the AI economy through a wide lens — and to see its OpenAI investment as a fraction of its overall AI spending.

  • Microsoft defines the AI economy as nine layers of technology, a complex web of partnerships and a wide diversity of AI models — all of which would be hard to dominate in ways that raise antitrust concerns.
  • But since Google runs a popular app store and "mobile platforms are the most popular gateway to consumers," Microsoft is hinting that Google could be the one company positioned to hurt AI competition — a suggestion DOJ prosecutors explored in court in 2023, and that Google denies.

The big picture: Before officials turned up the heat on social media companies, it was Microsoft that had spent decades tangling with antitrust regulators.

2. In Swisher's "Burn Book," AI regulation tips

Image: Simon & Schuster

It is said that those who fail to learn the lessons of history are doomed to repeat them, and there are plenty of lessons in "Burn Book," Kara Swisher's new memoir, which hits store shelves today, Ina writes.

Why it matters: Swisher's book is worth reading just for its inside scoop on so many of those who have built the modern tech industry — but its account of their bold, brazen and often juvenile antics also provides insights for navigating the AI era that's now upon us.

The tech industry's missteps are well known, but Swisher's account helps us understand why these men — "boy kings," as she calls them — do what they do.

  • Swisher's lens is valuable because, in many cases, she knew today's leaders long before they became who they are now.

Figures like Elon Musk, Sam Altman and Marc Andreessen didn't arrive on the tech scene fully formed, like a bot from character.ai. Rather, they evolved over time — in some cases transforming radically.

  • Marc Andreessen: Long before he began unfollowing and ranting at most of the media, Andreessen used to answer texts and calls at all hours. "Texts from Marc were indicative of a restless and vaguely disgruntled mind," Swisher writes, adding that she saved the texts "because I had a sense even then that these boy men would try hard to reinvent themselves and erase their former selves."
  • Elon Musk: Musk's transformation has been very public, but Swisher's proximity offers particular insight into just how fast he can turn. The author proudly quotes Musk's Oct. 17, 2022, "You're an asshole" email on the book's back cover. But, she writes, that email came just a week after Musk had asked for her thoughts on how to improve Twitter.

Zoom in: Swisher runs through the usual list of ways AI might be used for good or ill, from improving health and solving climate challenges to empowering killer robots.

  • But her most important point on AI is that it is our thoughts, our art and our posts that are powering this technology. "All this information that is now digitized is actually us," she writes.

Ina's thought bubble: Listen to the audiobook. As I told Mike Allen, you want to hear Kara's message and all her dish in her own distinctive voice.

Disclosure: Ina worked for Kara from 2010 to 2017, first at Dow Jones' All Things Digital and then at Recode, later owned by Vox Media.

3. Stanford studies open source AI's pros and cons

Illustration: Annelise Capossela/Axios

Researchers at Stanford's Institute for Human-Centered AI have published a paper aimed at creating a more precise understanding of open source AI's risks and benefits.

Why it matters: The availability of open AI models affects everything from global geopolitics to domestic AI competition — and regulators are trying to understand their implications.

Context: Common definitions of open source software often don't match the reality of how AI is built.

  • Stanford used the White House definition of open foundation models as those with "widely available model weights."
  • Government reactions to the rise of open models have ranged from alarm in the White House over bioterrorism threats to Beijing blacklisting certain types of generative AI training data to the EU offering regulatory exemptions for open models due to their greater transparency.

What they did: The researchers examined open foundation models including Llama 2 and Stable Diffusion XL.

The paper identifies open models' main benefits as distributing decision-making power, reducing market concentration, increasing innovation, accelerating science and enabling transparency.

  • Open models "allow for greater diversity in defining what model behavior is acceptable" and because they're easily customizable, they "better support innovation across a range of applications."

Threat level: The researchers argue that there often isn't proof that theoretical risks have materialized and that risks such as disinformation, scams and bioterrorism all existed before generative AI.

  • The researchers contend that AI may amplify or accelerate those risks, but does not create them.
  • Any attempt to impose conditions on users of open models is "easy for malicious actors to ignore," the team concluded.

The intrigue: Ousted OpenAI board member Helen Toner gave "extensive feedback" to the paper's authors, who include Alondra Nelson, former director of the White House Office of Science and Technology Policy, and Rumman Chowdhury, who led Twitter's machine learning ethics team.

4. Training data

Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.