February 06, 2024

Hi, it's Ryan. Today's AI+ is 1,043 words, a 4-minute read.

1 big thing: Big and small AI vie for the future

Illustration: Lindsey Bailey/Axios

AI's evolution will be shaped by conflicting bets on whether the technology's fate is to keep growing more gigantic, Axios' Scott Rosenberg writes.

Why it matters: "Big AI" and "small AI" chart two different futures for the tech that's likely to dominate business and society over the next decade.

  • A big AI win could lock in the power of today's tech giants for decades to come, while a small AI victory could have more unpredictable and uncontrollable consequences.

Big AI means betting that if you just keep adding more synapse-like nodes, your model will keep getting better — and eventually, maybe, produce human-matching or -beating skills, known as artificial general intelligence (AGI).

  • This is how ChatGPT and the generative AI wave began in 2022.
  • OpenAI, allied with Microsoft, is big AI's standard-bearer, but you can be sure that every tech giant is also in this fantastically expensive game.

The small AI approach predicts we'll get better, faster and more efficient results by deploying a wider range and number of AI models fine-tuned for specific tasks or subject areas.

  • Many small AI proponents believe that big AI will hit a wall before achieving its goal of AGI.
  • Small AI efforts are also much more likely to be made available via open-source (or open-source-like) licensing, which allows for broad distribution and wider research. Big AI's price tag has already made developing its most advanced models too costly for academic institutions.
  • Meta has been the highest-profile promoter of smaller, freely distributed models — but again, many of the giants are playing here, and lots of startups, too.

The big picture: The tech industry is perpetually pulling in two directions.

  • Its most powerful companies are always scaling up hardware and software, devices and services to do more and reach more people.
  • But tech is also in the long-term game of miniaturization and personalization.
  • That is how today's iPhone can fill your pocket with way more computing power than the room-sized supercomputers of yore.

The experts are divided over the big and small AI approaches.

  • AI is still very much a work in progress, and the underlying science is still being discovered.

Be smart: Google's "transformers" paper, which kicked off the current era, was only published in 2017 — and similarly disruptive breakthroughs could be happening right now.

  • That means today's winners are always in danger of being displaced.

Yes, but: Small AI advocates argue that only the tech giants will be able to afford to develop big AI-style models, and that if Microsoft and Google dominate, Big Tech's power and profits will just keep growing.

  • Big AI backers believe small AI opens the door to more dangerous AI outcomes, since smaller and open-source models could be easier to hijack for malicious purposes.

What we're watching: The outcome of this conflict will depend as much on regulation from Washington as on what comes out of the industry's labs.

  • A looser regulatory environment would make it easier for small AI to flourish, while tighter rules and restraints from D.C. are more likely to benefit big AI, since only the largest firms will have the resources to run the bureaucratic gauntlet.

Go deeper: The push to make big AI small

2. Meta rolls out AI labeling fix

Illustration: Sarah Grillo/Axios

Meta announced Tuesday that it plans to start applying labels to Facebook, Instagram and Threads posts that contain images the company has identified as generated by AI.

Why it matters: Meta-owned platforms host more than 5 billion active accounts, and every one of its apps will be subject to the labeling policy in all supported languages.

Details: Meta's technical solution for automatically labeling AI-generated images is not yet ready — users can expect that in "coming months," the company said.

  • Meta will use metadata to identify AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.
  • Additionally, Meta is "working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers," Nick Clegg, president for global affairs, wrote in a blog post.
  • Ahead of automatic labeling, Meta is adding a feature that lets users disclose when they share AI-generated or digitally altered "photorealistic" video or audio, and Meta will label that content, Clegg wrote.
  • "We'll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so," per Clegg.
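Meta hasn't published the mechanics of its metadata checks, but the general idea can be sketched. Under the IPTC photo-metadata convention, fully AI-generated images carry a DigitalSourceType value of "trainedAlgorithmicMedia" in their embedded XMP packet; a naive detector could simply scan a file for that value. This is an illustrative assumption, not Meta's actual implementation, and the function name is hypothetical:

```python
# Minimal sketch: flag an image whose embedded XMP metadata carries
# the IPTC DigitalSourceType value for fully AI-generated media.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC
    AI-generated marker (a crude stand-in for real XMP parsing)."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_MARKER in data
```

A production system would parse the XMP packet properly (and check cryptographically signed provenance such as C2PA credentials) rather than scanning raw bytes, and — as Clegg notes — metadata can be stripped entirely, which is why Meta is also building classifiers that work without invisible markers.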

The big picture: Meta's independent Oversight Board on Monday criticized the company's manipulated media policy as "incoherent, lacking in persuasive justification" and recommended revisions.

Details: The board's comments were attached to a ruling that upheld Meta's decision to allow a manipulated video of President Joe Biden to remain posted.

  • The video originally showed Biden exchanging "I Voted" stickers with his adult granddaughter in the 2022 midterm elections and was later altered to look like he touched his granddaughter's chest inappropriately.
  • The video didn't violate Meta's current policies because it was not altered with AI. The Oversight Board suggested Meta's policies should address content showing people doing or saying things they did not do, regardless of how it was created.

Between the lines: Meta's new labeling approach — with its inclusion of "organic content with a photorealistic video or realistic-sounding audio" — appears to address the Oversight Board's criticisms.

3. Training data

4. + This

2,000 years ago, papyrus scrolls were "flash-fried" in the Vesuvius volcanic eruption that buried two Roman towns — and many of the charred texts were destroyed in failed attempts to open and decode them. But the competition to decode them using AI, which Ina reported on last year, is over — and now you can read them online.

Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.