February 01, 2024

Hi, it's Ryan. I thought Groundhog Day was this Friday, but senators and tech CEOs pranked us early — more on that below. Today's AI+ is 1,160 words, a 4-minute read.

1 big thing: Senate CEO hearing looks backwards

Shou Chew, CEO of TikTok, Linda Yaccarino, CEO of X, and Mark Zuckerberg, CEO of Meta, testify before the Senate Judiciary Committee. Photo: Alex Wong/Getty Images

Wednesday's Senate hearing about protecting kids on social media focused on regulating yesterday's and today's technology — while the rise of AI is already generating new problems that legislators and executives failed to address.

What's happening: Members of the Senate Judiciary Committee spent nearly four hours grilling Meta CEO Mark Zuckerberg and his counterparts from TikTok, X, Snap and Discord on the sexual exploitation of children on their platforms, as Axios' Maria Curi, Ashley Gold and Scott Rosenberg report.

  • Senators used the hearing to try to win commitments on specific bills aimed at reducing social media harms to minors.
  • But at this latest in a long line of "protect the children" hearings in Congress, many legislators admitted that their bipartisan campaign has so far been fruitless.

Be smart: This hearing, like those that preceded it, looked for solutions to longstanding problems involving failures of content moderation, age verification, protection of teens' mental health and enforcement of laws against child sexual abuse material (CSAM).

  • But the tech industry is always inventing new services that end up being put to bad use.

AI-based techniques power most of the filters and screens that Meta and other giant platforms use to proactively identify and remove CSAM.

  • The latest AI image generators, video engines and chatbots, however, promise a vast wave of new challenges for parents and youngsters.
  • Aside from a brief mention by X CEO Linda Yaccarino, AI barely came up.

Of note: A trove of 2021 Meta emails released before the hearing shows Zuckerberg and others at Meta were focused on how user safety concerns might play out in the company's plan to build the metaverse.

  • In one email to Zuckerberg, Meta president of global affairs Nick Clegg called additional investments in the company's "external narrative of well-being" an "opportunity to be proactive for the metaverse."

The big picture: Senators used Wednesday's event to demand the CEOs endorse their proposed bills — right now.

  • The tech executives ran down a laundry list of policies they're implementing to help protect children.
  • But lawmakers remained unsatisfied and called for further regulation, highlighting a half-dozen bills that have remained stalled.
  • "I just want to get this stuff done. I'm so tired of this. It's been 28 years. And the reason they haven't passed is because of the power of your companies," Sen. Amy Klobuchar (D-Minn.) said.

Between the lines: The crowd in the room was tense and loud, and played a key role in applying pressure on the witnesses.

  • Families of children who, their parents say, were exploited online, along with activists, filled the room and audibly reacted to exchanges between lawmakers and executives.

The hearing unfolded in the shadow of X's response to the circulation of sexually explicit AI-generated images of Taylor Swift.

  • Those images were viewed millions of times, and X blocked users from searching for the singer's name for at least several hours.
  • "Someone as powerful as her, someone as strong as her, was impacted by Big Tech's negligence," Arielle Geismar, a team member of Design It For Us, said at a press conference by the activist group after the hearing. "As a young woman, I am horrified by the potential this could happen to me."

Zuckerberg played defense and tangled more spiritedly with lawmakers than he had in many past appearances on Capitol Hill.

  • When Sen. Marsha Blackburn (R-Tenn.) said Meta was trying to become "the premier sex trafficking platform," Zuckerberg interrupted to call that "ridiculous."
  • But when Sen. Josh Hawley (R-Mo.) demanded Zuckerberg deliver a personal apology to families who filled the seats behind him, the CEO stood up, turned around, and — with his mike either turned off or not working — did just that.

Our thought bubble: Senators kept blaming the tech executives for the Hill's failures. Tech lobbying is powerful in Washington — but lawmakers are ultimately responsible for transcending the influence of money in politics and getting legislation passed.

2. Allen Institute releases fully open-source LLM

Image: Allen Institute for AI

The Allen Institute for AI (AI2) today released a fully open-source large language model designed to help researchers better understand what's taking place under the hood, Ina reports.

Why it matters: The move comes as some have argued open-source alternatives are needed to avoid concentration of power, while others worry that such models could be harder to regulate.

Details: OLMo 7B, as the model is known, was created with support from AMD, Databricks, and researchers at Harvard and the University of Washington.

  • AI2 (created by Microsoft co-founder Paul Allen) is going further than most companies offering open-source models, releasing not only the model and its weights, but also its full training data and pre-training code.
  • OLMo 7B will be available for direct download on Hugging Face and via GitHub (see the loading sketch after this list).
  • OLMo stands for "open language model."
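For readers who want to poke at the model themselves, here is a minimal sketch of what downloading and running OLMo 7B from Hugging Face might look like with the transformers library. The repo ID "allenai/OLMo-7B", the trust_remote_code flag and the sample prompt are assumptions for illustration, not details confirmed in AI2's announcement.

```python
# Hypothetical sketch: pulling OLMo 7B from Hugging Face and generating text.
# The repo ID and the need for trust_remote_code are assumptions; check AI2's
# release notes for the actual instructions and hardware requirements.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "allenai/OLMo-7B"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

# Run a short completion as a sanity check that the weights loaded correctly.
inputs = tokenizer("Open language models let researchers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because AI2 is also releasing the full training data and pre-training code, researchers can go beyond this kind of inference and inspect how the model was built in the first place.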

Between the lines: Releasing so much data, along with the models themselves, allows researchers to better understand how such systems work, the Allen Institute says.

  • For example, one can see whether a model has truly learned a new set of skills or just memorized the answers to a particular test.
  • More open models, which include the training and evaluation systems, could also eliminate redundancies that drive up the environmental impact of AI, the Allen Institute says, noting that a typical training run for a large language model produces emissions equivalent to those of nine U.S. homes over a full year.

What they're saying: AI2 CEO Ali Farhadi says that having a truly open, state-of-the-art large language model will "fundamentally change how researchers and developers learn about and build AI."

  • "Access to truly open models has never been more critical for the development of AI," Farhadi said in a statement to Axios.
  • While commercialization isn't inherently bad, Farhadi said the limited transparency offered by many large language models made it hard to assess performance as well as issues related to toxicity and bias.
  • Executives from Meta and AMD praised the Allen Institute's effort. "Open foundation models have been critical in driving a burst of innovation and development around generative AI," Meta chief AI scientist Yann LeCun said in a statement.
  • Meta has been a vocal advocate for open-source AI and has released its Llama 2 model for commercial use. "The vibrant community that comes from open source is the fastest and most effective way to build the future of AI," LeCun said.

3. Training data

4. + This

It's Taylor Drift's snowstorm, and Minnesotans are just living in it.

Thanks to Scott Rosenberg and Meg Morrone for editing this newsletter.