December 20, 2023

Hi, it's Ryan and this is the final AI+ of 2023. Have a great break! We'll see you again on Jan. 2. Today's newsletter is 1,264 words, a 5-minute read.

🏔️ Axios House @ Davos will be returning in 2024 as a hub for news-driven conversations and networking opportunities. Check out our week's programming and register to join here.

1 big thing: AI's colossal puppet show

Illustration: Sarah Grillo/Axios

Here's an early New Year's resolution for anyone who works with, deals with or writes about artificial intelligence: Stop saying, "AI did this" or "AI made that."

Why it matters: AI doesn't do or make anything on its own. It's a software tool that people imagined and invented — the only capabilities and goals it has are those that people give it, Axios' Scott Rosenberg writes.

  • The more we ascribe independence and autonomy to technology that's actually been designed and directed by specific people, the easier we make it for those people to shirk responsibility for its impacts and errors.

Be smart: Throw away your pictures of AI as a robot — and start imagining the technology as a big puppet instead.

  • The strings aren't always visible, and even AI developers themselves can't always find the connections between their intentions and the AI's behavior.
  • But everything an AI program does or says starts with the instructions and data that people have given it.

Driving the news: Many social media experts believe 2024 will see an explosion of generative-AI-produced synthetic media colliding with pivotal elections in the U.S. and around the world.

  • AI doesn't drive political conflict, but it can accelerate the production of misinformation and erode public trust.
  • Think of the flood of crappy AI-generated images of chainsaw-carved wooden dog sculptures on Facebook that 404 Media's Jason Koebler has chronicled — and then imagine a similar tactic deployed on behalf of a political campaign.
  • This is where you have to remind yourself not to think, "AI is filling up Facebook with crud!" In every instance, someone is using AI to create and spread that flood.

How it works: The urge to view AI as a human actor is inevitable: our species' wiring is tuned to identify human faces and personalities even where the world provides only hints of them.

  • Users have always eagerly anthropomorphized digital technology in a phenomenon known as the ELIZA effect, named after a simple 1960s chatbot that played therapist and prompted users to share personal secrets.
  • Then LLMs got good at mimicking human conversation and ChatGPT made that talk available to millions of users for free, setting us up for a mass uncontrolled experiment in the projection of human agency onto software.

The intrigue: AI makers find it convenient to create the impression that AI has a mind of its own because it provides cover for their perplexing inability to fully understand or explain the output of the tools they've invented.

  • As long as we're captivated by the cute, clever or unpredictable responses an AI program provides to our prompts, we're less likely to wonder why AI developers haven't done a better job of understanding why and how their tools arrive at their answers.

Between the lines: Arguing that AI has no goals of its own may remind us of the familiar gun-debate argument that "guns don't kill people, people kill people."

  • Similarly, AI doesn't tell lies. People use AI to tell lies.
  • In both cases, technology speeds up an existing human propensity for harm, and society has a legitimate interest in limiting that harm.
  • AI's special twist is that, unlike guns, we're inclined to see the technology as something autonomous.

The other side: Many advocates believe AI's capacity for good — in the form of universal education and health care delivered by personalized digital tutors and doctors — is so vast and urgently needed that any hesitation in developing the technology is foolish, or even criminal.

The bottom line: The AI debate needs less mysticism and magic and more rigor and clarity.

2. Senate AI forums: What really mattered

Photo illustration: Shoshana Gordon/Axios. Photo: Tom Williams/CQ-Roll Call, Inc via Getty Images

Senate Majority Leader Chuck Schumer ran a four-month series of AI Insight Forums, but we're not much closer to knowing how Congress will legislate AI, report Axios Pro's Ashley Gold and Maria Curi.

Catch up fast: The forums focused on AI innovation; copyright and IP; uses and risk management of AI; workforce; national security; guarding against doomsday scenarios; AI's role in our "social world"; transparency, explainability and alignment; and privacy and liability.

What they're saying: Axios' sources in the forum rooms agree they were helpful, boosted momentum for legislation and got people together who wouldn't normally be in the same room. Beyond that, tangible results aren't here yet.

  • Joseph Hoefer, AI policy lead at Monument Advocacy, said the forum organizers "deserve credit for bringing together stakeholders to foster the dialogue."
  • NAACP president Derrick Johnson wants 2024 election protections in place: "Under no circumstances should AI be allowed to participate in electioneering."

The other side: "It's disappointing that it hasn't come to much more than conversation at this point," said UnidosUS public policy senior director Laura MacCleery, who attended the first forum featuring tech's biggest players.

  • Some stakeholders found the forums exclusive, despite Schumer's efforts to host a variety of interests.

What we're watching:

  • Immigration: The AI industry wants tweaks to immigration laws to allow for more AI talent to live and work in the U.S., but that's close to impossible in the current Congress.
  • Money: Congress has yet to dole out the money needed for AI agencies to carry out President Biden's AI executive order.
  • Committee jurisdiction: All committees are going to want a piece of the pie on AI legislation and we predict petty jurisdictional battles.
  • Open vs. closed source AI: Lawmakers are clearly still figuring out the differences.
  • Doomsayers vs. realists: The fight is close and continues. Stay tuned!
  • Regulatory capture risk: Is Congress dancing to Sam Altman's tune?

The bottom line: "The only way we are going to manage the risks and opportunities presented by AI — and other emerging technologies — is if we consider all possibilities and consider multiple perspectives," Chang said.

A version of this story was published first on Axios Pro. Unlock more news like this by talking to our sales team.

3. FBI seizes BlackCat ransomware site

Screenshot: Law enforcement seizure notice on the BlackCat ransomware gang's dark web site.

Federal law enforcement officials announced Tuesday that they took down the online infrastructure belonging to the BlackCat ransomware gang and offered victims a decryption key, reports Sam Sabin.

Why it matters: The takedown disrupts the "second most prolific ransomware-as-a-service variant," per the Justice Department.

The big picture: BlackCat, also known as ALPHV or Noberus, is estimated to have hit U.S. critical infrastructure and targeted more than 1,000 victims in 2022 and 2023.

Details: The FBI worked with European and Australian authorities to spare victims a total of $68 million in ransoms, per the DOJ.

Yes, but: BlackCat is believed to be a rebrand of the DarkSide ransomware gang, which was behind the 2021 attack on Colonial Pipeline — and arresting the gang's Russia-based members is nearly impossible.

4. Training data

  • The FTC banned Rite Aid from using AI-based facial recognition for 5 years after the company used low-quality images and sent alerts that mistagged thousands of Black and Latino shoppers as shoplifters. (Bloomberg)
  • AI startup Suno unveiled a new music-making tool focused on creating original works rather than imitating real artists. (Axios)
  • Sony's Insomniac Games suffered a hack of more than 1 million files including game roadmaps, character art, budgets and details about the highly anticipated Wolverine game release. (Axios)
  • Trading places: Election specialist Katie Harbath is joining Duco Experts as global affairs officer, ending contracts with the Bipartisan Policy Center, International Republican Institute and the Integrity Institute.

5. + This

A Chevrolet dealer in Watsonville, California, put a ChatGPT-based bot on its website, and pranksters quickly persuaded it to sell them a Chevy Tahoe for $1.

Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.