Report: The U.S. is unprepared for the AI future

Illustration: Rebecca Zisser/Axios

Advances in artificial intelligence are supercharging propaganda, espionage, and cybercrime, threatening "the end of truth," says a new report from the Center for a New American Security, shared first with Axios.

Why it matters: Cybercriminals and governments are stocking up on the AI capabilities that will define the next generation of conflict. At the same time, automation and the rise of fake information are stirring up unrest. Together, these forces can turn society upside down.

The biggest coming danger is so-called "deepfakes": AI-doctored videos that falsely show people saying or doing things they never did, according to a co-author of the report, which was published today.

  • "We're moving into an era where seeing is no longer going to be believing," says Paul Scharre, director of the center's technology and national security program.
  • Deepfakes could be used as propaganda, for misinformation campaigns, or to derail diplomacy, Scharre says.
  • "I don't think we as a society are prepared for this."

The details: The report is an abridged encyclopedia of the good and ill that AI could bring to national security. Some scenarios show the potential upside of AI tools, but others would result in chaos if not challenged by smart AI countermeasures.

  • AI development is an arms race that will be won by the cleverest, best-funded side.
  • One example: AI can create fake but convincing audio, photos, and video. Algorithms developed in response can detect doctored images and videos, removing them from online platforms — but they're far from infallible.

The impact: If this wave is left unchecked, the report warns, the world could face "the end of truth" — a grisly fate the public has already had a taste of, thanks to misinformation released on social media sites and by the White House.

Convincing fakery will extend to targeted online scams, the report says:

  • Social engineering — the practice of tricking someone into giving up valuable information by using specific information about that person — will become much easier with AI.
  • "Right now, the one saving grace is that the sheer volume of information [about a person] makes it very difficult to do anything with it at scale," Scharre said. But algorithms can reassemble the data trail each of us leave behind into profiles, and use those to target us automatically.
  • Corporate and government surveillance will also benefit from AI tools that can piece together accurate approximations of people's habits, preferences and locations.

The report's examples range from the current to the far-off, like mind-reading AI. But they're not prophecies that humanity is doomed to suffer. "We’re not passive bystanders here," Scharre says. He urges the U.S. government to act.

The United States government does not have a plan to remain a global leader in AI. I fear that U.S. policymakers take America’s technological advantages for granted. We’re in a race and we need to compete to stay ahead.
— Paul Scharre, director of the technology and national security program, CNAS

Among the report's suggestions:

  • The U.S. government can work toward positive outcomes and counter the negative forecasts.
  • Private AI engineers must consider the repercussions of their technology, too, Scharre says. But he disagrees with Google's decision to walk away from a Defense Department program called Project Maven.
  • The U.S. could fall behind in its preparations for the AI-driven world, Scharre says. "China is doing this and the U.S. isn’t."

Go deeper:

  • How a power shift in AI funding could hobble the U.S. (Axios)
  • A Twitter bot that sounds just like the person it's spamming (The Atlantic)