
Illustration: Sarah Grillo/Axios
Today's great powers are sliding toward a new arms race, this time on the battleground of lethal computer code, but experts say that rushing to develop autonomous weapons — which can be erratic and easily stolen — will make violent conflict more likely and yield no winners.
What's happening: The countries leading in artificial intelligence research (U.S., China, Russia, U.K., France, Israel and South Korea) are all developing weapons that hand off increasing portions of the killing process to computers.
In a new report out tomorrow, first reported by Axios, the Dutch nonprofit PAX describes "clear signs of the start of an AI arms race."
- "Automated turrets" guard South Korea's border, choosing targets autonomously but requiring a human go-ahead before they fire, says PAX, which lobbies companies not to make lethal AI.
- China sells stealth drones advertised to be capable of autonomous airstrikes.
- Israel uses autonomous drones to patrol its border with Gaza.
Unlike nuclear weapons, which can be guarded in bunkers and require deep expertise to manufacture, autonomous weapons are not easily locked away. "Once you develop them, these weapons will proliferate widely," Daan Kayser, the PAX report's lead author, tells Axios. "They'll also be used against you."
The reflex to treat AI as an arms race is creating a self-fulfilling prophecy, Paul Scharre, an autonomous weapons expert at the Center for a New American Security, warns in Foreign Affairs. The result is a security dilemma resembling the Cold War: as each country builds up its defenses, the world as a whole becomes more dangerous.
- "There are strong institutional incentives within national security bureaucracies to stay ahead of others," Scharre tells Axios. But chasing breakneck speed can push aside essential research into AI safety and cybersecurity.
- That could result in AI weapons that aren't thoroughly tested, make inexplicable decisions in the heat of the moment, or interact in unforeseen ways with other systems. "Some of the most powerful [AI] methods are quite alien to human intelligence," Scharre said.
But, but, but: The phrase "arms race" conjures images of the Cuban Missile Crisis, yet the bigger conflict is likely to play out on the economic battlefield, says Amy Webb, an NYU professor and founder of the Future Today Institute.
- "When we talk about an arms race, we tend to think about the wars that have already been fought," Webb tells Axios. "We're never talking about the wars of the future."
- Autonomous weapons are one small slice of next-generation conflict, she says. The greater threat: a Chinese economy made hyper-efficient by AI and backed by allied emerging economies that could "cripple" U.S. markets.
What's next: The U.S., unlike most countries, has a clear-cut policy against using weapons that target and kill people without any human input.
- But it has blocked attempts at the United Nations to establish a global ban on fully autonomous weapons, and it continues to develop increasingly autonomous fighting machines that still ask for a soldier's go-ahead before shooting.
- Later this year, a group of outside advisers called the Defense Innovation Board will recommend a set of ethical "AI principles" to the Pentagon.