How AI is rising up the ranks of the military
From programs that can process vast amounts of data for intelligence gathering to the future of autonomous weapons, AI is becoming key to military operations — and to international competition.
Why it matters: Military dominance in the future won't be decided just by the size of a nation's army, but by the quality of its algorithms.
- The U.S. still leads on integrating AI into defense, but some competitors like China have advantages of their own — and they're catching up.
Driving the news: The National Counterintelligence and Security Center said in a new paper published Friday that China and Russia are using legal and illegal methods to undermine and overtake U.S. dominance in critical industries including AI and autonomous systems, my Axios colleague Zach Basu writes.
- Officials warned in the paper that China has "the might, talent, and ambition" to surpass the U.S. in AI in the next decade.
- "I don't necessarily agree that China is ahead," says Eric Schmidt, former Google CEO and a co-author of the new book "The Age of AI." "But they want to be ahead and they're investing heavily to do so."
Yes, but: So is the U.S., particularly in defense.
- The Defense Department plans to spend $874 million on AI-related technologies. It also aims to increase the number of AI-related projects to more than 600, up 50% from current efforts.
- "I think you should think about this as us taking science, research, innovation and bringing it to the warfighter, to the marine, to the airmen, the sailor and the soldier so we can maintain that superiority," Sen. Mark Kelly( D-Ariz.) said at a recent Axios event.
Between the lines: Intelligence gathering and analysis is one of the fields where AI can make the biggest difference for defense right now, says George Hoyem, managing partner at In-Q-Tel (IQT), the venture investment unit for the U.S. intelligence community.
- IQT, in partnership with the U.S. Air Force and U.S. Special Operations Command, invested in an AI company called Primer.
- Its tool, Primer Command, uses natural language processing and computer vision to capture and analyze vast amounts of content from news and social media, helping analysts quickly surface novel information and filter out duplicates or suspected misinformation.
- "This is about deploying the best machines that we've got to find a signal in the noise that we as humans can work with," Primer CEO Sean Gourley says.
What's next: The big question facing the U.S. and other advanced militaries is how far they should go in the development of autonomous weapons — systems that could theoretically pick out and fire on targets on their own.
- Flashback: UN experts reported that drones under the control of the Libyan government appeared to autonomously target and attack opposing forces last year, in what may be one of the first documented uses of autonomous weapons.
- U.S. military officials have stressed the importance of keeping human oversight — but the faster and smarter AI becomes, the thinner the leash of human control may become.
- Meanwhile, competitors are making their own advancements.
Context: Thousands of AI scientists and a growing number of countries have called for a ban on the development of these systems, citing what Max Tegmark, head of the Future of Life Institute, calls the risk of proliferation beyond the battlefield.
- "These could very quickly become weapons of mass destruction, but they'd be much less expensive and harder to restrict than nuclear bombs," Tegmark says.
- "All they require is a quadcopter drone, facial recognition and lightweight weapons, all of which are cheap and accessible."
The other side: In a report released earlier this year, the National Security Commission on Artificial Intelligence called on President Biden to reject an international ban.
- "It's not practical to ban autonomous weapons because we can't define them," says Schmidt, who co-chaired the commission. "The government should enter into conversations about where the limits should be set."
The bottom line: The AI military race has begun.