
[Photo caption] Robotics competition, funded in 2015 by DARPA, the Pentagon's radical innovation lab. Photo: Chip Somodevilla/Getty
Amid the global race for supremacy in artificial intelligence, two more tech companies have joined Google in refusing to work on military and police surveillance projects, a sign of the brewing rift between tech players and the government.
Why it matters: Some experts worry that, to the degree AI-focused companies go their own way, the field may lose the long-term, fundamental focus of government-funded programs that have produced some of the world's most hallowed inventions.
Over the decades, numerous foundational technologies have emerged from U.S. military-funded research: among them, semiconductors, cryptography, the internet, GPS and mobile phones. "They arose out of war — or the fear of war" that characterized the Cold War era, says Will Carter of the Center for Strategic and International Studies.
- But recently Google, facing an internal rebellion by employees, bowed out of work on a Pentagon contract called Project Maven.
- Over the last week, facial-recognition company Kairos and emotion-detection firm Affectiva said they, too, will shun such contracts.
- This has coincided with a different pathway for AI development: The large majority of AI funding in the U.S. is coming from impatient private investors, not the federal government.
This shift in who funds and drives AI development means the private sector is leading in an area with "massive national security implications," says Gregory Allen, an adjunct fellow at the Center for a New American Security.
- “Fundamental, long-term, deep technical research and development: That’s always been the province of government,” Carter said.
- But private actors typically want results within three or so years, a horizon Carter says is too short to produce meaningful AI advances.
- One risk: While some longer-range, patient government funding for basic AI research continues, Carter says private money will focus on low-hanging fruit, such as new applications for existing deep-learning concepts that can turn a quick profit.
The backstory: This dynamic reflects a general lack of government leadership on AI, some experts say. Unlike China, Japan, South Korea, the U.K., France, Canada, and several other countries, the U.S. has not outlined a clear national vision for AI development.
- The Obama administration started down the path in late 2016 when it published a "strategic plan" for AI R&D.
- And that's what jolted China into action, said Jeff Ding, a researcher at Oxford University's Future of Humanity Institute. Thinking it was playing catch-up, China published an AI strategy in 2017 — parts of which looked suspiciously familiar.
- The Trump administration has taken small steps toward solidifying its own AI strategy, such as convening officials, business leaders, and academics for a D.C. summit in May. But the consensus from researchers and companies is that the White House is not doing nearly enough.
The U.S. could learn a thing or two from other countries, from their smart moves as well as their unwise ones.
- Ding says there was a "huge wave" of innovation in China's private sector after the government announced its 2017 plan.
- But, but, but: Much of what China is doing isn't directly transferable to the U.S. context, because of the Chinese government's more direct control over research. And some of its strategy might better serve as a cautionary tale than a gold standard.
- Close collaboration between the private sector and the security apparatus runs the risk of "incubating a surveillance state with public funds."
"The US only has a small advantage over China and some of the other nation-states. Without a national strategy I think we're at risk of falling behind."— Josh Elliot, director of machine intelligence, Booz Allen Hamilton
Go deeper:
- Why the U.S. needs a "Sputnik moment" in technology (Axios)
- What should a national AI strategy look like? (MIT Tech Review)
- The U.S. looked on as China made AI a national priority (NYT)