2. Primitive AI
Artificial intelligence, it is said, will solve many of our most vexing problems once fully developed. But despite recent advances, the technology remains rudimentary compared with those ambitions, according to a report published recently by Stanford University.
Where it stands: Some of the best progress has come in the narrow realm of games. In a paper released yesterday, Alphabet's DeepMind, a top AI lab, revealed AlphaZero, an algorithm that in just 24 hours taught itself chess and its Japanese cousin, shogi, and went on to beat the world-champion programs in both.
Why it matters: The advances in games are important, but life isn't a game, and AI progress outside games has been harder to measure.
"The most important thing for AI is to go from exceptional promise to use in actual everyday life," Martial Hebert, director of the Robotics Institute at Carnegie Mellon University, tells Axios.
Here are some key takeaways from the AI Index report:
- AI investment is through the roof: Between 2000 and 2017, the number of AI startups in the U.S. grew 14-fold, from 47 to 649. Annual venture capital investment in AI rose sixfold last year, to $3.5 billion (see chart above).
- Skills are in demand: About 11,100 U.S. job postings listed AI as a required skill last year. So far this year, the figure is 31,000.
- A race is on between the U.S. and China: Kai-Fu Lee, CEO of Beijing-based VC firm Sinovation, said in the report that China is making impressive progress in AI and has far more data — the main building block of robust AI — to work with than anyone else. "In this age of AI, I predict that the United States-China duopoly is not only inevitable. It has already arrived," he said.
- Experts worry about hype: In expert comment included in the report, Michael Wooldridge, head of computer science at Oxford University, said exaggeration has created an AI bubble, with the potential for a third "AI winter" — a period of disillusionment and a drawdown of investment as the field regroups.
"There are plenty of charlatans and snake oil salesmen out there, who are quite happy to sell whatever they happen to be doing as AI, and it is a source of great personal frustration that the press are happy to give airtime to views on AI that I consider to be ill-informed at best, lunatic fringe at worst," Wooldridge wrote.
Read the full piece here, written with Science editor Alison Snyder.