Illustration: Rebecca Zisser/Axios
AVs have to interpret sensor data, determine their next moves and follow through on them, all of which requires exceedingly complex AI. To meet those demands in real time, computing for AVs will have to happen onboard the vehicle.
Why it matters: The alternative to onboard computing for driving functions would be vehicles relying on unstable network bandwidth for cloud computing while cruising at highway speeds. A specialized AI chip market has emerged to create platforms that can perform these complex computations almost instantaneously, while using as little power as possible.
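The stakes of that latency are easy to see with back-of-the-envelope arithmetic. The sketch below is purely illustrative; the speeds and latency figures are assumed example values, not measurements from any AV system.

```python
# Illustrative only: how far a vehicle travels while waiting on a
# computation. Speed and latency numbers are assumed examples.

def distance_covered_m(speed_kmh: float, latency_ms: float) -> float:
    """Meters traveled during a given latency window."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * (latency_ms / 1000)

# At roughly highway speed (110 km/h):
onboard = distance_covered_m(110, 10)   # assumed ~10 ms onboard inference
cloud = distance_covered_m(110, 150)    # assumed ~150 ms cloud round trip
print(f"onboard: {onboard:.1f} m, cloud: {cloud:.1f} m")
```

At an assumed 150 ms round trip, the car covers several meters before an answer comes back, and that's before accounting for dropped connections, which is the core argument for keeping driving-critical inference onboard.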
Where it stands: More than 70 companies have entered the AI chip market in the last few years, and another 100 chip startups have been announced. The next generation of chips could have an array of uses beyond advancing AV computing power.
- Nvidia is dominant in the GPU space. It offers a computing platform with a plethora of chip offerings for deep learning that take advantage of GPUs' massive parallel computation abilities. Next-generation technology, however, will introduce more efficient chip architectures designed expressly for deep neural network computation rather than graphics processing.
- Google has developed a circuit called a Tensor Processing Unit (TPU) that has run common neural networks 15–30 times faster than a comparable GPU, and used far less power to do so.
- Graphcore, a startup, develops its accelerators and software framework together, an approach it argues yields the fastest and most flexible platform for AI applications. Graphcore has developed an Intelligence Processing Unit (IPU), which offers promise for AV use.
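What all of these chips race to accelerate is, at bottom, large matrix multiplication: each neural-network layer multiplies a batch of inputs by a weight matrix, and every multiply-add is independent of the others. The NumPy sketch below illustrates that core workload; the layer sizes are arbitrary values chosen for illustration.

```python
# Minimal sketch of the workload GPUs, TPUs, and IPUs parallelize:
# one dense neural-network layer as a matrix multiply. Sizes are
# arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 1024))      # 32 inputs, 1024 features each
weights = rng.standard_normal((1024, 4096))  # one dense layer's weights

# Forward pass: about 32 * 1024 * 4096 ≈ 134M multiply-adds, all
# independent -- the massively parallel pattern these chips target.
activations = np.maximum(batch @ weights, 0)  # matmul followed by ReLU
print(activations.shape)  # (32, 4096)
```

A GPU runs thousands of these multiply-adds at once; the next-generation architectures mentioned above aim to do the same while stripping out graphics-specific hardware the computation never uses.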
What to watch: Roughly 12–15 companies are pulling ahead with their next-generation AI chips, but solidifying a lead will also require producing and scaling them. That could favor established players, or motivate smaller companies to merge with larger ones, as when Intel acquired Nervana Systems.
Bibhrajit Halder is the CEO of an early-stage AV startup and has worked on autonomous vehicles at Ford, Caterpillar and Apple. He is also a member of GLG, a platform connecting businesses with industry experts.