Illustration: Lazaro Gamio/Axios
The cripplingly high computational cost of cutting-edge AI research increasingly means that only a handful of big companies can afford to do top-flight work.
Why it matters: AI will do more than any other technology to shape our future. If only the Googles and the Microsofts of the world have the resources needed to move the field forward, it will solidify their power — and possibly strangle innovation.
By the numbers: It likely cost the Microsoft-funded research group OpenAI more than $10 million to train GPT-3, its cutting-edge new natural language processing model, according to the annual State of AI Report published Thursday.
- That's because these models are created by essentially throwing ever-increasing amounts of data at the thorny problems of AI. Processing all of that data takes lots of computational power — and compute costs money.
Of note: OpenAI was originally founded as a nonprofit with the purpose of pursuing AI research for the benefit of all humanity.
- But last year it set up a for-profit arm and accepted a billion-dollar investment from Microsoft.
- Last month Microsoft announced it would be exclusively licensing GPT-3.
Context: In the future, the costs of developing these massive models may become prohibitive even for the richest tech companies.
- The report found that without major research breakthroughs, reducing the error rate for ImageNet — a massive database used for visual recognition research — from 11.5% to 1% could cost $100 billion billion. (Yes, that's two "billions.")