The growing inequality of doing AI research
A new report shows that cripplingly high computational costs mean just a handful of big companies are able to do top-flight AI research.
Why it matters: AI will do more than any other technology to shape our future. If only the Googles and the Microsofts of the world have the resources needed to move the field forward, it will solidify their power — and possibly strangle innovation.
By the numbers: It likely cost the Microsoft-funded research group OpenAI more than $10 million to train GPT-3, its cutting-edge new natural language processing model, according to the annual State of AI Report published Thursday.
- That's because these models are created by essentially throwing ever-increasing amounts of data at the thorny problems of AI. Processing all of that data takes lots of computational power — and compute costs money.
- What this means "is that a handful of well-capitalized entities are now in control of artificial general intelligence research," says Ian Hogarth, a visiting professor at University College London and one of the co-authors of the report.
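To see why these models cost so much, a common rule of thumb puts training compute at roughly 6 FLOPs per parameter per training token. The sketch below applies it to GPT-3's published figures (175 billion parameters, ~300 billion training tokens); the dollar price per FLOP is a hypothetical 2020-era cloud figure, not a number from the report, so treat the result as order-of-magnitude only.

```python
# Back-of-envelope training-cost estimate for GPT-3.
# Rule of thumb: training FLOPs ~= 6 * parameters * training tokens.
params = 175e9   # GPT-3 parameter count (published by OpenAI)
tokens = 300e9   # approximate tokens seen during training (published by OpenAI)
flops = 6 * params * tokens   # ~3e23 FLOPs

# ASSUMPTION: an effective cloud price of $1 per 1e17 FLOPs, a rough
# 2020-era figure; real prices depend on hardware and utilization.
usd_per_flop = 1 / 1e17
cost = flops * usd_per_flop

print(f"~${cost / 1e6:.0f} million")
```

Even with generous assumptions, the estimate lands in the millions of dollars, the same order of magnitude as the report's $10 million-plus figure — and that excludes the many experimental runs that precede a final training run.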
Of note: OpenAI was originally founded as a nonprofit with the purpose of pursuing AI research for the benefit of all humanity.
- But last year it set up a for-profit arm and accepted a billion-dollar investment from Microsoft.
- Last month Microsoft announced it would be exclusively licensing GPT-3.
What they're saying: "This is a direct reflection of the cost of doing frontier research in compute and talent," says Hogarth.
Context: In the future, the costs of developing these massive models may become prohibitive even for the richest tech companies.
- The report found that without major research breakthroughs, reducing the error rate for ImageNet — a massive database used for visual recognition research — from 11.5% to 1% could cost 100 billion billion dollars. (Yes, that's two "billions.")
- All that compute requires lots of energy, which in turn means that AI research has a growing environmental footprint.
The bottom line: It doesn't make sense scientifically or ethically for high-level AI research to be done only by those companies that can afford it, but changing the paradigm won't be easy.