Meta's "general intelligence" quest
Meta has now joined Google and the Microsoft/OpenAI alliance in the quest for "artificial general intelligence," the AI industry's holy grail.
Yes, but: None of these companies can clearly define what AGI means — or how they will know they've achieved it.
Why it matters: Meta is now in a race with both Google and OpenAI, which is closely allied with Microsoft, to shovel cash, brains and energy at the daunting challenge of reproducing human-level reasoning in chips and code.
- ChatGPT maker OpenAI was founded as a nonprofit in 2015 specifically to develop AGI and make sure that it would "benefit humanity" — and not run amok.
- Google's DeepMind unit says its mission is to "solve intelligence." The DeepMind lab, which long operated as an independent research outfit, got merged with Google's Brain team last year, moving it closer to the firm's product side.
- A year ago, Microsoft researchers claimed they'd found "sparks of artificial general intelligence" in OpenAI's latest large language model.
The big picture: The AI industry has been trying to match human-level reasoning, knowledge and creativity since the term artificial intelligence was first coined in the 1950s.
- The goal proved much harder than envisioned by the field's pioneers, who thought it might take just a few years.
- Machine learning techniques powered by neural networks finally began to notch some wins over the past two decades, like mastering chess and Go.
- Researchers began using the phrase "artificial general intelligence" in the 2010s to describe the broader aim of endowing computers with human-level thinking.
- The label also aimed to distinguish that broad goal from smaller targets — like beating a Go master or passing the Turing test — that machine learning was already hitting.
The catch: No one agrees on how to define AGI, and no one has a testable way of determining whether any given AI project meets the bar.
- Zuckerberg waved his hands when The Verge asked him what he meant by "general intelligence."
- "I don't have a one-sentence, pithy definition... You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence."
- "But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition," Zuckerberg told The Verge.
The other side: At a side event at the World Economic Forum in Davos last week, Yann LeCun, Meta's top AI scientist, threw cold water on the notion, increasingly common in Silicon Valley, that AGI is just around the corner.
- "Why is it that a cat can plan a complex trajectory to jump on a piece of furniture, and yet we cannot do this with robots? Why is it that a 10-year-old can learn in one shot to clear out the dinner table and put it in the dishwasher, and we're nowhere near being able to do this with robots? It's not because we can't build the robots. It's because we can't make them smarter."
- "Obviously, we're missing something really, really big. So we're nowhere near human level intelligence, despite what you might hear from the most optimistic people who tell you AGI is just around the corner."
- "I'd be happy if by the end of my career, we can get something as smart as a cat."
Between the lines: One reason tech leaders are having trouble nailing down what AGI means is that humans continue to have enormous difficulty defining what "intelligence" itself means.
- Tests from IQ to the SAT have never reliably measured more than a fraction of what we might think of as human brainpower, and have always faced criticism over embedded cultural biases.
- Human intelligence takes myriad forms, with psychologists and neurologists, engineers and philosophers, diplomats and artists each viewing the concept from a different angle.
- Senses of perception, time and self-awareness all shape the human experience in ways that may or may not ever be duplicated by the creations of technology.
Today's tech leaders are approaching the problem of defining AGI the way Supreme Court Justice Potter Stewart famously described how to identify pornography: You'll know it when you see it.
- That sounds sensible but could prove treacherous, since society has a way of speedily assimilating tech breakthroughs, and today's miracle gets taken for granted tomorrow. That could make AGI a mirage-like goal that's perpetually just over the horizon.
What we're watching: With Google, Microsoft and Meta all on board the AGI train, Apple's absence is increasingly notable.
- The notoriously secretive iPhone maker could, of course, pursue the same quest in its own hush-hush labs.
- Or — as it often does — it could wait for the raw tech to mature and then swoop in to perfect and popularize AGI for the affluent mass market.