Axios AI+

November 13, 2024
Approximately a quarter of the bones in the human body are in the feet.
Today's AI+ is 1,070 words, a 4-minute read.
1. AI's "bigger is better" faith begins to dim
The generative AI revolution — built on the belief that models will keep getting wildly better as they grow crazily bigger — faces new fears that it's hitting a plateau.
Why it matters: Two years after ChatGPT launched, the tech industry, led by OpenAI, has bet billions on a scaling strategy — assemble mountains of chips and data, make tomorrow's large language models even larger than today's, and watch the technology advance. Those bets, always risky, could go bad.
Driving the news: Some OpenAI employees say the company's next flagship model, called Orion, won't improve on its predecessor, GPT-4, by as big a leap as GPT-4 made over GPT-3, both The Information and Reuters reported over the weekend.
- Since GPT-4 was released in March 2023, industry observers have debated whether OpenAI can top it — and how long the next generation would take to arrive.
- Both Google and OpenAI competitor Anthropic are also encountering setbacks and delays in efforts to advance the next generation of their key foundation models, Bloomberg reported today.
OpenAI CEO Sam Altman has repeatedly affirmed his faith in the "just make it bigger" approach.
- In his "The Intelligence Age" manifesto earlier this year, Altman wrote that deep learning gets "predictably better with scale": "To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems."
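- For the technically inclined: the "predictably better" claim refers to empirical scaling laws. One widely cited formalization, the "Chinchilla" law from DeepMind's 2022 compute-optimal training work, models loss L as a power law in parameter count N and training tokens D (roughly, L(N, D) ≈ E + A/N^α + B/D^β, with fitted constants), so adding parameters and data predictably pushes loss down. That smooth curve is the quantitative heart of the scaling bet.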
Yes, but: Computing power is not infinite, and it burns through dollars and electricity fast, while today's models have already been fed most of the quality data that's available (often without a clear legal right to use it).
- Efforts to train models with synthetic data — data that's AI-generated — have yet to prove dependable.
The other side: When engineers find that one strategy stops working, they look for another.
- The industry has already begun researching and implementing alternatives to "just make it bigger" that could keep improving generative AI models' performance.
- Researchers are trying to shrink models so they consume less computing energy but perform well on specialized tasks.
- OpenAI has rolled out a new "reasoning model," called o1 (formerly "Strawberry"), that improves performance by using more computing resources and taking more time as it answers users' questions.
What they're saying: Skeptics have regularly warned of limits to the just-make-it-bigger approach to improving LLMs.
- A year ago, Bill Gates said he believed GPT-4's successor would disappoint.
- AI critic Gary Marcus, who has long predicted a plateauing of generative AI advances, took a victory lap over the new reports.
Between the lines: The prospect of LLMs hitting a wall touches on an even bigger debate about how the field might reach its coveted goal of human-like intelligence (also known as artificial general intelligence, or AGI).
- Some researchers believe the path to AGI lies through data-heavy and power-hungry generative AI.
- But others are working on different AI techniques, including combining the neural networks that underpin generative AI with hard-wired knowledge. DeepMind used that approach to build an AI that can solve sophisticated math problems.
Our thought bubble: Moore's Law — the principle predicting that the number of transistors on a chip, and with it chip performance, would double roughly every two years — eventually hit a wall, too (as its namesake, Intel cofounder Gordon Moore, had predicted).
- But that took many decades, while generative AI's evolution has been far speedier (see the arithmetic below).
- The Moore's Law wall meant it became a lot harder for semiconductor makers to boost performance by squeezing more transistors into the same space on a chip.
- That just pushed the industry to find other ways to speed up computing, including the use of new materials and new kinds of lithography.
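- The arithmetic behind that timescale gap: doubling every two years compounds to roughly 1,000x over two decades (2^10 ≈ 1,024), so chipmakers had generations of headroom, while LLM builders have pushed training compute up by several orders of magnitude in just a few years.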
What we're watching: Wall Street is fretting about returns on the $200 billion Big Tech is spending on AI this year.
- The scaling approach is running up the technology's price tag to prohibitive levels.
- Applications outside of a few fields like software programming and customer service remain speculative. And consumer adoption may already be slowing down.
The bottom line: "[T]here's such an appetite and a yearning for something practical, real and not the pie-in-the-sky 'AI can do everything for you,'" says Karthik Dinakar, cofounder and CTO of Pienso, which helps people build custom AI models.
- "You can't GPT your way out of this," he says.
2. How tech will fare in Congress' lame duck weeks
AI bill negotiations are likely to be pared back during the lame duck legislative session as lawmakers shift their focus to the next Congress.
Why it matters: Lofty bipartisan ambitions to regulate AI are all but dead as Washington prepares for sweeping changes post-election.
State of play: There are just 23 legislative days left in Congress this year.
- Republicans, having won the Senate and on track to win the House, don't have a strong incentive to strike deals with Democrats when they can just wait until they have control.
- The only driving force for AI action is the National Defense Authorization Act (NDAA), a must-pass defense policy bill.
What we're watching: Negotiations between the House and Senate on which AI measures to include in the NDAA are ongoing, a House GOP leadership aide says.
- Top contenders include the bills passed by the House Science Committee in September, such as the CREATE AI Act to authorize the National AI Research Resource.
The House bipartisan AI working group's report is still on track to come out at the end of the year, Rep. Jay Obernolte's spokesperson Connor Chapinski says.
3. Training data
- President-elect Donald Trump announced that Elon Musk and Vivek Ramaswamy will lead a new Department of Government Efficiency (DOGE) to provide guidance "outside of government." (Axios)
- Former OpenAI CTO Mira Murati reportedly lured OpenAI researchers Mianna Chen, Barret Zoph and Luke Metz to her new, unnamed startup. (The Information)
- Meanwhile, Greg Brockman has returned from his sabbatical at OpenAI. His title is still president, but the company is evaluating new duties that would have him focus on specific technical challenges. (Bloomberg)
- Enterprise AI company Writer raised $200 million in a Series C round co-led by Premji Invest, Radical Ventures and ICONIQ Growth. (TechCrunch)
- Sources tell Mark Gurman that Apple is making a 6-inch wall-mounted tablet, slated for release in March 2025, that will use AI to control apps. (Bloomberg)
- Trump tried to ban TikTok in his first term. His second term could be the app's saving grace. (Axios)
4. + This
Excited for this set coming out in January.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Anjelica Tan for copy editing it.