Axios AI+

January 17, 2024
Hi, it's Megan Morrone, filling in for Ina and Ryan, who are at the World Economic Forum in Davos — mostly sleeping when I work, working when I sleep, and probably pretty darn cold. Today's AI+ is 1,306 words, a 5-minute read.
1 big thing: At Davos, Altman sees "uncomfortable" choices
Sam Altman and Ina Fried at Axios House in Davos, Switzerland. Photo: Dani Ammann for Axios Events
OpenAI's next big model "will be able to do a lot, lot more" than the existing models can, CEO Sam Altman told Axios in an exclusive interview at Davos on Wednesday.
Why it matters: Altman told Axios' Ina Fried that AI is evolving much more rapidly than previous technologies that took Silicon Valley by storm. But he also conceded that the evolution and proliferation of OpenAI's technology will require "uncomfortable" decisions.
- Altman believes future AI products will need to allow "quite a lot of individual customization" and "that's going to make a lot of people uncomfortable," because AI will give different answers for different users, based on their values and preferences and possibly on what country they reside in.
- "If the country said, you know, all gay people should be killed on sight, then no...that is well out of bounds," Altman tells Axios. "But there are probably other things that I don't personally agree with, but a different culture might...We have to be somewhat uncomfortable as a tool builder with some of the uses of our tools."
- Asked if future versions of OpenAI products might answer a question differently in different countries based on that country's values, Altman said: "It'll be different for users with different values. The countries issue, I think, is somewhat less important."
What's coming: We are headed towards a new way of doing knowledge work, Altman said.
- Soon, "you might just be able to say 'what are my most important emails today,'" and have AI summarize them.
- Altman says AI advances will "help vastly accelerate the rate of scientific discovery." He doesn't expect that to happen in 2024, "but when it happens it's a big, big deal."
Altman admitted he's "nervous" about AI's impact on elections around the world this year, but was defensive about OpenAI's investments in that area.
- Altman said he wanted to avoid "fighting the last war" on election misinformation.
- He didn't specify how many OpenAI staff would work on election troubleshooting, but rejected the idea that simply having a large election team would solve election problems. OpenAI has far fewer people devoted to election security than companies like Meta and TikTok.
- In recent weeks OpenAI has announced it would ramp up efforts to reduce misinformation and abuse of its models related to more than 60 elections taking place around the world in 2024.
Flashback: Altman was ousted as CEO last November before being swiftly reinstated. The tensions with the board had been driven by an internal debate over growth vs. guardrails on the company's powerful technology.
The intrigue: Altman said there's no update on whether his close associate and OpenAI co-founder Ilya Sutskever is coming back to the company in a senior role, after he resigned in the wake of the board debacle.
- Surprisingly, Altman admitted he "isn't sure on the exact status" of Sutskever's employment.
- Altman's interests and investments extend well beyond OpenAI — from nuclear fusion to chip-making — leaving many to wonder if he is paying enough attention to overseeing a technology he says could destroy humanity.
- Altman said "OpenAI is what I am doing" and that it was a "misrepresentation" to say he is engaged in projects that don't support OpenAI. He said he will continue to support startups he was funding prior to joining OpenAI.
Driving the news: Altman defended content licensing deals signed by OpenAI with major publishers including AP and Axel Springer, and took a swipe at the New York Times, which is suing OpenAI for copyright infringement.
- Altman said OpenAI doesn't need NYT content to build successful AI models, but dodged when asked if he would oversee the creation of a model based only on licensed and truly public domain content: "I wish I had an easy answer," he said.
- "We can respect an opt-out" from companies like the NYT, he said, "but NYT content has been copied and not attributed all over the web," and OpenAI can't avoid training on that.
What they're saying: Altman's advice for CEOs stuck figuring out the best use of AI for their company is: "How can I make my internal workflow more efficient?"
- Altman's wisdom after his 2023 experience of being fired and rehired as CEO: "Don't let important but not urgent problems fester."
2. DeepMind COO is optimistic about AI and science
Illustration: Maura Losch/Axios
AI is unlocking a "completely different understanding of what's out there" and shaking up materials science and biology, Google DeepMind chief operating officer Lila Ibrahim told Axios' Alison Snyder at the World Economic Forum in Davos.
Why it matters: In 2023, Google DeepMind revealed it had used an AI tool called GNoME to discover 2.2 million possible new materials, Ryan reports.
- The discovery of these potential new materials could offer shortcuts to new types of chips, batteries and solar panels, among other innovations.
- The company has also helped speed up computer coding and developed AlphaFold, an AI tool that solved a decades-old biology problem: understanding and predicting the exact shapes of proteins, which enable all living things to function.
What they're saying: Ibrahim said she is now "more optimistic" about AI than a year ago, when the arrival of ChatGPT dominated the World Economic Forum annual meeting.
- Last year saw rapid advances in AI developers collaborating with each other and with governments to manage the technology's risks, she said.
- Ibrahim thinks that it will be easier to teach young AI users an ethical framework for the technology than it will be to teach older generations, who went digital through the internet and social media.
What's next: Ibrahim's recipe for increasing AI trust is to reach out to those left behind by previous technical and economic advances.
3. The AI productivity boost guessing game
Illustration: Eniola Odetunde/Axios
When it comes to the economic impact of artificial intelligence, is 2024 going to be more like 1987 or more like 1995?
Driving the news: That, in a nutshell, is the question beneath much of the (abundant) AI discussion taking place among global leaders and top thinkers at the World Economic Forum this year, reports Axios' Neil Irwin.
Why it matters: In the 1990s and early 2000s, a revolution in information technology helped fuel a productivity boom — and with it, an environment of rapid growth, rising wages and low inflation.
- Many in the Davos crowd envision something similar — or more significant — emerging from AI advances. Less clear is when.
State of play: It takes time for companies to learn how to use technological innovations to their maximum effect to get more output from their workers. In Davos, many talks have centered on AI's leap from an interesting novelty to the core driver of business efficiency.
What they're saying: Clara Shih, the CEO of Salesforce AI, said at Axios House on Monday that the companies Salesforce works with "are already seeing productivity gains" from AI tools.
Yes, but: Companies usually don't rework their processes overnight. Software must be vetted for security and accuracy. Employees need to be retrained. That may paradoxically slow productivity growth during implementation.
4. Training data
- A security flaw in GPUs from Apple, AMD and Qualcomm could expose queries, responses and more from LLMs. (Wired)
- "All the training data has been stolen," Time magazine owner and Salesforce CEO Marc Benioff said at Davos. (Bloomberg)
- Elon Musk demanded voting control over a quarter of Tesla's stock to keep working on self-driving cars and humanoid robots inside the company. (Axios)
- Hackers held a private event to let lawmakers play with AI chatbots to show them how insecure they can be. (Axios)
- A year-long study showed that search engine results are getting overrun by SEO spam, probably boosted by AI. (404 Media)
- Uber incentivized more drivers into Teslas to meet its climate commitment. (Axios)
- After the U.S. Supreme Court refused to hear appeals from both Apple and Epic Games in their fight over app store changes that could cost Apple billions of dollars, Apple updated its App Store guidelines. (9to5Mac)
5. + This
McSweeney's Internet Tendency offers a complete list of "all" of the types of science fiction. These are our favorites:
- Technology solves problems 🤩 (future good)
- Technology creates problems 😕 (future bad)
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.