Axios Science

January 19, 2024
Thanks for reading Axios Science. I've spent the last few days at the World Economic Forum where AI was the big topic. This edition covers what some of the world's top researchers along with business and government leaders are saying and doing about AI.
- It's 1,610 words, about a 6-minute read.
- Send your feedback and ideas to me at [email protected].
- Sign up here to receive this newsletter.
1 big thing: The world's AI trust divide
Illustration: Natalie Peeples/Axios
The world is hurtling forward with developing AI tools and rolling out products and services. That's causing anxiety for some and excitement for others attending the World Economic Forum this week in Davos.
The big picture: The divide in optimism about AI echoes broader concerns about how innovations are implemented and the role of scientists.
- New data released this week from the Edelman Trust Barometer found 74% of people surveyed said they equally trust scientists and peers for the truth about innovations, compared to 66% for company technical experts, 51% for CEOs and 47% for journalists.
- But people who think new technologies are being poorly managed trust their peers more than scientists.
- The trend is putting "innovation in peril" and rapid innovation "risks exacerbating trust issues, leading to further societal instability and political polarization," according to the report.
Driving the news: AI dominated the storefronts rented out by corporations, governments and non-profits in Davos and featured prominently in the week's programs and discussions.
- The rush by companies to roll out AI-based products and services is in part driven by fear of getting left behind, says futurist Amy Webb, who advises companies on long-term risks. "Some don't even have a problem to solve."
- "Trust is an obligatory adjacent phrase to AI," says Webb, the CEO of the Future Today Institute. "It doesn't mean people aren't working on it. But in this and other contexts, if the word trust isn't said, it's expected that something else, something more nefarious, is happening."
What's happening: Some AI developers highlighted how they are finding ways to build trust and transparency in their technologies.
- ClimateGPT, a new AI tool for researchers, policymakers and business leaders, allows users to pose questions about climate change and trace the data sources for responses. Blockchain technology creates a public ledger of any changes made to the model. (More on that below.)
- Others are bringing users into the collection of data used to train their AI models. Pittsburgh-based Gecko Robotics, which sponsored an Axios event in Davos, deploys sensors on power plants, bridges, and other key pieces of infrastructure and trains algorithms on that data along with information from experts at the site where the data was collected.
- It creates "a system of record to understand and predict what's going to fail when to extend the use of infrastructure and ensure through both that we don't have a catastrophic failure with huge environmental impact," says Jake Loosararian, the company's founder and CEO.
But it isn't necessarily a technical fix: Gabriele Ricci, chief data and technology officer at pharmaceutical company Takeda, says the company is focused on creating an internal culture that is "looking constantly at the way we interact with customers to build trust. It's built on every single interaction."
- "It's not a tech strategy, but a business strategy for the digital world," he says. "It's a mindset shift."
2. Part II: AI optimism
Not everyone in Davos was worrying about the dangers of AI. One trend came up repeatedly in discussions: the global divides in optimism about AI.
- Trust and enthusiasm about AI tend to be higher in emerging markets in the global south, according to a survey conducted in 31 countries last year by Ipsos.
- Those countries tend to have more young people (who the survey also found are more excited about AI than older generations).
- "They're the ones that are building the solutions, but they're also going to be the consumers of many of these solutions," Paula Ingabire, Rwanda's minister of information, communication technology and innovation, said in a panel discussion. "[T]here's an opportunity there that we cannot risk by waiting to first close the gap around digital adoption, but rather taking it in parallel."
Between the lines: People in Africa "see AI as an opportunity to access the global knowledge system and to form an identity," says Zeblon Vilakazi, vice chancellor and principal of Wits University in Johannesburg.
- Unlike other industrial revolutions that required a factory or other high capital costs to be in the game, this technological wave largely requires access to a device, he says. "We see it as an amazing enabler."
- "Our biggest fear is to be left out," he says. Some AI researchers on the continent are garnering recognition for their work, but there are no African countries in the world's top 50 measured by research output, according to one index.
- "When you're at the bleeding edge, you see dangers," Vilakazi says. "But if you are at the receiving end, all you see is opportunity."
3. ChatGPT for climate
Illustration: Tiffany Herring/Axios
An AI tool launched today at WEF aims to leverage generative AI to help users access climate change research and connect the dots on major research topics.
The big picture: AI tools powered by large language models — ChatGPT, Gemini, and others — captured the world's attention last year with their ability to generate human-like text on a wide range of topics.
- Now there is a push to optimize AI models to boost their performance on specific tasks.
- There is a need to connect information from the different disciplines that touch on climate change but there is "a mountain of scientific research to traverse," says Jonathan Dotan, founder of EQTY Lab and a co-developer of ClimateGPT.
Background: Foundational models that underpin GPT-4 and other generative AI tools can contain information, like Shakespeare's sonnets, that is irrelevant to questions about climate science, Dotan says.
- They can also be trained on misinformation and disinformation that can politicize the responses, he adds.
- The models are frozen in time and don't capture new climate science data, which is updated continuously.
How it works: The team — from AppTek, EQTY Lab, Erasmus.AI and Aachen University — developed a family of AI models that respond to questions about climate change research. Some are built from scratch using open-source scientific data and others are based on Meta's Llama 2 foundational model, which was then fine-tuned on climate information.
- The topics included climate breakthroughs, regenerative agriculture, geoengineering, central bank policies and climate-related risks.
- They report the model built on Llama 2 is on par with or outperforms other Llama 2 models on climate tasks.
Details: ClimateGPT is aimed at climate researchers, policymakers and business leaders and is available in more than 20 languages.
- Some aspects of the model are being released today on the developer platform Hugging Face, and researchers can apply for access to a hosted version of the tool. It was trained and hosted using renewable energy.
What they're saying: ClimateGPT is "designed to bring trust and transparency to the pressing challenges of accurate and authenticated climate data," according to a press release from EQTY.
- The genesis of ClimateGPT was a conversation at the 2022 WEF meeting, Dotan says.
The bottom line: "The pursuit of responsible AI systems is a critical aspect as important as, if not more than, the model performance itself," the researchers write.
4. Axios interview: Google DeepMind's Lila Ibrahim
Google DeepMind COO Lila Ibrahim and me at Axios House in Davos this week. Photo: Dani Ammann
AI is unlocking a "completely different understanding of what's out there" and shaking up materials science and biology, Google DeepMind chief operating officer Lila Ibrahim told me at an Axios event this week in Davos, covered by my colleague Ryan Heath.
Why it matters: In 2023, Google DeepMind revealed it had used an AI tool called GNoME to discover 2.2 million possible new materials, Ryan writes.
- The discovery could offer shortcuts to new types of chips, batteries and solar panels, among other innovations.
- The company has also helped to speed computer coding and developed AlphaFold, an AI tool that solved a decades-old biology problem: understanding and predicting the exact shape of proteins, which enable all living things to function.
What they're saying: Ibrahim said she is now "more optimistic" about AI than both a year ago, when the arrival of ChatGPT dominated the World Economic Forum annual meeting, and in 2018, when she joined DeepMind.
- Last year saw rapid advances in AI developers collaborating with each other and governments to manage the technology's risks, she said.
- Ibrahim thinks it will be easier to teach young AI users an ethical framework for the technology than it will be to teach older generations, who went digital through the internet and social media.
What's next: "We need to work with experts in their fields" to think about how to transform their sectors, she said.
- Ibrahim's recipe for increasing AI trust: Reach out to those left behind by previous technical and economic advances.
5. Worthy of your time
Long-COVID signatures identified in huge analysis of blood proteins (Miryam Naddaf — Nature)
The next generation of nuclear reactors is getting more advanced (Casey Crownhart — MIT Tech Review)
Where anteaters and anacondas roam, and ranchers are now rangers (Jennie Erin Smith — NYT)
6. Something wondrous
"Dataland: Rainforest" by Refik Anadol at the World Economic Forum in Davos. Photo: Alison Snyder/Axios
An owl morphs into a kingfisher and the splayed petals of a blue poppy become red ones packed together in a rose-like flower in the AI-generated worlds of an installation by artist Refik Anadol at the World Economic Forum this week.
The big picture: The work titled "Living Archive: Nature" (2024) is part of the artist's project to create a "large nature model" (LNM) that can provide insights about the environment and be used for education and advocacy.
- It "embodies the fusion of technology, art and science — a perspective that can shape powerful narratives to encourage collective action," according to text accompanying the installation.
How it works: Anadol and his team created LNM by training it on nature-related images, sounds, scents and text from the Smithsonian Institution, Cornell Lab of Ornithology and other institutions as well as data they collected around the world.
- The open-source model has a "singular focus: nature in its vast, beautiful expanse."
- "We have always blended science and technology," Anadol told Artnet. "[B]ut now we are literally working with scientists to make art."
Big thanks to Natalie Peeples on the Axios Visuals team and copy editor Carolyn DiPaolo.