AI's scientific path to trust

Illustration: Sarah Grillo/Axios
LONDON — Top researchers this week said scientific discoveries using AI, like new drugs or better disaster forecasting, offer a way to win people's trust in the technology, but they also cautioned against moving too fast.
Why it matters: Public trust in AI is eroding, putting the technology's wide adoption and potential benefits at risk.
Driving the news: At a forum in London hosted by Google DeepMind and the Royal Society, a roster of renowned scientists described how AI tools are transforming and turbocharging science.
- Efforts range from the search for beneficial new materials to the quest to build a quantum computer to the potential for self-driving labs.
Between the lines: In industry, the buzz around AI has largely centered on the technology's capacity to streamline business — along with the possibility that it might advance toward artificial "superintelligence."
- Experts at the London event highlighted AI as a scientific tool and argued that the scientific method will best serve researchers seeking to leverage advanced AI models and fathom their complexity.
But the painstaking, thorough work of science can be at odds with the "move fast and break things" ethos of the tech industry that is driving AI's development.
- Scientists in the U.S. also face a tide of skepticism about their work.
What they're saying: "I think the scientific method is, arguably, maybe the greatest idea humans have ever had," DeepMind CEO Demis Hassabis told the London gathering.
- "More than ever we need to anchor around the method in today's world, especially with something as powerful and potentially transformative as AI," he said, adding that he thinks neuroscience techniques should be used to analyze AI "brains."
- "I feel we should treat this more as a scientific endeavor, if possible, although it obviously has all the implications that breakthrough technologies normally have in terms of the speed of adoption and the speed of change."
Zoom in: A slew of recent papers show how scientists are trying to put AI to work on some of nature's most complex problems.
- Scientists from the Arc Institute built an AI model trained on the DNA sequences of microbes rather than words and sentences of text.
- This "genomic foundation model" can predict how a DNA change affects an organism and generate realistic genomes from scratch, which could one day help scientists to engineer biology with more ease and precision.
- Researchers have also developed an AI model of the chemical modifications that turn genes on and off.
An ambitious AI-driven effort is underway to map the human body's 37.2 trillion cells.
- It has yielded discoveries — including insights into the development of the human skeleton and the immune system — that were published last week in more than 40 papers from the Human Cell Atlas consortium.
"It's just dizzying. I've never seen anything like it in my life," Eric Topol, founder and director of Scripps Research Translational Institute, said at the event.
Yes, but: "We're moving so fast, we've got to be careful," said Alison Noble, a professor of biomedical engineering at Oxford University.
- "It's great to hear about all the excitement" around AI, she said, but researchers in the field need to re-commit to the basics of the scientific method, like being able to reproduce results from experiments.
- Scientists have expressed concern that some AI tools are being used without an understanding of the nuances of their abilities and limits — creating a reproducibility crisis that could undermine trust in both the science and the tools.
There also needs to be a shift in how AI-enabled discoveries are described, Denis Newman-Griffis of the Centre for Machine Intelligence at the University of Sheffield told Axios.
- Statements like "AI discovered new protein structures" ignore that people designed the algorithms, chose the data to train models, interpreted the AI's output, and "built the entire research system those tools are operating in," they said.
- "[W]e cede all the agency that we have" and paint a picture of AI as "nebulous, difficult to control, impossible to understand, and so directly opposite to the things that would make its use trustworthy."
The big picture: Google's top executives in attendance — Hassabis and James Manyika, senior vice president of research, technology and society — said they're trying to increase trust in AI by using it to solve practical problems, including forecasting floods and predicting wildfire boundaries.
- "What could be a better use of AI than curing diseases? To me, that seems like the number one most important thing anyone could apply AI to," Hassabis said.
What's next: Next month, Hassabis and his colleague John Jumper will collect their Nobel Prize for developing AlphaFold, an AI system that predicts the structure of proteins and is used for drug discovery and other problems.
- The challenge "is a lot of those things, as useful as they are, people may not immediately think of them as AI," Manyika said.
