AI is speeding up scientific discoveries and helping to spot new ideas
- Alison Snyder, author of Axios Science

Illustration: Gabriella Turrisi/Axios
AI is propelling scientists down the path of discovery — and could point them toward novel ideas they wouldn't have considered otherwise.
Why it matters: The world's problems — among them pandemics, chronic disease and climate change — are calling for scientific innovations, and fast. At the same time, discoveries that push science in new directions aren't happening as often as they once did.
What's happening: Algorithms are becoming "indispensable tools for researchers" to automate collecting and processing data, and to explore "vast spaces of candidate hypotheses to form theories," researchers recently wrote.
- They're pointing mathematicians toward promising solutions, generating drug candidates that can then be tested experimentally, and predicting the structures of proteins. They could also help filter the massive amounts of data generated by one of the world's largest experiments.
Yes, but: There are still challenges to bringing AI to a wider range of scientific activities.
- Models have to be standardized for AI to be practically integrated into labs and experiments, and many AI methods still operate as black boxes, which can limit science moving from the lab to the world, says Marinka Zitnik, a biomedical informatics professor at Harvard University and co-author of the recent paper in Nature.
- Algorithms will also have to be improved so they can be applied to a broader range of scientific problems, but that raises the concern that a model built for one purpose can be misused for another, she says.
- There are also "many processes related to scientific ideation and other social factors that are beyond what the field can do," she says.
AI needs to be aware of one of those social factors — how scientists do science, a recent paper argues, finding that including information about how scientists work not only speeds up discoveries but pushes scientists toward new ideas.
- Most AI approaches for generating discovery are "ignorant of what scientists are doing," says James Evans, a professor of sociology and computational science at the University of Chicago and co-author of the paper in Nature Human Behavior.
- They typically use the work of scientists — published experimental results — to make predictions.
- But the testing, predicting, collaborating and communicating that take place among scientists are also key drivers of discoveries and inventions.
How it works: To help AI predictions account for the culture of science, Evans and Jamshid Sourati first created "digital doubles," or simulations of the inferences scientists make. They mapped the relationship between a paper that mentions a desired property and other papers that mention the same property in a different context, mention a material from that paper, or are written by the same scientist. (A rough, illustrative sketch of this kind of link graph follows the list below.)
- They graphed millions of these links, which capture scientific conversations, interactions and collaborations, for different scientific problems: matching a desired electrochemical property to a compound, repurposing drugs, and finding therapies and vaccines for SARS-CoV-2.
- The researchers ran their "human-aware" AI model starting at different times in the past and found it could improve predictions of future discoveries by 50% (for biomedical fields where there is a lot of research) to 400% (for research around COVID vaccines that was occurring rapidly) compared to another AI prediction model trained with the same data but without "human awareness."
- It could also predict — with more than 40% precision — who would make those discoveries.
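For illustration only: the sketch below is not the authors' code, and the paper records, property and material names, author names and scoring weights in it are invented. It just shows, in miniature, the kind of property-material-author link graph the model builds over millions of papers, and how shared authorship can pull a candidate material toward a target property.

```python
# Conceptual sketch only: NOT the authors' implementation. Toy records and
# hypothetical names throughout; the real model works over millions of papers.
from collections import defaultdict

# Each toy "paper" lists the properties and materials it mentions, plus its authors.
papers = [
    {"id": "p1", "properties": {"thermoelectric"}, "materials": {"Bi2Te3"}, "authors": {"lee"}},
    {"id": "p2", "properties": {"thermoelectric"}, "materials": {"SnSe"}, "authors": {"lee", "kim"}},
    {"id": "p3", "properties": set(), "materials": {"SnSe", "PbTe"}, "authors": {"kim"}},
    {"id": "p4", "properties": {"superconducting"}, "materials": {"PbTe"}, "authors": {"garcia"}},
]

def score_candidates(target_property):
    """Rank materials by how strongly the literature links them to the target
    property, counting both direct co-mentions and shared authors (a crude
    stand-in for the "human-aware" links described above)."""
    # Authors who have already published on the target property.
    property_authors = set()
    for p in papers:
        if target_property in p["properties"]:
            property_authors |= p["authors"]

    scores = defaultdict(float)
    for p in papers:
        for material in p["materials"]:
            if target_property in p["properties"]:
                scores[material] += 1.0  # direct co-mention in the same paper
            # Social link: the paper shares authors with the property's literature,
            # so a scientist could plausibly "walk" from the property to this material.
            scores[material] += 0.5 * len(p["authors"] & property_authors)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(score_candidates("thermoelectric"))
# [('SnSe', 2.5), ('Bi2Te3', 1.5), ('PbTe', 0.5)] in this toy graph: SnSe is both
# co-mentioned with the property and reachable through a shared author.
```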
The big question: "Can we build more than a scoop machine?" Evans says.
- Instead of having AI predict discoveries and who will make them, the researchers tuned the model to "avoid the crowd," and it found scientific blind spots: surprising combinations and predictions that scientists were unlikely to find on their own. (A rough sketch of this kind of re-ranking follows this list.)
- These "alien" ideas were unlikely to be discovered by scientists, at least not until far in the future. They were plausible or strongly probable, and typically better than the human-generated ideas.
- That last point partly reflects an issue described in the first part of the study and in previous research: the pressure to publish papers incentivizes scientists not to stray far from other scientists and their findings, and to "wring the last drop of insight" from existing hypotheses and discoveries, Evans says.
- These are cultural aliens, not cognitive aliens, he says. "It's not like people couldn't come up with them if science education was organized in a different way."
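Again for illustration only: one simple way to make a ranking "avoid the crowd" is to penalize candidates in proportion to how much attention they already receive. The scores, paper counts and penalty weight below are invented, and this is not the tuning the authors used.

```python
# Conceptual sketch only: invented numbers, not the paper's method. Candidates
# the crowd is already studying are down-weighted, so plausible but neglected
# "alien" candidates rise toward the top of the ranking.
human_aware_score = {"SnSe": 2.5, "Bi2Te3": 1.5, "PbTe": 0.5}  # e.g. from a model like the sketch above
papers_mentioning = {"SnSe": 40, "Bi2Te3": 65, "PbTe": 12}      # crowd density: how much is already published

def avoid_the_crowd(scores, crowd, penalty=0.02):
    """Re-rank candidates after subtracting a penalty proportional to existing attention."""
    return sorted(
        ((m, s - penalty * crowd.get(m, 0)) for m, s in scores.items()),
        key=lambda kv: -kv[1],
    )

print(avoid_the_crowd(human_aware_score, papers_mentioning))
# The little-studied PbTe overtakes the heavily studied Bi2Te3 once crowding is penalized.
```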
But, but, but... The findings only matter if they deliver results, Evans says.
- He's now working with several labs to build experiments to test the approaches and principles they described. Ultimately, he envisions predicting conflicts and collaborations between scientists, or simulating conferences and harvesting ideas from them without attending the meetings.
What they're saying: The new paper brings some concrete evidence to the idea that there is power in combining how humans and machines think, says Iyad Rahwan, director of the Center for Humans and Machines at the Max Planck Institute for Human Development.
- But, he says, his own research suggests that even if a machine discovers alien solutions, people may not find them intuitive, and their biases may prevent them from using the machine's solutions.
What to watch: Like others, scientists are seeking an artificial general intelligence (AGI) for science.
- "We're trying to move beyond ... an [AI] that is trained on a dataset and makes a prediction on that dataset," says Rick Stevens, a computer science professor at the University of Chicago and researcher at Argonne National Laboratory, which recently published a report about how AI could be used to drive scientific discoveries in energy-related fields.
- "A general purpose AI system would be able to work on hundreds or thousands of general problems," Stevens says. But AI systems still need to incorporate causality and scientific laws that can be applied across problems, among other hurdles.
- "This is the beginning of the beginning of this."