Allen Institute CEO: How AI "broke trust" with the public

Ali Farhadi speaks at Axios' AI+ Summit in New York. Photo: DP Jolly/Axios
By deploying artificial intelligence "prematurely at scale," the tech industry has broken trust with the public, Ali Farhadi, CEO of the Allen Institute for AI, told Axios' Ina Fried at the Axios AI+ Summit in New York Wednesday.
Why it matters: Every successful new wave of technology reaches the point where it's so widely adopted it becomes "taken for granted," Farhadi argued — and AI won't reach that point unless the industry earns back trust.
Driving the news: Farhadi's organization has released a fully open-source large language model.
- Other open-source models, like those released by Meta, share their code and sometimes their "weights," the numerical values that govern their operation.
- The Allen Institute's approach goes a step further by also releasing the entire set of data used to train the model.
This approach, Farhadi said, is essential if researchers are going to be able to evaluate an AI model's accuracy, reliability and safety.
- "Without actual openness, it's hard to be scientific about the evaluation," said Farhadi, who is also a professor of computer science and engineering at the University of Washington.
Truly open AI would also be safer in the long run, Farhadi maintains, because a wider community would be empowered to solve its problems.
- "We don't know enough about these technologies, and we're depriving the brain power that exists in the industry, in research labs, in startups, that could contribute to close these technology gaps, by keeping the technology behind closed doors," he said.
- Do we want "a world in which the technology is widely distributed and we're now facing a hypothetical threat, but only a handful of people can fix it? Or a world where we actually have millions of experts who can jump on the call and solve the problem?"
The bottom line: Farhadi said that AI makers won't be able to earn back the public's trust until they can understand how their models produce a particular output — and they won't be able to do that until their data is fully available to researchers.
Go deeper: Open software needs an AI rethink
