
Illustration: Sarah Grillo/Axios
Amid rising worries about the development of human-level machine intelligence, a prominent Berkeley research organization has become the first to stop openly publishing its findings by default.
Why it matters: The move by the Machine Intelligence Research Institute is a break from the tradition of openness in computer-science research.
- We’ve reported before on researchers’ questions about the right amount of openness and transparency when discussing potentially dangerous work. This is the most extreme reaction we’ve seen yet.
- It comes as AI researchers are quietly deliberating how to react to the potential malicious use of AI.
- MIRI worries that open publishing could aid progress toward an unchecked super-intelligent machine.
- Today, AI researchers routinely post their papers first to arXiv, a free, open, non-peer-reviewed repository for scientific papers.
Details: MIRI, which has received funding from AI dystopians like Peter Thiel and the Elon Musk-backed Future of Life Institute, posted a strategy document on Thanksgiving outlining its new policy of "nondisclosed-by-default research."
- "Most results discovered within MIRI will remain internal-only unless there is an explicit decision to release those results, based usually on a specific anticipated safety upside from their release," wrote MIRI executive director Nate Soares.
"It does seem to me to be useful that an AI research organization has taken this step, if only so that it generates data for the community about what the consequences are of taking such a step."— Jack Clark, OpenAI policy director
OpenAI, another prominent AI research nonprofit, wrote in its charter that it expects that "safety and security concerns will reduce our traditional publishing in the future."
- Clark said OpenAI is still in the early stages of fulfilling that goal, but that the question MIRI is grappling with, namely when it’s best to keep research private, is worth debating.