Illustration: Sarah Grillo/Axios
Sometimes, a computer science researcher produces a paper whose findings, if published, might lead to societal harm. Now, some experts are questioning the default course of action: publishing the paper anyway, potential damage be damned.
Why it matters: The call to suppress some research challenges decades-old principles in computer science and could slow work in a field that drives the economy, helps define the future of work and is the subject of intense global competition.
The big picture: If the field does decide to withhold some work, it would join several scientific disciplines, including nuclear, military and intelligence research, that often keep results under wraps.
"A very core principle in the computer science community has been that openness is a fundamental good," said Brent Hecht, a Northwestern professor who co-authored a proposal for how the field should address potentially harmful research. But he said "recent events have made me and my colleagues question that value."
- Potentially harmful research should be published, says Hecht, but should include a discussion of "complementary technologies, policy, or other interventions that could mitigate the negative broader impacts."
The other side: In rare cases, it shouldn't be published at all, say Jack Clark, strategy and communications director at OpenAI, and Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security.
- One reason to suppress a finding would be if it were difficult to discover but easy to reproduce, Scharre says.
Would research be set back by selective openness?
- Definitely, says Clark, but it's a worthwhile tradeoff. Choosing not to pump the brakes is like "saying scientific progress is more important than societal stability."
A potential model comes from the field of computer security, several experts told Axios.
- When security researchers find a critical vulnerability in computer code, they generally notify the software developers before announcing their findings publicly, giving engineers time to patch the problem. This is known as responsible disclosure.
- For other types of computer science research, responsible disclosure could mean holding potentially harmful results until an effective countermeasure is ready to publish alongside them.
- A middle-ground approach might see researchers withhold some details while still publishing their results. For instance, a paper might withhold computer code or training data that could easily be repurposed in a harmful way.
Go deeper: Confronting AI's demons.