Feb 21, 2019

Keeping AI away from the bad guys

Illustration: Sarah Grillo/Axios

In the U.S. and Europe, Big Tech is under fire — hit with big fines and the threat of stiff regulation — for failing to prevent the harmful consequences of its inventions, including distorted elections, divided societies, invaded privacy, and sometimes deadly violence.

Driving the news: Now, artificial intelligence researchers, facing potentially adverse consequences from their own technology, are seeking to avoid being ensnared by the same "techlash."

  • AI researchers are working to limit dangerous byproducts of their work, like race- or gender-biased systems and supercharged fake news.
  • But the effort has partly backfired, setting off a controversy of its own.

What's going on: As we reported, OpenAI, a prominent research organization, unveiled a computer program last week that can generate prose that sounds human-written.

  • It described the feat and allowed reporters to test it out (as we did), but OpenAI said it would withhold the computer code.
  • It said it was trying to establish a new norm for potentially dangerous inventions: researchers would continue their work but keep some advances under wraps to prevent possible misuse.
  • In the case of its new invention, OpenAI said it feared that somebody could use the program as a weapon for mass-producing fake news.
  • It was the first time a major research outfit is known to have cited safety as a reason to keep AI work secret.

But the move met massive blowback: AI researchers accused the group of pulling a media stunt, stirring up fear and hype, and needlessly holding back an important research advance.

Why it matters: Against the backdrop of the techlash, we're seeing a messy debate play out around an urgent question: what to do with increasingly powerful "dual-use" technologies — AI that can be used for good or for ill.

  • The outcome will determine how technology that could cause widespread harm will — or won't — be released into the world.
  • "None of us have any consensus on what we're doing when it comes to responsible disclosure, dual use, or how to interact with the media," Stephen Merity, a prominent AI researcher, tweeted. "This should be concerning for us all, in and out of the field."

Details: OpenAI says its partial disclosure was an experiment. In a conversation with two top AI researchers from Facebook, OpenAI's Dario Amodei held up social media companies as a cautionary tale:

"The people designing Twitter, Facebook, and other seemingly innocuous platforms didn't consider that they might be changing the nature of discourse and information in a democracy … and now we're paying the price for that with changes to the world order."
  • Several researchers praised OpenAI's decision to withhold code as a vital step toward rethinking norms. "I think it's amazingly responsible," said Kristian Hammond, a Northwestern professor and CEO of AI company Narrative Science.

But other academic researchers came down hard.

  • While the new program's output is often impressive, its creators admit that it is simply a scaled-up version of previous work. It's therefore very likely that someone could replicate the feat at relatively low cost. OpenAI says that's why it sounded the alarm.
  • But Sam Bowman, a professor at New York University, said the move "feels like a worst-of-both-worlds compromise that slows down the research community without actually having a real long-term safety impact."
  • Several experts said OpenAI's warnings of potential societal impacts are exaggerated. "We're still very far away from the risks," says Anima Anandkumar, a Caltech professor and Nvidia's machine learning research director. She said it's too early to be withholding any research at all.

What's next: Computer scientists are lurching toward the same robust discussion that biologists and nuclear scientists had before them — when to circumscribe openness in the name of safety and ethics.

  • Notably, Google recently said it will consider the potential harms of its AI research before deciding whether to publish it.
  • "I'm not sure what alternative there was," says Jeremy Howard, a founder of AI company Fast.ai. "I think OpenAI did the right thing here, even if they communicated it sub-optimally."
