Illustration: Sarah Grillo/Axios

In the U.S. and Europe, Big Tech is under fire — hit with big fines and the threat of stiff regulation — for failing to thwart the profound consequences of its inventions, including distorted elections, divided societies, invaded privacy, and sometimes deadly violence.

Driving the news: Now, artificial intelligence researchers, facing potentially adverse consequences from their own technology, are seeking to avoid being ensnared by the same "techlash."

  • AI researchers are working to limit dangerous byproducts of their work, like race- or gender-biased systems and supercharged fake news.
  • But the effort has partly backfired into a controversy of its own.

What's going on: As we reported, OpenAI, a prominent research organization, unveiled a computer program last week that can generate prose that sounds human-written.

  • It described the feat and allowed reporters to test it out (as we did), but OpenAI said it would withhold the computer code; a sketch of the kind of text generation at issue appears after this list.
  • It said it was attempting to establish a new norm around potentially dangerous inventions in which, for the sake of preventing their possible misuse, researchers would continue their work but keep some advances under wraps in the laboratory.
  • In the case of its own new invention, OpenAI said it feared that somebody could use it to effectively develop a weapon for mass-producing fake news.
  • This was the first time a major research outfit is known to have used the rationale of safety to keep AI work secret.
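OpenAI did release a much smaller version of the model alongside its announcement, and checkpoints of that size are now available through the open-source Hugging Face transformers library. The sketch below is a rough, present-day illustration of this kind of text generation, not OpenAI's withheld full model; the prompt and sampling settings are illustrative assumptions.

```python
# A minimal sketch: generating text with the small, publicly released
# GPT-2 checkpoint via the Hugging Face transformers library.
# The prompt and sampling settings are illustrative assumptions,
# not OpenAI's actual demo configuration.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; top-k truncation keeps the output more coherent.
output_ids = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```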

But the move met massive blowback: AI researchers accused the group of pulling a media stunt, stirring up fear and hype, and unnecessarily holding back an important research advance.

Why it matters: Against the backdrop of the techlash, we're seeing a messy debate play out around an urgent question: what to do with increasingly powerful "dual-use" technologies — AI that can be used for good or for ill.

  • The outcome will determine how technology that could cause widespread harm will — or won't — be released into the world.
  • "None of us have any consensus on what we're doing when it comes to responsible disclosure, dual use, or how to interact with the media," Stephen Merity, a prominent AI researcher, tweeted. "This should be concerning for us all, in and out of the field."

Details: OpenAI says its partial disclosure was an experiment. In a conversation with two top AI researchers from Facebook, OpenAI's Dario Amodei held up social media companies as a cautionary tale:

"The people designing Twitter, Facebook, and other seemingly innocuous platforms didn't consider that they might be changing the nature of discourse and information in a democracy … and now we're paying the price for that with changes to the world order."
  • Several researchers praised OpenAI's decision to withhold code as a vital step toward rethinking norms. "I think it's amazingly responsible," said Kristian Hammond, a Northwestern professor and CEO of AI company Narrative Science.

But other academic researchers came down hard.

  • While the new program's output is often impressive, its researchers admit that they simply used a scaled-up version of previous work. It's therefore very likely that someone could replicate the feat at relatively minimal cost. OpenAI says that's why it sounded the alarm.
  • But Sam Bowman, a professor at New York University, said the move "feels like a worst-of-both-worlds compromise that slows down the research community without actually having a real long-term safety impact."
  • Several experts said OpenAI's warnings of potential societal impacts are exaggerated. "We're still very far away from the risks," says Anima Anandkumar, a Caltech professor and Nvidia's machine learning research director. She said it's too early to be withholding any research at all.

What's next: Computer science is lurching toward the same robust discussion that biologists and nuclear scientists had before them — when to circumscribe openness in the name of safety and ethics.

  • Notably, Google recently said it will consider potential harms of its AI research before deciding to publish it.
  • "I'm not sure what alternative there was," says Jeremy Howard, a founder of AI company Fast.ai. "I think OpenAI did the right thing here, even if they communicated it sub-optimally."
