Illustration: Sarah Grillo/Axios

In the U.S. and Europe, Big Tech is under fire — hit with big fines and the threat of stiff regulation — for failing to thwart the profound consequences of its inventions, including distorted elections, divided societies, invaded privacy, and sometimes deadly violence.

Driving the news: Now, artificial intelligence researchers, facing potentially adverse consequences from their own technology, are seeking to avoid being ensnared by the same "techlash."

  • AI researchers are working to limit dangerous byproducts of their work, like race- or gender-biased systems and supercharged fake news.
  • But the effort has partly backfired into a controversy of its own.

What's going on: As we reported, OpenAI, a prominent research organization, unveiled a computer program last week that can generate prose that sounds human-written.

  • It described the feat and allowed reporters to test it out (as we did), but OpenAI said it would withhold the computer code.
  • It said it was attempting to establish a new norm for potentially dangerous inventions: to head off possible misuse, researchers would continue their work but keep some advances under wraps in the laboratory.
  • In the case of its own new invention, OpenAI said it feared that somebody could use it to effectively develop a weapon for mass-producing fake news.
  • This was the first time a major research outfit is known to have used the rationale of safety to keep AI work secret.

But the move met massive blowback: AI researchers accused the group of pulling off a media stunt, stirring up fear and hype, and unnecessarily holding back an important research advance.

Why it matters: Against the backdrop of the techlash, we're seeing a messy debate play out around an urgent question: what to do with increasingly powerful "dual-use" technologies — AI that can be used for good or for ill.

  • The outcome will determine how technology that could cause widespread harm will — or won't — be released into the world.
  • "None of us have any consensus on what we're doing when it comes to responsible disclosure, dual use, or how to interact with the media," Stephen Merity, a prominent AI researcher, tweeted. "This should be concerning for us all, in and out of the field."

Details: OpenAI says its partial disclosure was an experiment. In a conversation with two top AI researchers from Facebook, OpenAI's Dario Amodei held up social media companies as a cautionary tale:

"The people designing Twitter, Facebook, and other seemingly innocuous platforms didn't consider that they might be changing the nature of discourse and information in a democracy … and now we're paying the price for that with changes to the world order."
  • Several researchers praised OpenAI's decision to withhold code as a vital step toward rethinking norms. "I think it's amazingly responsible," said Kristian Hammond, a Northwestern professor and CEO of AI company Narrative Science.

But other academic researchers came down hard.

  • While the new program is often impressive, its researchers admit that it is simply a scaled-up version of previous work. It's therefore very likely that someone could replicate the feat at relatively low cost. OpenAI says that's why it sounded the alarm.
  • But Sam Bowman, a professor at New York University, said the move "feels like a worst-of-both-worlds compromise that slows down the research community without actually having a real long-term safety impact."
  • Several experts said OpenAI's warnings of potential societal impacts are exaggerated. "We're still very far away from the risks," says Anima Anandkumar, a Caltech professor and Nvidia's machine learning research director. She said it's too early to be withholding any research at all.

What's next: Computer science is lurching toward the same robust discussion that biologists and nuclear scientists had before them — when to circumscribe openness in the name of safety and ethics.

  • Notably, Google recently said it will consider potential harms of its AI research before deciding to publish it.
  • "I'm not sure what alternative there was," says Jeremy Howard, a founder of AI company Fast.ai. "I think OpenAI did the right thing here, even if they communicated it sub-optimally."
