Illustration: Aïda Amer/Axios

The threat of deepfakes to elections, businesses and individuals is the result of a breakdown in the way information spreads online — a long-brewing mess that involves a decades-old law and tech companies that profit from viral lies and forgeries.

Why it matters: The problem likely will not end with better automated deepfake detection, or a high-tech method for proving where a photo or video was taken. Instead, it might require far-reaching changes to the way social media sites police themselves.

Driving the news: Speaking at a Friday conference hosted by the Notre Dame Technology Ethics Center, deepfake experts from law, business and computer science described an entrenched problem with roots far deeper than the first AI-manipulated videos that surfaced two years ago.

  • The technology that powers them goes back to the beginning of the decade, when harmful AI-generated revenge porn or fraudulent audio deepfakes weren't yet on the map.
  • "We as researchers did not have this in mind when we created this software," Notre Dame computer scientist Pat Flynn says. "We should have. I admit to a failing as a community."

But the story begins in earnest back in the 1990s, along with the early internet.

  • When web browsers started supporting images, people predictably uploaded porn with celebrities' faces pasted on. That, it turns out, was just the beginning. Now, 96% of deepfakes are nonconsensual porn, nearly all of them targeting women.
  • "There was something much more dark coming if we sat back [in the 90s] and let people use women's faces and bodies in ways they never consented to," Mary Anne Franks, a law professor at the University of Miami, points out.

Section 230, part of the 1996 Communications Decency Act, lets internet platforms keep their immunity from lawsuits over user-created content even when they moderate or "edit" the postings.

  • Now, lawmakers are toying with revising it — or even (less likely) yanking it completely, Axios tech policy reporter Margaret Harding McGill reported this week.
  • The argument is that companies are not holding up their end of the bargain. "The responsibility lies with platforms. They are exploiting these types of fake content," Franks said. "We can't keep acting like they're simply innocent bystanders."

A massive challenge for platforms is dealing with misinformation quickly, before it can cause widespread damage.

  • Ser-Nam Lim, a Facebook AI research manager, described the company's goal: an automated system that flags potentially manipulated media to humans for fact checking (a rough sketch of that hand-off follows this list).
  • But, as I argued on a separate panel Friday, platforms are the first line of defense against viral forgeries. Facebook's human fact-checking can be painfully slow — in one recent case, it took more than a day and a half — and so the company's immediate reaction, or lack thereof, carries a lot of weight.
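To make that hand-off concrete, here is a minimal, hypothetical sketch of a flag-then-review loop in Python. The MediaItem and ReviewQueue types, the triage function, and the 0.8 threshold are illustrative assumptions, not details of Facebook's actual detectors or review tooling.

```python
# Hypothetical sketch of a "flag, then hand off to humans" triage loop.
# All names and thresholds are illustrative placeholders.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class MediaItem:
    media_id: str
    url: str


@dataclass
class ReviewQueue:
    items: List[MediaItem] = field(default_factory=list)

    def enqueue(self, item: MediaItem, score: float) -> None:
        # In a real system this would notify human fact-checkers.
        print(f"Flagged {item.media_id} for review (manipulation score {score:.2f})")
        self.items.append(item)


def triage(
    item: MediaItem,
    detector: Callable[[MediaItem], float],  # returns estimated probability of manipulation
    queue: ReviewQueue,
    threshold: float = 0.8,
) -> None:
    """Automated first pass: flag likely-manipulated media for human fact checking."""
    score = detector(item)
    if score >= threshold:
        queue.enqueue(item, score)


if __name__ == "__main__":
    queue = ReviewQueue()
    # Stand-in detector that returns a fixed score, purely for illustration.
    triage(MediaItem("vid-123", "https://example.com/vid-123"), lambda _: 0.93, queue)
```

The point of the split is that the automated detector only prioritizes what humans look at, so the end-to-end response time is still bounded by the human review step the panelists described as painfully slow.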

Go deeper: Social media reconsiders its relationship with the truth
