
Current machine learning models aren't yet up to the task of distinguishing false news reports, two new papers by MIT researchers show.

The big picture: After other researchers showed that computers can convincingly generate made-up news stories with little human oversight, some experts hoped the same machine-learning systems could be trained to detect such stories. But MIT doctoral student Tal Schuster's studies show that, while machines are good at detecting machine-generated text, they can't tell whether the stories themselves are true or false.

Details: Many automated fact-checking systems are trained on a database of human-written true and false statements called Fact Extraction and Verification (FEVER).

  • In one study, Schuster and his team showed that machine-learning-based fact-checking systems struggled with negated statements ("Greg never said his car wasn't blue") even when they could correctly verify the corresponding positive statement ("Greg says his car is blue").
  • The problem, the researchers say, is that the database is filled with human bias: the people who created FEVER tended to phrase their false entries as negative statements and their true entries as positive statements, so the models learned to rate sentences containing negations as false (a toy illustration of this shortcut follows the list).
  • That means the systems were solving a much easier problem than detecting fake news. "If you create for yourself an easy target, you can win at that target," said MIT professor Regina Barzilay. "But it still doesn't bring you any closer to separating fake news from real news."
  • Both studies were headed by Schuster with teams of MIT collaborators.
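
A minimal sketch of the shortcut described above, using a toy dataset and an off-the-shelf scikit-learn classifier rather than FEVER or the MIT systems (the data, model, and labels here are illustrative assumptions): when the false training claims all happen to contain negation words, a bag-of-words classifier learns to treat negation itself as the signal for "false."

```python
# Toy illustration (not the MIT study's data or models) of how labeling bias in a
# fact-checking corpus can teach a classifier a negation shortcut.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical FEVER-style training claims: the "refuted" ones are all phrased as
# negations, mirroring the human bias the researchers describe.
train_claims = [
    ("Greg says his car is blue", 1),           # 1 = supported / true
    ("The bridge opened in 1937", 1),
    ("The album sold a million copies", 1),
    ("Greg never said his car was blue", 0),    # 0 = refuted / false
    ("The bridge did not open in 1937", 0),
    ("The album never sold a million copies", 0),
]
texts, labels = zip(*train_claims)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A negated claim gets flagged as false regardless of any evidence, because the
# model keys on negation words rather than on facts.
print(model.predict(["Greg never said his car wasn't blue"]))  # likely [0]
print(model.predict(["Greg says his car is blue"]))            # likely [1]
```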

The bottom line: The second study showed that machine-learning systems do a good job of detecting machine-written stories, but a poor job of separating the true ones from the false ones.
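
A rough sketch of why that distinction matters, using the common perplexity-scoring approach rather than the study's actual detector (GPT-2 and the scoring function here are assumptions for illustration): detectors of machine-generated text measure how statistically predictable a passage is under a language model, and that score says nothing about whether the passage's claims are true.

```python
# Sketch of a perplexity-based detector for machine-generated text (an assumed,
# generic approach, not the MIT researchers' system).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return float(torch.exp(loss))

# A true statement and a false one can score almost identically: the detector
# measures style and predictability, not veracity.
print(perplexity("The moon orbits the Earth roughly once every 27 days."))
print(perplexity("The moon orbits the Earth roughly once every 3 days."))
```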

Yes, but: Automated text generation makes it more efficient to produce bogus news stories, but not every story written by a machine is untrue.
