Illustration: Aïda Amer/Axios
AI systems intended to help people make tough choices — like prescribing the right drug or setting the length of a prison sentence — can instead end up effectively making those choices for them, thanks to human faith in machines.
How it works: These programs generally offer new information or a few options meant to help a human decision-maker choose more wisely.
Why it matters: Over-reliance on potentially faulty AI can harm the people whose lives are shaped by critical decisions about employment, health care, legal proceedings and more.
The big picture: This phenomenon is called automation bias. Early studies focused on autopilot for airplanes — but as automation technology becomes more complex, the problem could get much worse with more dangerous consequences.
"When people have to make decisions in relatively short timeframes, with little information — this is when people will tend to just trust whatever the algorithm gives them," says Ryan Kennedy, a University of Houston professor who researches trust and automation.
And in the courtroom, where algorithms help judges set sentences, human prejudice mixes in.
What's next: More information about an algorithm's confidence level can give people clues for how much they should lean on it. Matthew Lungren, a Stanford radiologist, says physicians in his study made fewer mistakes when they were given both a recommendation and an accuracy estimate.
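To make the idea concrete, here's a minimal sketch of pairing a recommendation with a confidence estimate. The model, features, and data below are hypothetical stand-ins, not the Stanford setup.

```python
# Minimal sketch: report a model's recommendation together with a
# confidence estimate, so the human knows how much weight to give it.
# The model, features, and data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                               # synthetic case features
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # synthetic outcomes

model = LogisticRegression().fit(X, y)

case = X[:1]                                                # one new case
recommendation = model.predict(case)[0]
confidence = model.predict_proba(case)[0].max()
print(f"Recommendation: {recommendation} (model confidence: {confidence:.0%})")
```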
Illustration: Aïda Amer & Eniola Odetunde/Axios
Scientists have long tried to use AI to automatically detect hate speech, which is a huge problem for social network users. And they're getting better at it, despite the difficulty of the task.
What's new: A project from UC Santa Barbara and Intel takes a big step further — it proposes a way to automate responses to online vitriol.
The big picture: Automated text generation is a buzzy frontier of the science of speech and language. In recent years, huge advances have elevated these programs from error-prone autocomplete tools to super-convincing — though sometimes still transparently robotic — authors.
How it works: To build a good hate speech detector, you need some actual hate speech. So the researchers turned to Reddit and Gab, two social networks with little to no policing and a reputation for rancor.
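For a rough picture of the detection half, a toy classifier might look like the sketch below: TF-IDF features plus a linear model. The labeled posts here are mild placeholders, and the pipeline stands in for the paper's actual model and its Reddit/Gab data.

```python
# Toy sketch of a hate-speech detector. The labeled posts are mild
# placeholders; the paper's real training data came from Reddit and Gab.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "you people don't deserve to exist",    # placeholder hateful example
    "get out of our country",               # placeholder hateful example
    "great game last night",                # placeholder benign example
    "anyone have a good pasta recipe?",     # placeholder benign example
]
labels = [1, 1, 0, 0]  # 1 = hateful, 0 = benign

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(posts, labels)

print(detector.predict(["you people should leave"]))  # likely flagged as hateful
```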
The results: Some of the computer-generated responses could easily pass as human-written — like, "Use of the c-word is unacceptable in our discourse as it demeans and insults women" or "Please do not use derogatory language for intellectual disabilities."
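As for the response half, one plausible approach is to prompt an off-the-shelf generative language model with the flagged post. The sketch below uses GPT-2 via Hugging Face as an illustrative stand-in; the prompt format and model choice are assumptions, not the paper's actual method.

```python
# Sketch: generating a polite intervention with an off-the-shelf language
# model. Prompt format and model choice are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

flagged_post = "<a post the detector flagged>"
prompt = (
    f"Post: {flagged_post}\n"
    "A respectful reply explaining why this language is harmful:"
)
reply = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
print(reply)
```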
Our take: This project didn't test how effective the responses were at stemming hate speech — just how effective other people thought they might be.
"We believe that bots will need to declare their identities to humans at the beginning," says William Wang, a UCSB computer scientist and paper co-author. "However, there is more research needed how exactly the intervention will happen in human-computer interaction."
Illustration: Aïda Amer/Axios
The threat of deepfakes to elections, businesses and individuals is the result of a breakdown in the way information spreads online — a long-brewing mess that involves a decades-old law and tech companies that profit from viral lies and forgeries.
Why it matters: The problem likely will not end with better automated deepfake detection, or a high-tech method for proving where a photo or video was taken. Instead, it might require far-reaching changes to the way social media sites police themselves.
Driving the news: Speaking at a Friday conference hosted by the Notre Dame Technology Ethics Center, deepfake experts from law, business and computer science described an entrenched problem whose roots reach far deeper than the first AI-manipulated videos, which surfaced two years ago.
But the story begins in earnest back in the 1990s, along with the early internet.
Section 230 of the Communications Decency Act, a 1996 law, allowed internet platforms to keep their immunity from lawsuits over user-created content even when they moderated or "edited" the postings.
A massive challenge for platforms is dealing with misinformation quickly, before it can cause widespread damage.
Illustration: Eniola Odetunde/Axios
An uncertain transportation overhaul (Joann Muller & Alison Snyder — Axios)
Interactive game: Quantifying fairness (Karen Hao & Jonathan Stray — MIT Tech Review)
Automating poverty (Ed Pilkington — The Guardian)
Who wins and who loses as GM goes electric (Danielle Bochove — Bloomberg)
European universities stoking their own startup scene (Daniel Michaels — WSJ)
Photo: Noel Celis/AFP/Getty
The cuddly purple Cretaceous-era TV star that defined an American generation's childhood is coming to the big screen. And it's gonna be weird.
Details: The Barney film, announced Friday, will be produced by Daniel Kaluuya of "Get Out" fame — yes, the horror movie. He and some studio execs made a couple of baffling remarks about the coming friendly-dinosaur movie...
Slasher? Stoner comedy? Psychological thriller? We shall see.