Axios Future

December 14, 2019
Welcome back to Future! Thanks for reading. Please get in touch: You can reply to this email or send me a note at [email protected]. My colleague Erica Pandey, your Wednesday Futurist, is at [email protected].
This issue is 1,342 words, a 5-minute read.
1 big thing: A tug-of-war over biased AI…

Illustration: Eniola Odetunde/Axios
The idea that AI can replicate or amplify human prejudice, once argued mostly at the field's fringes, has been thoroughly absorbed into its mainstream: Every major tech company now makes the necessary noise about "AI ethics."
Yes, but: A critical split divides AI reformers. On one side are the bias-fixers, who believe the systems can be purged of prejudice with a bit more math. (Big Tech is largely in this camp.) On the other side are the bias-blockers, who argue that AI has no place at all in some high-stakes decisions.
Why it matters: This debate will define the future of the controversial AI systems that help determine people's fates through hiring, underwriting, policing and bail-setting.
What's happening: Despite the rise of the bias-blockers in 2019, the bias-fixers remain the orthodoxy.
- A recent New York Times op-ed laid out the prevailing argument in its headline "Biased algorithms are easier to fix than biased people."
- "Discrimination by algorithm can be more readily discovered and more easily fixed," says UChicago professor Sendhil Mullainathan in the piece. Yann LeCun, Facebook's head of AI, tweeted approvingly: "Bias in data can be fixed."
- But the op-ed was met with plenty of resistance.
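To make the fixers' claim concrete, here is a minimal, hypothetical sketch of the kind of audit they have in mind: disaggregate a model's decisions by group and compare the rates. Every name and number below is invented for illustration; real audits are far more involved.

```python
# Hypothetical audit sketch: compare a model's positive-decision rates
# across demographic groups. All data here is invented for illustration.

def approval_rate(decisions):
    """Share of positive (1) decisions among 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Imaginary hiring-model outputs, keyed by demographic group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

rates = {group: approval_rate(d) for group, d in outcomes.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} approved")

# One rough check drawn from employment law: the ratio of the lowest
# group rate to the highest. The informal "four-fifths rule" flags
# ratios below 0.8 as potential disparate impact.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio: {ratio:.2f}" + (" (flagged)" if ratio < 0.8 else ""))
```

This is the heart of the fixers' optimism: a check like this takes minutes to run against a model, while measuring the same disparity across thousands of human decision-makers is far harder.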
The other side: At the top academic conference for AI this week, Abeba Birhane of University College Dublin presented the opposing view.
- Birhane's key point: "This tool that I'm developing, is it even necessary in the first place?"
- She gave classic examples of potentially dangerous algorithms, like one that claimed to determine a person's sexuality from a photo of their face, and another that tried to guess a person's ethnicity.
- "[Bias] is not a problem we can solve with maths because the very idea of bias really needs much broader thinking," Birhane tells Axios.
The big picture: In a recent essay, Frank Pasquale, a UMD law professor who studies AI, calls this a new wave of algorithmic accountability that looks beyond technical fixes toward fundamental questions about economic and social inequality.
- "There's definitely still resistance around it," says Rachel Thomas, a University of San Francisco professor. "A lot of people are getting the message about bias but are not yet thinking about justice."
- "This is uncomfortable for people who come up through computer science in academia, who spend most of their lives in the abstract world," says Emily M. Bender, a University of Washington professor. Bender argued in an essay last week that some technical research just shouldn't be done.
The bottom line: Technology can help root out some biases in AI systems. But this rising movement is pushing experts to look past the math to consider how their inventions will be used beyond the lab.
- "AI researchers need to start from the beginning of the study to look at where algorithms are being applied on the ground," says Kate Crawford, co-founder of NYU's AI Now Institute.
- "Rather than thinking about them as abstract technical problems, we have to see them as deep social interventions."
2. …and how the split shapes AI's uses

Illustration: Eniola Odetunde/Axios
Despite a flood of money and politics propelling AI forward, some researchers, companies and voters hit pause this year.
- Most visibly, campaigns to ban facial recognition technology succeeded in San Francisco, Oakland and Somerville, Mass. This week, nearby Brookline banned it, too.
- One potential outcome: freezes or restrictions on other controversial uses of AI. This scenario scares tech companies, which prefer to send plumbers in to repair buggy systems rather than rip out the pipes entirely.
But the question at the core of the debate is whether a fairness fix even exists.
The swelling backlash says it doesn't — especially when companies and researchers ask machines to do the impossible, like guess someone's emotions by analyzing facial expressions, or predict future crime based on skewed data.
- "It's anti-scientific to imagine that an algorithm can solve a problem that humans can't," says Cathy O'Neil, an auditor of AI systems.
- These applications are "AI snake oil," argues Princeton professor Arvind Narayanan in a presentation that went viral on nerd Twitter recently.
- The main offenders are AI systems meant to predict social outcomes, like job performance or recidivism. "These problems are hard because we can’t predict the future," Narayanan writes. "That should be common sense. But we seem to have decided to suspend common sense when AI is involved."
This blowback's spark was a 2018 research project from MIT's Joy Buolamwini. She found that major facial recognition systems struggled to identify female and darker-toned faces.
- The study and its follow-ups prompted companies to try "fixing" the problem by increasing their systems' accuracy across faces — mostly by gathering more data from underrepresented groups, sometimes in shady ways. (The measurement behind such audits is sketched after this list.)
- But Buolamwini and others argue that the technology shouldn't be used without being tightly regulated.
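The measurement at the heart of that research is simple enough to sketch: report accuracy per subgroup, not just overall. The snippet below uses invented predictions and labels; the actual study benchmarked commercial systems against a curated face dataset.

```python
# Toy disaggregated evaluation, in the spirit of audits like Gender
# Shades: overall accuracy looks fine while one subgroup fares far
# worse. All predictions and labels below are invented.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(p == y for p, y in pairs) / len(pairs)

# Imaginary classifier results, keyed by subgroup.
results = {
    "subgroup_1": [(1, 1)] * 49 + [(0, 1)] * 1,   # 49 of 50 correct
    "subgroup_2": [(1, 1)] * 13 + [(0, 1)] * 7,   # 13 of 20 correct
}

all_pairs = [pair for pairs in results.values() for pair in pairs]
print(f"overall: {accuracy(all_pairs):.0%}")      # ~89%: looks fine
for group, pairs in results.items():
    print(f"{group}: {accuracy(pairs):.0%}")      # 98% vs. 65%
```

Gathering more data from the underrepresented subgroup can close that gap on paper, which is exactly why critics say the harder question is whether the system should be deployed at all.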
What's next: Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.
- "The real problem is we citizens have no power to even examine or scrutinize these algorithms," says O'Neil. "They're being used by private actors for commercial gain."
3. Counter-drone tech explodes

A French soldier with an anti-drone rifle. Photo: Chesnot/Getty
Weapons that down threatening drones — by scrambling their electronics or just plain shooting them out of the sky — are flooding the market, even though most are still illegal in the U.S.
What's new: Just in the last year, hundreds of new products were released, in a scramble to head off an urgent unsolved menace. But off-the-shelf drones are evolving apace, threatening to make a thorny problem even worse.
The big picture: As I wrote this summer, plenty of roadblocks still lie ahead for the counter-drone industry. Fundamentally, many anti-drone systems don't work well — and even if they did, most are illegal in the U.S. except when used by federal agencies.
Driving the news: A new report from the Center for the Study of the Drone at Bard College is a comprehensive census of counter-drone technology.
- Altogether, Bard researchers found 537 systems for sale — hundreds more than they found in last year's sweep.
- More than 350 of these products are billed as able to intercept and disable drones; the rest simply detect them.
- Radio jamming is the most popular method for taking down drones. But other creative approaches involve lasers, nets or even a "sacrificial collision drone."
The report raises two new problems. One is the limited range of many detection systems.
- "The response time for successfully shooting down drones is incredibly short if the drone is even moderately fast," says the report's author, Arthur Holland Michel.
- Even with a 1 km detection range — which may seem far — several steps remain after an incoming drone is detected: a second check, a decision to intercept, a scramble to ready the relevant weapon… (a quick back-of-the-envelope calculation follows this list).
- "By that time, the drone is right over your head," Michel says. "You don't hear this discussed in the marketing materials."
The second problem is the rapid progress of consumer drones, which is creating a "vicious feedback loop," Michel says. Advances that make the devices safer can also make them impervious to some counter-drone systems.
- They're fast, with some more expensive drones able to reach 180 mph, or accelerate from 0 to 90 mph in a second.
- They're autonomous. Skydio's newest drone can follow a moving target without human guidance, avoiding obstacles as it goes.
- They can fly without GPS. This makes them less prone to dropped signals — but at the same time less susceptible to jamming.
- And soon, they'll group into swarms, so that one pilot can fly a horde of drones, opening new doors for drone attacks that are harder to defend against.
The bottom line: "There's nothing on the horizon that will cut the line on this [cycle]," says Michel. "There's nothing that just ends the game. … Until there is, it's going to be like this: a game of cat and mouse."
4. Worthy of your time

Illustration: Aïda Amer/Axios
The multifront fight against robocalls (Margaret Harding McGill & Ina Fried - Axios)
The scholar who diagnosed "surveillance capitalism" (Frank Bajak - AP)
A sweeping government hunt for spies among Chinese-Americans (Peter Waldman & Andre Tartar - Bloomberg)
The gadgets that shaped the decade (The Verge)
2020 candidates want to track your phone's location (Theodore Schleifer - Recode)
5. 1 fun thing: Science tackles an uncanny mystery

Photo: Spencer Platt/Getty
Does the ritual tapping of a shaken-up can of Coke or beer really stop it from foaming over?
With the help of 1,000 cans of Pilsner beer, a brave troop of volunteers and a device called the Unimax 2010 orbital platform shaker, scientists in Denmark set out to determine whether the trick really works.
- It doesn't.
- But the experiment is fun to read about anyway. See more from the MIT Technology Review.