Sep 21, 2019

Axios Future

Welcome back to Future. I love to hear from readers — just respond to this email, or write me at kaveh@axios.com. Erica, who writes Future on Wednesdays, is at erica@axios.com.

Today's Future is 1,510 words, about a 6-minute read. Here we go...

1 big thing: The last bastion of privacy

Illustration: Aïda Amer/Axios

Brain–computer interfaces, once used exclusively for clinical research, are now under development at several wealthy startups and a major tech company, and rudimentary versions are already popping up in online stores.

Why it matters: If users unlock the information inside their heads and give companies and governments access, they're inviting privacy risks far greater than today's worries over social media data, experts say — and raising the specter of discrimination based on what goes on inside a person's head.

What's happening: Machines that read brain activity from outside the head, or in some cases from inside the skull, are still relatively limited in how much data they can extract from wearers' brains and in how accurately they can interpret it.

  • But the tech is moving fast. Researchers can now recognize basic emotional states, unspoken words and imagined movements, all by analyzing neural data (a toy sketch of this kind of decoding follows this list).
  • Researchers have found similarities in the way different people's brains process information, such that they can make rough guesses at what someone is thinking about or doing based on brain activity.
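
For the technically curious, here's a minimal, purely illustrative sketch of what "decoding" a mental state from neural data can look like in code. The data is synthetic and the two "states" are invented for the example; real brain–computer interface pipelines involve far heavier signal processing than this toy version.

```python
# Toy neural decoding: classify a fake "mental state" from synthetic features.
# Everything here is invented for illustration; real EEG work requires
# filtering, artifact removal and careful validation that this skips.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each trial is a vector of band-power features (e.g., alpha and
# beta power per electrode). Class 0 = "relaxed", class 1 = "concentrating".
n_trials, n_features = 200, 16
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1] += 0.8  # give the two fake states separable signatures

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"decoding accuracy on held-out trials: {clf.score(X_test, y_test):.2f}")
```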

"These issues are fundamental to humanity because we're discussing what type of human being we want to be," says Rafael Yuste, a neuroscientist at Columbia.

The big picture: Clinical brain–computer interfaces can help people regain control of their limbs or operate prosthetics. Basic headsets are being sold as relaxation tools or entertainment gadgets — some built on flimsy claims — and market researchers are using the devices to fine-tune advertising pitches.

  • Facebook and startups like Elon Musk's Neuralink are pouring money into a new wave of neurotechnology with bold promises, like typing with your thoughts or, in Musk's words, merging with AI.
  • All of these devices generate huge amounts of neural data, potentially one of the most sensitive forms of personal information.

Driving the news: Neuroethicists are sounding the alarm.

  • Earlier this month the U.K.'s Royal Society published a landmark report on the promise and risk of neurotechnology, predicting a "neural revolution" in the coming decades.
  • And next month Chilean lawmakers will propose an amendment to the country's constitution enshrining protections for neural data as a fundamental human right, according to Yuste, who is advising on the process.

A major concern is that brain data could be commercialized, the way advertisers are already using less intimate information about people's preferences, habits and location. Adding neural data to the mix could supercharge the privacy threat.

  • "Accessing data directly from the brain would be a paradigm shift because of the level of intimacy and sensitivity of the information," says Anastasia Greenberg, a neuroscientist with a law degree.
  • If Facebook, for example, were to pair neural data with its vast trove of personal data, it could create "way more accurate and comprehensive psychographic profiles," says Marcello Ienca, a health ethics researcher at ETH Zurich.
  • There's little to prevent companies from selling and trading brain data in the U.S., Greenberg found in a recent peer-reviewed study.

Neural data, more than other personal information, has the potential to reveal things about a brain that even its owner doesn't know.

  • This is the explicit promise of "neuromarketing," a branch of market research that uses brain scans to attempt to understand consumers better than they understand themselves.
  • Ethicists worry that information hidden inside a brain could be used to discriminate against people — for example, if they showed patterns of brain activity that were similar to patterns seen in people with propensities for addiction, depression or neurological disease.

"The sort of future we're looking ahead toward is a world where our neural data — which we don't even have access to — could be used" against us, says Tim Brown, a researcher at the University of Washington Center for Neurotechnology.

Editor's note: This story has been updated to clarify Marcello Ienca's quote.

2. Automation, immigration and fear

Illustration: Sarah Grillo/Axios

People often blame immigration and trade for the loss of American jobs, even though automation and technological change are far likelier culprits in the coming years.

What's happening: In a first-of-its-kind experiment, an MIT political scientist tested whether informing people about potential job loss from automation would change their minds about immigration and trade.

  • In three studies, MIT's Baobao Zhang got the same result: people didn't shift their beliefs. Even when presented with evidence that automation was by far the bigger threat to jobs, people continued to hold anti-immigration, anti-trade views.
  • These findings suggest that support for populist Trumpian policies may not be as closely linked to economic anxiety as is often argued.

"Right-wing populism is not only an economic story," says Zhang. "Economic anxiety might not be the main driver for support for Donald Trump. For instance, it could be people feeling threatened by out-groups" like immigrants and foreign workers.

The big picture: Since President Trump won in 2016 on a wave of protectionist proposals, the prevailing narrative has been that largely white, middle- and low-income Americans living far from coastal wealth voted in their economic self-interest.

  • But a truly self-interested voter, economists say, would focus on automation, which could wipe out millions of jobs, rather than on immigration or trade.
  • The MIT research, which has not yet been peer reviewed, found that learning about automation also didn't make people more likely to support retraining programs that experts say are essential to counterbalance the disruption.

A root problem is that it's really, really hard to change people's minds, even in the face of overwhelming evidence. Zhang previously studied reactions to information about climate change, an issue that remains polarizing despite scientific consensus.

"It may be that cultural factors play a substantial role in protectionist sentiments," says Darrell West, director of the Center for Technology Innovation at Brookings. "Workers may worry about immigrants taking their jobs as automation kicks in but also be concerned about what that will mean for American identity and the future of the country," he tells Axios.

3. The world through AI's eye

How ImageNet sees me. On the left: "beard" / On the right: "Bedouin, Beduin"

Maybe you've seen images like these floating around social media this week: photos of people with lime-green boxes around their heads and funny, odd or in some cases super-offensive labels applied.

What's happening: They're from an interactive art project about AI image recognition that doubles as a commentary about the social and political baggage built into AI systems.

Why it matters: This experiment — which will only be accessible for another week — shows one way that AI systems can end up delivering biased or racist results, which is a recurring problem in the field.

  • It scans uploaded photos for faces and sends them to an AI object-recognition program that uses ImageNet, the gold-standard dataset for training such programs.
  • The program matches the face with the closest label from WordNet, a project that started in the 1980s to map out word relationships throughout the English language, and applies it to the image (a rough sketch of this pipeline follows this list).
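
For readers who want to see the moving parts, below is a hypothetical sketch of that pipeline using an off-the-shelf ImageNet-trained model. It is not the project's actual code: the real experiment crops faces first and draws on ImageNet's person categories, while this toy version classifies a whole photo, and the file name photo.jpg is a stand-in.

```python
# Hypothetical sketch: run an image through an ImageNet-trained classifier
# and report the nearest label. Approximates the project's pipeline only.
import torch
from torchvision import models
from torchvision.io import ImageReadMode, read_image

# An off-the-shelf ImageNet-trained model stands in for the project's model.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# "photo.jpg" is a placeholder path; the real project crops faces first.
img = read_image("photo.jpg", mode=ImageReadMode.RGB)
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

# ImageNet's labels come from WordNet noun categories.
label = weights.meta["categories"][logits.argmax().item()]
print(f"closest label: {label}")
```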

Some people got generic results, like "woman" or "person." Others received hyper-specific labels, like "microeconomist." And many got some pretty racist stuff.

"The point of the project is to show how a lot of things in machine learning that are conceived of as technical operations or mathematical models are actually deeply social and deeply political," says Trevor Paglen, the MacArthur-winning artist who co-developed the project with Kate Crawford of the AI Now Institute.

  • The experiment and accompanying essay reveal the assumptions that go into building AI systems.
  • Here, the system depends on judgment calls from the people who originally labeled the images — some straightforward, like "chair"; others completely unknowable from the outside, like "bisexual."
  • From those image–label pairs, AI systems can learn to label new photos that they've never seen before (a minimal sketch of that learning step follows this list).
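
Here is that learning step in miniature: fit a classifier on labeled examples, then have it label an input it has never seen. Random numbers stand in for image pixels, and the three labels are borrowed from the examples above; everything else is invented for the sketch.

```python
# Toy supervised learning from (features, label) pairs. Random vectors
# stand in for images; purely illustrative, not the project's code.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
classes = ["chair", "beard", "person"]

# 30 fake "images" per class, clustered around a different mean per label.
X = np.vstack([rng.normal(loc=3 * i, scale=0.5, size=(30, 64)) for i in range(3)])
y = np.repeat(classes, 30)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# An unseen input near the "beard" cluster gets that label applied.
new_image = rng.normal(loc=3, scale=0.5, size=(1, 64))
print(clf.predict(new_image)[0])  # -> "beard"
```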

But, but, but: This is an art project, not an academic takedown of ImageNet, which is mostly intended to detect objects rather than people. Some AI experts have criticized the demonstration for giving a false impression of the dataset.

This week, the ImageNet team responded to the project, which Paglen says is currently being accessed more than 1 million times per day.

  • The ImageNet team says it's making changes to person-related image labels, in part by removing 600,000 potentially sensitive or offensive images — more than half of the images of people in the dataset.

Bonus: When Erica uploaded a photo of herself, the ImageNet experiment classified her as a "flibbertigibbet," which is disrespectful but a great word.

4. Worthy of your time

Photo illustration: Sarah Grillo/Axios. Photo: Prince Williams/WireImage

Special report: Higher education in crisis (Alison Snyder & Kim Hart - Axios)

Google claims quantum supremacy (Madhumita Murgia & Richard Waters - FT)

Real-time surveillance goes mainstream in the U.K. (Adam Satariano - NYT)

Silicon Valley and the Dems: a messy divorce (Gabriel Debenedetti - NY Mag)

You can run, but you can't hide from this AI (Karen Hao - MIT Tech Review)

5. 1 yikes thing: This drone's got a gun!!

Photo courtesy University of Michigan

Oh — it's just a nail gun.

What's happening: A research team at the University of Michigan attached the tool to a drone and programmed it to autonomously fly over a rooftop and pound in shingles.

  • On its own, it can line up with unattached shingles, lower itself and fire a nail into place, according to the university.
  • It's not very fast, as you can see in this video. But it can't fall off the roof and break a hip.

The big picture: The 8-rotor roofer — just an academic exercise for now — comes the same week that FedEx announced it's making some deliveries by drone in a small Virginia town.

Our thought bubble: What could possibly go wrong?