Axios Future

March 21, 2019

Have your friends signed up?

Any stories we should be chasing? Hit reply to this email or message me at [email protected]. Kaveh Waddell is at [email protected] and Erica Pandey at [email protected].

Okay, let's start with ...

1 big thing: AI’s uneasy coming of age

Illustration: Rebecca Zisser/Axios

For the first 6 decades of AI's development, the biggest question facing researchers was whether their inventions would work at all. Now, in a shift from the previously unrelenting push for progress, the field has entered a new stage of introspection as its effects on society — both positive and damaging — reverberate outside the lab.

Kaveh reports: In this uneasy coming of age, AI researchers are determined to avert the catastrophic mistakes of their forefathers who brought the internet to adulthood.

What's happening: As the tech world reels from a hailstorm of crises around privacy, disinformation and monopoly — many stemming from decisions made 30 years ago — there's a sense among AI experts that this is a rare chance to get a weighty new development right, this time from the start.

  • In the internet's early days, technologists were "naive" about its potential downsides, John Etchemendy, co-director of Stanford's new Institute for Human-Centered AI (HAI), told reporters Monday at the institute's kickoff.
"We all imagined that it would allow everybody to have a voice, it would bring everybody together — you know, kumbaya. What has in fact happened is just frightening. I think it should be a lesson to us. Now we're entering the age of AI. … We need to be 100 times more vigilant in trying to make the right decisions early on in the technology."
— John Etchemendy, Stanford
  • At the beginning of Microsoft, nobody knew their work would lead to today's information free-for-all on social media, Bill Gates said at the HAI event. "There wasn’t a recognition way in advance that that kind of freedom would have these dramatic effects that we're just beginning to debate today," he said.

Driving the news: Stanford trotted out some of the biggest guns in AI to celebrate the birth of its new research center on Monday. The programming emphasized the university's outsized role in the technology’s past — but the day was shot through with anxiety at a potential future shaped by AI run amok.

  • The question at the center of the symposium, and increasingly of the field: "Can we have the good without the bad?" It was asked from the stage Monday by Fei-Fei Li, a co-director at HAI and leading AI researcher.
  • "For the first time, the ethics of AI isn't an abstraction or philosophical exercise," said Li. "This tech affects real people, living real lives."
  • Similar themes swirled around MIT's high-profile launch of its own new AI center earlier this month.

At this early stage, the angst and determination have yielded only baby steps toward answers.

Among the concerns motivating the explosion of conferences, institutes and experts centered on ethics in AI: algorithms that perpetuate biases, widespread job losses due to automation, and an erosion of our own ability to think critically.

  • "Something big is happening in the plumbing of the world," said California Gov. Gavin Newsom at the Stanford event. "We're going from something old to something new, and we are not prepared as a society to deal with it."

Go deeper: Tech's scramble to limit offline harm from online ads

2. Guarding against the tricksters

Illustration: Aïda Amer/Axios

One of the oddest ways that an AI system can fail is by falling prey to an adversarial attack — a cleverly manipulated input that makes the system behave in an unexpected way.

Kaveh writes: Autonomous car experts worry that their cameras are susceptible to these tricks. It's been shown that a few plain stickers can make a stop sign look like a "Speed Limit 100" marker to a driverless vehicle. But other high-stakes fields — like medicine — are paying too little attention to this risk.

That's according to a powerhouse team of researchers from Harvard and MIT, who published an article Thursday in the journal Science arguing that these attacks could blindside hospitals, pharma companies and big insurers.

Details: Consider a photo of a mole on a patient's skin. Research has shown that it can be manipulated in a way that's invisible to the human eye but changes the result of an AI system's diagnosis from cancerous to non-cancerous.
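For a sense of how such an attack works mechanically, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial techniques, written in PyTorch. The classifier, image and parameter values below are illustrative stand-ins, not the setup from the Science paper.

```python
# Minimal FGSM sketch (illustrative only; not the paper's code).
# The model and inputs below are hypothetical stand-ins.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarial copy of `image`: nudge each pixel by
    +/- epsilon in the direction that increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # A perturbation this small is typically invisible to the eye
    # but can flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Toy usage with a stand-in two-class ("cancerous"/"non-cancerous") classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
model.eval()
mole_photo = torch.rand(1, 3, 224, 224)   # placeholder skin-lesion image
label = torch.tensor([1])                 # placeholder ground-truth label
adversarial_photo = fgsm_perturb(model, mole_photo, label)
print((adversarial_photo - mole_photo).abs().max())  # stays <= epsilon
```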

The big question: Why would anyone want to do this?

  • For Samuel Finlayson, an MD-Ph.D. candidate at Harvard and MIT and the lead author of the new paper, it’s a question of incentives. If someone sending in data for analysis has a different goal than the owner of the system doing the analysis, there's potential for funny business.
  • We're not talking about a malicious doctor manipulating cancer diagnoses — "There's way more effective ways to kill a person," Finlayson says — but rather an extension of existing dynamics into a near future where AI is involved in billing, diagnosis and reading medical scans.

Doctors and hospitals already game the insurance billing system — these could be considered proto-adversarial attacks, Finlayson tells Axios.

  • They can bill for more expensive procedures than they performed in order to make more money, or avoid billing for procedures they know will land a huge bill in the patient's lap.
  • Insurance companies are already hiring tech firms to put a stop to the practice, often with AI tools. Finlayson sees a future where basic adversarial attacks are used to fool the AI systems into continuing to accept fraudulent claims.

But, but, but: These hypotheticals are a bit far-fetched for Matthew Lungren, associate director of the Stanford Center for Artificial Intelligence in Medicine and Imaging. "There are a lot of easier ways to defraud the system, frankly," he tells Axios.

3. Getting ready for hands-free driving

Driverless cameras. Photo: Justin Sullivan/Getty

Drivers are encountering more automation in their cars, but experts say they don't yet intuitively know how and when to use it.

Axios' Alison Snyder writes: Assisted driving features are turning cars into next-generation automated machines — the first ones that many people will be exposed to. How humans and machines learn to interact when driving could indicate how people might work with robots in the future.

In the air, automation has made aviation safer, in part because pilots are trained in how the technology affects their attention and ability to fly. But with too little training, automation on the flight deck can cause problems.

  • Case in point: The FAA is investigating whether training would have prepared pilots of the Boeing 737 MAX 8 to deal with new automation in their planes.

In cars, some partially automated technologies — automatic emergency braking and collision detection — provide safety benefits. But it's not yet known if convenience features — for example, lane-keeping assist — are making driving safer.

  • In these systems, drivers are supposed to be engaged and in control even if they aren't steering.
  • But drivers' minds wander, and their ability to refocus and then react takes time.
  • "We're terrible at paying attention — and we think we're awesome at it," says Steve Casner, a research psychologist at NASA who studies how humans interact with automation.

He says people's misconceptions about their ability to jump back in when needed, along with their misunderstanding of the technologies, can lead them to become dangerously disengaged or complacent.

What's needed: In a new paper, Casner argues that drivers, like pilots, need education and continuous experience with automation.

Go deeper: Drivers don't understand their cars' automated technology

4. Worthy of your time

Buzi, central Mozambique, March 20. Photo: Adrien Barbier/AFP/Getty

EU rethinks its China policy (Michael Peel, Lucy Hornby, Rachel Sanderson — FT)

The "inland ocean" within the Indian Ocean (Andrew Freedman — Axios)

The one and only Naomi Osaka (Soraya Nadia McDonald — The Undefeated)

A hitchhiker saved Lion Air day before crash (Alan Levin, Harry Suhartono — Bloomberg)

First woman to win premier math prize (Kenneth Chang — NYT)

5. 1 LA thing: A surprising jobs juggernaut

The iconic sign. Photo: Valery Sharifulin/TASS/Getty

An unlikely American industry is creating jobs by the hundreds of thousands and running a trade surplus at a time when several iconic U.S. companies are falling to China: show business.

Erica writes: Hollywood employs 927,000 people in its TV and film jobs, and it supports an additional 1.6 million jobs. That's more than the farming, mining, or oil and gas industries. The show biz jobs generate $76 billion in wages per year, reports Bloomberg.

  • On average, Hollywood jobs pay 47% more than the average wage.
  • The entertainment industry also has a $10 billion trade surplus.

Correction

Yesterday’s story “The trouble with smart cities” has been updated to show that property taxes sought by Sidewalk Labs are to reimburse the company for investment in public services.