Okay, let's start with ...
Illustration: Rebecca Zisser/Axios
For the first 6 decades of AI's development, the biggest question facing researchers was whether their inventions would work at all. Now, in a shift from the previously unrelenting push for progress, the field has entered a new stage of introspection as its effects on society — both positive and damaging — reverberate outside the lab.
Kaveh reports: In this uneasy coming of age, AI researchers are determined to avert the catastrophic mistakes of their forefathers who brought the internet to adulthood.
What's happening: As the tech world reels from a hailstorm of crises around privacy, disinformation and monopoly — many stemming from decisions made 30 years ago — there's a sense among AI experts that this is a rare chance to get a weighty new development right, this time from the start.
"We all imagined that it would allow everybody to have a voice, it would bring everybody together — you know, kumbaya. What has in fact happened is just frightening. I think it should be a lesson to us. Now we're entering the age of AI. … We need to be 100 times more vigilant in trying to make the right decisions early on in the technology."— John Etchemendy, Stanford
Driving the news: Stanford trotted out some of the biggest guns in AI to celebrate the birth of its new research center on Monday. The programming emphasized the university's outsized role in the technology’s past — but the day was shot through with anxiety at a potential future shaped by AI run amok.
At this early stage, the angst and determination have yielded only baby steps toward answers.
Among the concerns motivating the explosion of conferences, institutes and experts centered on ethics in AI: algorithms that perpetuate biases, widespread job losses due to automation, and an erosion of our own ability to think critically.
Illustration: Aïda Amer/Axios
One of the oddest ways that an AI system can fail is by falling prey to an adversarial attack — a cleverly manipulated input that makes the system behave in an unexpected way.
Kaveh writes: Autonomous car experts worry that their cameras are susceptible to these tricks: It's been shown that a few plain stickers can make a stop sign look like a "Speed Limit 100" marker to a driverless vehicle. But other high-stakes fields — like medicine — are paying too little attention to this risk.
That's according to a powerhouse group of researchers from Harvard and MIT, who published an article in the journal Science on Thursday arguing that these attacks could blindside hospitals, pharma companies and big insurers.
Details: Consider a photo of a mole on a patient's skin. Research has shown that it can be manipulated in a way that's invisible to the human eye but flips an AI system's diagnosis from cancerous to non-cancerous.
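To make that concrete, here is a minimal, illustrative sketch of the "fast gradient sign" idea behind such perturbations, applied to a made-up linear classifier. The model, weights, image and threshold below are all hypothetical placeholders, not anything from the Science paper; the point is only to show how a per-pixel change far too small to see can flip a yes/no decision.

```python
# Illustrative sketch only: a toy linear "diagnosis" model and an FGSM-style
# perturbation. All values here are invented for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(64)           # hypothetical flattened image, pixels in [0, 1]
w = rng.normal(size=64)      # hypothetical model weights
b = 0.5 - float(w @ x)       # bias chosen so the clean image scores positive

def score(img):
    # Positive score -> "cancerous", negative -> "non-cancerous" (toy rule).
    return float(w @ img + b)

# For a linear model, the gradient of the score w.r.t. the input is just w.
# FGSM: nudge every pixel a tiny amount in the direction that lowers the score.
eps = 0.02                   # small enough to be visually imperceptible
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print("clean score:", score(x))            # positive on the original image
print("adversarial score:", score(x_adv))  # can flip negative despite tiny edits
print("max pixel change:", np.abs(x_adv - x).max())
```

Real attacks on deep networks work the same way in spirit, but compute the gradient through the full model rather than reading it off a weight vector.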
The big question: Why would anyone want to do this?
Doctors and hospitals already game the insurance billing system — behavior that could be considered a proto-adversarial attack, lead author Samuel Finlayson tells Axios.
But, but, but: These hypotheticals are a bit far-fetched for Matthew Lungren, associate director of the Stanford Center for Artificial Intelligence in Medicine and Imaging. "There are a lot of easier ways to defraud the system, frankly," he tells Axios.
Driverless cameras. Photo: Justin Sullivan/Getty
Drivers are encountering more automation in their cars, but experts say they don't yet have a natural sense of how and when to actually use it.
Axios' Alison Snyder writes: Assisted driving features are turning cars into next-generation automated machines — the first ones that many people will be exposed to. How humans and machines learn to interact when driving could indicate how people might work with robots in the future.
In the air, automation has made aviation safer, in part because pilots are educated about how the technology affects their attention and ability to fly. But with too little training, automation in the flight deck can cause problems.
In cars, some partially automated technologies — automatic emergency braking and collision detection — provide safety benefits. But it's not yet known if convenience features — for example, lane-keeping assist — are making driving safer.
Casner says people's misconceptions about their ability to jump back in when needed, along with their misunderstanding of the technologies, can lead them to become dangerously disengaged or complacent.
What's needed: In a new paper, Casner argues that drivers, like pilots, need education and continuous experience with automation.
Buzi, central Mozambique, March 20. Photo: Adrien Barbier/AFP/Getty
EU rethinks its China policy (Michael Peel, Lucy Hornby, Rachel Sanderson — FT)
The "inland ocean" within the Indian Ocean (Andrew Freedman — Axios)
The one and only Naomi Osaka (Soraya Nadia McDonald — The Undefeated)
A hitchhiker saved Lion Air day before crash (Alan Levin, Harry Suhartono — Bloomberg)
First woman to win premier math prize (Kenneth Chang — NYT)
The iconic sign. Photo: Valery Sharifulin/TASS/Getty
An unlikely American industry is creating scores of jobs and experiencing a trade surplus at a time when several iconic U.S. companies are falling to China: show business.
Erica writes: Hollywood employs 927,000 people in its TV and film jobs, and it supports an additional 1.6 million jobs. That’s more than the farming, mining, or oil and gas industries. The show biz jobs generate $76 billion in wages per year, reports Bloomberg.
Yesterday’s story “The trouble with smart cities” has been updated to show that property taxes sought by Sidewalk Labs are to reimburse the company for investment in public services.