Welcome back to Future! Anything you want us to know? Just hit reply or send me a note at firstname.lastname@example.org. Erica, who writes Future on Wednesdays, is at email@example.com.
Situational awareness: "Axios on HBO" has been extended for two additional seasons, through the end of 2021, with 12 shows a year.
Today's issue is 1,527 words, which should take <6 minutes to read.
Illustration: Eniola Odetunde/Axios
Most jobs are still out of reach of robots, which lack the dexterity required on an assembly line or the social grace needed on a customer service call. But in some cases, the humans doing this work are themselves being automated as if they were machines.
What's happening: Even the most vigilant supervisor can only watch over a few workers at one time. But now, increasingly cheap AI systems can monitor every employee in a store, at a call center or on a factory floor, flagging their failures in real time and learning from their triumphs to optimize an entire workforce.
Why it matters: Companies can use this data to juice workers' productivity and efficiency. Eventually, they could gather enough data from humans to train machines to mimic them.
"How often is an employee going out to smoke a cigarette? How long a lunch are they taking? How long are they sitting in the lunchroom?" These are the questions clients want answered with AI software, says Kim Hartman, CEO of Surveillance Secure, a D.C.-area company that installs security systems.
In a handful of factories in the U.S., cameras have been installed over each worker's head on assembly lines as they put together car parts or electronics.
"The most programmable machine on the planet today is still the human."— Drishti CEO Prasad Akella
"Employers and companies attempting to extract more value from its labor force by making that labor more efficient is nothing new," says Jess Kutch, co-founder of Coworker.org, a nonprofit that helps workers organize. A century ago, managers used stopwatches to pursue efficiency under the banner of "scientific management," or Taylorism.
But extreme monitoring enabled by new technologies can be inhumane, Kutch says.
The creators of AI monitoring tools argue that their software benefits employers and employees.
What's next: Extensive AI-annotated video or audio data about how people work is a potential gold mine for automation developers.
Go deeper: Automated management for call centers (NYT)
Illustration: Aïda Amer/Axios
AI systems have an endless appetite for data. For an autonomous car's camera to identify pedestrians every time — not just nearly every time — its software needs to have studied countless examples of people standing, walking and running near roads.
Yes, but: Gathering and labeling those images is expensive and time consuming, and in some cases impossible. (Imagine staging a huge car crash.) So companies are teaching AI systems with fake photos and videos, sometimes also generated by AI, that stand in for the real thing.
The big picture: A few weeks ago, I wrote about the synthetic realities that surround us. Here, the machines that we now rely on — or may soon — are also learning inside their own simulated worlds.
How it works: Software that has been fed tons of human-labeled photos and videos can deduce the shapes, colors and movements that correspond, say, to a pedestrian.
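For the technically curious, here's a minimal sketch of that idea: a small classifier learns what a pedestrian looks like from folders of labeled images, with synthetic images mixed in alongside real ones. The folder names, two-class setup and model choice are assumptions for illustration, not any automaker's actual stack.

```python
# A minimal sketch: train a classifier on labeled images, mixing real and
# synthetic examples. Directory names ("real_images/", "synthetic_images/",
# each with "pedestrian/" and "background/" subfolders) are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

real = datasets.ImageFolder("real_images", transform=preprocess)
synthetic = datasets.ImageFolder("synthetic_images", transform=preprocess)

# Mixing synthetic data with real data is the trick described above: the
# model doesn't care where the pixels came from, it just sees more examples.
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # pedestrian vs. background

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```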
Synthetic data is useful for any AI system that interacts with the world — not just cars.
"We're still in the early days," says Evan Nisselson of LDV Capital, a venture firm that invests in visual technology.
Illustration: Rebecca Zisser/Axios
As deepfakes become more convincing and public awareness of them grows, these realistic AI-generated videos, images and audio clips threaten to undermine crucial evidence at the center of the legal system.
Why it matters: Leaning on key videos in a court case — like a smartphone recording of a police shooting, for example — could become more difficult if jurors are more suspicious of them by default, or if lawyers call them into question by raising the possibility that they are deepfakes.
What's happening: Elected officials, experts and the press have been warning about the fallout deepfakes could bring for businesses and elections. But apart from a few high-profile examples, the tech has so far been used almost exclusively for porn, according to a landmark new report from Deeptrace Labs.
"This is dangerous in the courtroom context because the ultimate goal of the courts is to seek out truth," says Pfefferkorn, who recently wrote an article about deepfakes in the courtroom for the Washington State Bar magazine.
Already, people accused of possessing child porn often claim that it's computer-generated, says Hany Farid, a digital forensics expert at UC Berkeley. "I expect that in this and other realms, the rise of AI-synthesized content will increase the likelihood and efficacy of those claiming that real content is fake."
Illustration: Aïda Amer/Axios
Mainstream economists are getting radical (Dion Rabouin - Axios)
Surveillance tech is powered by photos of your kids (Kashmir Hill & Aaron Krolik - NYT)
Can a machine learn to write for The New Yorker? (John Seabrook - New Yorker)
The end of silence (Bianca Bosker - The Atlantic)
"Police" robot falls short (Katie Flaherty - NBC News)
A computer vision system identifies a great white shark. Video courtesy Salesforce.
Turn AI cameras on your employees and you can measure their productivity. Fly them over the Pacific Ocean and you've got yourself an automated shark-warning system.
What's happening: UC Santa Barbara, with the help of a few AI experts from Salesforce, is using drones to monitor sharks near California beaches in real time.
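For a sense of how such a pipeline might fit together, here's a rough sketch: pull frames from the drone feed, run an object detector, and flag likely sharks. The model weights file, class index and confidence threshold below are all hypothetical; the actual UCSB/Salesforce system hasn't been published in this form.

```python
# A rough sketch of a real-time shark-warning loop: read drone video frame
# by frame, run a detector, and raise an alert on confident shark detections.
# "shark_detector.pt", SHARK_CLASS and THRESHOLD are hypothetical.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

SHARK_CLASS = 1   # assumed label index in a fine-tuned model
THRESHOLD = 0.8   # assumed confidence cutoff for raising an alert

model = fasterrcnn_resnet50_fpn(num_classes=2)  # background + shark
model.load_state_dict(torch.load("shark_detector.pt"))  # hypothetical weights
model.eval()

capture = cv2.VideoCapture("drone_feed.mp4")  # or a live stream URL
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
    with torch.no_grad():
        detections = model([to_tensor(rgb)])[0]
    for label, score in zip(detections["labels"], detections["scores"]):
        if label == SHARK_CLASS and score > THRESHOLD:
            print(f"Possible shark (confidence {score:.2f}), alerting lifeguards")
capture.release()
```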