1 big thing: A move against deepfakes
Deepfakes — digitally forged videos that can be impossible to detect — have been called the end of truth, a threat to democracy and a potential disruption to society. Everyone agrees on the danger, but no one has figured out what to do about it.
Kaveh reports: But now Congress and several states are considering the first legislation against AI-altered videos and audio — suggesting a coming barrage of such laws.
- Last month, Sen. Ben Sasse (R-Neb.) introduced a bill to criminalize the malicious creation and distribution of deepfakes — the first of its kind. Introduced a day before the government shutdown, the bill flew under the radar and expired when the year ended. But Sasse's office tells Axios he intends to reintroduce it.
- In New York, a controversial state bill would punish people who knowingly make digital videos, photos and audio of others — including deepfakes — without their consent.
- Other lawmakers are looking into the subject: Sen. Mark Warner (D-Va.) and House Oversight chairman Adam Schiff (D-Calif.) have invited legal scholars to privately brief their staff on deepfakes, and experts tell Axios they're fielding calls from state policymakers.
What's next: Spokespeople for Warner and Schiff said both are considering deepfakes legislation.
“Deepfakes — video of things that were never done with audio of things that were never said — can be tailor-made to drive Americans apart and pour gasoline on just about any culture war fire. Even though this is something that keeps our intelligence community up at night, Washington isn’t really discussing the problem.” — Sen. Ben Sasse to Axios
Details: Sasse's bill targets two different groups:
- Individual deepfake creators, if they act with the intent to do something illegal (like commit fraud).
- Distributors, like Facebook — but only if they know they're distributing a deepfake. That means platforms could set up a reporting system, like the ones used to suppress pirated movies, and take down deepfakes when they're notified of them.
Sasse's proposed punishment: A fine and/or up to two years' imprisonment, or — if the deepfake could incite violence or disrupt government or an election — up to 10 years.
Several experts tell Axios that Sasse's bill is a step in the right direction. But one worry is that it misses the mark in its rules for platforms.
- Danielle Citron, a University of Maryland law professor and co-author of a landmark law article on deepfakes, says the bill places over-broad liability on distributors. She says it could scare platforms into immediately taking down everything that's reported as a deepfake — potentially deleting legitimate posts in the process.
- Mary Anne Franks, a law professor at the University of Miami and president of the Cyber Civil Rights Initiative, sees the opposite problem: Proving "actual knowledge" that they're circulating a deepfake could be nearly impossible.
But, but, but: Some are less convinced that Congress should step in. David Greene, civil liberties director at the Electronic Frontier Foundation, says making malicious deepfakes a federal crime may hamper protected speech — like the creation of parody videos.
Reality check: New laws would be a last line of defense against deepfakes, as legislation can’t easily prevent their spread. Once the law gets involved, “the harm is so out of the bag and it’s viral,” Citron says. The holy grail, a system that can automatically detect forgeries, is still well out of reach.
2. The bot trust tightrope
As intelligent machines begin muscling into daily life, a big remaining question is how deeply people will trust them to take over critical tasks like driving, elder or child care, and even military operations.
Kaveh writes: Calibrating a human's trust to a machine's capability is crucial, as we've reported: Things go wrong if a person places too much or too little trust in a machine.
Now, researchers are searching for ways of monitoring trust in real time so they can immediately alter a robot's behavior to match it.
The trouble is that trust is inexact. You can't measure it like a heart rate. Instead, most researchers examine people's behaviors for evidence of confidence.
- But an ongoing project at Purdue University found more accurate indicators by peeking under the hood at people's brain activity and skin response.
- In an experiment whose results were published in November, the Purdue team used sensors to measure how participants' bodies changed when they were confronted with a virtual self-driving car with faulty sensors.
Understanding a person's attitude toward a bot — a car, factory robot or virtual assistant — is key to improving cooperation between human and machine. It allows a machine to "self-correct" if it's out of sync with the person using it, Neera Jain, a Purdue engineering professor involved with the research, tells Axios.
Some examples of course-correcting robots:
- An autonomous vehicle that would give a particularly skeptical driver more time to take control before reaching an obstacle that it can't navigate on its own.
- An industrial robot that reveals its reasoning to boost confidence in a worker who might otherwise engage a manual override and potentially act less safely.
- A military reconnaissance robot that gives a trusting soldier extra information about the uncertainty in a report to prevent harm.
3. Slower wage growth
U.S. wage growth has fallen over the last two months, suggesting a flattening of worker pay after a spurt earlier last year, according to a new report.
Andrew Chamberlain, chief economist at Glassdoor, the jobs site, said pay grew by 2.3% year on year in both December and January. That was down from 2.6%–2.7% in July, August and September.
- Chamberlain tells Axios that it's not possible to know yet whether the wage data reflects a trend or just normal fluctuations, but "pay seems to have leveled off."
- U.S. worker pay has been essentially flat for some three decades. The gains noted by Glassdoor, even with the flattening, suggest wages are beginning to outpace inflation and deliver real gains to workers.
4. Worthy of your time
Is Facebook ready for Asia's elections? (Gwen Robinson, Cliff Venzon — Nikkei Asian Review)
Wall Street split on self-driving cars (Joann Muller — Axios)
The economics of the daily commute (Richard Florida — CityLab)
Why unbiased facial recognition is still scary (Karen Hao — MIT Tech Review)
Now your groceries see you, too (Sidney Fussell — The Atlantic)
5. 1 🔥 thing: Making the trains run on time
Chicagoans are marching to work in the most frigid temperatures the city has seen in 25 years. Today the high was -12°F, Erica writes.
To keep the trains running in the heart-stopping cold, the city is lighting fires on the tracks. CNN has footage of the incredible scene.