Okay, let's start with ...
Illustration: Sarah Grillo/Axios
Deepfakes — digitally forged videos that can be impossible to detect — have been called the end of truth, a threat to democracy and a potential disruptor of society. Everyone agrees on the danger, but no one has figured out what to do about it.
Kaveh reports: But now Congress and several states are considering the first legislation against AI-altered videos and audio — suggesting a coming barrage of such laws.
What's next: Spokespeople for Warner and Schiff said both are considering deepfakes legislation.
"Deepfakes — video of things that were never done with audio of things that were never said — can be tailor-made to drive Americans apart and pour gasoline on just about any culture war fire. Even though this is something that keeps our intelligence community up at night, Washington isn't really discussing the problem."
— Sen. Ben Sasse to Axios
Details: Sasse's bill targets two different groups:
Sasse's proposed punishment: A fine and/or up to two years' imprisonment, or — if the deepfake could incite violence or disrupt government or an election — up to 10 years.
Several experts tell Axios that Sasse's bill is a step in the right direction, but some worry that its rules for platforms miss the mark.
But, but, but: Some are less convinced that Congress should step in. David Greene, civil liberties director at the Electronic Frontier Foundation, says making malicious deepfakes a federal crime may hamper protected speech — like the creation of parody videos.
Reality check: New laws would be a last line of defense against deepfakes, since legislation can't easily prevent their spread. Once the law gets involved, "the harm is so out of the bag and it's viral," law professor Danielle Citron says. The holy grail, a system that can automatically detect forgeries, is still well out of reach.
French Prime Minister Édouard Philippe shaking hands with a robot. Photo: Alain Jocard/AFP/Getty
As intelligent machines begin muscling into daily life, a big open question is how deeply people will trust them to take over critical tasks like driving, elder or child care, and even military operations.
Kaveh writes: Calibrating a human's trust to a machine's capability is crucial, as we've reported: Things go wrong if a person places too much or too little trust in a machine.
Now, researchers are searching for ways to monitor trust in real time so they can immediately adjust a robot's behavior to match it.
The trouble is that trust is inexact. You can't measure it like a heart rate. Instead, most researchers examine people's behaviors for evidence of confidence.
Understanding a person's attitude toward a bot — a car, factory robot or virtual assistant — is key to improving cooperation between human and machine. It allows a machine to "self-correct" if it's out of sync with the person using it, Neera Jain, a Purdue engineering professor involved with the research, tells Axios.
Some examples of course-correcting robots:
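The idea of monitoring trust from behavior and letting the machine "self-correct" can be sketched in a few lines. This is a hypothetical illustration, not the Purdue researchers' actual method: it assumes trust can be estimated as an exponential moving average over a single behavioral signal (whether the user overrides the machine), and all class names, thresholds and parameter values are invented for the example.

```python
# Hypothetical sketch: infer a real-time trust estimate from user behavior
# and let the machine adapt when trust drifts out of sync with its capability.
# The override signal, thresholds, and smoothing factor are illustrative assumptions.

class TrustMonitor:
    def __init__(self, alpha=0.2, initial_trust=0.5):
        self.alpha = alpha          # smoothing factor for the moving average
        self.trust = initial_trust  # estimated trust, kept in [0, 1]

    def observe(self, user_overrode: bool) -> float:
        # Treat an override as evidence of low trust, acceptance as high trust.
        signal = 0.0 if user_overrode else 1.0
        self.trust = (1 - self.alpha) * self.trust + self.alpha * signal
        return self.trust

    def recommended_behavior(self) -> str:
        # The machine "self-corrects": it explains itself more when trust
        # is low and acts more autonomously when trust is high.
        if self.trust < 0.4:
            return "explain-and-confirm"
        elif self.trust > 0.8:
            return "autonomous"
        return "cooperative"

monitor = TrustMonitor()
# Two early overrides drag trust down; three acceptances rebuild it.
for overrode in [True, True, False, False, False]:
    monitor.observe(overrode)
print(round(monitor.trust, 3), monitor.recommended_behavior())
```

In a real system the behavioral signal would be richer (reaction times, gaze, manual takeovers), but the loop is the same: estimate trust continuously, then pick the interaction style that matches it.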
Photo: Spencer Platt/Getty
U.S. wage growth has slowed over the last two months, suggesting that worker pay is flattening after a spurt earlier last year, according to a new report.
Andrew Chamberlain, chief economist at the jobs site Glassdoor, said year-on-year pay growth was 2.3% in both December and January, down from 2.6%–2.7% in July, August and September.
Illustration: Rebecca Zisser/Axios
Is Facebook ready for Asia's elections? (Gwen Robinson, Cliff Venzon — Nikkei Asian Review)
Wall Street split on self-driving cars (Joann Muller — Axios)
The economics of the daily commute (Richard Florida — CityLab)
Why unbiased facial recognition is still scary (Karen Hao — MIT Tech Review)
Now your groceries see you, too (Sidney Fussell — The Atlantic)
Chicagoans are marching to work in the most frigid temperatures the city has seen in 25 years. Today the high was -12°F, Erica writes.
To keep the trains running in the heart-stopping cold, the city is lighting fires on the tracks. CNN has footage of the incredible scene.