Axios Future

June 13, 2019
Today's Smart Brevity count: 1,176 words, <5 minute read.
- A survey request: For the past few weeks, we've included a word and read-time count at the top of Future and our other newsletters. Now, we want to hear from you — do you love it or hate it? Click here for love and here for hate. We'll share the results Monday. Thanks in advance for responding!
Any stories we should be chasing? Hit reply to this email or message me at [email protected] or the rest of the Future team: Kaveh Waddell at [email protected] and Erica Pandey at [email protected].
Okay, let's start with ...
1 big thing: A coming surveillance awakening

Illustration: Sarah Grillo/Axios
There are millions of surveillance cameras in the U.S., but not nearly enough eyes to watch them all. When you pass one on the street, you can rightly expect your actions to go unnoticed in the moment; footage is instead archived for review if something goes wrong, Kaveh reports.
But now, AI software can flag behavior it deems suspicious in real-time surveillance feeds, or pinpoint minute events in past footage — as if each feed were being watched unblinkingly by its own hyper-attentive security guard. The new technology, if it spreads in the U.S., could put an American twist on Orwellian surveillance systems abroad.
Big picture: In a new report today, ACLU surveillance expert Jay Stanley describes a coming mass awakening of millions of cameras, powered by anodyne-sounding "video analytics."
Collecting data has become dirt cheap, but attention has remained a scarce, expensive resource — especially for analyzing video, Stanley says. That's what is changing.
- "The danger is that video analytics would be used to make sure that if you do anything, it will never be missed," Stanley tells Axios. That would be a significant departure from today's largely unmonitored cameras.
- "We're right on the cusp of this technology really becoming real."
Quick take: This new software democratizes high-powered surveillance — once the purview of wealthy governments and organizations. Companies are effectively selling "surveillance in a box" at a price far below the cost of hiring human video analysts.
Police, retailers, railroads and even carmakers are installing various shades of this software. And we've written about its use in schools.
- The full extent of its deployment, or even how well the technology lives up to its marketing promises, isn't entirely clear.
- What's certain is that there's demand for it. Analysts predict that the video analytics market, which was worth $3.23 billion in 2018, will grow to $8.55 billion in 2023 (a quick check of the implied growth rate follows below).
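For context, those analyst figures imply a compound annual growth rate of roughly 21.5%. A minimal back-of-the-envelope check in Python, using only the two dollar figures and years cited above:

```python
# Growth rate implied by the analysts' figures cited above:
# $3.23 billion in 2018 rising to $8.55 billion in 2023.
start, end, years = 3.23, 8.55, 2023 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # about 21.5% per year
```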
How it works: The software is marketed as being able to:
- Detect specific events like people hugging, smoking, fighting or drinking, or instead automatically detect "anomalies" — deviations from the usual goings-on in a certain feed, like a car driving the wrong way or a person loitering at an odd hour (a rough sketch of this idea follows the list).
- Search historical footage by clothing or even skin color and "summarize" countless hours of footage into a single image or a short clip.
- Determine a person's emotional state or even make assumptions about their personality, based only on their face and body movements.
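To make the anomaly-detection claim concrete, here is a minimal, hypothetical sketch using the open-source OpenCV library. It simply flags frames in a feed where an unusual share of pixels is moving; the file name and threshold are invented for illustration, and commercial systems rely on trained deep-learning models rather than anything this simple.

```python
# A toy illustration of the general idea behind "video analytics":
# watch a feed frame by frame and flag unusual amounts of motion.
# This is not any vendor's actual product -- it only sketches the concept.
import cv2

VIDEO_SOURCE = "lobby_camera.mp4"   # hypothetical file; 0 would use a webcam
MOTION_THRESHOLD = 0.05             # flag frames where >5% of pixels are "moving"

cap = cv2.VideoCapture(VIDEO_SOURCE)
# MOG2 learns a background model and marks pixels that deviate from it
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)       # foreground (moving) pixels
    moving_fraction = (mask > 0).mean()  # share of pixels in motion
    if moving_fraction > MOTION_THRESHOLD:
        # A real system would try to classify the event (fighting, loitering,
        # wrong-way driving); here we only log that something unusual happened.
        print(f"frame {frame_idx}: possible anomaly "
              f"({moving_fraction:.1%} of pixels changed)")
    frame_idx += 1

cap.release()
```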
The danger: Losing anonymity in public can change the way people behave, experts say, much like China's omnipresent surveillance can cause residents to constantly look over their shoulders.
- "People will start to wonder if they'll be cataloged or monitored if they're at a protest or political event, and potentially be subject to retribution," says Jake Laperruque, a privacy expert at the Project on Government Oversight.
- And in the case of emotion detection, significant decisions — like whether or not you get a job — can hang on the software's interpretation of your facial expressions, says Meredith Whittaker, co-founder of NYU's AI Now Institute.
2. Reconsidering social media's immunity

Illustration: Lazaro Gamio/Axios
For all the talk of antitrust investigations, the bigger threat to tech platforms like Google and Facebook is an intensifying call from Congress to revamp a law that shields them and other web companies from legal liability for users' posts, Kaveh reports.
Driving the news: House Intelligence Chairman Adam Schiff today joined a motley group of policymakers calling to reconsider the legal protections afforded to tech platforms. It's a broadening of a line of attack that caught fire last year when a new law made it easier to sue tech platforms for hosting sex-trafficking ads.
The big picture: Social media companies are taking hits from every direction for allowing hate speech, false information and now fake video to mushroom on their sites. But legally, they're in the clear even when hosting the most odious content.
Be smart, per Axios' David McCabe: Lawmakers have been threatening broad changes to the immunity law for over a year but haven't advanced any legislative proposals to do so. For now, the threat is more potent as leverage than as legislation they've actually been willing to move.
Details: Section 230 of the Communications Decency Act protects companies that carry user-generated content — like Facebook, Twitter, YouTube and other sites — from bearing legal liability for what their users post.
- It's become a cornerstone of the modern internet since it was passed in 1996, freeing companies from having to closely police every sentence, video or photo published on their platforms.
- But critics say it's allowed them to shirk a societal responsibility to keep harmful and false information from spreading online.
After a hearing today on the national security implications of deepfakes — AI-manipulated videos — Schiff told reporters:
"If the social media companies can't exercise a proper standard of care when it comes to a whole variety of fraudulent or illicit content, then we have to think about whether that immunity still makes sense. These are not nascent industries or companies that are struggling for viability — they're now behemoths, and we need them to act responsibly."
One idea for how to update the law comes from Danielle Citron, a University of Maryland law professor who has written extensively about deepfakes and was a witness at today's hearing.
- "It shouldn't be a free pass," Citron said of the immunity. "It should be conditioned on reasonable content moderation practices."
- That would mean that companies like Facebook could get in legal trouble if someone posted a defamatory fake video and the company didn't act reasonably to take it down or tell users it was manipulated.
- The "reasonable person" standard is commonly applied across law.
What's next: If this idea picks up steam again in Congress, expect Big Tech — including any site that hosts user comments and reviews, user-written ads, or videos and photos — to fight tooth and nail to keep its Section 230 immunity.
3. Tyson enters the faux meat race

A Beyond Meat burger. Photo: Adam Berry/Getty
The race to own the booming fake meat business is heating up.
The latest news: Two of the biggest American meat producers — Tyson Foods and Perdue — say they will market faux and hybrid chicken nuggets.
- In a high-profile announcement today, Tyson said it will sell plant-based nuggets that mimic the taste of chicken, along with burgers made of a blend of beef and plants.
- In a statement yesterday, Perdue said that in September, it will begin selling blended nuggets mixing chicken and vegetables, reports CBS' Kate Gibson.
The big picture: The popularity of faux meat has surprised almost everyone. There is a nationwide U.S. shortage of Impossible Burgers, and Beyond Meat shares have more than quintupled from the IPO price in May.
4. Worthy of your time

Illustration: Rebecca Zisser/Axios
How bookstores saved themselves from Amazon (Frederick Studemann — FT)
Uber's plans are in the air — really (Joann Muller — Axios)
The day the music burned (Jody Rosen — NYT Magazine)
The new Silicon Valley problem: Chinese money (Rolfe Winkler — WSJ)
The Lehman allegory (Sarah Churchwell — NY Review of Books)
5. 1 Achaemenid thing: X, before the X-ray

Image: Hans Weiditz/Print Collector/Getty
Today, Xerxes is not a household name. But back in the fifth century B.C., the Persian king cut quite a swath through the Middle East before being assassinated by his bodyguard under not-quite-clear circumstances.
- Xerxes has another claim to fame as well: In the 19th century, before the invention of the X-ray, Xerxes was the go-to schoolhouse example of how to use the letter X, reports The Public Domain Review (h/t Jason Kottke).
- U would be for urchin, W for wagon or woodman — and X for Xerxes.
It was Wilhelm Röntgen, discoverer of X-rays in 1895, who dethroned Xerxes a second time.