Jun 14, 2019 - Technology

AI is "awakening" surveillance cameras

Illustration: Sarah Grillo/Axios

There are millions of surveillance cameras in the U.S., but not nearly enough eyes to watch them all. When you pass one on the street, you can rightly expect your actions to go unnoticed in the moment; footage is instead archived for review if something goes wrong.

What's happening: Now, AI software can flag behavior it deems suspicious in real-time surveillance feeds, or pinpoint minute events in past footage — as if each feed were being watched unblinkingly by its own hyper-attentive security guard. The new technology, if it spreads in the U.S., could put an American twist on Orwellian surveillance systems abroad.

Big picture: In a new report today, ACLU surveillance expert Jay Stanley describes a coming mass awakening of millions of cameras, powered by anodyne-sounding "video analytics."

Collecting data has become dirt cheap, but attention has remained a scarce, expensive resource — especially for analyzing video, Stanley says. That's what is changing.

  • "The danger is that video analytics would be used to make sure that if you do anything, it will never be missed," Stanley tells Axios. That would be a significant departure from today's largely unmonitored cameras.
  • "We're right on the cusp of this technology really becoming real."

Quick take: This new software democratizes high-powered surveillance — once the purview of wealthy governments and organizations. Companies are effectively selling it as "surveillance in a box," for far less than the cost of hiring human video analysts.

Police, retailers, railroads and even carmakers are installing various shades of this software. And we've written about its use in schools.

  • The full extent of its deployment, or even how well the technology lives up to its marketing promises, isn't entirely clear.
  • What's certain is that there's demand for it. Analysts predict that the video analytics market, which was worth $3.23 billion in 2018, will grow to $8.55 billion in 2023.
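Those market figures imply steep growth. As a quick sanity check (our calculation, not the analysts'), the compound annual growth rate implied by the two numbers works out to roughly 21.5% per year:

```python
# Implied compound annual growth rate (CAGR) from the cited figures:
# $3.23 billion in 2018 growing to $8.55 billion in 2023 (5 years).
start, end, years = 3.23, 8.55, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 21.5%
```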

How it works: The software is marketed as being able to:

  • Detect specific events like people hugging, smoking, fighting or drinking, or instead automatically detect "anomalies" — deviations from the usual goings-on in a certain feed, like a car driving the wrong way or a person loitering at an odd hour.
  • Search historical footage by clothing or even skin color and "summarize" countless hours of footage into a single image or a short clip.
  • Determine a person's emotional state or even make assumptions about their personality, based only on their face and body movements.
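The "anomaly" detection described above can be illustrated with a toy background-subtraction sketch — our simplified illustration, not any vendor's actual algorithm. It keeps a slowly updating per-pixel "background" estimate of the scene and flags frames that deviate sharply from it:

```python
# Toy anomaly detector for a video feed (illustrative sketch only).
# Each frame is a flat list of pixel intensities; the detector keeps a
# running background model and flags frames that deviate from it.
def detect_anomalies(frames, alpha=0.1, threshold=30.0):
    """Return indices of frames whose mean absolute deviation from the
    running background exceeds `threshold` (pixel-intensity units)."""
    background = list(frames[0])
    anomalous = []
    for i, frame in enumerate(frames[1:], start=1):
        diff = sum(abs(p - b) for p, b in zip(frame, background)) / len(frame)
        if diff > threshold:
            anomalous.append(i)
        # Slowly blend the current frame into the background model, so
        # gradual scene changes are absorbed rather than flagged.
        background = [(1 - alpha) * b + alpha * p
                      for b, p in zip(background, frame)]
    return anomalous

# A mostly static "scene" with one sudden change at frame 3.
feed = [[10] * 100, [11] * 100, [10] * 100, [200] * 100, [10] * 100]
print(detect_anomalies(feed))  # [3]
```

Real products use far more sophisticated models, but the basic idea — learn what "usual" looks like in a given feed, then flag deviations — is the same.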

The danger: Losing anonymity in public can change the way people behave, experts say, much like China's omnipresent surveillance can cause residents to constantly look over their shoulders.

  • "People will start to wonder if they'll be cataloged or monitored if they're at a protest or political event, and potentially be subject to retribution," says Jake Laperruque, a privacy expert at the Project on Government Oversight.
  • And in the case of emotion detection, significant decisions — like whether or not you get a job — can hang on the software's interpretation of your facial expressions, says Meredith Whittaker, co-founder of NYU's AI Now Institute.

Go deeper

Facebook changing deepfake policies

Illustration: Sarah Grillo/Axios

Facebook is tightening its policies on "manipulated media," including deepfakes, Monika Bickert, the company's vice president of global policy management, says in a blog post.

Why it matters: Facebook has been criticized for the way it enforces its policies on deepfakes (AI-generated audio and video) and other misleading media. In particular, critics took aim at the tech giant's decision to allow a doctored video of House Speaker Nancy Pelosi to remain on its platform last year.

Go deeper (Jan 7, 2020)

2020's first wave of facial surveillance bills

Illustration: Lazaro Gamio/Axios

Ten states have introduced bills in 2020 that would regulate, ban or study facial recognition systems, according to the Georgetown Law Center on Privacy and Technology.

The big picture: There is no federal regulation of this technology, despite calls for guardrails from its creators and bipartisan support in Congress for restraining it.

Go deeper (Jan 18, 2020)

Google CEO calls for balanced regulations on artificial intelligence

Photo: Carsten Koall/Getty Images

Google CEO Sundar Pichai is calling for regulations on artificial intelligence, warning that the technology can bring both positive and negative consequences, AP reports.

Why it matters: Lawmakers are largely scrambling to play catch-up on AI regulation as the technology continues to grow. Pichai did not offer specific proposals, but speaking Monday at Bruegel, a European economic think tank, he urged "international alignment" between the United States and the European Union to help ensure AI is used primarily for good.

Go deeper (Jan 20, 2020)