Aug 15, 2019

Axios Codebook


Last week, I told you I'd bet $5 on the championship hopes of the NFL team of your choosing.

You weirdos picked the Cleveland Browns. Welcome to Codebook, the patron newsletter of lost causes.

We got 15-to-1 odds! Go Browns!

Today's Smart Brevity: 1,453 words, a 5-minute read

1 big thing: Why the deepfakes threat is shallow

Illustration: Aïda Amer/Axios

Despite the sharp alarms being sounded over deepfakes — uncannily realistic AI-generated videos showing real people doing and saying fictional things — security experts believe the videos ultimately don't offer propagandists much advantage over the simpler forms of disinformation they are likely to use.

Why it matters: It’s easy to see how a viral video that appears to show, say, the U.S. president declaring war would cause panic — until, of course, the video was debunked. But deepfakes are not an efficient tool for a long-term disinformation campaign.

Deepfakes are detectable: they can fool human eyes, but not machines. In fact, ZeroFOX, a leading online reputation security firm, announced last week that it would begin offering a proactive deepfake detection service.

  • “It’s not like you’ll never be able to trust audio and video again,” said Matt Price, principal research engineer at ZeroFOX.
  • There are a number of ways to detect AI-generated video, from digital artifacts in the audio and video to misaligned shadows and lighting to human anomalies a machine can measure, like eye movement, blink rate and even heart rate (a toy sketch of one such check follows this list).
  • Price noted that current detection techniques likely won't be nimble enough for a network the size of YouTube to screen every video, meaning users would likely see — and spread — a fake before it was debunked.
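
None of the sources shared detection code, but as a toy illustration of the blink-rate signal, here is a minimal Python sketch under stated assumptions: per-frame eye-openness scores have already been extracted by some face-landmark model, and the thresholds and the 2-20 blinks-per-minute band are illustrative guesses, not ZeroFOX's actual values.

```python
# Toy illustration: flag a clip whose blink rate falls outside a typical
# human range. Assumes per-frame eye-openness scores were already extracted
# by a face-landmark model; thresholds here are illustrative guesses.

def count_blinks(eye_openness, closed_thresh=0.2):
    """A blink is a run of consecutive frames below the closed-eye threshold."""
    blinks, in_blink = 0, False
    for score in eye_openness:
        if score < closed_thresh and not in_blink:
            blinks, in_blink = blinks + 1, True
        elif score >= closed_thresh:
            in_blink = False
    return blinks

def looks_synthetic(eye_openness, fps=30, lo=2.0, hi=20.0):
    """Flag clips whose blinks-per-minute falls outside a human-typical band."""
    minutes = len(eye_openness) / (fps * 60)
    rate = count_blinks(eye_openness) / minutes
    return not (lo <= rate <= hi)

# Example: 10 seconds of video in which the subject never blinks gets flagged.
print(looks_synthetic([0.35] * 300))  # True: zero blinks is anomalous
```

Real detectors combine many such signals, which is also why they are computationally expensive enough that screening every upload, per Price, remains out of reach.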

But, but, but: If we have learned anything from the manipulated Nancy Pelosi video and years of work from conservative provocateur James O’Keefe, it's this: A lot of people will go on believing manipulative content rather than demonstrable truth if the manipulation brings them comfort. It doesn’t take high-tech lying to do that.

The intrigue: As Camille François, chief innovation officer at Graphika, a firm used by the Senate Intelligence Committee to analyze Russian disinformation on social media, told Codebook, “When I consider the problem, I don’t worry about deepfakes first.”

  • She added, “There are really sophisticated disinformation campaigns run by threat actors with a lot of money, and they don’t do fake stuff — it’s not efficient. They steal content that’s divisive or repurpose other content.”
  • Or as Darren L. Linvill, a Clemson University researcher on Russian social media disinformation, put it, deepfakes will be “less of a problem than funny memes.”
  • “A lot of research shows fake news is not the problem many people think it is," he said. "[The Internet Research Agency, a Russian social media manipulation outfit], for instance, barely employed what you could truly call ‘fake news’ after early 2015."

When disinformation groups do use fake media in their campaigns, it usually takes the form of fake images presented in a misleading context — so-called "shallow fakes." François uses the example of denying the reality of a chemical weapons attack by tweeting a photo of the same area that predates the attack.

  • "Shallow fakes" are cheaper, faster, require no technical expertise and can’t be disproven by signals analysis.

The bottom line: Deepfakes take advantage of human vulnerabilities that can be exploited much more efficiently by other means.

  • That means the disinformation problem won't be solved through technology or policy alone.
  • “Nations that have successfully built resilience to these problems have included digital literacy elements to better protect their populations,” said Peter Singer, co-author of "LikeWar," a book on social media disinformation.

2. Innovative software bolsters IoT threat data harvesting

The nonprofit Global Cyber Alliance (GCA) released AIDE, a clever threat analysis tool, to assist in the study of cyber threats to internet-connected devices.

The big picture: Security researchers often use decoy systems known as "honeypots" to learn how hackers would try to break into authentic systems. The GCA project allows internet-connected device honeypots to be operated at scale (a setup known as a "honeyfarm") without investing in a ton of devices.

How it works: The GCA ProxyPot allows a single device, anything from a toaster to the systems running a nuclear power plant, to be connected to the internet dozens of times, with each connection appearing to be a unique, hackable target (see the sketch after this list).

  • As hackers attack the honeyfarm, the data is collected and fed into AIDE to help prevent similar attacks.
  • GCA told Codebook it is inviting academics to study data gathered from the project, with the understanding that any useful algorithm resulting from the research can be folded into AIDE.
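
To make the proxy idea concrete, here is a minimal Python sketch under stated assumptions: the ports, the backend address and the logging are hypothetical, and this is not GCA's actual ProxyPot code. Each listening port forwards traffic to the same single device, so every port looks like a separate hackable target while connection attempts get logged.

```python
# Minimal sketch of the proxy-based honeyfarm idea: one real device, many
# externally visible "targets." Every listening port forwards traffic to
# the same backend device and logs the attacker's connection. The ports
# and backend address below are hypothetical examples.
import asyncio, datetime

DEVICE = ("192.0.2.10", 23)         # the one real device (TEST-NET example IP)
LISTEN_PORTS = range(10023, 10033)  # 10 ports = 10 apparent targets

async def pipe(reader, writer):
    """Copy bytes one way until the sender hangs up."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_r, client_w):
    peer = client_w.get_extra_info("peername")
    port = client_w.get_extra_info("sockname")[1]
    print(f"{datetime.datetime.utcnow().isoformat()} hit on :{port} from {peer}")
    dev_r, dev_w = await asyncio.open_connection(*DEVICE)
    # Shuttle bytes both ways so the attacker interacts with the real device.
    await asyncio.gather(pipe(client_r, dev_w), pipe(dev_r, client_w))

async def main():
    servers = [await asyncio.start_server(handle, "0.0.0.0", p)
               for p in LISTEN_PORTS]
    await asyncio.gather(*(server.serve_forever() for server in servers))

asyncio.run(main())
```

Repeating the same trick across many network addresses, not just many ports, is presumably what scales one device into a full honeyfarm.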

Attivo Networks, a security firm, is already working with GCA to build a farm of industrial monitoring computers known as SCADA systems.

3. Final notes from Black Hat and DEF CON

"Hacker summer camp," as the 2 consecutive Las Vegas conferences are known, continued after the release of last week's Codebook. Here are some last gasps of knowledge from this year's experience.

Win an internship by beating the Air Force: Heck, win 2 internships by beating the Air Force. Firmware security company Red Balloon and the Air Force teamed up to offer an elaborate simulation with a grand prize of 6-month internships at both the Air Force's research labs and Red Balloon.

  • Hackers had to enter a shipping container protected by a turret and by security cameras monitored by Air Force personnel before obtaining a golden ticket from inside an ATM.
  • "The Air Force told me that they wouldn't bring an automated turret," Red Balloon CEO Ang Cui told Codebook, "but I told them I had my heart set on a turret, so I built my own."

Hardware drivers aren't great: Eclypsium reported discovering security problems in more than 40 Windows hardware drivers certified by Microsoft.

  • Drivers integrate hardware with a computer's operating system, giving them fairly thorough access to the system.
  • The study found flaws in drivers from the 3 main BIOS makers, as well as in drivers for NVIDIA and Huawei systems, among others.
  • "Because it’s fundamentally a design flaw, developers need to design better," said Eclypsium CEO Yuriy Bulygin. "But Microsoft needs to be a little more rigid in certifying them."

The hip industrial threat is oil and gas: Sergio Caltagirone, vice president of threat intelligence at Dragos, told Codebook there was a sea change happening in the world of industrial threats.

  • Most threat groups specialize in a single industry, and most media coverage of industrial threats focuses on the electric grid as the most vulnerable target.
  • "Now of the 9 threat groups we track, 5 target oil and gas," he said. "What we’re really scared about is there is a higher chance of destruction and loss of life than hacking the electric grid ever had."
  • Hacking oil and gas can create many of the same impacts as hacking electric utilities, because utilities often require an uninterrupted flow of natural gas to generate power.
  • Much of North America's natural gas distribution is autonomous and remote, so the sector faces a unique problem: providing human security staffing for facilities designed for massive connectivity.

Firmware security is stagnant: A massive study shows there hasn't been much improvement in firmware security over the last 15 years. Firmware is the low-level software embedded inside hardware. Read Security Ledger's writeup here.

Elevator phreaking: Covering a DEF CON talk, Wired went deep into "elevator phreaking," ways to illicitly call the emergency phones in elevators.

Spending a week in Las Vegas is a lot of Las Vegas: This is true every year. It's the most blinky city in the United States.

4. Firm allegedly aping Cambridge Analytica exaggerates resume

Brazilian firm IDEIA Big Data — the subject of a recent story in Quartz — reminds people of Cambridge Analytica, based on slides from a presentation used to pitch new clients. The deck seems to claim IDEIA worked with the Democratic National Committee, which does not appear to have been the case.

Details: The DNC’s logo appears on one of the slides listing clients. While the DNC didn’t comment in the Quartz story, it denied working with IDEIA to Axios, and a search of OpenSecrets shows no U.S. political expenditures to IDEIA by any party or candidate.

The parallels between IDEIA and Cambridge Analytica aren't exact, at least according to what the Quartz story could confirm.

  • Like Cambridge Analytica, IDEIA uses social media personality quizzes to harvest information on potential advertising targets, using the OCEAN personality model. IDEIA appears to have copied some of the language describing that model from Cambridge Analytica.
  • Unlike Cambridge Analytica, IDEIA claims to be upfront about the data collection practices in its personality quizzes.

5. Odds and ends
  • Huawei aided African nations' efforts to spy on political opponents. (Wall Street Journal)
  • A major biometric security firm leaked data including fingerprints from an unsecured online database. While VPNMentor, the firm that discovered the leaky database, describes this as a breach, traditionally the word breach is only used when criminals get their hands on data, which we don't know happened in this case. (VPNMentor)
  • 527 Wisconsin election officials are using either Windows XP, for which Microsoft no longer releases security patches, or Windows 7, which Microsoft will soon stop supporting. (StateScoop)
  • Microsoft says to patch Windows to avert a wormable bug. (Microsoft)
  • Russian actors attempted to phish the ProtonMail accounts of international security news site Bellingcat and several NGOs. (Bellingcat)
  • Google's got some stats on its password safety measures. (Google)

Codebook will return next week. Browns season kicks off 9/8 against the Titans.