Dec 11, 2018

Axios Codebook

Welcome to Codebook, a cybersecurity newsletter written by a man who has gotten lost in a Starbucks.

Situational Awareness: Google CEO Sundar Pichai is currently (as of newsletter release) testifying before the House Judiciary Committee. You can stream that here.

If you've got tips or story ideas, I'd love to see them. Just reply to this email.

1 big thing: Admitting what failed in a breach

Illustration: Rebecca Zisser/Axios

At a Federal Trade Commission hearing on Wednesday, Malcolm Harkins, chief security and trust officer at Cylance, will pitch his pet idea: The government should hold companies that make security software — like his — accountable.

The big picture: Harkins is hoping the FTC, or at least someone, will require companies to "disclose all of the controls that failed" during a breach — from the security flaws exploited by hackers to the security products that didn't capture them.

  • "Do what the FAA does. They report the primary cause of the problem, like a broken wheel, and all of the contributing factors that didn't stop it," Harkins told Codebook.

To be clear, this isn't the type of idea the FTC usually goes for. The FTC's regulatory powers are largely based on its mandate to fight unfair practices — in cybersecurity that means deceptive claims of privacy protection. It's not an IT advice shop.

  • "The FTC has historically been averse to specifying security measures or products that a company should employ," noted Julie O'Neill, former FTC staff attorney and current privacy and data security partner at Morrison & Foerster.
  • That doesn't make it any less interesting an idea for someone, somewhere to run with.

Why it matters: If Harkins' idea ever gets adopted, we'd know a lot more about blind spots in breach prevention.

  • Organizations typically use multiple security products designed to thwart breaches at different points in the process — one product may detect strange computers trying to log in, another might detect malicious code being run, and a third might detect data being stolen (see the sketch after this list).
  • But when the public hears about breaches, while we might learn about the initial entryway into the network, we don't tend to hear about why none of those products halted the hackers' progress.
  • Harkins compared it to how the government intercedes when there's trouble with automobile parts. "Takata was crucified," he said of the airbag maker forced into a massive recall. "Why aren't we crucified?"
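
For readers who want to picture that layering, here is a minimal, hypothetical Python sketch of several independent controls each inspecting an event, plus an after-the-fact record of which ones missed it. The control names and detection rules are invented for illustration and are not drawn from any vendor's product.

```python
# Hypothetical sketch: several independent "controls" each inspect an event,
# and we record which ones flagged it and which ones let it through.
# Control names and detection logic are invented for illustration only.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Event:
    source_host: str              # where the activity originated
    action: str                   # e.g. "login", "execute", "exfiltrate"
    details: Dict[str, str] = field(default_factory=dict)


# Each control is just a named predicate: True means "flagged it".
Control = Callable[[Event], bool]

CONTROLS: Dict[str, Control] = {
    # Flags logins from devices the organization has never seen before.
    "unknown_device_login": lambda e: e.action == "login"
    and e.details.get("device_known") == "no",
    # Flags execution of code that is not signed / allow-listed.
    "unsigned_code_execution": lambda e: e.action == "execute"
    and e.details.get("signed") == "no",
    # Flags unusually large outbound transfers.
    "data_exfil_volume": lambda e: e.action == "exfiltrate"
    and int(e.details.get("bytes", "0")) > 10_000_000,
}


def evaluate(event: Event) -> Dict[str, List[str]]:
    """Run every control and report which caught the event and which missed it."""
    caught = [name for name, check in CONTROLS.items() if check(event)]
    missed = [name for name in CONTROLS if name not in caught]
    return {"caught": caught, "missed": missed}


if __name__ == "__main__":
    # A transfer just under the volume threshold slips past every control.
    # The "missed" list is the kind of after-the-fact accounting Harkins
    # wants breached companies to disclose.
    event = Event("10.0.0.5", "exfiltrate", {"bytes": "9000000"})
    print(evaluate(event))
```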

That doesn't mean breaches always result from problems with products. But a company whose product roster matches that of a breached competitor might want to know how that combination failed.

  • This would be a good way to identify less capable systems or show how to improve capable ones.
  • If companies have clear gaps in their security product systems, knowing their negligence would be exposed might motivate some action.

Security vendors would almost definitely push back against any such scheme, as has happened whenever Harkins has brought up his idea in the past.

  • Vendors argue that breaches that circumvent their products often happen thanks to factors beyond their control: misconfigured software, poorly trained IT staff, other user error.
  • Harkins has a different explanation for the resistance: "They're embarrassed about major breaches they didn't prevent. And they should be."

The bottom line: The security industry is not at a point where it's comfortable with the message that "even the best products staffed with the best people will occasionally fail" — nor is the public ready for that nuance.

2. MuddyWater hackers use GitHub to store malware

Symantec traced a recent flurry of activity from the espionage group MuddyWater, known to spy on Middle Eastern countries, to a public GitHub source code repository.

The big picture: GitHub is an online platform that programmers use to collaborate on open source software development. If you keep an open repository on GitHub, then anyone who knows where to look can find your toolkit.
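
That visibility cuts both ways: anyone, researcher or attacker, can enumerate a public repository through GitHub's ordinary REST API. Here's a minimal Python sketch; the owner and repository names are placeholders, not the repository Symantec identified.

```python
# Minimal sketch: list the files in a public GitHub repository via the REST API.
# The owner/repo below are placeholders, not the repository Symantec identified.
import requests

OWNER = "example-owner"   # hypothetical
REPO = "example-repo"     # hypothetical

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/contents/",
    headers={"Accept": "application/vnd.github.v3+json"},
    timeout=10,
)
resp.raise_for_status()

for item in resp.json():
    # Each entry includes a name, a type (file or dir) and a download URL,
    # so anyone who knows where to look can pull down the whole toolkit.
    print(item["type"], item["name"], item.get("download_url"))
```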

The intrigue: MuddyWater got its name for being hard to attribute. The group uses a lot of scripts (essentially a list of system commands), rather than executable malware, to make its work harder to track.

The bottom line: The new campaign discovered by Symantec was heavily focused on Pakistan and Turkey, with a handful of victims in Russia, Saudi Arabia and other countries, as well as some multinational organizations. Symantec found the group had upgraded its toolkit.

  • The campaign hit a number of different sectors, but nearly one-quarter of the 131 victims analyzed by Symantec were related to the telecommunications industry.
3. Nearly half of cloud databases aren't encrypted

Palo Alto Networks published an analysis of corporate cloud security on Tuesday, and some of the results weren't pretty: 49% of cloud databases aren't encrypted.

Why it matters: Any important database should be encrypted. That's not purely a cloud problem. There are inherent security advantages and a few disadvantages to the cloud, but the bottom line is that no matter where you put data, basic security hygiene is still important.
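
As a rough illustration of that hygiene check, here is a short Python sketch (assuming AWS RDS and the boto3 SDK, one cloud setup among many) that lists database instances and flags any created without encryption at rest.

```python
# Sketch: flag AWS RDS database instances created without encryption at rest.
# Assumes boto3 and AWS credentials are configured; other clouds expose
# equivalent settings under different names.
import boto3

rds = boto3.client("rds")

paginator = rds.get_paginator("describe_db_instances")
for page in paginator.paginate():
    for db in page["DBInstances"]:
        if not db.get("StorageEncrypted", False):
            # RDS encryption at rest can only be set at creation time, so an
            # unencrypted instance generally has to be recreated from an
            # encrypted snapshot.
            print(f"UNENCRYPTED: {db['DBInstanceIdentifier']} ({db['Engine']})")
```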

In the same study, Palo Alto found that corporations were failing to meet many industry standards — missing more than half of the best practices in the CIS AWS Foundations Benchmark, around two-thirds of NIST best practices and GDPR requirements, and around 90% of HIPAA and PCI requirements.

  • Forgive the alphabet soup. The point is that companies aren't using a broad range of generally accepted security practices that are in some cases required by law.
4. From the "We're getting a little better at this" files

Photo: Michael Heim/EyeEm via Getty Images

More organizations are able to detect breaches on their own, according to CrowdStrike data released Tuesday.

Details: The company's data shows that 75% of incident response customers discovered breaches on their own, as opposed to, say, being alerted by researchers that their data is for sale on the dark web. That's up from 68% last year.

Why it matters: Discovering a breach on its own means that a company can reduce the dwell time of an attacker — the amount of time a hacker can cause damage in a system or steal files.

That's still not perfect: "It's better, but we still see an average 80 days of dwell time," said Shawn Henry, CrowdStrike's chief security officer and president of CrowdStrike Services.

5. Odds and ends
  • Google+ will shut down its consumer services sooner than expected after Google found (and fixed) an unexpected security flaw. (Axios)
  • That should give Google's CEO plenty of new things to talk about during today's congressional hearing, expected to focus on privacy and largely debunked allegations of bias. (Axios)
  • Huawei's embattled CFO will receive a bail decision today. (CNN)
  • Doom, the video game that defined the first-person shooter genre, is now 25 years old. For comparison, Pong was 21 when Doom came out. (The Register)
  • The Equifax breach was "entirely preventable," says a House Oversight report. (House Oversight Committee)
  • Europol raided a dark net counterfeiting ring. (Europol)
  • A Super Micro-funded review of its hardware didn't find the spy chip alleged in a Bloomberg report. (Reuters)
  • What to expect when you expect Twitter amplification bots. (Duo)
Codebook will return Thursday.