Sep 30, 2019

Axios Login

By Ina Fried

Scott Rosenberg here, briefly subbing for Ina as your Login attendant. Sit back and enjoy the words — all 1,496 of them (a 5-minute read) — but remember that news may move around in the overhead compartments.

1 big thing: New California initiative ups the privacy ante

Illustration: Lazaro Gamio/Axios

With impeachment hogging Congress' agenda, no national privacy law is likely to preempt California's stringent rules from going into effect next year — and activists in the state are already gearing up to put an even tougher initiative on the state's 2020 ballot.

Why it matters: California's rules often become de facto national standards. Home to Google and Facebook, this is where the tech industry's user-tracking, ad-targeting economy was born, but now it's also where efforts to tame the industry keep sprouting.

Driving the news: Real estate developer Alastair Mactaggart and his organization Californians for Consumer Privacy, which led the drive for a state law in 2018, last week introduced a new privacy-focused ballot initiative for 2020 that would bolster the requirements of the state's current law.

  • The California Consumer Privacy Act (CCPA), passed in 2018 and set to go into effect Jan. 1, 2020, gives state residents the right to find out what personal information companies hold about them, to have companies delete it, and to stop them from selling it.

The new ballot initiative goes further ...

  • establishing a data protection agency for the state to enforce new privacy laws and make new regulations.
  • creating a new class of "sensitive information" — data like social security numbers, precise location, and financial info — that firms could not sell without users opting in.
  • enacting a new right to correct inaccurate personal information stored by companies.

Flashback: The CCPA was written and passed hastily in 2018 as part of a deal with Mactaggart and his group to withdraw an earlier ballot initiative that had first spurred the push for a state-level privacy law in California.

  • Businesses have argued ever since that the law is full of loopholes and needs to be revised, but the California legislature defeated efforts to revamp the law.
  • The tech industry hoped that bipartisan efforts in Congress earlier this year would produce a less strict national privacy law that would take precedence over California's, but those efforts faltered.

Between the lines: Critics say big global companies that have already adapted to Europe's strict GDPR rules won't bat an eye at further privacy limits in California, while small firms and startups may find themselves hobbled.

  • CCPA applies only to companies that have over $25 million in revenue, hold personal information on at least 50,000 people, or earn at least half their revenue by selling consumers' personal information.
  • The new initiative raises the bar to apply only to firms with information on 100,000 customers or households.
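The "any one of three thresholds" coverage test described above can be sketched as a simple predicate. This is an illustrative sketch using the figures reported in this story (with the initiative's proposed 100,000-record threshold swapped in via a parameter), not legal guidance; the function name and signature are invented for illustration.

```python
def ccpa_applies(revenue_usd: float,
                 records_held: int,
                 share_of_revenue_from_data_sales: float,
                 record_threshold: int = 50_000) -> bool:
    """Return True if ANY of the three alternative thresholds is met.

    Thresholds per the article: >$25M revenue, OR personal information
    on at least `record_threshold` people (50,000 under CCPA; the 2020
    initiative would raise it to 100,000 customers or households), OR
    at least half of revenue from selling consumers' personal data.
    """
    return (
        revenue_usd > 25_000_000
        or records_held >= record_threshold
        or share_of_revenue_from_data_sales >= 0.5
    )

# A small data broker: low revenue, few records, but 80% of its money
# comes from selling personal data -- still covered.
print(ccpa_applies(2_000_000, 10_000, 0.8))  # True
```

Note how the initiative's higher record threshold narrows coverage: a firm holding 60,000 records with no other trigger is covered under CCPA but not under the proposed 100,000-record bar.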

What's next: California initiatives need more than 600,000 signatures to qualify for the ballot.

  • If the new privacy initiative qualifies, it will get a yes or no decision from voters in 2020.
  • Successful initiatives are much harder to modify or amend than laws passed by the state legislature.

Go deeper.

2. Cities aren't ready for the AI revolution
Data: Oliver Wyman Forum; Table: Axios Visuals

Globally, no city is even close to being prepared for the challenges brought by AI and automation. Of those ranking highest in terms of readiness, nearly 70% are outside the U.S., according to a report by Oliver Wyman, Axios' Kim Hart reports.

Why it matters: Cities are ground zero for the 4th industrial revolution. 68% of the world's population will live in cities by 2050, per UN estimates. During the same period, AI is expected to upend most aspects of how those people live and work.

The big picture: Many cities are focused on leveraging technology to improve their own economies — such as becoming more efficient and sustainable "smart cities" or attracting companies to compete with Silicon Valley.

  • But the majority of cities have ignored or downplayed the potential and significant downsides of the rise of automation, Oliver Wyman concluded after interviewing more than 50 business and city leaders and reviewing 250 city planning documents.
"What struck me most is just how many cities didn't have this on their radar screens. The thing about AI is that it's fundamentally opaque, and that makes it harder for cities to keep track of it. The overall focus on smart cities almost masks the broader trends."
— Timo Pervane, partner at Oliver Wyman, told Axios

What they found: No city or continent has a significant advantage when it comes to AI readiness, but some have parts of the recipe.

  • Size matters: Megacities have an advantage thanks to their well-developed business communities and high-skilled talent pools. But smaller cities score better on their "vision" for the next few decades.
  • Urban realists: A global survey of 10,000 city dwellers found that, while they are optimistic about the opportunities provided by technologies in their cities, roughly 45% anticipate job loss resulting from AI or automation.
  • Small city confidence: In the U.S., there is an inverse relationship between city size and concern about job loss: Pittsburgh and Boston residents are the least worried about losing jobs to AI.

By the numbers: Here are the survey stats that stood out.

  • 46% of Chinese citizens see data privacy violations as the No. 1 risk from AI.
  • 95% of Shanghai residents believe technological change will make their lives better, compared with 47% in Berlin (the global average is 69%).
  • 89% of respondents in Dubai said they believe their city government has a strategy to respond to the rise of AI, compared with 45% in San Francisco (the global average is 58%).

Reality check: Cities can't deal with the repercussions of AI on their own. National and regional governments will also have to step in with policy strategies in collaboration with businesses.

Go deeper: See how your city measures up

3. Revenge of the deepfake detectives

Illustration: Sarah Grillo/Axios

Tech giants, startups and academic labs are pumping out datasets and detectors in hopes of jump-starting the effort to create an automated system that can separate real videos, images and voice recordings from AI forgeries, Kaveh Waddell writes for Axios Future.

Why it matters: Algorithms that try to detect deepfakes lag behind the technology that creates them — a worrying imbalance given the technology's potential to stir chaos in an election or an IPO.

Driving the news: Dessa, the AI company behind the hyper-convincing fake Joe Rogan voice from earlier this summer, published a tool today for detecting deepfake audio — the kind that recently scammed a CEO out of $240,000.

  • The new detector, which Axios is reporting first, is open source, so anybody can go through the code for free to understand and potentially improve it.
  • But the company gets something out of it: The detector is built on a Dessa platform, which you have to download (without paying) to set it up.

The big picture: There's an all-hands scramble for better detectors, which generally require a lot of really good examples of deepfakes. Researchers use them to train algorithms that can tell if media was created by AI.

  • Yesterday, SUNY Albany deepfake expert Siwei Lyu released a dataset filled with celebrity deepfakes.
  • Earlier in the week, Google and Jigsaw — both owned by parent company Alphabet — released a large set of video deepfakes.
  • And earlier this month, Facebook, Microsoft and the Partnership on AI teamed up with academic researchers to release more deepfake videos — and offer a prize to the team that uses them to make the best detector.

Unlike these datasets, which allow researchers to cook up their own detectors, Dessa is releasing a pre-baked system — which has advantages and risks.

  • The company felt a responsibility to release an antidote after it made the realistic Rogan voice, says Ragavan Thurairatnam, Dessa's co-founder.
  • "I think it's inevitable that malicious actors are going to move much faster than those who want to stop it," he tells Axios. The free detector is a "starting point" for people to push detection forward.

But, but, but: Thurairatnam acknowledged that an open-source detector could help a particularly determined troll create new audio fakes that fool it. That's because generative AI systems can be trained to trick a specific detector.
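The evasion risk Thurairatnam describes can be made concrete with a toy example. The sketch below stands in a fixed linear model for a published detector whose weights an attacker can read (the open-source scenario above), then nudges a flagged sample against those weights until the detector's "fake" score collapses. Everything here — the model, names, and numbers — is an invented illustration of the principle, not Dessa's system; real attacks do the same thing through a deep network rather than a dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an open-source detector: a fixed linear model
# whose weights the attacker can inspect.
w = rng.normal(size=16)
b = 0.0

def fake_score(x):
    """Probability the toy detector assigns to the sample being fake."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def evade(x, steps=100, lr=0.2):
    """Gradient-style attack: repeatedly step the sample against the
    detector's weight vector, driving its fake score toward zero."""
    direction = w / np.linalg.norm(w)
    for _ in range(steps):
        x = x - lr * direction
    return x

sample = rng.normal(size=16)   # pretend this is a flagged deepfake
evaded = evade(sample)
print(fake_score(sample), fake_score(evaded))  # score drops after evasion
```

This is exactly why releasing a detector openly cuts both ways: the same transparency that lets researchers improve it gives a determined attacker a fixed target to optimize against.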

4. WeWork — the book

WeWork's roller coaster over the last 2 weeks has monopolized headlines, and now the story of the office coworking company and its high-flying CEO will be the subject of an upcoming book by Wall Street Journal reporters Eliot Brown and Maureen Farrell, they tell Axios' Kia Kokalitcheva.

The big picture: The growing influence of technology companies on the world has made them not only the subjects of regulatory and investor scrutiny, but now also the focus of grand business narratives.

The intrigue: There's already been a string of tech startup books, from John Carreyrou's "Bad Blood" (about Theranos) to Mike Isaac's "Super Pumped" (Uber). Brown tells Axios there's room for more.

  • "[The story of WeWork] tells us a lot about Silicon Valley in the past decade, as well as a chunk of global finance," says Brown.
  • "Everything about WeWork — its fundraising, its founders and its losses — has seemed bigger and more dramatic than any of the other companies I've followed, including Uber," says Farrell.

The book will be published by Crown, an imprint of Random House. There's no release date yet.


5. Take note

On Tap

The Bits & Pretzels conference runs through Tuesday in Munich.


6. After you Login

Bruce Bochy’s grandson tips his hat as the SF Giants' manager prepares for his final game as skipper.
