Sep 4, 2020

Axios Login

By Ina Fried

Here's my two cents' worth on a busy day in tech. That may not sound like much, but we are in the midst of a national coin shortage.

🚨 On the next "Axios on HBO": Mark Zuckerberg says Facebook "and other media need to start ... preparing the American people that there's nothing illegitimate about this election," warning of the potential for civil unrest (watch clip).

  • Catch the full interview next Tuesday, Sept. 8 at 11pm ET/PT on all HBO platforms.

Today's Login, meanwhile, is 1,396 words, a 5-minute read.

1 big thing: Tech's deepfake problem is worsening

Illustration: Annelise Capossela/Axios

The run-up to the U.S. presidential election is also speeding up the arrival of a tipping point for digital fakery in politics, Axios' Ashley Gold reports.

What's happening: As the election, a pandemic, and a national protest movement collide with new media technology, this political moment is accelerating the proliferation and evolution of deliberately deceptive media, leaving companies struggling to enforce often-vague policies.

Driving the news: Recent manipulated media that has spread widely includes fake or misleading clips of Joe Biden (twice over), health care activist Ady Barkan and House Speaker Nancy Pelosi.

The big picture: Platforms are now taking fairly mild measures against even such crudely edited content, often slapping labels on videos after they've already spent hours or days circulating. Experts are worried about Silicon Valley's ability to meet the challenge once AI-generated deepfakes become widespread and it's trivially easy to make any famous person appear to say anything.

  • "We're unprepared because social media companies have failed to detect this type of content at internet scale, and detect it fast enough to stop the spread of it before it does damage," said Jeffrey McGregor, CEO of photo authentication firm Truepic.

Plus: Some especially tricky challenges we face or will soon face, according to experts:

  • "Cheapfakes," or "shallow fakes" — like the slowed-down Pelosi video — can spread quickly before getting caught, and even then may not be taken down. Facebook, whose manipulated media policy largely focuses on specifically thwarting deepfakes, labeled that video as misleading but left it up.
  • "Readfakes" are on the rise — a coinage from Graphika researcher Camille François referring to AI-generated text, which can take the form of fake articles and op-eds.
  • Generative Adversarial Networks, which can create images of non-existent people, let disinformation campaigns make fake social media accounts or even infiltrate traditional media.
  • "Digital humans" expand on that idea, relying on voice synthesis and faked video to create entire faked personas.
  • Sheer volume is a concern, as AI gets better at generating a lot of images or text at once, flooding the internet with junk and making people less sure of what's real.
  • Those sharing faked media are getting smarter about skirting the line of breaking platform rules, such as by claiming a video is just parody. Of course, some manipulated videos are meant as parodies, which only makes this problem tougher.

Solutions are tough to come by. Experts agree that attempting to catch and kill deepfakes and cheapfakes on a case-by-case basis may never work at scale. But some tactics can help.

  • Putting deepfake detection tools in users' hands would help platforms address the challenge of scale, Graphika's Camille François told Axios. And it may give users more confidence than a platform telling them what's real or fake.
  • Best practices shared across the industry are a must, said Rob Meadows, chief technology officer at the AI Foundation, which recently partnered with Microsoft on Reality Defender, a deepfake detection initiative. Ideally these would include some sort of objective criteria for assessing the likelihood that a given piece of media is faked, he said.
  • Authenticating images and videos by tracking when they're taken and every subsequent edit or manipulation could prove more effective in restoring trust than trying to detect and quash deepfakes once they're already circulating. Truepic is among the companies working on an open standard to do just that (a bare-bones sketch of the idea follows below).
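
To make that concrete, here's a minimal sketch of such an edit-history chain, written in Swift purely for illustration. The types and field names here are hypothetical, not drawn from Truepic's standard or any real specification; the point is that each record hashes both the media and the previous record, so altering any step of the history breaks the chain.

    import Foundation
    import CryptoKit

    // Hex-encode the SHA-256 digest of some bytes.
    func sha256Hex(of data: Data) -> String {
        SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
    }

    // Hypothetical record of one step in a media file's history:
    // capture first, then each subsequent edit.
    struct ProvenanceRecord {
        let mediaHash: String      // SHA-256 of the media bytes after this step
        let action: String         // e.g. "captured", "cropped"
        let timestamp: Date
        let previousHash: String   // hash of the prior record, chaining the history

        // Hash of this record, used as `previousHash` by the next record.
        var recordHash: String {
            let payload = "\(mediaHash)|\(action)|\(timestamp.timeIntervalSince1970)|\(previousHash)"
            return sha256Hex(of: Data(payload.utf8))
        }
    }

    // Capture, then one edit.
    let capture = ProvenanceRecord(mediaHash: sha256Hex(of: Data("raw image bytes".utf8)),
                                   action: "captured", timestamp: Date(), previousHash: "")
    let crop = ProvenanceRecord(mediaHash: sha256Hex(of: Data("cropped image bytes".utf8)),
                                action: "cropped", timestamp: Date(),
                                previousHash: capture.recordHash)

    // A verifier recomputes each recordHash; any mismatch means the
    // history was altered after the fact.
    print(crop.previousHash == capture.recordHash)  // true for an intact chain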

Go deeper: Tech platforms struggle to police deepfakes

2. Facebook, Twitter face more tough political calls

While faked video remains one problem, social media platforms are struggling just as hard to police the very real statements coming out of actual politicians' mouths, especially when those statements threaten the legitimacy of the upcoming presidential election.

Why it matters: Facebook and Twitter have both said that they would take a tough line when it came to election-related misinformation. Recent events are reminding us how great that challenge is.

Driving the news:

  • Facebook, citing its policies against voter fraud, said Thursday it will take down a video of President Trump suggesting people vote twice in North Carolina — if the video is being shared approvingly.
  • Trump tweeted a slightly toned-down version of his voting advice on Thursday as well. Twitter subsequently added a label to those tweets noting that they violated platform rules on civic and election integrity. It also blocked them from being retweeted or liked any further.
  • In his interview for "Axios on HBO," CEO Mark Zuckerberg said Facebook is trying to curb misinformation that could sow chaos around the election, particularly if results are uncertain as of election night. "I think we need to be doing everything that we can to reduce the chances of violence or civil unrest in the wake of this election," he said.

What they're saying:

  • Facebook's stance on the Trump video was tougher than usual. "This video violates our policies prohibiting voter fraud and we will remove it unless it is shared to correct the record," Facebook spokesperson Andy Stone told Axios.
  • Twitter added a label to the tweet and took steps to limit its spread, but did not remove it entirely. "To protect people on Twitter, we err on the side of limiting the circulation of Tweets which advise people to take actions which could be illegal in the context of voting or result in the invalidation of their votes," the company said.

The big picture: Buckle up. Misinformation from elected officials is only going to increase in the lead-up to Nov. 3, and complex judgment calls by tech platforms will grow more frequent, too.

Go deeper: Chaos scenarios drive gatekeepers' election prep

3. Gig companies know plenty about having employees

Illustration: Eniola Odetunde/Axios

Gig-economy companies have long argued that their workers place high value on the freedom to choose their own hours. But many of these firms either used to schedule workers for shifts — or still do, to some extent, Axios' Kia Kokalitcheva reports.

Why it matters: The companies are fighting efforts to force them to reclassify workers as employees, arguing that a rigid work model is incompatible with their operations.

The state of play: Grocery delivery company Instacart currently has part-time employees in a number of markets across the country, who focus on assembling orders inside stores, a practice it first introduced in 2015.

  • Meanwhile, the company also has what it calls "full-service shoppers," who assemble orders inside a store and also deliver them to customers. These workers are currently independent contractors who can work whenever they choose (within grocery store hours, of course).

Some other delivery companies, such as DoorDash and Grubhub, have shift systems for drivers in many or all of their markets.

  • Their work schedules are more flexible than those of many hourly employees. But the practice shows that delivery companies, whose operations are largely shaped by the schedules of restaurants, aren't strangers to arranging their workforces into schedules and predicting staffing needs.
  • Even Lyft has some experience with this from its earliest days, when drivers would sign up for shifts and be guaranteed a certain level of hourly earnings.

Similarly, both Uber and Lyft instituted forms of short shifts and staffing prioritizations for drivers in New York City last year to comply with a new set of rules. (These practices have been suspended during the pandemic given low ride demand.)

Be smart: There are other big reasons gig economy companies don't want to take on their drivers and delivery people as regular employees: That would require the employers to provide benefits, pay overtime, and pay their half of the Social Security payroll tax.

4. Apple pausing ad tracking change

Photo: Silas Stein via Getty Images

Apple is delaying implementation of a new policy requiring iOS app developers to get opt-in consent before tracking the user activity that some firms rely on to target ads, Axios' Kyle Daly reports.

Why it matters: The policy, originally intended to come with the release of iOS 14 this month, had some developers, particularly mobile game-makers, worried that they'd see a major drop-off in revenue. Facebook publicly took Apple to task over the change.

What they're saying: "We want to give developers the time they need to make the necessary changes, and as a result, the requirement to use this tracking permission will go into effect early next year," Apple said in a statement.
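
For developers, the delayed requirement comes down to one new system prompt. Here's a minimal sketch of requesting it on iOS 14 via Apple's AppTrackingTransparency framework, assuming the app declares the NSUserTrackingUsageDescription key in its Info.plist (the wrapper function name below is mine, not Apple's):

    import AppTrackingTransparency

    // Ask the user for permission to track; iOS shows the system prompt
    // (with the app's NSUserTrackingUsageDescription string) the first time.
    func requestTrackingConsent() {
        ATTrackingManager.requestTrackingAuthorization { status in
            switch status {
            case .authorized:
                // User opted in: the advertising identifier is available.
                print("Tracking authorized")
            case .denied, .restricted, .notDetermined:
                // No consent: the app must not track across apps and websites.
                print("Tracking not permitted")
            @unknown default:
                print("Unrecognized status")
            }
        }
    }

Until the requirement takes effect early next year, calling the prompt remains optional for apps, which is what gives developers the transition window Apple describes above.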

The Information was first to report the delay.

5. Take Note

On Tap

ICYMI

6. After you Login

You may think you don't want to read a long Twitter thread about a dispute over a truck filled with rice. You would be wrong. You definitely want to read it.

Ina Fried