Dec 21, 2019

Axios Future

By Bryan Walsh

Welcome back to Future! This is our last letter of 2019 — and my final issue as your correspondent.

  • It's been such a delight bringing you stories from the leading edge of technology and hearing back from you in return.
  • I'm off to something new in January. Erica Pandey (erica@axios.com) will continue to bring you smart writing and deep reporting in the new year.

This issue is 1,300 words, or a 5-minute read.

1 big thing: The return of analog?

Illustration: Aïda Amer/Axios. Photos: Authenticated News/Getty Staff, GraphicaArtis/Getty Contributor

Returning to a technology largely discarded since the 1960s, scientists are betting on analog computing to wean AI systems off the monstrous amounts of electricity they currently require, I write with Axios managing editor Alison Snyder.

Why it matters: AI is on track to use up a tenth of the world's electricity by 2025, by one estimate. Cutting back on this consumption has huge climate implications — plus it’s essential for mobile devices and autonomous cars to do complex calculations on the fly.

The background: Analog computing was dethroned by today's dominant digital machines in the 1960s. Since then, computing has been about "higher speed, higher precision, higher throughput," says IBM's Jeff Welser. That's where digital tech shines.

  • But as AI becomes omnipresent, some of those core requirements of computers are being reconsidered.
  • A realization is dawning in some corners of the tech world that "maybe we were too quick to dispense with analog 60 years ago," says Eli Yablonovitch, an electrical engineering professor at UC Berkeley.

What's happening: The neural networks that drive most AI systems rely on multiplying numbers really, really fast. They currently use the precision and power of digital computing for the job. But AI computations may not need to be so precise, as the quick sketch below shows.

  • "When you start getting pushed to the limits of what [digital computing] can offer, when you have a new class of problems, then it becomes interesting to revisit analog," says Shahin Farshchi, a computer scientist and VC at Lux Capital.
  • IBM, several startups, academic researchers and others are doing just that.
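To make the precision point concrete, here's a minimal NumPy sketch, ours rather than any of these labs': it runs the same toy neural-network layer in full 32-bit floating point and again with the weights and inputs rounded down to 8-bit integers. The layer sizes, quantization scheme and random seed are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy neural-network layer: 64 inputs, 32 outputs (sizes are arbitrary).
x = rng.standard_normal(64).astype(np.float32)
W = rng.standard_normal((32, 64)).astype(np.float32)

y_fp32 = W @ x  # full-precision reference result

def quantize(a):
    """Symmetric 8-bit quantization: map values to integers in [-127, 127]."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

x_q, sx = quantize(x)
W_q, sw = quantize(W)

# Integer multiply-accumulate, then rescale back to real units.
y_int8 = (W_q.astype(np.int32) @ x_q.astype(np.int32)) * (sw * sx)

# The cheap, low-precision result lands close to the exact one.
rel_err = np.linalg.norm(y_int8 - y_fp32) / np.linalg.norm(y_fp32)
print(f"relative error with 8-bit math: {rel_err:.2%}")  # on the order of 1%
```

An answer that's off by a percent would be unacceptable in, say, financial software, but it's often fine for a neural network, and that tolerance is the opening analog hardware is aiming at.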

How it works: In a digital computer, everything runs on 1s and 0s — a universal, highly exact human-made language.

  • But an analog computer is built on the physical properties of its components. It can perform multiplication, for example, by exploiting the physics of its transistors.
  • “The idea is to let the natural dynamics of the physical system solve the problem,” says Garrett Kenyon of the Los Alamos National Laboratory.
  • These systems come with obstacles: They can be inconsistent and difficult to program, Kenyon says.

Modern experiments with analog technology likely won't produce a completely analog computer but a hybrid one, in which an analog portion approximates an answer that is then fed into a digital portion for refinement.
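Here's a toy version of that hybrid loop, with the analog stage simulated in software as an exact solve corrupted by 5% Gaussian "device noise." The noise level, problem size and iteration count are illustrative assumptions, not a description of any real chip.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)  # a well-conditioned toy system
b = rng.standard_normal(n)

def analog_solve(A, b, noise=0.05):
    """Stand-in for the analog block: a fast approximate solve of Ax = b,
    corrupted by ~5% multiplicative noise to mimic imprecise hardware."""
    x = np.linalg.solve(A, b)
    return x * (1 + noise * rng.standard_normal(len(b)))

# Digital refinement: measure the exact residual in floating point,
# then ask the noisy "analog" block for a correction. Repeat.
x = analog_solve(A, b)
for step in range(5):
    r = b - A @ x               # exact digital residual
    x = x + analog_solve(A, r)  # approximate analog correction
    print(f"step {step}: residual = {np.linalg.norm(r):.2e}")
```

Each pass shrinks the error by roughly the noise factor, so a sloppy-but-fast analog stage plus a few exact digital steps can still reach full precision.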

The big picture: There’s a broader resurgence of interest in new and forgotten approaches to computing.

  • "Both of the most futuristic areas we're looking at are actually not all digital," Wesler says of analog and quantum computing.
  • Researchers at Los Alamos and elsewhere are developing neuromorphic chips, a subset of analog computing that more closely mirrors neurons in the brain.

What’s next: Analog computing is vying to be a part of the AI explosion. "AI is obviously already a very, very huge thing," says Yablonovitch. "If analog is contributing to that, then it means it has come back after 60 years in the wilderness."

2. The privacy smokescreen

Illustration: Eniola Odetunde/Axios

Tech companies appear to be bowing to new privacy rules springing up in Europe, California and elsewhere, putting in place processes to show they're complying.

Yes, but: Some of these moves are smokescreens that allow the companies to avoid making big, painful changes, some legal experts argue — enabled by a legal system that offloads enforcement onto the very companies being regulated.

The big picture: Companies are painting over existing practices with a veneer of rule-following, argues NYU law professor Ari Waldman in an upcoming article for the Washington University Law Review.

  • "Mere symbols of compliance are standing in for real privacy protections," he writes.
  • Companies that are meant to be constrained by privacy law are able to "recast and reframe it to benefit themselves," Waldman tells Axios.

The stand-ins, according to Waldman, include privacy policies, impact assessments, trainings, audits and paper trails.

  • "These things have all the trappings of systems but instead are really just window dressing," he says.
  • In surveys and interviews with privacy professionals, Waldman turned up a check-the-box approach to privacy.

What's happening: As privacy laws in Europe and California kick in, companies are setting up new internal structures to comply with them, says Dominique Shelton Leipzig, a privacy attorney at Perkins Coie.

The other side: "To conclude that assessments aren't working, I think, is a false conclusion," says Al Gidari, a longtime privacy lawyer now at the Stanford Center for Internet and Society.

  • "Those processes work really well in companies because if they don't, people go to jail, employees get fired, companies get prosecuted," he tells Axios. But it's up to companies to prioritize privacy and implement effective systems.
  • Gidari argues that internal assessments are necessary at big tech companies like Google, which he represented when it was investigated by the Federal Trade Commission in 2011. It's not possible to formally audit dozens of products and services on a regular basis, he says.

The bottom line: Enforcement falls to the companies themselves because privacy laws are vague and toothless, and the agencies that would otherwise police them, like the FTC, have been weakened.

  • "Procedure is not enough," says Waldman. Laws should require a substantive change like a ban on sharing certain data, rather than a process like assessments of whether or not the data is being dealt with correctly.
  • And penalties should be much higher for wrongdoing, Gidari argues. When the FTC fined Facebook $5B for a privacy violation earlier this year, the company's stock went up. "It's awfully hard to see how that alone is sufficient," Gidari says.

"When you have companies setting the rules, my biggest concern is that it's just going to be streamlined toward the most efficient process for them — but not necessarily the most efficient process for users or the fairest process for users," says Frank Pasquale, a law professor at the University of Maryland.

Go deeper: The global shortage of privacy experts

3. Automation bias on the move

Photo: Jaap Arriens/NurPhoto/Getty

The more drivers use assisted-driving systems, the more comfortable they become with the technology — and the more likely they are to misuse it, according to new research from AAA and Virginia Tech, writes Axios transportation correspondent Joann Muller.

What they found: After becoming accustomed to driving with advanced driver assistance systems (ADAS), like adaptive cruise control and lane-keeping assist, drivers were nearly twice as likely to engage in distracted driving behavior (texting, adjusting the radio) compared with when they were driving without the systems.

  • Conversely, drivers less familiar with the technology paid closer attention when the systems were turned on than when they were off.

My thought bubble: This is a dangerous example of automation bias — our tendency as humans to put too much trust in algorithmic decisions.

  • Here, relatively simple driver assists are already causing problems.
  • In the future, potentially faulty AI that takes over a larger task — like driving, yes, or hiring or underwriting a loan — presents an even bigger threat.

"This new research suggests that as drivers gain more experience using ADAS technology, they could develop complacency while behind the wheel," the executive director of the AAA Foundation for Traffic Safety, David Yang, told Joann. "Over-reliance on these systems can put drivers and others in dangerous conditions during critical moments."

4. Worthy of your time

Illustration: Eniola Odetunde/Axios

Facebook struggles to clean up its messes (Ina Fried - Axios)

Twelve million phones, one dataset, zero privacy (Stuart A. Thompson & Charlie Warzel - NYT)

What happened when the bard of Google became a vocal critic (Claire Stapleton - Elle)

Housing discrimination goes high tech (Patrick Sisson - Curbed)

San Francisco spent a decade being rich, important, and hating itself (Scott Lucas - BuzzFeed)

5. 1 fun thing: How SF stepped on a rake

Photo: Elijah Nouvelage/AFP/Getty

Earlier this year, as we reported, San Francisco became the first U.S. city to ban government use of facial recognition for surveillance, a move privacy advocates celebrated.

  • Turns out many city employees were using devices equipped with facial recognition every day — their smartphones.
  • A new carve-out lets them use their phones — but they have to punch in passcodes instead of using facial recognition to log in.

At the same time, Wired's Tom Simonite and Gregory Barber found, the police department had to scramble to disable facial recognition in a product it was testing to search mug shots.

Why it matters, according to Wired: "The two incidents underscore how efforts to regulate facial recognition … will prove tricky given its many uses and how common it has become in consumer devices as well as surveillance systems."

Bryan Walsh

Thank you so much for reading!