Welcome back to Future! This is our last letter of 2019 — and my final issue as your correspondent.
This issue is 1,300 words, or a 5-minute read.
Illustration: Aïda Amer/Axios. Photos: Authenticated News/Getty Staff, GraphicaArtis/Getty Contributor
Returning to a technology largely discarded since the 1960s, scientists are betting on analog computing to wean AI systems off the monstrous amounts of electricity they currently require, I write with Axios managing editor Alison Snyder.
Why it matters: AI is on track to use up a tenth of the world's electricity by 2025, by one estimate. Cutting back on this consumption has huge climate implications — plus it’s essential for mobile devices and autonomous cars to do complex calculations on the fly.
The background: Analog computing was dethroned by today's dominant digital machines in the 1960s. Since then, computing has been about "higher speed, higher precision, higher throughput," says IBM's Jeff Welser. That's where digital tech shines.
What's happening: The neural networks that drive most AI systems rely on multiplying numbers really, really fast. They currently use the precision and power of digital computing for the job. But AI computations may not need to be so precise.
How it works: In a digital computer, everything runs on 1s and 0s — a universal, highly exact human-made language. An analog computer instead represents quantities as continuous physical signals, like voltages or currents, trading exactness for speed and energy efficiency.
Modern experiments with analog technology likely won't produce a completely analog computer but rather a hybrid, with an analog portion that generates an approximate answer, which is then fed into a digital part for refinement.
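The hybrid idea can be sketched in a few lines of Python. This is purely illustrative — the function names and the random-noise model standing in for analog imprecision are my own assumptions, not any real chip's behavior — but it shows why modest per-operation errors can still yield a usable answer for the kinds of multiply-accumulate operations neural networks run constantly:

```python
import random

def dot_exact(w, x):
    """Full-precision 'digital' dot product."""
    return sum(wi * xi for wi, xi in zip(w, x))

def dot_analog(w, x, noise=0.01):
    """Toy stand-in for an analog multiply-accumulate: each product
    picks up a small random error, loosely mimicking the drift of
    voltages or currents in real analog hardware."""
    return sum(wi * xi * (1 + random.uniform(-noise, noise))
               for wi, xi in zip(w, x))

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(256)]
x = [random.uniform(-1, 1) for _ in range(256)]

exact = dot_exact(w, x)
approx = dot_analog(w, x)
print(f"digital: {exact:.4f}  analog-ish: {approx:.4f}  "
      f"error: {abs(exact - approx):.4f}")
```

In a real hybrid system, a digital stage would take the imprecise analog result and refine it — here, the takeaway is just that a 1% wobble on each multiplication leaves the final sum close to the exact one.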
The big picture: There’s a broader resurgence of interest in new and forgotten approaches to computing.
What’s next: Analog computing is vying to be a part of the AI explosion. "AI is obviously already a very, very huge thing," says UC Berkeley engineering professor Eli Yablonovitch. "If analog is contributing to that, then it means it has come back after 60 years in the wilderness."
Illustration: Eniola Odetunde/Axios
Tech companies appear to be bowing to new privacy rules springing up in Europe, California and elsewhere, putting in place processes to show they're complying.
Yes, but: Some of these moves are smokescreens that allow the companies to avoid making big, painful changes, some legal experts argue — enabled by a legal system that offloads enforcement onto the very companies being regulated.
The big picture: Companies are painting over existing practices with a veneer of rule-following, argues NYU law professor Ari Waldman in an upcoming article for the Washington University Law Review.
The stand-ins, according to Waldman, include privacy policies, impact assessments, training programs, audits and paper trails.
What's happening: As privacy laws in Europe and California kick in, companies are setting up new internal structures to comply with them, says Dominique Shelton Leipzig, a privacy attorney at Perkins Coie.
The other side: "To conclude that assessments aren't working, I think, is a false conclusion," says Al Gidari, a longtime privacy lawyer now at the Stanford Center for Internet and Society.
The bottom line: The offloading of enforcement to companies is a result of vague, toothless laws and weakened agencies like the FTC that would otherwise be in charge of enforcement.
"When you have companies setting the rules, my biggest concern is that it's just going to be streamlined toward the most efficient process for them — but not necessarily the most efficient process for users or the fairest process for users," says Frank Pasquale, a law professor at the University of Maryland.
Go deeper: The global shortage of privacy experts
Photo: Jaap Arriens/NurPhoto/Getty
The more drivers use assisted-driving systems, the more comfortable they become with the technology — and the more likely they are to misuse it, according to new research from AAA and Virginia Tech, writes Axios transportation correspondent Joann Muller.
What they found: After becoming accustomed to driving with advanced driver assistance systems (ADAS), like adaptive cruise control and lane-keeping assist, drivers were nearly twice as likely to engage in distracted driving behavior (texting, adjusting the radio) compared with when they were driving without the systems.
My thought bubble: This is a dangerous example of automation bias — our tendency as humans to put too much trust in algorithmic decisions.
"This new research suggests that as drivers gain more experience using ADAS technology, they could develop complacency while behind the wheel," the executive director of the AAA Foundation for Traffic Safety, David Yang, told Joann. "Over-reliance on these systems can put drivers and others in dangerous conditions during critical moments."
Illustration: Eniola Odetunde/Axios
Facebook struggles to clean up its messes (Ina Fried - Axios)
Twelve million phones, one dataset, zero privacy (Stuart A. Thompson & Charlie Warzel - NYT)
What happened when the bard of Google became a vocal critic (Claire Stapleton - Elle)
Housing discrimination goes high tech (Patrick Sisson - Curbed)
San Francisco spent a decade being rich, important, and hating itself (Scott Lucas - BuzzFeed)
Photo: Elijah Nouvelage/AFP/Getty
Earlier this year, as we reported, San Francisco became the first U.S. city to ban government use of facial recognition for surveillance, a move privacy advocates celebrated.
At the same time, Wired's Tom Simonite and Gregory Barber found, the police department had to scramble to disable facial recognition in a product it was testing to search mug shots.
Why it matters, according to Wired: "The two incidents underscore how efforts to regulate facial recognition … will prove tricky given its many uses and how common it has become in consumer devices as well as surveillance systems."
Thank you so much for reading!