Welcome back to Future. Let me know what you think about this issue, and what I should write about in the coming weeks. Just hit reply or send me a note at email@example.com. Erica, who writes Future on Wednesdays, is at firstname.lastname@example.org.
Here we go! Today's letter is 1,714 words, which should take about 6 minutes to read.
Illustration: Eniola Odetunde/Axios
Some technologists look at the pileup of crises weighing down American health care — overworked doctors, overpriced treatments, wacky health record systems — and see an opportunity to overhaul the industry, which could save lives and make them money.
Yes, but: There's frequently a chasm between can-do engineers itching to rethink health care and the deliberate doctors and nurses leery of tech that can make their lives more complicated, or worse, harm their patients, Axios health care business reporter Bob Herman and I report.
What's happening: Last year, investors handed more than $8 billion to health care tech startups. But the other end of the pipe is mostly dry — relatively few products have been integrated deeply into the labyrinthine medical system, and those that have often focus on sideshows like wellness instead of core issues like caring for the chronically ill.
High-profile flameouts show what happens when medicine and technology are totally out of sync.
At the root of some failures is the way developers approach health care. Some take it on like any other technical puzzle, when it can be orders of magnitude more complex.
Why it matters: "If your website comes down, well, OK, you figure it out," says David Shaywitz, a Silicon Valley investor with a medical degree. "If you're promising, 'This is how we're going to figure out the dose of insulin you need,' and you're off, you can kill somebody."
The other side: Some firms have made inroads without isolating clinicians.
The bottom line: "Most of these health care startups are gonna go belly up because the health care delivery system is not all that easily disrupted," says Bob Wachter, head of the UCSF Department of Medicine.
Illustration: Sarah Grillo/Axios
We're seeing the beginnings of a tug-of-war at the highest levels of government over how much access people should have to AI systems that make critical decisions about them.
What's happening: Life-changing determinations, like the length of a criminal's sentence or the terms of a loan, are increasingly informed by AI programs. These can churn through oodles of data to detect patterns invisible to the human eye, potentially making more accurate predictions than before.
Why it matters: The systems are so complex that it can be hard to know how they arrive at answers — and so valuable that their creators often try to restrict access to their inner workings, making it potentially impossible to challenge their consequential results.
Driving the news: Two recent proposals are pulling in opposite directions.
These are among the earliest attempts to set down rules and definitions for algorithmic transparency. How they shake out could set rough precedents for how the government approaches the many questions still to come.
Proponents of more access say it's vital to test whether walled-off systems are making serious mistakes or unfair determinations — and argue that the potential for harm should outweigh companies' interest in protecting their secrets.
The HUD proposal would require someone to show that an algorithmic decision was based on an illegal proxy, like race or gender, in order to succeed in a lawsuit. But critics say that can be impossible to determine without understanding the system.
The other side: "The goal here is to bring more certainty into this area of the law," said HUD General Counsel Paul Compton in an August press conference. He said the proposal "frees up parties to innovate, take risks and meet the needs of their customers without the fear that their efforts will be second-guessed through statistics years down the line."
Illustration: Sarah Grillo/Axios
Some freelancers can pull in more than $100 an hour for management consulting, programming or graphic design. Others struggle to make much more than $10 an hour, beholden to "gig work" platforms like Uber or TaskRabbit.
Why it matters: Being one's own boss, with the flexibility it brings, can be lucrative for people who can differentiate themselves from competitors. For the rest, it can be quicksand.
The big picture: Freelance work makes up nearly 5% of U.S. GDP, according to a new study commissioned by Upwork, a site for high-earning freelancers to find jobs. And more people than ever — 28.5 million people, or half the freelance workforce — say it's a long-term plan.
But for those without a rare or standout skill, reality hasn't quite panned out that way.
The bottom line: "Given that being in the traditional workforce typically comes with benefits and protections, I think most workers would be better off being there rather than having to constantly hustle for the next gig," says Ravenelle.
Illustration: Aïda Amer/Axios
All eyes on U.S.–China trade talks (Dion Rabouin - Axios)
Google's hunt for "darker skin tones" takes questionable turn (Ginger Adams Otis & Nancy Dillon - NY Daily News)
Deportations in the surveillance age (McKenzie Funk - NYT Magazine)
The rise of the financial machines (The Economist)
Attack of the kamikaze drones (Joshua Brustein - Bloomberg)
A game of Codenames. Photo: Kaveh Waddell/Axios
It's one thing to play chess against a computer — you'll lose — but it's another entirely to play a collaborative word game. That stretches the limits of today's AI.
What's happening: Game geeks are trying to create bots that can play Codenames, the super-popular word guessing game.
Giving a good clue is pretty easy for computers, using basic open-source machine learning tools for language understanding.
What's really hard is guessing whether your teammates will understand your clue. This is something humans are great at — if you're playing with a sibling or friend, you can draw on shared experiences to come up with the perfect word.
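The clue-giving half of the problem can be sketched with word vectors: score each candidate clue by how close it sits to your team's words and how far it sits from the words you must avoid. The tiny hand-made "embeddings" below are purely illustrative stand-ins for the pretrained vectors (word2vec, GloVe) a real bot would load — a minimal sketch, not any particular project's implementation.

```python
import math

# Toy 3-dimensional "embeddings" -- illustrative stand-ins for real
# pretrained word vectors. Water-ish words cluster on the first axis,
# desert-ish words on the second.
VECTORS = {
    "ocean":  [0.9, 0.1, 0.0],
    "wave":   [0.8, 0.2, 0.1],
    "beach":  [0.7, 0.1, 0.2],
    "water":  [0.85, 0.15, 0.05],
    "desert": [0.1, 0.9, 0.0],
    "cactus": [0.15, 0.85, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_clue(team_words, avoid_words, candidates):
    """Pick the clue whose *worst* similarity to the team's words is
    highest, penalized by its closeness to words the team must avoid."""
    best, best_score = None, float("-inf")
    for clue in candidates:
        if clue in team_words or clue in avoid_words:
            continue  # Codenames forbids clues that appear on the board
        team_score = min(cosine(VECTORS[clue], VECTORS[w]) for w in team_words)
        avoid_score = max(cosine(VECTORS[clue], VECTORS[w]) for w in avoid_words)
        score = team_score - avoid_score
        if score > best_score:
            best, best_score = clue, score
    return best

# Picks "beach": close to both team words, far from "desert".
print(best_clue(["ocean", "wave"], ["desert"], list(VECTORS)))
```

The hard part the section describes — predicting whether a *human teammate* will make the same association — is exactly what this similarity score doesn't capture: the vectors encode corpus statistics, not the shared experiences two friends draw on.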