April 17, 2021
Welcome to Axios Future, where the joke cryptocurrency dogecoin is currently worth more than $45 billion in total as of this writing, and nothing makes sense.
- If you haven't subscribed, sign up here.
- Send feedback, tips and spare dogecoin from the couch to [email protected].
🎥 Check out Axios' new STEM video, on the science and perils of neurotechnology.
Today's Smart Brevity count: 1,731 words or 6½ minutes.
1 big thing: Meet your doctor's AI assistant
Artificial intelligence is breaking into the doctor's office, with new models that can transcribe, analyze and even offer predictions based on written notes and conversations between physicians and their patients.
Why it matters: AI models can increasingly be trained on what we tell our doctors, now that they're starting to understand our written notes and even our conversations. That will open up new possibilities for care — and new concerns about privacy.
How it works: One of the biggest, if most invisible, contributions AI can make is to automatically capture a physician's written or spoken notes.
- But the real value comes in via data captured in doctors' conversations with patients or written case notes.
Details: For example, while identifying the right set of patients to enroll in clinical trials would usually take weeks of manually extracting information from databases, the AI models could do the work "within minutes," says Mona Flores, global head of AI at Nvidia.
- By analyzing millions of case histories, AI systems can help predict how patients might respond to different treatments, or alert doctors to likely complications before surgery.
Driving the news: Some major deals and announcements about AI companies crossing into health care have come out in the past couple of weeks.
- On April 8, researchers at the University of Florida's academic health center announced a collaboration with Nvidia to develop a massive natural language processing (NLP) model — an AI system designed to recognize and understand human language — trained on the records of more than 2 million patients.
- On Monday, Microsoft announced it would buy Nuance Communications, a software company that focuses on speech recognition through artificial intelligence and has a popular product that transcribes and analyzes voice conversations between doctors and patients, for $19.7 billion.
By the numbers: For all our focus on vital signs like blood pressure or cholesterol levels, "80% of health care data exists in text or narrative, and the doctor's note is still the primary way things get documented," says William Hogan, the director of biomedical informatics and data science at the UF College of Medicine.
- That means everything from notes about a patient's medical history to a doctor's written impressions of a case — the dark matter of medical data that was mostly beyond the reach of computers until recent improvements in NLP.
Mental health is one of the best examples of how AI models might change medicine.
- "Clinical psychiatry occurs in very much the same way as it did 100 years ago, where a clinician will sit down and talk to a patient and based on that conversation, develop a treatment plan," writes the psychiatrist Daniel Barron in his forthcoming book, "Reading Our Minds: The Rise of Big Data Psychiatry."
- Instead, Barron envisions a near future in which those conversations can be recorded by AI models that can analyze patient speech and even facial expressions for clues about mental illness and how to treat it.
The catch: How many of us would be comfortable with the idea of an AI listening in and analyzing our conversations with a family doctor, let alone a therapist?
What's next: "Clinicians and patients need to have a conversation to figure out how best to make use" of data and AI systems, Barron says.
The bottom line: Personal health is one area in which each of us stands to benefit from AI's ability to suck up and analyze vast quantities of data — but it's also where sharing that data feels the most uncomfortable.
2. The all-purpose disease pathogen test
Faster and cheaper genetic sequencing can give us the ability to test for almost any infectious pathogen — provided we use it.
Why it matters: The causative agents of many infections are never identified, leading doctors to misdiagnose patients and even to miss the early emergence of new diseases. Wider use of genetic sequencing promises a future in which no virus is left behind.
Driving the news: On Friday, the White House announced the federal government would invest $1.7 billion from the American Rescue Plan to "improve the detection, monitoring and mitigation" of COVID-19 variants, including funding to shore up the country's lagging genomic sequencing efforts.
Context: More widespread genetic sequencing could be the key not just to tracking coronavirus variants, but identifying mysterious pathogens of all kinds: viruses, bacteria, fungi, parasites and more.
- One example: The Virginia-based startup Aperiomics has developed a massive database of the genetic sequences of tens of thousands of known pathogens.
- When doctors are presented with an infection of unknown cause, they can use shotgun metagenomic sequencing — decoding the genes of all organisms in a biological sample — and compare the findings against Aperiomics' list.
- If unknown genetic sequences show up, it's a decent clue that doctors could be dealing with something new.
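The matching step described above can be sketched in a few lines of Python. This is a toy illustration only: the pathogen names and sequences below are invented, and real metagenomic pipelines like Aperiomics' align millions of reads against curated databases with far more sophisticated tools.

```python
# Toy sketch of matching sequencing reads against a database of known
# pathogen genomes via shared k-mers (short subsequences of length k).
# Reads that match nothing are flagged "unknown" -- the clue that doctors
# may be dealing with something new.

KNOWN_PATHOGENS = {
    "E. coli (toy)": "ATGGCGTTAACCGGATCCTTAGGC",
    "S. aureus (toy)": "TTGACCGGTAGCATCGATCGGTAA",
}

def classify_reads(reads, db=KNOWN_PATHOGENS, k=8):
    """Label each read with the first known pathogen sharing a k-mer with it."""
    # Index every k-mer of every reference sequence.
    index = {}
    for name, seq in db.items():
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], name)
    results = {}
    for read in reads:
        hit = None
        for i in range(len(read) - k + 1):
            if read[i:i + k] in index:
                hit = index[read[i:i + k]]
                break
        results[read] = hit or "unknown"
    return results
```

In practice the database side is the hard part — Aperiomics' value is its catalog of tens of thousands of known pathogen sequences, not the matching algorithm itself.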
What they're saying: "There is a huge difference between what we know exists and what the existing testing is capable of identifying," Aperiomics CEO Crystal Icenhour says.
What to watch: How quickly improvements in genetic sequencing bring down the costs of such tests, and whether insurance companies will cover them.
Follow: Axios' Coronavirus Variant Tracker.
3. Scientists create embryos with human and monkey cells
Researchers for the first time have created embryos in the lab that contain both human and monkey cells.
Why it matters: So-called chimeric embryos could help scientists produce organs for people desperately in need of transplants, but the very act of mixing human and animal cells raises major ethical questions.
- Over 100,000 people in the U.S. alone are waiting for lifesaving organ transplants, and organ donations decreased significantly during the early months of the pandemic.
What's new: In a study published Thursday, researchers in the U.S. and China injected 25 induced pluripotent stem cells from humans into embryos from macaque monkeys.
- After a single day, the researchers could detect human cells growing in 132 of the embryos, known as chimeras because they are a mix of species.
- The embryos survived for 19 days.
- The work provided the scientists with insight into how the human and monkey cells communicated in the chimeric embryos, which in turn could help them learn to grow organs for human transplantation in animals.
Background: Scientists have tried injecting human stem cells into sheep and pig embryos in recent years in an effort to grow organs for transplant, but they've had little success — hence the turn to macaque monkeys, which are more genetically similar to humans.
The catch: If the idea of mixing human and monkey cells in an embryo makes you a little squeamish, many bioethicists share your concerns.
- Some fear that a rogue scientist might use these tools to make a baby out of a chimeric embryo, which could result in a nightmare scenario of a living monkey spiked with human cells — including in its brain.
- Chimeric embryos could also potentially confound medical regulations that treat animal and human subjects very differently.
- "I do think it's an appropriate time for us to start thinking about, 'Should we ever let these go beyond a petri dish?'" Hank Greely, a Stanford bioethicist, told NPR.
What to watch: Next month the International Society for Stem Cell Research will issue revised guidelines for the field, including for work on non-human primate and human chimeras.
- Those new guidelines may lead the NIH to lift a ban on federal funding for chimera research.
The bottom line: This is not the hybrid future I was expecting.
4. Google's powerful new lens into climate change
Google Earth has unveiled features for its Timelapse tool that allow users to zoom in on locations and view more than three decades of imagery, my Axios colleague Andrew Freedman reports.
Why it matters: The result is a sobering look at the overwhelming human footprint on the planet, from melting glaciers in Alaska and Greenland to deforestation in South America and the rapid expansion of cities.
- By making intangible, long-term trends visible, the tool provides scientists, journalists and activists a new way to tell stories — and may also galvanize support for environmental protection.
How it works: Using Google Earth Engine, the company combined more than 24 million satellite images, or "roughly 10 quadrillion pixels," to create the global cloud-free images that comprise Timelapse.
- The data sources include the U.S. Geological Survey/NASA Landsat satellites, as well as the European Union's Copernicus Program and its Sentinel series of satellites.
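One common remote-sensing technique for building cloud-free composites — a per-pixel median over a stack of images of the same scene, which votes out transient bright clouds — can be sketched in a few lines. To be clear, this is a generic illustration, not Google's actual Timelapse pipeline.

```python
# Toy cloud-removal sketch: for each pixel location, take the median
# brightness across a time stack of co-registered images. Clouds are
# bright and transient, so the median favors the clear-sky value.
from statistics import median

def cloud_free_composite(image_stack):
    """image_stack: list of images, each a 2D list of pixel brightness values."""
    rows = len(image_stack[0])
    cols = len(image_stack[0][0])
    return [
        [median(img[r][c] for img in image_stack) for c in range(cols)]
        for r in range(rows)
    ]
```

At Timelapse's scale — "roughly 10 quadrillion pixels" — the same idea runs as a massively parallel job on Google Earth Engine rather than a Python loop.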
5. Worthy of your time
The genetic mistakes that could shape our species (Zaria Gorvett — BBC Future)
- As good as genetic engineering is getting, scientists still make mistakes — and those mistakes could pollute our gene pool.
Geoffrey Hinton has a hunch about what’s next for AI (Siobhan Roberts — MIT Tech Review)
- The AI pioneer's big idea about what's next in the field involves an "imaginary system" called GLOM that models human perception for machines.
What 250 years of innovation history reveals about our green future (Per Espen Stoknes — MIT Press Reader)
- The past shows us that when technological change comes, it comes fast — and we might be on the brink with climate tech.
Does birth order really determine personality? (Lynn Berger — TIME)
- Maybe? But I can tell you from experience that oldest siblings should dominate younger ones for as long as possible — until they grow up to become an Army Ranger.
6. 1 AHHHH thing: The many nuances of the human scream
Human screams can signal more than just fear — and we're actually more alert to positive screams than alarming ones, researchers have found.
Why it matters: The fact that a simple scream can connote such a variety of emotions shows the complexity of nonverbal human communication and may indicate we're more alert to joy than terror.
How it works: In a study published this week, researchers asked 12 subjects to vocalize positive and negative screams, while another group rated the emotional nature of the screams.
- The second group also had their brains scanned with functional magnetic resonance imaging (fMRI) while listening.
Details: The researchers identified six "psycho-acoustically" distinct screams: pain, anger, fear, pleasure, sadness and joy.
- The listening group responded more quickly to the positive screams, which provoked more activity across frontal and auditory brain regions as indicated in the fMRI scans.
Of note: The emotional diversity of human screams is unusual — other primates and mammals scream almost exclusively as an alarm signal, as when vervet monkeys scream to warn others of a threat.
The bottom line: As someone who was a high school senior in 1997, I recognize one Scream and one Scream only.