May 22, 2023
Hi, Ryan here, reporting from New York, where this year's never-ending allergy season continues.
💰 Situational awareness: EU regulators fined Meta a record $1.3 billion for illegally transferring user data to the U.S., the Wall Street Journal reports.
Today's Login is 1,204 words, a 5-minute read.
1 big thing: Medical AI's weaponization
Machine learning can bring us cancer diagnoses with greater speed and precision than any individual doctor — but it could also bring us another pandemic at the hands of a relatively low-skilled programmer.
Why it matters: The health field is generating some of the most exciting artificial intelligence innovation, but AI can also weaponize modern medicine against the same people it sets out to cure.
Driving the news: The World Health Organization is warning about the risks of bias, misinformation and privacy breaches in the deployment of large language models in healthcare.
- There is a 1 in 300 chance of an individual being harmed throughout the patient journey, most often through data error, per WHO research.
The big picture: As this technology races ahead, everyone — companies, government and consumers — has to be clear-eyed that it can both save lives and cost lives.
What's happening: AI in health is delivering speed, accuracy and cost dividends — from quicker vaccines to helping doctors outsmart killer heart conditions.
- But disaster is sometimes only one click or security breach away.
1. Escaped viruses are a top worry. Around 350 companies in 40 countries are working in synthetic biology.
- With more artificial organisms being created, there are more chances for accidental release of antibiotic-resistant superbugs, and possibly another global pandemic.
- The UN estimates superbugs could cause 10 million deaths each year by 2050, outranking cancer as a killer.
- Because they can tolerate high temperatures, salt and alkaline conditions, escaped artificial organisms could overrun existing species or disturb ecosystems.
- What they're saying: AI models capable of generating new organisms "should not be exposed to the general public. That's really important from a national security perspective," Sean McClain, founder and CEO of Absci, which is working to develop synthetic antibodies, told Axios. McClain isn't opposed to regulatory oversight of his models.
2. One person's lab accident is another's terrorism weapon.
- Researchers in 2022 proved they could create 40,000 new chemical weapons compounds in just six hours.
- They used AI models meant to predict and ultimately reduce toxicity, and trained them to increase toxicity instead.
3. Today's large language models make things up when they don't have ready answers. These so-called hallucinations could be deadly in a health setting.
- Arizona State University researchers Visar Berisha and Julie Liss say clinical AI models often have large blind spots, and sometimes worsen as data is added.
- Some medical research startups have started working with smaller datasets, such as the 35 million peer-reviewed studies available on PubMed, to avoid the high error rate and lack of citations common with models trained on the open internet.
- System CEO Adam Bly told Axios the company's latest AI tool for medical researchers "is not able to hallucinate, because it’s not just trying to find the next best word." Answers are delivered with mandatory citations: when Axios searched causes of stroke, 418 citations were offered alongside the answer.
On top of the dangers of weaponizing medical research, AI in healthcare settings poses a risk of worsening racial, gender and geographic disparities, since bias is often embedded in the data used to train the models.
- Equal access to technology matters, too.
- German kids with Type 1 diabetes from all backgrounds are now achieving better control of glucose levels because patients are provided smart devices and fast internet. That's not a given in the U.S., per Stanford pediatrician Ananta Addala.
Yes, but: The FDA's current framework for regulating medical devices is not equipped to handle the surge of AI-powered apps and devices hitting the market, a September FDA report found.
- The CDC still points healthcare facilities to a guide from 1999 for tips on avoiding bioterrorism. There's no mention of AI.
What we're watching: Updated CDC and FDA guidance would be a first line of defense.
2. The next AI battle: copyright
How generative AI systems should treat copyrighted work is the next fierce policy debate in Washington around the game-changing technology, Axios' Ashley Gold reports.
Why it matters: Songwriters, artists and other creators have struggled to secure compensation and credit in the internet age, especially as streaming music and TV took hold.
The big picture: Current debate mirrors past ones around music licensing, compensation, streaming, copyright, trademarks and patents.
- "This has the potential to be as big, or have even bigger impact, than Napster had," Michael Huppe, CEO of SoundExchange, told Axios.
What's happening: A House Judiciary panel hearing last week investigated potential job displacement of artists, whether AI-generated work should be eligible for copyright protection, and if AI-generated content is "art."
What they're saying: Rep. Hank Johnson (D-Ga.) at Wednesday's hearing said he couldn't understand how a generative AI provider "owes nothing, not even notice, to the owners of the works it uses to power its system."
- Generative AI may steal "the core of a professional performer's identity," wrote Recording Industry Association of America CEO Mitch Glazier and National Music Publishers' Association CEO David Israelite.
The other side: Copyright allows for "fair use" sampling of creative works, while AI content can also be licensed.
- Adobe's Firefly image generator is trained on Adobe stock images and openly licensed content.
- A cross-sector Content Authenticity Initiative promotes content attribution to prevent misinformation and deepfakes.
What's next: Sen. Marsha Blackburn (R-Tenn.), who represents Nashville artists, will focus on copyright in a July Senate hearing.
3. "Mrs. Davis" fights an AI nightmare
A nun on a quest to find the Holy Grail in order to destroy an omnipresent AI being may sound far-fetched — but it's the premise of Peacock's "Mrs. Davis," Axios' Kia Kokalitcheva reports.
Why it matters: The series has landed smack in the middle of a broader public debate over the powers and ethics of AI.
The big picture: The show is set in 2023 and chronicles Sister Simone (played by Betty Gilpin) and former boyfriend Wiley (Jake McDorman) as they try to destroy Mrs. Davis — an AI being Simone believes killed her father — by hunting down the Holy Grail.
What they're saying: "It's less about religion and more about faith," Owen Harris, who directed a number of the season's episodes, told Axios.
- "We seem to be putting all of our faith into an algorithm," Harris said, hinting at what the show's finale reveals about the nature of the Mrs. Davis AI.
Between the lines: "Mrs. Davis is an app, and we all use apps," says Alethea Jones, another of the show's directors. "It is meant to resemble how we give ourselves away to technology without questioning."
The intrigue: The show also airs amid the current writers' strike, which began under AI's shadow, as some fear studios will use AI to devalue or eliminate writers' work.
4. Take note
- Zoom is expected to report quarterly earnings after the market closes.
- China's top tech regulator banned several products from Micron, a U.S. chip maker, after Micron announced a major investment in Japan's chip industry. (Wall Street Journal)
- War in Ukraine is helping U.S. tech firms bring change to the military industrial complex. (New York Times)
- At least 125 websites operating in 10 languages are publishing news content generated by AI tools, with "little to no human oversight," up from 49 at the start of May. (NewsGuard)
5. After you Login
It seems the Cannes Film Festival hasn't taken up Smart Brevity yet.
- Martin Scorsese's "Killers of the Flower Moon" yesterday received a nine-minute standing ovation.
Thanks to Scott Rosenberg and Peter Allen Clark for editing and Bryan McBournie for copy editing this newsletter.