Axios AI+

August 13, 2024
Ina here. If my flight isn't further delayed, I should be on a plane flying home as you read this.
🇺🇸 Will you be in Chicago next week for the DNC? Join us downtown at Axios House on Aug. 19 and 20 for newsworthy events and hear from Rep. James Clyburn (D-S.C.), former DNC chair Tom Perez and more on the latest around the 2024 election. See our schedule and RSVP here.
Today's AI+ is 1,205 words, a 4.5-minute read.
1 big thing: Trump speeds AI-driven truth decay
Former President Trump's false charge that Vice President Harris used AI to forge a photo of a crowd of supporters shows yet another dimension of AI's potential to harm democracy.
Why it matters: AI's greatest danger, many experts in the field argue, isn't that it can be used to manufacture falsehoods — but that its very existence makes it so easy to undermine the truth.
Catch up quick: Trump posted a message on Truth Social Sunday claiming that photos showing Harris meeting a large crowd of supporters on a Detroit runway were doctored.
- "There was nobody at the plane, and she 'A.I.'d' it, and showed a massive 'crowd' of so-called followers, BUT THEY DIDN'T EXIST!" Trump declared.
Reality check: Numerous attendees have confirmed they were there and saw the crowds, and many took their own photos.
- It's hard to tamper with the reality of a public event that had myriad witnesses.
Trump, who has long been obsessed with the size of his own and his rivals' crowds, noted that there were no people reflected on the metallic sides of the vice president's plane.
- But the aircraft has curved sides and was angled away from the crowd.
The big picture: You don't need AI to alter a photo — Photoshop has been doing that for decades.
- Today's AI produces images that are often easily flagged as artificial. But that won't always be the case. Audio impersonation is already more advanced. Video is next.
Between the lines: Warnings about the danger of deepfakes have helped arm the public against an expected flood of fakery.
- But they've also unavoidably made it possible to question the trustworthiness of any evidence you don't like.
- The next time a recording surfaces of some private event where a politician said something damaging, it will be that much easier to deny it.
Some Jan. 6 defendants tried to argue that photos showing them attacking the U.S. Capitol were AI-generated fakes, invoking what a recent American Bar Association article calls "the deepfake defense."
- "The growing use of AI-generated false and misleading information is exacerbating the challenge of the so-called liar's dividend, in which widespread wariness of falsehoods on a given topic can muddy the waters to the extent that people disbelieve true statements," a Freedom House report last year argued.
Our thought bubble: Skepticism and doubt advance the truth only when everyone involved is acting in good faith.
- A world in which nobody trusts anything is one where autocratic leaders can easily mobilize hate and invent their own realities.
The bottom line: As Yale historian Timothy Snyder, author of "On Tyranny," puts it, "What authoritarians do is they say, 'Look, there's no truth at all. Sure you don't trust me — but don't trust them, or them, or certainly not the media. Don't trust anybody.'"
- "And so just stay on your couch, basically ... just do nothing. Affect a pose of cynicism. Be equally skeptical about everything."
2. States are writing their own AI health care rules
In the absence of federal guardrails on artificial intelligence in health care, state governments are figuring out their own rules of the road.
Why it matters: Artificial intelligence is health care's biggest wild card. But it's drawing hundreds of millions of dollars in investment, and health providers and drug developers are already using it — essentially without oversight.
State of play: Colorado in May enacted one of the first comprehensive state AI laws, which places limits on developers and deployers of AI systems that make "consequential decisions," including in health care.
- "The federal government is particularly ineffective and slow these days," said Democratic state Rep. Brianna Titone, a sponsor of the bill. "The states really need to step up" to make sure conversations around ethical and responsible use of AI are happening, she said.
- Utah's AI office is working to regulate mental health chatbots. Many health care workers in the state must also disclose when generative AI interacts with a consumer.
- State medical and osteopathic boards this spring also adopted best-practice recommendations for governing the use of AI in clinical care.
Yes, but: States have many more health AI proposals than enacted policies, said Valerie Rogers, senior director of government relations at the Healthcare Information and Management Systems Society.
Policymaking on AI and health will likely pick up in 2025, Rogers said.
- "States do feel under some pressure to rise to the challenge … particularly around privacy, around security, to limit bias or any sort of discriminatory use of AI," she said.
The big picture: States can often make policy quicker than the federal health bureaucracy and with specific community needs in mind.
- Still, state officials have run into many of the same problems as their D.C. counterparts, like the lack of clear definitions on AI and differing stakeholder opinions.
Between the lines: Regulating AI use in health care on a state-by-state basis may create a patchwork system that's difficult for users and developers to navigate. That's not practical in the long run for many generative AI technologies, said Jennifer Geetter, a partner at law firm McDermott Will & Emery.
- "There are states that take different approaches to other health regulatory topics, but at a broad level, people move across states, technology moves across states, data moves across states, and risk moves across state lines," Geetter said.
- States are making an effort to collaborate on their AI policies, including in the health sector, through convening groups like the National Conference of State Legislatures, said Colorado's Titone. An open forum doesn't solve all the problems, though.
- "You can't just copy and paste a law into someone else's statute book and expect it to work exactly the same," she said.
What to watch: The federal government is slowly making progress toward national regulations on health AI. The Biden administration in late July reorganized its health IT offices in part to better focus on regulating artificial intelligence.
- Last week, Food and Drug Administration officials promised transparent and predictable guardrails for the use of artificial intelligence in drug development, Axios' Peter Sullivan reported.
3. Google says AI can fix traffic problems
One of the oldest American cities is exploring how AI can make its neighborhoods work better.
Catch up quick: Boston announced a partnership with Project Green Light, Google's AI traffic analysis initiative.
- The team studies traffic patterns and makes recommendations for optimizing traffic light plans in hopes of reducing delays and emissions, per the website.
- The project operates in 13 other cities worldwide, including Seattle.
How it works: The team uses AI and Google Maps driving trends to track traffic patterns at intersections around the city.
- It looks for signs of movement, idling, and starting and stopping.
- Engineers can cross-reference the data with loop sensors, says Santiago Garces, Boston's chief innovation officer.
State of play: Google has analyzed the data for five months and started making recommendations.
4. Training data
- Google hosts its "Made by Google" event today in San Francisco, where it is expected to show off its latest Pixel hardware, among other advances. Check out Axios for the news as it breaks.
- The FBI confirmed yesterday that it is investigating allegations that the Trump campaign was hacked. (Axios)
5. + This
Here's a fun thread of sports fails compiled under the banner of "unfortunately I didn't make the Olympics."
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and to Caitlin Wolper for copy editing it.