Axios AI+

January 03, 2024
Ina here, wishing a belated happy birthday to tech pioneer Lynn Conway. Today's AI+ is 1,338 words, a 5-minute read.
1 big thing: Welcome to the generative AI election era
Illustration: Annelise Capossela/Axios
Around one billion voters will head to the polls around the world this year, while busy campaign managers and underfunded election officials will face pressure to use AI for efficiency, Ryan reports.
Why it matters: Conditions are ripe for bad actors to use generative AI to amplify efforts to suppress votes, libel candidates and incite violence.
- New companies providing powerful generative AI have untested and relatively small election integrity teams, while older companies have cut back those teams — at its peak in 2019, Meta's integrity staff numbered over 500 globally.
- AI may end up disenfranchising voters as election officials use new tools for a variety of tasks, from identifying and removing ineligible citizens from voting registries to AI-powered signature matching.
Speech is difficult to regulate. A deep tension exists between the rights to freedom of expression and information and the need to combat misinformation to ensure a fair campaign.
- That tension will play out against a backdrop of Americans having little trust in the companies deploying AI and a plurality believing AI could alter election results.
- The few guardrails in place are voluntary — including those demanded by the White House.
What's happening: Microsoft says it caught Beijing operating a network of online accounts using AI-generated material to sway U.S. voters, and both the CIA and DHS warn that China, Russia and Iran are using generative AI to target election infrastructure and processes.
- YouTube is among the platforms that reversed bans on election result denialism in 2023, while Facebook currently restricts ads that deny "upcoming" or "ongoing" election results, but not past ones.
- YouTube, TikTok, Facebook and Instagram now require labeling of election-related advertisements or content created with AI.
Several U.S. states have passed legislation banning or requiring disclosure of political deepfakes, including California, Michigan, Minnesota, Texas and Washington. Legislation is under consideration in New York, Illinois, New Jersey and Kentucky.
- Arizona election officials conducted a two-day exercise in December, designed to help them spot and respond to deepfake videos.
Yes, but: AI is useful to campaigns and serves as a tool for first drafts of everything from speeches to marketing materials. It also provides customizable robo-conversations with voters and helps candidates better understand the people they aim to serve.
What we're watching: Social media companies will step up their fight to stop floods of AI-generated misinformation from reaching our screens. If they can't, their platforms may become either useless or dangerous to democracy.
What they're saying: Russian election interference in 2016 was "child's play, compared to what either domestic or foreign AI tools could do to completely screw up our elections," Sen. Mark Warner (D-Va.) tells Axios.
- "Panic responsibly. It is important not to freak out about every single thing," per Katie Harbath, former head of election safety at Meta.
- Social media companies should allow for "free speech for humans, not computers," Eric Schmidt told CNBC.
2. AI may ease shopping for health insurance
Illustration: Sarah Grillo/Axios
For all the frenzied speculation about how AI can transform health care, some companies are leveraging the technology for a decidedly simpler but still critical task: making shopping for health insurance less terrible, Axios' Maya Goldman reports.
Why it matters: Many Americans typically stick with their health plan year after year even when better and cheaper options are available, often because it's too hard to predict how much care they'll need or figure out if they can actually get a better deal.
- Companies are rolling out AI-powered tools aimed at making the shopping experience easier. Even brokers and agents selling health plans say they see the technology as a helpful aid, rather than an existential threat.
Context: The tools can be especially helpful for the tens of millions of people purchasing private Medicare Advantage plans or shopping for their own coverage on the Affordable Care Act marketplaces.
- The average shopper on the ACA marketplaces during the current enrollment season has 100 plan options, with differing levels of cost and access to health care providers.
- The number of Medicare Advantage plans available to the average enrollee in recent years has more than doubled to 43, according to health policy research nonprofit KFF.
How it works: The AI tools generally gather basic information about an individual insurance shopper and their expected health needs and then use that data to churn out predictions for the best health plan options.
- Alight, a company providing cloud-based HR services, said 95% of the employers it serves used AI technology — including a virtual assistant feature — to help employees pick health benefits during fall open enrollment.
- The Big Plan, which launched last year for ACA open enrollment, offers up the best three health plan options available to a customer based on several factors, including their income, prescriptions and preferred doctors.
- Healthpilot, an AI startup specializing in Medicare coverage, markets itself as removing the "commission bias" brokers may have to steer patients to certain health plans.
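The matching process these tools perform can be sketched as a simple expected-cost ranking: estimate what each plan would cost a shopper given their predicted care spending, then surface the cheapest options that cover their doctors. This is a minimal illustration, not any vendor's actual model; the plan fields, weighting and numbers below are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    monthly_premium: float
    deductible: float
    coinsurance: float        # shopper's share of costs after the deductible
    covers_my_doctors: bool

def expected_annual_cost(plan: Plan, predicted_spending: float) -> float:
    """Estimate a shopper's total yearly outlay given predicted care spending."""
    out_of_pocket = min(predicted_spending, plan.deductible)
    remainder = max(predicted_spending - plan.deductible, 0.0)
    out_of_pocket += remainder * plan.coinsurance
    return plan.monthly_premium * 12 + out_of_pocket

def top_plans(plans, predicted_spending, k=3):
    """Rank plans by expected cost, preferring ones that cover the shopper's doctors."""
    return sorted(
        plans,
        key=lambda p: (not p.covers_my_doctors,
                       expected_annual_cost(p, predicted_spending)),
    )[:k]

plans = [
    Plan("Bronze", 300.0, 7000.0, 0.40, True),
    Plan("Silver", 450.0, 4000.0, 0.20, True),
    Plan("Gold",   600.0, 1000.0, 0.10, False),
]
for p in top_plans(plans, predicted_spending=5000.0):
    print(p.name, expected_annual_cost(p, 5000.0))
```

Real tools add far more signals (prescriptions, income subsidies, provider networks), but the core idea of the recommendation step is the same: turn a shopper's predicted needs into a per-plan cost estimate and sort.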
Reality check: Technology-driven tools that help people pick health insurance aren't exactly new.
- The ACA insurance marketplaces and Medicare.gov already offer varying levels of decision-support tools to recommend the right health plans for customers, noted Zarek Brot-Goldberg, an assistant professor at the University of Chicago studying health care markets.
- But the current hype around AI could boost consumer interest in the tools, or at least give companies a new angle for promoting them.
What they're saying: It's still probably best to get some input from a professional, says Louise Norris, a broker and policy analyst at healthinsurance.org.
- Brokers can help shoppers decipher health insurance terminology — which many consumers don't understand — and provide advice as they sort through plan options, she said.
3. How California schools can bring AI into classrooms
Illustration: Natalie Peeples/Axios
California is just one of two states to issue policy guidance for K-12 schools on artificial intelligence platforms such as ChatGPT, Axios' Kate Murphy and Jennifer Kingson report.
Why it matters: Teachers and administrators are eager for guidelines on how to use AI — and how to quash its misuse. But the field is moving so rapidly that governments have been loath to issue pronouncements.
Driving the news: The Center on Reinventing Public Education (CRPE), a nonpartisan research center at Arizona State University, asked each of the 50 states and the District of Columbia to share their approach to AI guidance.
- Only California and Oregon offered official recommendations for the current school year.
- 11 states are currently developing guidance: Arizona, Connecticut, Maine, Mississippi, Nebraska, New York, Ohio, Pennsylvania, Virginia, Vermont and Washington.
Zoom in: The California Department of Education suggests AI can enhance learning, while acknowledging potential ethics, bias, inaccuracy or data privacy risks.
- It outlines why and how California schools can utilize AI, including the development of planning and workflow tools for teachers and personalized learning materials for students with varying abilities or language barriers.
- Students can also create and program AI themselves if schools incorporate "5 Big Ideas in AI" and computer science standards into their curriculum.
- That could improve access to STEM fields for traditionally underrepresented groups and help students develop problem-solving and critical thinking skills, per the department.
Yes, but: The department also advised local education agencies to evaluate concerns and processes around security, data privacy and retention when implementing AI systems.
4. Training data
- More big tech companies are offering legal protections to customers who use their generative AI products, but there are limits to that indemnity. (Runtime)
- X (née Twitter) is once again including a headline for links within posts, rather than just a photo — but the type is small. (Axios)
- OpenAI has hired an artist-in-residence as it looks to bolster the case that AI can be a new canvas for artists and not just a threat. (NYT)
- Former and current Google employees say 2023 (and the rise of ChatGPT) "shook the core of the company." (Business Insider)
5. + This
Once again my family rang in the New Year by enjoying the festivities within "Animal Crossing" on the Nintendo Switch.
- Harvey, meanwhile, discovered a great easter egg in the game that shows the real-world musicians performing the "Animal Crossing: New Horizons" theme song. (It's also posted on YouTube.)
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.
Scoops on the AI revolution and transformative tech, from Ina Fried, Madison Mills, Ashley Gold and Maria Curi.