Axios AI+

November 18, 2024
Axios' AM Executive Briefing is a limited-term membership from Axios co-founders Mike Allen and Jim VandeHei providing deep reporting, insight and fresh analysis on how Washington REALLY works. Join today.
Situational awareness: President-elect Trump last night named Republican FCC commissioner Brendan Carr to head the agency. Carr has criticized social media platforms for stifling conservative viewpoints.
Today's AI+ is 1,137 words, a 4-minute read.
1 big thing: What Google AI knows about you
Google knows all about most of us — our email, our search queries, often our location and our photos — but the search giant isn't using most of that data to train its AI models.
The big picture: Google trains most of its models on content that's publicly available on the web — but free and experimental Google products do often come with caveats that your data could be used to train the company's AI.
In this series on What AI Knows About You, Axios is looking, company by company, at how the data-hungry AI industry uses its own customers' information to create and refine its products.
- Today's AI developers don't face any requirement to divulge the exact sources for their training data — but under various privacy laws, they do have to share what customer data they collect and how they use it.
Catch up quick: Google has released a host of generative AI services for businesses and consumers, including its standalone Gemini chatbot (formerly Bard) and AI features within other products, including Maps and Photos.
Zoom in: Google says it largely trains its AI models the way other companies do: using huge swaths of the "publicly available" open web. Google says it gives publishers the ability to control whether their data is used to train future models.
- Broadly speaking, Google says business users' interactions with Gemini won't be used for training the model, and the company doesn't use corporate data stored in Google Cloud or Workspace to train or improve its systems.
- On the consumer side, Google does use data shared with its Gemini chatbot to improve and develop its products and services, though individuals have some control based on various settings.
- Google says your Gmail inbox, your location data and what you type into search aren't used to train Google's foundation models. But your interactions with Gemini typically are.
For Google-owned YouTube, the rules are somewhat different.
- YouTube confirmed in a blog post earlier this year that it makes use of content uploaded to YouTube to create and improve its own services, including for the development of AI products. It also said it takes exception to other companies' using YouTube content to train their AI models (as OpenAI, Apple, Anthropic and others reportedly have).
- "As we have for many years, we use content uploaded to YouTube to improve the product experience for creators and viewers across YouTube and Google, including through machine learning and AI applications," Google said. "This encompasses powering our Trust & Safety operations, improving our recommendation systems, and developing new generative AI features like auto dubbing."
Google has different policies for some products that are in its experimental "labs" programs, even for corporate customers. For example, data from users of Google Workspace Labs — including prompts — can be used to improve its services.
Between the lines: Consumers using Google's standard AI services have some choice in how their data is collected and stored.
- Gemini Apps Activity lets users control whether their conversations with Gemini are stored and potentially used for training new models. It's "on" by default for users 18 and over, and "off" by default for those under 18 (who can choose to turn it on).
- Google stores such activity for 18 months by default, though users can set the retention period to three months or 36 months instead.
2. Preparing for government surveillance in Trump 2.0
Now is the time to evaluate and get serious about your digital security practices, experts say.
Why it matters: President-elect Trump made several promises on the campaign trail to target people in marginalized communities, undermine the press and seek retribution against his enemies.
- His administration could use several government surveillance and law enforcement tools to carry out those promises, including subpoenaing user data from major technology companies, purchasing data from third-party brokers and tapping the intelligence community's internal programs.
What they're saying: Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, told Axios she's seen an uptick in requests from organizations looking for privacy and security training since Election Day.
- Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, said he's heard from major civil society organizations, civil rights groups and others who believe they'll be targeted by the new administration.
- They're looking for advice on overseas data storage, encryption and other ways to avoid surveillance.
- Several former officials also told the Washington Post this week they've started packing "go bags" and applying for foreign citizenship in case they need to flee.
The bottom line: "We've never seen in the history of the United States a stronger surveillance capacity paired with weaker rule-of-law protections," Cahn told Axios. "It is a really dangerous combination."
The big picture: Virtually everything we do leaves some sort of digital footprint.
- That includes the obvious, like using Google Maps to get directions, making an online purchase and posting personal updates on social media.
But there are also less obvious ways people are tracked.
- A retail app might ask someone to turn on location services to find their closest physical store.
- A message sent on an unencrypted messaging service leaves a trail of its contents and the accompanying metadata.
- Even voter registration rolls in most states are sold to data brokers, including people's home addresses, and are sometimes published online.
Threat level: Digital privacy experts tell Axios they're worried about a wide range of people as they prepare for the second Trump administration.
- Many people seeking abortions in states that have implemented near-total bans have been prosecuted using their digital data trail.
- Immigrants — both those who are undocumented and those with protected statuses — are very likely targets for surveillance amid promises of mass deportations.
- Trans adults and youth — and their medical professionals — could be outed in communities where care is outlawed, and companies may be emboldened not to hire them amid rising hate speech.
- Activists could be tracked and stalked in an effort to create hurdles for their cause, and journalists' activities could be tracked to identify whistleblowers and other anonymous sources.
Experts say digital privacy is key, as the president-elect's rhetoric could further encourage harassment of and attacks on people in marginalized communities.
Reality check: If you interact with the internet, there's no way to completely disappear.
- But there are steps people can take to minimize their footprint and protect their most sensitive information.
- Even if someone isn't worried about government surveillance, reevaluating their digital footprint can help keep cybercriminals and other nefarious actors from stealing sensitive information.
3. Training data
- Nvidia's new Blackwell chip is said to have overheating issues, forcing several server redesigns in recent months. (The Information)
4. + This
This is an oldie but a goodie: Check out these deleted scenes from "Best in Show," featuring Catherine O'Hara and Eugene Levy.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.