Axios AI+

November 25, 2024
We saw "Wicked" yesterday — and it was great!
Today's AI+ is 981 words, a 3.5-minute read.
1 big thing: What Microsoft's AI knows about you
Microsoft has aggressively added AI-powered Copilots to nearly all its products, but that doesn't necessarily mean your data is being used to train models.
Why it matters: You won't know how much data you might be sharing with Microsoft's AI developers unless you dig into the firm's policies and understand your options.
In our ongoing series, What AI Knows About You, Axios is looking company by company at how tech giants are sometimes using their customers' information to develop and improve their products, and how users can opt out.
- Today's AI developers don't face any requirement to divulge the exact sources for their training data — but under various privacy laws, they do have to reveal what customer data they collect and how they use it.
Zoom in: Microsoft has partnered with OpenAI to power many of its services. But Microsoft's privacy policies are the ones that prevail, and Microsoft says that it has not provided any customer data to OpenAI to help train that company's models.
- Like most other big tech companies, Microsoft doesn't train models on its business customers' data or prompts, including those from Copilot for Microsoft 365.
- Microsoft says it also does not use any content created in Microsoft 365 — that includes Word, Excel, PowerPoint, Outlook and Teams — to train its underlying foundational models.
Yes, but: Microsoft says it does use consumer data from Bing, MSN, the consumer version of Copilot and interactions with ads in certain countries (including the U.S.) for AI training.
- It says that if any images are included in AI conversations, it will "take steps to de-identify them such as removing metadata or other personal data and blurring images of faces."
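Microsoft hasn't published how that de-identification works, but the first step it names, removing metadata, is a well-understood operation. As a rough illustration only (this is not Microsoft's code, and the function name and minimal segment walk are assumptions), the sketch below strips the APP1 segments of a JPEG byte stream, which is where EXIF metadata such as camera details, GPS coordinates and owner information typically lives:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) segments from a JPEG byte stream.

    Illustrative sketch only: real de-identification pipelines also
    handle metadata embedded elsewhere, face blurring, etc.
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the SOI marker
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Unexpected non-marker byte: copy the rest verbatim.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:
            # SOS: entropy-coded image data follows; copy the rest.
            out += jpeg[i:]
            break
        if marker in (0xD8, 0xD9):
            # SOI/EOI markers have no length field.
            out += jpeg[i:i + 2]
            i += 2
            continue
        # Segment length is a big-endian 16-bit value that includes
        # its own two bytes but not the marker.
        seg_len = (jpeg[i + 2] << 8) | jpeg[i + 3]
        if marker != 0xE1:  # drop APP1 (EXIF), keep everything else
            out += jpeg[i:i + 2 + seg_len]
        i += 2 + seg_len
    return bytes(out)
```

Blurring faces, the other step Microsoft mentions, would require an image-understanding model on top of this kind of byte-level cleanup.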
Users who are logged in to a Microsoft account can control whether their data is used to train Microsoft's AI models, the company says.
- Microsoft says opting out will "exclude your past, present, and future consumer data from being used for training these AI models, unless you choose to opt back in."
- Microsoft also has a variety of AI services in preview or beta and, in some cases, it asks customers to share data that can be used for training. The company says it clearly states how the data is being used and how users can opt out if they choose.
- Finally, some of Microsoft's subsidiaries, including LinkedIn and GitHub, have their own policies. GitHub trains its coding assistant on public code repositories, along with the usual array of "publicly available" internet data.
Between the lines: One place where Microsoft is trying to be extra clear about its privacy policies is with its new crop of Copilot+ PCs, where a number of AI features — including the controversial Recall feature — do their work on the device rather than in the cloud.
- "[Users of Microsoft's early access program] and Recall users, we want you to know your snapshots are truly yours," Microsoft reiterated in a blog post on Friday. "We do not send your snapshots off your PC to Microsoft or third parties, and don't use them for training purposes."
2. Gen Z knowledge workers are all in on AI: survey
Twenty-something knowledge workers are pretty much all using generative AI tools, finds a survey out today from Google Workspace.
Why it matters: The youngs are early adopters. If Gen Z is doing it, there's a good chance that at some point soon, everyone else might, too.
- Think of any popular office tech, from email to Slack-style tools to collaborative documents. Typically, it's the youngest who get there first. We all follow.
How they did it: In the late summer, Google surveyed 1,005 full-time knowledge workers, age 22-39, who are either in a leadership role or aspire to one.
What they found: 93% of Gen Z respondents, age 22-27, said they were using two or more AI tools a week — such as ChatGPT, DALL-E, Otter.ai and other generative AI products.
- 79% of millennials (28-39) said they used two or more of these tools a week.
Zoom in: Younger workers are using AI to revise emails and documents, to take notes during meetings or even just to start generating ideas, says Yulie Kwon Kim, VP of product at Google Workspace.
- 88% of Gen Z workers said they'd use AI to start a task that felt overwhelming.
Reality check: Google has a big stake in selling AI as the future of work. It's invested billions of dollars in the nascent technology — this is just one small survey that helps make its case.
- A larger study of full-time workers — of all ages — out earlier this month found overall AI adoption is stalling out, with nearly half of workers saying they weren't comfortable even admitting to using the technology.
- But younger workers are a bit more open about what they're doing, per Google's findings. 52% of those age 22-27 said they frequently discuss their use of AI tools with colleagues.
What to watch: AI boosters say all these tools make life easier for workers, who can skip certain drudgery, like taking notes in meetings, and focus on bigger-picture tasks.
- Down the line, that could translate into big productivity gains for the broader economy, and potentially into job losses.
3. Training data
- Spanish bank BBVA, one of OpenAI's largest financial services customers, is optimistic about what generative AI can do — but worries about how far the technology will scale and how well it can integrate with the company's existing systems. (WSJ)
- AI is starting to make coding bootcamps obsolete. (New York Times)
- A mom and dad in Massachusetts sued a school after it punished their kid for using AI to do his homework, but a federal court sided with the school. (Ars Technica)
4. + This
Austrian Airlines announced its departure from X in epic fashion.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.