Exclusive: Microsoft Copilot is getting personal

People are increasingly turning to Microsoft's Copilot chatbot for advice about their health, careers and relationships, according to data Microsoft first shared with Axios.
Why it matters: Understanding how people use Copilot and similar tools is key to weighing their benefits against their risks.
The big picture: Researchers found that on desktop, users see Copilot as a productivity tool, but on mobile they treat it more as "a conversational partner."
- That split suggests chatbot interfaces should adapt depending on whether a user is on desktop or mobile.
- "A desktop agent should optimize for information density and workflow execution," the researchers write in the report, "while a mobile agent might prioritize empathy, brevity, and personal guidance."
What they did: Microsoft researchers analyzed 37.5 million conversations with Copilot between January and September 2025.
- To preserve user privacy, the messages were stripped of personally identifiable details.
- The research focused not just on what people do with AI, but on how and when they use it.
The intrigue: Philosophical questions spike during late-night hours.
Reality check: An always-online mentor/therapist/health coach bot can be helpful, but chatbots weren't designed for this kind of emotional support.
- They have been known to get things wrong, tell you only what you want to hear, reinforce delusional behavior and encourage self-harm.
- People share sensitive information in these chats, but those conversations lack the legal confidentiality of consultations with a doctor or lawyer.
Yes, but: This is not Microsoft's first chatbot rodeo. Unlike a young startup, the company has firsthand experience with high-profile cases of chatbot relationships gone terribly awry.
- "We are working to figure this out because there is so much potential upside here, but you really have to think about the kind of controls and guardrails around it," Sarah Bird, Microsoft's chief product officer of responsible AI, told Axios' Ina Fried on stage last week.
- "The experience for one person might not be the right thing for someone else."
- Microsoft has been forced to think about chatbot guardrails since at least 2016, when its disastrous chatbot Tay began generating lewd and racist messages.
Behind the scenes: The big AI companies originally steered away from pushing their chatbots as companions, Helen Toner — formerly on OpenAI's board — told Axios in an interview in October. "I think because they know that [AI and social connection] can be so dicey, and there's so many tricky issues to navigate," Toner said.
- But AI devotees are turning out to be loyal to their bot of choice for productivity tasks and want to use it for everything else, whether it's purpose-built for that or not.
The bottom line: Microsoft, OpenAI, Google, Meta and Anthropic are racing to win long-term users.
- Designing their bots to respond to people's most personal requests may increase engagement, but at the risk of privacy and emotional well-being, especially for the most vulnerable users.
