What Anthropic's AI knows about you
Illustration: Maura Losch/Axios
Anthropic, whose Claude models are a key rival to OpenAI, takes an "opt-in" approach when it comes to using customer data to train its models.
Why it matters: How AI companies do and don't make use of the information their users provide is taking on even greater importance as Anthropic and its rivals give their chatbots broader access to personal data.
Catch up quick: By and large, AI companies aren't required to disclose where they get the data used to train their models. However, thanks to a number of privacy laws, they do have to say how they use the data their customers provide.
- In this series, Axios is looking at how the key companies in generative AI make use of that data.
Zoom in: Anthropic — founded in 2021 by former OpenAI employees seeking to build AI with more stringent safety measures — says that, by default, it won't use customer information to train its models.
- That policy applies to both consumers and businesses, as well as services built on top of Anthropic's APIs — programming interfaces that allow third-party access to its models.
Yes, but: Anthropic does reserve the right to use prompts and outputs to train its models in some cases, with permission, such as when someone clicks a "thumbs up" or "thumbs down."
- Anthropic not only notes that in its privacy policy, but also flags it for users in a dialog box when they give feedback.
Between the lines: As is typical in the industry, Anthropic automatically scans users' prompts and responses to enforce its safety policies, though it does not use that data to train its models.
- The company says it can use flagged prompts and responses to improve its abuse detection systems, though not the models themselves.
Go deeper: Read the other entries in the series.
