Terms-of-service land grab: Tech firms seek private data to train AI
Tech companies are confronting a challenge: how to ask users for more data in order to deliver new AI features without scaring away privacy-conscious businesses and consumers.
Why it matters: Consumers consistently tell pollsters they want transparency about when AI is used and trained. But when companies provide such detail, it's typically written in legalese and buried in fine print — fine print that is often being rewritten to give tech companies more rights.
Driving the news: Video conferencing company Zoom encountered a massive backlash over concerns that the contents of video chats might be used to train AI systems. The move prompted an apologetic post from Zoom's CEO, but the company is far from alone in seeking more consumer data in order to train AI models.
Details: Companies are deploying different approaches to ensure they have access to user data. At the same time, many are also adding language to prevent anyone else from scraping their websites to train AI systems.
- Instacart notified customers of such changes this week.
- The New York Times, which is already contemplating legal action against AI providers, updated its terms of service on Aug. 3 to forbid using Times content in "training a machine learning or artificial intelligence (AI) system."
- Microsoft is also updating its terms of service, effective Sept. 30, to both assert its right to use data for AI training and prohibit others from that type of use.
Yes, but: Microsoft has also explicitly said it won't use data from business-oriented products, such as Microsoft 365 and Bing Chat Enterprise, to train its foundational models.
- Amazon Web Services says it will not use "personal data" but may use some "user content" to "improve AWS and affiliate machine-learning and artificial-intelligence technologies" for some services. It offers customers the ability to opt out of that use of their content.
Between the lines: Some large businesses may have the clout to fight for better terms with their service providers, but consumers and smaller firms often have little option beyond clicking the agree button or abandoning a service entirely.
- "I think we need to totally rebuild the way in which informed consent operates," lawyer Ryan Clarkson told Axios.
- Clarkson, whose firm is suing a number of AI companies, says terms-of-service agreements often amount to "a form of coercion." He said he is also seeing "consent being given for one purpose, but then having that consent read so broadly that it's used for other purposes."
- "People are really in a tough situation here because they feel powerless against big tech companies. The video conferencing platforms, like Zoom, are so entangled into all aspects of our personal and professional lives."
The impact: Zoom suffered real consequences from the backlash it faced. According to a survey of 1,074 American adults conducted for Home Security Heroes, more than three quarters of respondents said their trust in Zoom decreased from where it was prior to the recent controversy.
The big picture: The absence of a federal privacy law fosters an AI development environment that allows companies to grab more data without facing limits or consequences.
- As users and regulators discover more examples of non-transparent AI data collection, the pressure to regulate against these trends will increase. The EU is already pursuing its AI Act, while some states are also considering entering the fray, as Axios reported this week.
- The White House's "Blueprint for an AI Bill of Rights" — which has yet to be codified into any type of binding law — affirms a user's "reasonable expectations" that "only data strictly necessary for the specific context is collected" and that consent requests be brief, understandable and in plain language.
- But that's far from being enforceable policy and much of what the White House suggests would require action from Congress, which has been slow to legislate generally.
Be smart: Absent regulations covering how AI systems use customer data, lawyers recommend businesses and consumers scrutinize the services they are using, consider how sensitive the data being collected is and stay on top of any changes made to service terms.
- A number of tech companies pointed Axios to disclosures they make to customers stating how exactly their data is being used for current AI services. But legal experts warn users to pay attention when the terms of service grant broad permissions to use their data to train AI systems.
- "Maybe you're OK with the reassurances, but at the end of the day, reassurances can change on the fly," says Mauricio Uribe, chair of the software and IT practice group at Knobbe Martens. "The rights are already there. They don't need extra permission to start doing kind of what you would fear they would want to do."
- Henry Noye, a partner in Obermayer's litigation practice, says one of the challenges is that businesses feel pressure to adopt AI ahead of rivals at a time when it is very hard to quantify the risks.
- "I would just really advise corporations to understand that this is an evolving, fluid situation," he said.
Editor's note: This story has been corrected to reflect that Amazon's terms-of-service say user content may be used to improve Amazon Web Services and affiliate AI technologies, but it is not collected during video calls. The story has been further updated to make clear that Amazon Web Services explicitly says it will not use "personal data" to improve its AI.