"Nutrition labels" aim to boost trust in AI
As adoption of generative AI grows, providers are hoping that greater transparency about how they do and don't use customers' data will increase those clients' trust in the technology.
Why it matters: There's a mad scramble to add AI features across the board in the software world — but worries about privacy and security are prompting some businesses to discourage employees from using the new features.
Driving the news: Twilio, which helps businesses automate communications with their customers, announced Wednesday it will place "nutrition labels" on the AI services it offers those businesses, clearly outlining how their data will be used.
- The labels report what AI models Twilio is using, whether those models are being trained on customer data, whether features are optional and whether there is a "human in the loop."
- A "privacy ladder" distinguishes customer data used only for that customer's internal projects from data also used to train models serving other customers, and notes whether personally identifiable information is included.
- In addition to applying such labels to its own services, Twilio is providing an online tool that other companies can use to generate similar AI nutrition labels for their own products.
Meanwhile, Salesforce has published an acceptable use policy spelling out what customers can and can't do with its AI products.
- Among the practices banned are using the technology to generate weapons, pornography or political campaigns, according to a copy of the policy shared first with Axios.
- Salesforce also prohibits using AI to offer individualized advice that would normally require a licensed professional, such as a lawyer or financial adviser.
- Salesforce customers must also disclose when people are interacting directly with a bot and are forbidden from providing AI-generated content to users under the pretense that it is human-made.
Between the lines: While there is still an air of excitement around the potential of generative AI to improve productivity, many companies have been taking a cautious approach, warning employees not to put company data into tools like ChatGPT.
Transparency is key to increasing trust, both Salesforce and Twilio say.
- And so far trust is low, Twilio says, citing its own research: more than 9 in 10 businesses offer AI-based personalization, but only 41% of customers are comfortable with the practice, and only half of consumers trust brands to keep their data secure and use it responsibly.
- "Against this backdrop, Twilio is calling on technology leaders and peers everywhere to more proactively display and disclose exactly how data is being trained and deployed in AI products," the company says.
The big picture: Paula Goldman, Salesforce's chief ethical and humane use officer, told Axios that establishing the acceptable use policy is important but not enough on its own to ensure that powerful AI technologies aren't misused.
- Other steps that Salesforce has taken include adversarial testing of both its models and the AI features it builds, as well as incorporating filters that try to stop generative AI systems from sharing toxic content or personal information.
Goldman also notes there's a big difference between AI services that a company delivers directly to longstanding business customers and free services offered direct to consumers online or made broadly available via open source releases.
- Salesforce, Microsoft and others, for example, have started commercially testing their generative AI tools in close collaboration with small groups of known customers.
- Facebook parent Meta, on the other hand, broadly released its Llama 2 model for commercial use by anyone — although it, too, says it requires users of the tool to abide by an acceptable use policy.
What's next: Twilio and Salesforce both said they hope other firms will follow their lead, but Salesforce's Goldman also called for broader regulation and continued discussion across society.
- "This is a place where there is a big need for public policy," she said, adding that some of Salesforce's prohibitions on certain uses, such as for political campaigns, may ease over time as industry norms and legislative guardrails emerge.
- "We wanted to be cautious about how the AI is used and what it's ready for right now," she said.