Salesforce pitches a safer AI for business
Salesforce on Monday pitched its new AI Cloud product as a one-stop shop for nervous CEOs looking to take advantage of large language models “without their data ever leaving Salesforce,” per CEO Marc Benioff.
Why it matters: Salesforce has upped the ante on AI privacy and security, with Benioff zeroing in on demand for AI in regulated industries and aiming thinly veiled snark at market leader OpenAI, wrapped in the message that “data is not our product.”
- Calling out an “AI trust gap,” Benioff pinpointed the five biggest problems with leading large language models: lack of privacy, lack of data control, hallucinations, bias, and toxicity.
How it works: Salesforce's vision is that customers will use different language models for different purposes, mediated through a “trust layer” component of Einstein, the company's seven-year-old AI-powered CRM tool.
- AI Cloud will work with any large language model.
- Salesforce executives said the Einstein trust layer offers cell-level security and data masking to guard commercially sensitive and personal data sent to large language models, with a promise of zero data retention by Salesforce once answers are received.
- One eye-catching feature is a “toxicity detector” that will warn users about any output from a given language model that falls below an unspecified threshold.
Yes, but: Salesforce isn't against using ChatGPT — the company is working with OpenAI to add the chatbot to Slack.
- Amazon Bedrock works on a similar model, offering access to a mix of models built in-house and by partners.
What they're saying: "Every company needs to become AI-first," Benioff said.
- “We have been testing how to Guccify the AI,” said Vasilis Dimitropoulos, vice president for global client services at Gucci.
- “I don’t want someone who has the greatest thing, but leaves my back door open,” said Shohreh Abedi, AAA executive vice president, explaining why data protection drives her tech purchasing decisions.