AI that forgets? Meet Salesforce’s new privacy-first tech
Salesforce has recently introduced a generative AI system that prioritises privacy. The system is designed to “forget” the data it processes through large language models (LLMs).
This third wave of generative AI has taken the form of digital ‘agents’: intelligent systems that act almost like corporate users, understanding and responding to customer inquiries without human intervention.
Salesforce’s CEO, Marc Benioff, referred to it as “the biggest breakthrough” he had ever seen. But what exactly does he mean?
Keeping costs down
At a recent event, Patrick Stokes, Salesforce’s EVP of products and industries, said: “There is a lot of demand for something that can help you accelerate your work.”
He added that this new AI system is specifically designed to help companies avoid the costs and risks of developing their own LLMs.
“They don’t want to DIY their own AI”, he explained. “It’s too costly, and companies can’t do it in a sustainable way.”
What many companies have missed so far is that the generative model is only one component of the system, like a brain in a human or an engine in a car. And a brain, as Stokes explained, “isn’t very useful unless it has arms, legs, eyes, and ears.”
However, despite the opportunity AI presents, concerns around data privacy have held back its development and adoption.
AI and data security
Stokes explained that, rather than building the most powerful LLM, Salesforce is focused on creating AI agents that connect data and actions securely.
This addresses concerns around privacy and security that usually deter customers from using or relying on generative AI in the workplace.
“We’re not trying to build a stronger LLM”, Stokes said. “What we want to achieve is to connect the data and the action because if we can do that, we can leverage this in a way that drives real value”.
Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, also addressed growing concerns about AI adoption, especially regarding privacy and security.
“People are fearful”, she said. “What it comes down to is essentially being able to trust AI. Having everything integrated within a single platform, with the corporate memory and guardrails built in, allows people to feel much more confident and secure about how generative AI is being used.
“Customers are thinking about what role they could give an AI agent”, Bahrololoumi said, noting that the 1,000 agents already created are helping companies improve productivity and reduce burnout.
So… how does it work?
Addressing the issue of privacy, Salesforce has built a ‘trust layer’ into the foundation of its generative AI system.
Stokes described it as follows: “It will make sure that when you’re asking a question, and it goes out and retrieves some data to add as additional context to the prompt, it’s going to make sure that the data it adds is data that you as the user are able to have in the first place.”
He further emphasised, “You’ve probably used ChatGPT at work and maybe shouldn’t have because you might be giving away sensitive data.”
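Salesforce has not published the trust layer’s internals, but the permission check Stokes describes can be pictured with a short sketch. Everything below, from the Record type to the role-based filter, is a hypothetical illustration in Python rather than Salesforce code:

from dataclasses import dataclass

@dataclass
class Record:
    text: str
    allowed_roles: set  # roles permitted to see this record

def build_prompt(user_roles: set, question: str, records: list) -> str:
    # Keep only the records the requesting user could already see through
    # their normal permissions, then add them to the prompt as context.
    visible = [r for r in records if r.allowed_roles & user_roles]
    context = "\n".join(r.text for r in visible)
    return f"Context:\n{context}\n\nQuestion: {question}"

# Example: a sales user sees the pipeline note; the HR record is filtered out.
records = [
    Record("Q3 pipeline: 40 open deals", {"sales"}),
    Record("Salary band review notes", {"hr"}),
]
print(build_prompt({"sales"}, "How many deals are open?", records))

The point of the check is that the model is never handed anything the person asking could not have pulled up themselves.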
He added: “100 per cent of what goes into the LLM is not retained… We give the LLM everything it needs on every individual call, and then it forgets it”.
“We get the data when we need it, we add it to the prompt, we give the LLM context, and then it forgets it”, he said. This, he explained, is very different to trying to train an LLM to know everything about your business at all times.
“It’s a much safer environment because you can maintain governance over that environment.”
Under this ‘zero retention’ approach, none of the data the LLM processes is ever stored.
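To make that flow concrete, here is a minimal sketch of the per-call, zero-retention pattern Stokes outlines, reusing the hypothetical build_prompt above; the llm_call parameter stands in for whatever model API is actually used, and none of this reflects Salesforce’s real implementation:

def answer(question: str, user_roles: set, records: list, llm_call) -> str:
    # 1. Fetch and filter the data at request time, for this call only.
    prompt = build_prompt(user_roles, question, records)
    # 2. Hand the model everything it needs inside the prompt itself.
    response = llm_call(prompt)
    # 3. Return the answer and keep nothing: the prompt, the retrieved data
    #    and the model's context all go out of scope, so nothing is stored.
    return response

The trade-off is that every request pays the cost of re-fetching its context, but in return the model never accumulates a standing copy of the business’s data, which is what keeps governance in the customer’s hands.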