
Generative AI Creates More Urgency for Data Privacy – Are You Ready?



You’d be forgiven for not holding your breath for a comprehensive federal data privacy law in the U.S., where the lack of a clear national privacy standard threatens the adoption of generative artificial intelligence services.

In the absence of such a law, individual states (13 and counting) have enacted their own, even as some lawmakers have pledged, once again, to pursue federal privacy legislation in 2024.

Generative AI’s use of enormous volumes of data has created more urgency for data privacy. At the same time, some companies are AI-ifying their products without ensuring customer data is 100% protected, or doing the hard work of removing bias and providing transparency.

But as more and more leaders are realizing, it’s that very work that will make the difference in whether people trust your AI.

“All our departments have great ideas about how to use generative AI,” said Shohreh Abedi, EVP, COO, and CTO of AAA. “I don’t want to stifle creativity, but I also don’t want the greatest thing if it leaves my back door open.” 

Here’s how your business can leverage the productivity gains of AI without giving away your most sensitive data.

The employee view

  • In a survey of more than 4,000 employees across sales, service, marketing, and commerce, 73% said they believe generative AI introduces new security risks.
  • 60% who plan to use AI said they don’t know how to do so while keeping sensitive data secure. 
  • These risks are of particular concern in highly regulated industries like financial services and health.

What you can do now

Partner with companies that build safeguards into the fabric of their AI systems. Why is this important?

  • Large language models (LLMs) contain huge amounts of data, but they are unlike other data repositories such as databases or Excel spreadsheets. They don’t have the security safeguards and access controls of a large database, or the cell-locking privacy features of a spreadsheet.
  • “You’re not going to take all your customers’ … most sensitive and private and secure information and put it into a large language model that all of your employees are going to have access to,” said Salesforce CEO Marc Benioff.


How to build AI data privacy into the fabric of your systems

Patrick Stokes, EVP, Product and Industries Marketing at Salesforce, explained that with AI Cloud, you can leverage the productivity gains of AI without giving away your company’s data. AI data privacy, he said, requires different safeguards:  

Dynamic grounding steers an LLM’s answers using correct, up-to-date information, “grounding” the model in factual data and relevant context. This helps prevent AI hallucinations, or incorrect responses not grounded in fact or reality.
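
To make the idea concrete, here is a minimal Python sketch of grounding a prompt with retrieved records, in the spirit of retrieval-augmented generation. The function fetch_relevant_records and its record fields are hypothetical stand-ins for whatever trusted data source an application queries, not an actual Salesforce API.

```python
# Minimal sketch of dynamic grounding (retrieval-augmented prompting).
# fetch_relevant_records() and its record fields are hypothetical
# stand-ins for a trusted data source like a CRM or search index.

def fetch_relevant_records(customer_id: str) -> list[dict]:
    # A real system would query a database or CRM here.
    return [
        {"field": "open_case", "value": "Refund request #1234, opened 2024-01-15"},
        {"field": "plan", "value": "Premium annual subscription"},
    ]

def build_grounded_prompt(question: str, customer_id: str) -> str:
    records = fetch_relevant_records(customer_id)
    context = "\n".join(f"- {r['field']}: {r['value']}" for r in records)
    # Instructing the model to answer only from supplied context reduces
    # (but does not eliminate) hallucinated answers.
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What is the status of my refund?", "cust-42"))
```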

Data masking replaces sensitive data with anonymized data to protect private information and comply with privacy requirements. It is particularly useful for stripping personally identifiable information, such as names, phone numbers, and addresses, out of AI prompts.
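
As a rough illustration, the Python sketch below masks two obvious kinds of PII with regular expressions before text is sent to an LLM. It is deliberately simplistic: real masking tools use far more robust detection (named-entity recognition, dictionaries, format validators), and the patterns here are illustrative assumptions, not production-grade.

```python
import re

# Minimal data-masking sketch: replace obvious PII with placeholder
# tokens before the text reaches an LLM. Illustrative only; real
# systems detect many more PII types with far better recall.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com, who called from 415-555-0199."
print(mask_pii(prompt))
# -> Draft a reply to [EMAIL], who called from [PHONE].
```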

Toxicity detection is a method of flagging toxic content such as hate speech and negative stereotypes. It does this by using a machine learning (ML) model to scan and score the answers an LLM provides, ensuring that generations from a model are usable in a business context. 
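
A common pattern is to score every generation and withhold anything above a threshold. The sketch below shows the gating logic only; score_toxicity is a hypothetical stand-in for a trained ML classifier, and the word list and threshold are toy assumptions.

```python
# Minimal sketch of toxicity gating: score each LLM generation, then
# block or flag anything above a threshold. score_toxicity() is a
# hypothetical stand-in for a real ML classifier.

BLOCKLIST = {"hate", "stupid"}  # toy signal; real models learn subtler cues

def score_toxicity(text: str) -> float:
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in BLOCKLIST)
    return min(1.0, hits / max(len(words), 1) * 10)

def gate_generation(text: str, threshold: float = 0.5) -> str:
    if score_toxicity(text) >= threshold:
        return "[response withheld: flagged by toxicity filter]"
    return text

print(gate_generation("Thanks for reaching out. Happy to help!"))
print(gate_generation("That was a stupid question."))
```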

Zero retention means that no customer data is stored outside of Salesforce. Generative AI prompts and outputs are never stored in the LLM and are never used to train it. They simply disappear.

Auditing continually evaluates systems to make sure they are working as expected, without bias, with high-quality data, and in line with regulatory and organizational frameworks. Auditing also helps organizations meet compliance needs by logging the prompts, data used, outputs, and end-user modifications in a secure audit trail.
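
In practice, that audit trail amounts to an append-only record per AI interaction. Here is a minimal Python sketch; the field names and the choice to log content hashes rather than raw text are assumptions for illustration, not any vendor’s actual schema.

```python
import json
import hashlib
from datetime import datetime, timezone

# Minimal audit-trail sketch: append one JSON line per AI interaction.
# Field names are hypothetical, not any vendor's actual schema.

def log_interaction(path: str, prompt: str, output: str, user: str,
                    grounding_sources: list[str]) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "grounding_sources": grounding_sources,
        # Hashes let auditors verify records without storing raw content,
        # one option when the content itself is sensitive.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_interaction("audit.jsonl", "Summarize case #1234", "Case summary...",
                user="agent-7", grounding_sources=["crm:case:1234"])
```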

Secure data retrieval is how Einstein GPT – the world’s first generative AI for customer relationship management (CRM) – pulls data from Salesforce, including Data Cloud, as part of the dynamic grounding process. This lets you bring in the data you need to build contextual prompts. In every interaction, governance policies and permissions are enforced to ensure only those with clearance have access to the data.
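
Conceptually, secure retrieval means checking the requesting user’s entitlements before any record can enter a prompt. The sketch below shows that enforcement point with a deliberately simple permission model; the users, record types, and PermissionError behavior are all hypothetical, not Salesforce’s actual implementation.

```python
# Minimal sketch of permission-enforced retrieval: records are filtered
# by the requesting user's entitlements *before* they can reach a
# prompt. The permission model here is hypothetical.

PERMISSIONS = {
    "agent-7": {"cases", "contacts"},
    "intern-2": {"cases"},
}

RECORDS = [
    {"type": "cases", "id": "1234", "text": "Refund request"},
    {"type": "contacts", "id": "55", "text": "Jane Doe, jane@example.com"},
]

def retrieve_for_user(user: str, record_type: str) -> list[dict]:
    allowed = PERMISSIONS.get(user, set())
    if record_type not in allowed:
        raise PermissionError(f"{user} may not read {record_type}")
    return [r for r in RECORDS if r["type"] == record_type]

print(retrieve_for_user("agent-7", "contacts"))  # allowed
try:
    retrieve_for_user("intern-2", "contacts")    # blocked by policy
except PermissionError as err:
    print("blocked:", err)
```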

Julie Sweet, CEO of Accenture, said risks are mitigated when privacy and security safeguards like these are built right into the technology.

Trust in AI data privacy starts with good governance

All the technology in the world isn’t enough to ensure transparent, responsible, safe AI. It also requires good governance, starting at the top, and human oversight of AI systems. 

A KPMG report found that 75% of respondents would be more willing to trust AI systems when assurance mechanisms are in place to support ethical and responsible use. 

According to the report, “These mechanisms include monitoring system accuracy and reliability, using an AI code of conduct, oversight by an independent AI ethical review board, adhering to standards for explainable AI and transparent AI, and an AI ethics certification to establish governance principles.” 

At Accenture, the audit committee of its board of directors oversees an AI compliance program for the company’s 55,000 employees. 

“If you are not able to call someone in your company, and have them tell you where AI is being used, what the risks are, how they’re being mitigated, and who is accountable, you do not yet have responsible AI,” said Sweet. 

Trusted AI is especially crucial in regulated industries

Leaders in highly regulated industries like financial services and health are cautious about how to implement AI without sacrificing customer trust and safety. To ensure compliance with regulations, these industries must explore use cases that don’t put data privacy and security at risk.

  • For banking, this might include using automation to make routine tasks and processes, such as transaction disputes, more efficient, or using AI to power more intelligent chatbots that personalize customer interactions and improve self-service.
  • For health providers, AI can help them segment patient populations more efficiently so they can send more personalized, timely communications like tips for diabetes patients to lower blood sugar when their glucose levels spike.

Benioff said recently that generative AI could be “the most important technology of any lifetime.” For sure, generative AI could make e-commerce, which forever changed consumer behavior, industries, and business models, look like small potatoes. McKinsey recently estimated that AI’s impact on productivity could add trillions, annually, to the global economy. But we must have a high degree of trust in these systems for that to happen. 

“Trustworthy AI matters,” KPMG wrote in its survey. “If people perceive AI systems as trustworthy and are willing to trust them, then they are more likely to accept them.” 

Lisa Lee, Contributing Editor, Salesforce

Lisa Lee is a contributing editor at Salesforce. She has written about technology and its impact on business for more than 25 years. Prior to Salesforce, she was an award-winning journalist with Forbes.com and other publications.
