[Graphic: how the Einstein Trust Layer creates content for CRM apps.]

Einstein Trust Layer

The Einstein Trust Layer is a secure AI architecture natively built into the Salesforce Platform. Built on Hyperforce for data residency and compliance, the Einstein Trust Layer is equipped with best-in-class security guardrails, from the product itself to our policies. Designed for enterprise security standards, the Einstein Trust Layer allows teams to benefit from generative AI without compromising their customer data.

[Graphic: a template of a user prompt being turned into an Einstein AI response.]

Trusted AI starts with securely grounded prompts.

A prompt is a canvas for providing detailed context and instructions to large language models. The Einstein Trust Layer allows you to responsibly ground all of your prompts in customer data and mask that data when the prompt is shared with large language models.* With our Zero Retention architecture, none of your data is stored outside of Salesforce.

* Coming soon
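
To make that flow concrete, here is a minimal sketch of the sequence just described: ground, mask, generate, de-mask. Every function name is an illustrative stand-in, not the Salesforce API; the Einstein Trust Layer performs these steps for you.

    def trust_layer_generate(template: str, record: dict) -> str:
        prompt = template.format(**record)  # ground the prompt in customer data
        masked = redact(prompt)             # mask sensitive values before the LLM sees them
        output = call_llm(masked)           # zero-retention model call
        return restore(output)              # de-mask so internal users get full context

    def redact(text: str) -> str: return text   # stub; see the masking sketch below
    def restore(text: str) -> str: return text  # stub
    def call_llm(prompt: str) -> str: return f"[model response to: {prompt}]"

    print(trust_layer_generate("Summarise the case for {name}.", {"name": "Ada"}))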

Seamless privacy and data controls.

Automatically shield sensitive data from external large language models.

Securely access and dynamically ground generative AI prompts with the type, quality, and scope of relevant data needed to produce the most reliable outputs. Securely populate merge fields with data from Salesforce records. Use semantic retrieval to bring in relevant information from support knowledge articles.

[Graphic: how the Einstein Trust Layer uses CRM data.]

Add domain-specific knowledge and customer information to the prompt to give the model the context it needs to respond more accurately. Reduce the chance of hallucinations with added context from CRM data, knowledge articles, service chats, and more. Automatically retrieve and use that data to generate trusted responses.*

* Coming soon
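
As a toy illustration of the semantic retrieval described above, the sketch below scores knowledge articles against a user's question and grounds the prompt with the best match. Word overlap stands in for the vector embeddings a real retriever would use, and the article IDs and text are invented.

    def overlap_score(query: str, doc: str) -> float:
        # Jaccard overlap of word sets; a stand-in for embedding similarity.
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / len(q | d) if q | d else 0.0

    articles = {
        "KB-101": "How to reset a customer password from the service console.",
        "KB-202": "Troubleshooting failed payment transactions and refunds.",
    }

    def grounded_prompt(question: str) -> str:
        # Retrieve the most relevant article and prepend it as context.
        best = max(articles, key=lambda a: overlap_score(question, articles[a]))
        return (f"Context ({best}): {articles[best]}\n"
                f"Using only this context, answer: {question}")

    print(grounded_prompt("How do I reset a password?"))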

[Graphic: a template for Einstein-generated content.]

Mask the data used in your prompts to add an additional layer of protection. Automatically detect PII and payment data and redact it from the prompt before it is sent to a large language model. Subsequently de-mask that data after the response is created so that the proper context is shared with internal users.
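
Here is a minimal sketch of that mask/de-mask round trip, assuming one regular expression per data type; real PII and payment-data detection is far more sophisticated, and the placeholder token format is invented.

    import re

    PATTERNS = {
        "CARD": r"\b(?:\d[ -]?){13,16}\b",  # toy payment-card pattern
        "EMAIL": r"[\w.+-]+@[\w-]+\.\w+",   # toy PII pattern
    }

    def mask(prompt: str) -> tuple[str, dict]:
        # Replace each detected value with a placeholder token and remember it.
        mapping = {}
        for label, pattern in PATTERNS.items():
            def repl(m, label=label):
                token = f"<{label}_{len(mapping)}>"
                mapping[token] = m.group(0)
                return token
            prompt = re.sub(pattern, repl, prompt)
        return prompt, mapping

    def unmask(text: str, mapping: dict) -> str:
        # Restore the original values once the model's response comes back.
        for token, value in mapping.items():
            text = text.replace(token, value)
        return text

    masked, mapping = mask("Refund card 4111 1111 1111 1111 for ada@example.com.")
    # masked == "Refund card <CARD_0> for <EMAIL_1>." is all the model ever sees.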

[Graphic: a chat window showing an example conversation with Einstein AI.]

Your data is not our product.

Salesforce gives customers control over the use of their data for AI. Whether using our own Salesforce-hosted models or external models that are part of our Shared Trust Boundary, like OpenAI, no context is stored. The large language model forgets both the prompt and the output as soon as the output is processed.

Mitigate toxicity and harmful outputs.

Empower employees to prevent the sharing of inappropriate or harmful content by scanning and scoring every prompt and output for toxicity. Ensure no output is shared before a human accepts or rejects it and record every step as metadata in our audit trail, simplifying compliance at scale.*

* Coming soon
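
Below is a sketch of that scan-score-gate loop, assuming a toy word-list scorer and an invented audit-record shape; production toxicity detection is model-based, and the 0.1 threshold here is arbitrary.

    import json, time

    BLOCKLIST = {"hateful", "violent"}  # toy lexicon; real scoring is model-based

    def toxicity_score(text: str) -> float:
        # Fraction of words that hit the blocklist.
        words = text.lower().split()
        return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

    audit_trail = []

    def review(prompt: str, output: str, accepted_by_human: bool) -> str | None:
        # Record scores and the human decision as audit metadata.
        entry = {
            "ts": time.time(),
            "prompt_toxicity": toxicity_score(prompt),
            "output_toxicity": toxicity_score(output),
            "accepted": accepted_by_human,
        }
        audit_trail.append(entry)
        # Nothing is shared unless a human accepted it and the score is low.
        if accepted_by_human and entry["output_toxicity"] < 0.1:
            return output
        return None

    print(review("Summarise this case.", "Here is a neutral summary.", True))
    print(json.dumps(audit_trail, indent=2))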

Deploy AI with Ethics by Design.

Salesforce is committed to delivering software and solutions that are intentionally ethical and humane in use, particularly when it comes to data and AI. To empower customers and users to use AI responsibly, we have developed an AI Acceptable Use Policy that addresses the highest-risk areas. Generating individualised medical, legal, or financial advice is prohibited in order to keep human decision-making in those areas. At Salesforce, we care about the real-world impact of our products, and that’s why we have specific protections in place to uphold our values while empowering customers with the latest tools on the market.

Get the most out of secure AI with trusted partners on AppExchange.

[Partner logos: Accenture, Deloitte, Slalom.]

Einstein Trust Layer FAQ.

What is secure AI?

Secure AI is AI that protects your customer data without compromising the quality of its outputs. Customer and company data are key to enriching and personalising the results of AI models, but it's important to trust how that data is being used.

What are the safety concerns of AI?

One of the top safety concerns of AI is data privacy and security, given that many customers don't trust companies with their data. Generative AI also introduces new concerns around the accuracy, toxicity, and bias of the content it generates.

How does Salesforce keep customer data secure?

Salesforce keeps its customers' data secure using the Einstein Trust Layer, which is built directly into the Platform. The Einstein Trust Layer includes a number of data security guardrails, such as data masking, TLS in-flight encryption, and Zero Data Retention with large language models.

Why is trusted AI important?

Deploying trusted AI will empower you to reap the benefits of AI without compromising on your data security and privacy controls. The Einstein Trust Layer allows you to achieve peace of mind when it comes to where your data is going and who has access to it, so that you can focus on achieving the best outcomes with AI.

How to design trusted generative AI.

First, understand how AI can address your business needs and identify your use cases. Next, choose a partner that you can trust with a proven track record of delivering trusted AI solutions. Lastly, determine the regulatory and compliance requirements for your specific industry to inform your approach to your trusted AI journey.