A graphic showing how Einstein Trust Layer creates content for CRM apps.

Einstein Trust Layer

The Einstein Trust Layer is a robust set of features and guardrails that protect data privacy and security, improve the safety and accuracy of your AI results and promote the responsible use of AI across the Salesforce ecosystem. The Einstein Trust Layer is designed to help you unleash the power of generative AI with features like dynamic grounding, zero data retention and toxicity detection, without compromising your safety or security standards.

A template of a user prompt being generated into an Einstein AI response.

Trusted AI starts with securely grounded prompts.

A prompt is a set of instructions that steers a large language model (LLM) to return a result that is useful. The more context you give the prompt, the better the result will be. Features of the Einstein Trust Layer like secure data retrieval and dynamic grounding enable you to safely provide AI prompts with context about your business, while data masking and zero data retention protect the privacy and security of that data when the prompt is sent to a third-party LLM.

Seamless privacy and data controls.

Benefit from the scale and cost-effectiveness of third-party foundation LLMs while protecting the privacy and security of your data at each step of the generation process.

Allow users to securely access the data to ground generative AI prompts in context about your business while maintaining permissions and data access controls.

A graphic showing how Einstein Trust Layer uses CRM data.

Securely infuse AI prompts with business context from structured or unstructured data sources, utilising multiple grounding techniques that work with prompt templates for scaling across your business.
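As an illustration of the idea (a simplified sketch, not Salesforce's implementation), grounding can be thought of as merging fields from a CRM record into a reusable prompt template before the prompt is sent to the LLM. The template wording and field names below are hypothetical:

```python
# Illustrative sketch only: merging CRM context into a prompt template.
# The template and record fields are hypothetical, not Salesforce APIs.

PROMPT_TEMPLATE = (
    "You are a service assistant for {company}.\n"
    "Customer: {contact_name} (account tier: {tier})\n"
    "Open case summary: {case_summary}\n"
    "Draft a concise, polite reply that addresses the case."
)

def ground_prompt(template: str, record: dict) -> str:
    """Fill a prompt template with business context from a CRM record."""
    return template.format(**record)

crm_record = {
    "company": "Acme Corp",
    "contact_name": "Priya Sharma",
    "tier": "Gold",
    "case_summary": "Billing discrepancy on the March invoice.",
}

prompt = ground_prompt(PROMPT_TEMPLATE, crm_record)
print(prompt)
```

Because the template is separated from the data, the same template can be reused across thousands of records, which is what makes grounding scalable.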

A template for Einstein generated content.

Mask sensitive data types such as personally identifiable information (PII) and payment card industry (PCI) data before sending AI prompts to third-party large language models (LLMs), and configure masking settings to suit your organisation’s needs.

*Availability varies by feature, language and geographic region.
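To illustrate the masking step conceptually (a toy sketch; real data masking uses far more robust detection than these simple regexes), a masker can replace detected sensitive values with typed placeholder tokens before the prompt leaves your trust boundary:

```python
# Illustrative sketch only: masking PII/PCI-style values in a prompt
# before it is sent to a third-party LLM. Patterns are toy examples.
import re

MASK_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),   # 13-16 digit card numbers
    "PHONE": re.compile(r"\b\d{10}\b"),                # 10-digit phone numbers
}

def mask_pii(text: str) -> str:
    """Replace detected sensitive values with typed placeholder tokens."""
    for label, pattern in MASK_PATTERNS.items():
        text = pattern.sub(f"[{label}_MASKED]", text)
    return text

print(mask_pii("Refund card 4111 1111 1111 1111 for priya@example.com"))
# → Refund card [CARD_MASKED] for [EMAIL_MASKED]
```

Typed placeholders (rather than plain redaction) let the LLM keep reasoning about the sentence structure, and they can be mapped back to the original values when the response is demasked inside the trust boundary.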


Your data is not our product.

Salesforce gives customers control over the use of their data for AI. Whether using our own Salesforce-hosted models or external models that are part of our Shared Trust Boundary, like OpenAI, no context is stored. The large language model forgets both the prompt and the output as soon as the output is processed.

Mitigate toxicity and harmful outputs.

Empower employees to prevent the sharing of inappropriate or harmful content by scanning and scoring every prompt and output for toxicity. No output is shared until a human accepts or rejects it, and every step is recorded as metadata in our audit trail, simplifying compliance at scale.*
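Conceptually, this flow combines a toxicity scan, a human accept/reject step, and an audit trail. A minimal sketch, with a toy lexicon scorer and an arbitrary threshold standing in for real toxicity models:

```python
# Illustrative sketch only: a human-in-the-loop gate that scores an LLM
# output for toxicity, blocks sharing until a human decides, and records
# every step. The scorer and threshold are stand-ins, not real models.
from dataclasses import dataclass, field
from datetime import datetime, timezone

TOXIC_TERMS = {"idiot", "stupid", "hate"}  # toy lexicon for the sketch

def toxicity_score(text: str) -> float:
    """Toy scorer: fraction of words found in the toxic lexicon."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in TOXIC_TERMS for w in words) / max(len(words), 1)

@dataclass
class AuditTrail:
    events: list = field(default_factory=list)

    def log(self, step: str, detail: str) -> None:
        self.events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "step": step,
            "detail": detail,
        })

def review_output(output: str, human_approves: bool, trail: AuditTrail) -> bool:
    """Return True only if the output may be shared."""
    score = toxicity_score(output)
    trail.log("toxicity_scan", f"score={score:.2f}")
    if score > 0.2:  # arbitrary threshold for illustration
        trail.log("blocked", "toxicity threshold exceeded")
        return False
    trail.log("human_review", "approved" if human_approves else "rejected")
    return human_approves

trail = AuditTrail()
ok = review_output("Thanks for reaching out; here is your refund status.", True, trail)
```

The key design point is that sharing requires both checks to pass: a low toxicity score and an explicit human approval, with each decision logged as audit metadata.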

Get the most out of secure AI with trusted partners on AppExchange.

Partners include Accenture, Deloitte and Slalom.

Einstein Trust Layer FAQs

What is secure AI?

Customer and company data are key to enriching and personalising the results of AI models, but it's important to trust how that data is used. Secure AI is artificial intelligence that keeps your customer and business data protected, even as AI helps personalise and enrich your services. In India, where data sensitivity is rising across sectors, especially BFSI, healthcare and government, secure AI helps organisations adopt innovation without compromising customer trust.

What are the main safety concerns with AI?

In India, where digital adoption is fast but trust is still being built, the main concerns around AI are data misuse, privacy breaches and the accuracy of AI-generated content. Data privacy and security top the list, given that many customers don't trust companies with their data. With generative AI, organisations also need to guard against bias, toxicity and harmful outputs, especially when engaging large, diverse customer bases.

How does Salesforce keep customer data secure?

Salesforce keeps its customers' data secure using the Einstein Trust Layer, which is built directly into the Platform. The Einstein Trust Layer includes a number of data security guardrails, such as data masking, TLS in-flight encryption and zero data retention with large language models, which are essential for sectors with evolving compliance frameworks such as India's Digital Personal Data Protection Act (DPDP).

Why does trusted AI matter for your business?

Deploying trusted AI empowers you to reap the benefits of AI without compromising your data security and privacy controls. The Einstein Trust Layer gives you peace of mind about where your data is going and who has access to it, so that you can focus on achieving the best outcomes with AI. Trusted AI gives Indian businesses the confidence to scale innovation while meeting rising expectations around privacy and compliance. Whether you're a fintech startup or a legacy Public Sector Undertaking (PSU) bank, the Einstein Trust Layer helps you focus on growth, knowing your data stays safe and secure within your CRM.

How to design trusted generative AI

Start by identifying how AI aligns with your use cases, whether customer engagement, sales forecasting or compliance. Next, choose a partner with a proven global and local track record of delivering trusted AI solutions. Lastly, determine the regulatory and compliance requirements for your specific industry to inform your trusted AI journey, ensuring that the AI framework supports your industry’s data governance, RBI regulations and sectoral standards.