It is not enough to deliver only the technological capabilities of AI – we also have an important responsibility to ensure that AI is safe and inclusive for all. That’s why we built the Salesforce Trust Layer, a robust set of features and guardrails that protect the privacy and security of your data, improve the safety and accuracy of your AI results, and promote the responsible use of AI across the Salesforce ecosystem.
At Salesforce, we are guided by our five core values: trust, customer success, innovation, equality, and sustainability. We are also deeply committed to the responsible development and deployment of our technology, driven by our Office of Ethical and Humane Use.
One of the emerging technologies with great potential to improve the state of the world is artificial intelligence (AI). It augments human intelligence, amplifies human capabilities, and provides actionable insights that drive better outcomes for our employees, customers, and partners.
We believe that the benefits of AI should be accessible to everyone. But it is not enough to deliver only the technological capabilities of AI; we also have an important responsibility to ensure that AI is safe and inclusive for all. We take that responsibility seriously and are committed to providing our employees, customers, and partners with the tools they need to develop and use AI safely, accurately, and ethically.
That’s why we’ve built the Salesforce Trust Layer to act as the architectural foundation and set of technical guardrails that ensure every AI interaction in Salesforce is secure, private, accurate, and ethically aligned with our values.
Five Core Principles of Salesforce Trusted AI
In 2018, we began articulating our trusted AI principles and wanted to ensure that they were specific to Salesforce’s products, use cases, and customers. It was a year-long journey of soliciting feedback from individual contributors, managers, and executives across the company, in every organization including engineering, product development, UX, data science, legal, equality, government affairs, and marketing. Executives across clouds and roles approved the principles, including our then co-CEOs Marc Benioff and Keith Block. As we discuss in our Ethical AI Maturity Model, though that journey may feel long, it is an important formative experience in ensuring that everyone in the company has contributed, understands their responsibility for living those principles, and has bought into implementing them in their daily work.
Responsible
We strive to safeguard human rights, protect the data we are trusted with, observe scientific standards, and enforce policies against abuse. We expect our customers to use our AI responsibly and in compliance with their agreements with us, including our Acceptable Use Policy.
Accountable
We believe in holding ourselves accountable to our customers, partners, and society. We will seek independent feedback for continuous improvement of our practices and policies, and work to mitigate harm to customers and consumers.
Transparent
We strive to ensure our customers understand the “why” behind each AI-driven recommendation and prediction so they can make informed decisions, identify unintended outcomes, and mitigate harm.
Empowering
We believe AI is best utilized when paired with human ability, augmenting people and enabling them to make better decisions. We aspire to create technology that empowers everyone to be more productive and drive greater impact within their organizations.
Inclusive
AI should improve the human condition and represent the values of all those impacted, not just the creators. We will advance diversity, promote equality, and foster equity through AI.
Salesforce Trusted AI Principles in Practice
It’s not enough to have a set of principles. As the potential of AI grows exponentially, so does the critical responsibility to ensure its use is safe and inclusive for everyone. Below are examples of how we have translated our trusted AI principles into practice.
Responsible
Before building any AI feature, we begin by asking not only whether we can do something but also whether we should. We work with external human rights experts to continually learn, grow, and discover new ways to protect human rights.
We educate and empower our customers to make informed decisions about how to use our AI responsibly. We do this by creating tools and resources that help Salesforce employees, customers, and partners identify and mitigate bias in the systems they are building and using (e.g., flagging when protected data categories and proxy variables are being used in a model, and providing transparency into the factors that most influence individual predictions). These tools enable customers and partners to understand their responsibility to adopt AI in a safe and reliable way.
We adhere to the highest security and privacy practices to help anticipate and mitigate unintended harm and keep our products safe. We comply with applicable laws governing AI research and use. We also strive to meet the highest scientific and quality standards in our research, ensuring its safety and sharing it through peer-reviewed publications, conferences, and industry events.
Accountable
We engage with external human rights and technology ethics experts through our Ethical Use Advisory Council and workshops. We also invite feedback from our customers through Customer Advisory Boards and open dialog and incorporate it into our deliberation process.
We believe in the importance of giving back to our industry and society by collaborating with our peers through industry groups, civil society forums, and governmental organizations (e.g., the US National Institute of Standards and Technology, the US National AI Advisory Committee, and Singapore’s Advisory Council on the Ethical Use of AI and Data) to continuously improve our practices and policies.
We enable employees to raise questions and concerns through channels like our anonymous online corporate reporting and governance system, Slack channels, and group email address.
Transparent
Transparency includes not only how we build our models but also why they make the predictions or recommendations they do. We publish model cards that describe how each model was created, its intended and unintended use cases, known ethical or societal implications, and performance scores. We also provide model explainability when an AI prediction or recommendation is made.
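To make the idea of a model card concrete, here is a minimal sketch of the kinds of fields such a card might capture. The field names, structure, and example values are illustrative assumptions, not Salesforce’s actual model card schema.

```python
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class ModelCard:
    """Illustrative model card: a structured summary published alongside a model."""
    model_name: str
    training_data_summary: str          # how the model was created
    intended_use_cases: List[str]       # what the model is meant for
    out_of_scope_uses: List[str]        # known unintended or discouraged uses
    ethical_considerations: List[str]   # known ethical or societal implications
    performance: Dict[str, float] = field(default_factory=dict)  # evaluation scores

# Hypothetical example for a lead-scoring model.
card = ModelCard(
    model_name="lead-scoring-v2",
    training_data_summary="Historical opportunity records, anonymized before training.",
    intended_use_cases=["Rank open leads by likelihood to convert"],
    out_of_scope_uses=["Employment or credit decisions"],
    ethical_considerations=["Scores may reflect historical sales-team bias"],
    performance={"auc": 0.87, "precision_at_10": 0.62},
)
```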
We enable customers to remain in control of their data and models at all times. The data we manage does not belong to Salesforce—it belongs to the customer. We also provide customers with a clear disclosure of terms of use and the intended applications of Salesforce’s AI capability.
Empowering
We strive to abstract away the complexity of AI to make it possible for people of all technical skill levels — not only advanced data scientists — to build AI applications with clicks, not code. This includes tools for measuring disparate impact (one definition of bias) and automatically populating model cards (like nutrition labels for models), as well as in-app guidance so customers know how to use our AI responsibly. Additionally, we create and deliver free AI education via Trailhead to enable anyone to gain the skills needed for the jobs of the Fourth Industrial Revolution.
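As an illustration of what measuring disparate impact means in practice, the sketch below computes the commonly used ratio of favorable-outcome rates between an unprivileged and a privileged group, compared against the "four-fifths" rule of thumb. The example numbers and the 0.8 threshold are assumptions for illustration, not a description of Salesforce tooling.

```python
def disparate_impact(favorable_unprivileged: int, total_unprivileged: int,
                     favorable_privileged: int, total_privileged: int) -> float:
    """Ratio of favorable-outcome rates: unprivileged group vs. privileged group."""
    rate_unprivileged = favorable_unprivileged / total_unprivileged
    rate_privileged = favorable_privileged / total_privileged
    return rate_unprivileged / rate_privileged

# Hypothetical example: 30 of 100 applicants in group A receive a favorable
# prediction versus 50 of 100 in group B.
ratio = disparate_impact(30, 100, 50, 100)   # 0.6
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # the "four-fifths" rule of thumb
    print("Potential adverse impact: flag the model for review.")
```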
The Salesforce AI team is committed to delivering AI research breakthroughs to inform new product categories and ensure that our customers stay at the forefront of technological advancements.
Inclusive
We test our models using diverse and representative data sets that are most appropriate for how the model is being used. We seek to understand the impact of AI services on a broad range of customers, end users, and contexts. This includes conducting Consequence Scanning Workshops and Build with Intention Workshops. And we strive to build inclusive teams that represent diverse experiences and points of view by aligning with our core value of equality.
How the Salesforce Trust Layer Works
The Salesforce Trust Layer addresses core enterprise concerns like data security, regulatory compliance, and ethical governance by building a comprehensive set of safeguards directly into the platform's architecture. It is the answer to the enterprise need for AI that is both powerful and inherently trustworthy.
The Trust Layer is a robust set of features and guardrails that form the technical implementation of our ethical principles. It is engineered to proactively mitigate the risks associated with generative AI, specifically by protecting the privacy and security of your data while improving the safety and accuracy of the AI-generated results. This architecture moves our commitment to trust from a policy statement into a functional, platform-wide reality.
Secure Data Retrieval and Dynamic Grounding
The foundation of secure and accurate AI outputs relies on dynamic grounding and secure data retrieval. Trusted AI starts with securely grounded prompts, which are sets of instructions sent to a large language model (LLM) to return a useful result. Dynamic grounding securely connects the LLM to your enterprise data (such as information stored in Data 360 or internal knowledge bases) and dynamically retrieves the validated information. This helps prevent the LLM from producing "hallucinations" by constraining the model to cite and use trusted, customer-approved sources. Secure data retrieval lets users ground generative AI prompts in context about your business while maintaining existing permissions and data access controls.
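A rough sketch of the dynamic grounding idea is shown below: retrieve only the records the requesting user is permitted to see, then merge them into the prompt before it is sent to the LLM. The function names, prompt wording, and canned data are hypothetical assumptions for illustration; this is not the Einstein Trust Layer API.

```python
from typing import List, Dict

def fetch_permitted_records(user_id: str, query: str) -> List[Dict]:
    """Placeholder: query the CRM or knowledge base while enforcing the
    requesting user's sharing rules and field-level security."""
    # A real system would apply existing permissions and data access controls;
    # canned data is returned here purely for illustration.
    return [{"account": "Acme Corp", "open_cases": 2, "renewal_date": "2025-09-30"}]

def build_grounded_prompt(user_id: str, user_request: str) -> str:
    """Ground the prompt: inject trusted, permission-checked records as context."""
    records = fetch_permitted_records(user_id, user_request)
    context = "\n".join(str(record) for record in records)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n"
        f"Context:\n{context}\n\n"
        f"Request: {user_request}"
    )

prompt = build_grounded_prompt("user-123", "Draft a renewal email for Acme Corp.")
# The grounded prompt would then pass through masking and other guardrails
# before being sent to the LLM.
```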
Zero Data Retention and Data Masking
Crucially, the Trust Layer implements zero data retention and data masking to ensure the utmost security and privacy for customer data. Zero data retention is a strict policy where the prompts and generated responses are never stored or used to train the underlying third-party large language models, guaranteeing the data remains exclusively the customer's property. Complementing this, data masking is the process that replaces sensitive Personally Identifiable Information (PII) or proprietary business data with non-identifiable tokens before the prompt is sent to the LLM. This shields confidential information while still providing the necessary context for the LLM to generate a personalized and useful response.
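The sketch below illustrates the masking idea: detect sensitive values, replace them with non-identifying placeholder tokens before the prompt leaves the trust boundary, and map the tokens back after the response returns. The regular expressions and token format are simplified assumptions; production PII detection is far more sophisticated than this.

```python
import re
from typing import Dict, Tuple

# Illustrative patterns only; real PII detection covers many more categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask(text: str) -> Tuple[str, Dict[str, str]]:
    """Replace detected PII with placeholder tokens; keep a reversible mapping."""
    mapping: Dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def unmask(text: str, mapping: Dict[str, str]) -> str:
    """Restore the original values in the LLM's response for the end user."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked_prompt, mapping = mask(
    "Email jane.doe@example.com about her case, phone +1 415 555 0100."
)
# masked_prompt: "Email <EMAIL_0> about her case, phone <PHONE_0>."
# The LLM never sees the real values; the mapping stays inside the trust boundary.
```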
Toxicity Detection and Security Guardrails
To maintain safety and uphold ethical use, the Trust Layer includes toxicity detection and security guardrails. These features provide continuous monitoring and filtering to ensure generated content is safe, appropriate, and aligned with company and ethical standards. Toxicity detection employs advanced classification models to scan and categorize generated content in real time for signs of hate speech, bias, harassment, or other policy violations. Should harmful content be identified, the system's content filtering will automatically filter, block, or flag the output before it is displayed to the end-user, ensuring a safer and more inclusive experience for all.
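The outline below shows the general shape of such a guardrail: score a generated draft against a few toxicity categories and block or flag it before it reaches the user. The `score_toxicity` function stands in for a real classification model, and its categories and threshold are assumptions for illustration, not Salesforce-defined values.

```python
from typing import Dict

TOXICITY_THRESHOLD = 0.5  # illustrative cut-off only

def score_toxicity(text: str) -> Dict[str, float]:
    """Placeholder for a real toxicity classifier returning per-category scores."""
    # A production system would call a trained classification model here.
    return {"hate": 0.02, "harassment": 0.01, "violence": 0.00, "profanity": 0.03}

def apply_guardrail(generated_text: str) -> str:
    """Filter or pass through generated content based on toxicity scores."""
    scores = score_toxicity(generated_text)
    worst_category, worst_score = max(scores.items(), key=lambda kv: kv[1])
    if worst_score >= TOXICITY_THRESHOLD:
        # Block the output and surface a flag for review instead of showing it.
        return f"[Content withheld: flagged for {worst_category}]"
    return generated_text

safe_text = apply_guardrail("Thanks for reaching out! Here's a summary of your case...")
```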
We are in this together
At Salesforce, we believe the ethical use of advanced technology such as AI is an increasingly complex issue. It must be clearly addressed — not only by us, but by our entire industry. We welcome a multi-stakeholder dialog that includes our employees, customers, partners, and communities.
By coming together to solve emerging challenges and ensure that these new advances take diverse experiences into account, we can drive positive change with the power of AI – and drive the development of AI with the perspectives that help us make the most of human potential.
FAQs
What is secure AI?
Secure AI is AI that protects your customer data without compromising the quality of its outputs. Customer and company data are key to enriching and personalizing the results of AI models, but it's important to trust how that data is being used.
What are the top safety concerns with AI?
One of the top safety concerns of AI is data privacy and security, given that many customers don't trust companies with their data. Generative AI specifically also introduces new concerns around the accuracy, toxicity, and bias of the content it generates.
How does Salesforce keep its customers' data secure?
Salesforce keeps its customers' data secure using the Salesforce Trust Layer, which is built directly into the Salesforce Platform. The Trust Layer includes a number of data security guardrails such as data masking, TLS in-flight encryption, and Zero Data Retention with Large Language Models.
What are the benefits of trusted AI?
Deploying trusted AI will empower you to reap the benefits of AI without compromising your data security and privacy controls. The Einstein Trust Layer gives you peace of mind about where your data is going and who has access to it, so that you can focus on achieving the best outcomes with AI.
How do I get started with trusted AI?
First, understand how AI can address your business needs and identify your use cases. Next, choose a partner you can trust, with a proven track record of delivering trusted AI solutions. Lastly, determine the regulatory and compliance requirements for your specific industry to inform your approach to your trusted AI journey.