Trust is Salesforce’s #1 value

Building enterprise AI you can trust

Salesforce’s Office of Ethical and Humane Use

The AI revolution is a trust revolution. At Salesforce, we build trust into AI through governance, safeguards, and responsible, accessible product design.

Principles for trusted agentic AI

Agentic AI introduces powerful new capabilities and responsibilities. At Salesforce, we design and govern AI systems in line with principles that prioritize trust, transparency, and human oversight. These principles guide how we build and deploy our AI technologies.


Accuracy


Prioritize accurate results for agents

Develop agents with thoughtful constraints such as subagent classification: a process in which user inputs are mapped to subagents, each containing a relevant set of instructions, business policies, and tools for fulfilling the request. This gives clear guidance on which tools an agent can and cannot use on behalf of a human. When there is uncertainty about the accuracy of a response, users should be able to validate the information through citations, explainability, or other methods.
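The routing described above can be sketched in a few lines. This is an illustrative example only; the class names, keyword-based classifier, and tool allow-lists are assumptions for the sketch, not Salesforce APIs. The key idea is that each subagent bundles instructions with an explicit allow-list of tools, so the agent's capabilities are constrained by where the request is routed.

```python
# Minimal sketch of subagent classification (hypothetical names, not a
# Salesforce API). Each subagent bundles instructions and an allow-list
# of tools; a classifier maps the user input to one subagent.
from dataclasses import dataclass, field

@dataclass
class Subagent:
    name: str
    instructions: str
    allowed_tools: set = field(default_factory=set)
    keywords: set = field(default_factory=set)

    def can_use(self, tool: str) -> bool:
        # Tools outside the allow-list are unavailable to this subagent.
        return tool in self.allowed_tools

SUBAGENTS = [
    Subagent("billing", "Answer billing questions; never issue refunds directly.",
             {"lookup_invoice"}, {"invoice", "charge", "billing"}),
    Subagent("support", "Troubleshoot product issues; escalate when unsure.",
             {"search_kb", "create_ticket"}, {"error", "bug", "help"}),
]

def classify(user_input: str) -> Subagent:
    """Route the request to the subagent with the most keyword overlap."""
    words = set(user_input.lower().split())
    return max(SUBAGENTS, key=lambda s: len(s.keywords & words))

agent = classify("I was charged twice on my invoice")
print(agent.name)                       # billing
print(agent.can_use("create_ticket"))   # False: outside billing's allow-list
```

A production classifier would use a trained model rather than keyword overlap, but the constraint mechanism is the same: the chosen subagent, not the raw model, determines which tools may be invoked.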

How it shows up in product

Agentforce and Slackbot ground generated responses in verifiable data sources so users can cross-check and validate information. Powered by the Atlas Reasoning Engine, Agentforce uses subagent classification to set guardrails and support reliable responses. AI-generated responses also respect real-time access controls, so users only see what they are authorized to access.
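The combination of grounding and access controls described above can be sketched as follows. All names here are hypothetical, not the Agentforce or Atlas Reasoning Engine API: the point is that candidate sources are filtered by the requesting user's permissions *before* grounding, and citations are attached so the user can validate the answer.

```python
# Illustrative sketch (hypothetical names): filter candidate sources by
# the requesting user's real-time roles, then return grounding context
# with citations so the response can be cross-checked.
from dataclasses import dataclass

@dataclass(frozen=True)
class Source:
    doc_id: str
    text: str
    allowed_roles: frozenset

def ground(query: str, sources: list, user_roles: set) -> dict:
    # Access check happens before any content reaches the model.
    visible = [s for s in sources if s.allowed_roles & user_roles]
    # A real system would also rank `visible` by relevance to `query`.
    return {
        "context": [s.text for s in visible],
        "citations": [s.doc_id for s in visible],
    }

sources = [
    Source("kb-101", "Public reset-password steps.", frozenset({"public"})),
    Source("fin-7", "Internal revenue forecast.", frozenset({"finance"})),
]
result = ground("How do I reset my password?", sources, {"public"})
print(result["citations"])  # ['kb-101'] -- the finance doc is never exposed
```

Because filtering runs at request time against current roles, a revoked permission takes effect immediately: the model never sees, and therefore can never leak, content the user is not authorized to access.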

Responsible AI requires more than principles. It also requires governance, shared standards, and collaboration across industry and public institutions.

How the Office of Ethical and Humane Use advances trusted AI

Salesforce advances trusted AI through internal governance, responsible product development, and collaboration with industry, policymakers, and global organizations.

The commitments below highlight the standards, partnerships, and public initiatives that help guide our work. Other sections of this site explore how we build, govern, and apply responsible AI in practice.

Internal accountability

Salesforce maintains cross-functional governance structures and policies that guide how AI systems are designed, reviewed, and deployed.

Responsible product review

AI systems undergo structured testing and review before launch to identify risks, evaluate behavior, and improve system performance.

  • Responsible AI testing programs
  • Product risk assessments
  • Red teaming and scenario testing
  • Cross-functional review processes
  • Sociotechnical harms framework

Explore the trust journey

Three ways to go deeper.