The AI revolution is a trust revolution. At Salesforce, we build trust into AI through governance, safeguards, and responsible, accessible product design.
The Office of Ethical and Humane Use guides how Salesforce designs, develops, and deploys emerging technologies responsibly.
We help teams across the company identify risks, establish safeguards, and embed responsible practices into product development and operations.
Our work translates Salesforce’s #1 value of trust into accessible standards, practical policies, testing programs, and governance structures that support trusted and responsible innovation.
Principles for trusted agentic AI
Agentic AI introduces powerful new capabilities and responsibilities. At Salesforce, we design and govern AI systems in line with principles that prioritize trust, transparency, and human oversight. These principles guide how we build and deploy our AI technologies.
Prioritize accurate results for agents
Develop agents with thoughtful constraints such as subagent classification, a process that maps user inputs to subagents containing the relevant instructions, business policies, and tools to fulfill each request. This provides clear guidance on which tools an agent can and cannot use on behalf of a human. When there is uncertainty about the accuracy of a response, users should be able to validate the information through citations, explainability, or other methods.
How it shows up in product
Agentforce and Slackbot ground generated responses in verifiable data sources so users can cross-check and validate information. Powered by the Atlas Reasoning Engine, Agentforce uses subagent classification to set guardrails and support reliable responses. AI-generated responses also respect real-time access controls, so users only see what they are authorized to access.
Test systems to reduce harmful outputs and promote safe, reliable performance
Agentic systems should be designed and evaluated to reduce the risk of toxic responses or unsafe actions across different users and situations. This includes red teaming: testing systems across a range of scenarios to identify risks before they scale.
How it shows up in product
Agentforce includes built-in guardrails to reduce harmful outputs, including toxicity detection through the Trust Layer, model containment policies, and prompt instructions that limit unsafe responses before they reach the end user. These safety guardrails extend to Slack, where AI guardrails help detect and mitigate harmful or policy-violating content in real-world interactions.
Be transparent about AI interactions
People should know when they are interacting with AI systems, especially when those systems generate responses, take action, or influence outcomes. We design clear, context-appropriate disclosures that help users understand when AI is involved in meaningful ways, while avoiding unnecessary or excessive signals that could reduce clarity. We also respect data provenance by using data responsibly, honoring permissions and consent, and making it clear when AI outputs are grounded in relevant data sources.
How it shows up in product
Disclosure patterns are built into agent experiences so customers can inform users when they are interacting with AI. Agentforce Sales Development Representative and Agentforce Service Agent, for example, clearly disclose when content is AI-generated to help ensure transparency with users and recipients. Slackbot also labels AI-generated persistent content, such as canvases, documents, and images, so users know when content has been created by AI, providing additional transparency in contexts where it matters most.
Design AI to support human judgment, oversight, and accessibility
Prioritize the human-AI partnership. Agentic systems should support decision-making with well-defined, effective handoffs, keeping people accountable and in control of important outcomes. Accessibility efforts ensure these systems work in practice for the people using them.
How it shows up in product
Agentforce enables teams to delegate work to AI systems while defining when and how humans stay involved. For higher-impact actions, people remain responsible for final decisions and can review or intervene when needed. Customers can also customize these guardrails and oversight settings to match their specific use cases and risk tolerance, giving organizations control over what is delegated to AI and what requires human involvement.
Build AI systems that use resources responsibly
AI systems should be developed with attention to energy, water, and compute use. This means treating compute as a finite resource, applying efficient AI techniques where they’re most effective, and designing systems that leverage leaner architectures to reduce environmental impact while maintaining performance.
How it shows up in product
Agentic systems can route tasks to right-sized, specialized models and avoid unnecessary or repeated compute. Tools like model cards and the AI energy score help provide visibility into energy and carbon impact, enabling more informed and responsible AI deployment decisions.
Responsible AI requires more than principles. It also requires governance, shared standards, and collaboration across industry and public institutions.
How the Office of Ethical and Humane Use advances trusted AI
Salesforce advances trusted AI through internal governance, responsible product development, and collaboration with industry, policymakers, and global organizations.
The commitments below highlight the standards, partnerships, and public initiatives that help guide our work. Other sections of this site explore how we build, govern, and apply responsible AI in practice.
Salesforce has cross-functional governance structures and policies that guide how AI systems are designed, reviewed, and deployed across the company.
- Salesforce’s Cybersecurity and Privacy Committee of the Board of Directors
- Ethical Use Advisory Council
- Internal AI Use Case Reviews
- Human Rights Steering Committee
- AI Acceptable Use Policy
- Acceptable Use Policy
- Human Rights Policy
AI systems undergo structured testing and review before launch to identify risks, evaluate behavior, and improve system performance.
- Responsible AI testing programs
- Product risk assessments
- Red teaming and scenario testing
- Cross-functional review processes
- Sociotechnical harms framework
Salesforce integrates accessibility and responsible AI governance standards into product development and internal processes.
- Accessibility conformance to WCAG 2.2 AA
- AI management standard (ISO/IEC 42001)
Salesforce collaborates with industry organizations to advance responsible AI standards, transparency, and shared governance practices.
Salesforce supports international initiatives that promote transparency, accountability, and responsible development of AI technologies.
Salesforce works with governments and international organizations to help shape responsible AI governance and public policy.
Explore the trust journey
Three ways to learn more.