At Salesforce, trust is our #1 value, and it has remained at the forefront of our work from the predictive and generative eras of AI to today's agentic era. As enterprise AI shifts from assistants to autonomous agents, Salesforce is advancing this commitment with the release of our Second Annual Trusted AI Impact Report, which demonstrates our transparency and shows how we operationalize responsible AI to help empower the agentic enterprise.
As AI systems become more capable and autonomous, trust must be embedded across the entire AI lifecycle — from governance and testing to product design, deployment, and human oversight.
This year’s report continues to provide a comprehensive overview of Salesforce’s trusted AI strategy, covering the foundational principles, policies, governance structures, and technical safeguards that guide our AI initiatives.
What’s new this year: We’re sharing expanded detail on our processes for ethics review, testing, and mitigation, along with in-depth case studies that illustrate how our principles are applied in practice across AI agent design, development, and deployment. Key updates include:
Building trust directly into the platform
The report explores how Salesforce builds trust directly into the platform through Data 360, Agentforce guardrails, safety instructions, auditability, and our broader shared responsibility security framework for the era of agentic AI. This year, Agentforce, AI Platform, and Slack AI achieved certification to ISO/IEC 42001, the world's first international standard for AI management systems.
Trusted AI reviews and testing in practice
This year’s report provides a deeper look into our Trusted AI Review Process, which standardizes intake, triage, review, testing, and implementation of responsible AI mitigations. The report also includes case studies from Public Sector Solutions and Education Cloud that demonstrate how these reviews directly shape product decisions and user experiences. In FY26, Salesforce reviewed 370+ internal AI use cases and 240+ customer-facing AI capabilities prior to deployment.
Policy and governance for the agentic era
The report also includes a deep dive into our AI Acceptable Use Policy and the governance processes that help guide responsible product use. One highlight is our Individualized Advice Policy, which prohibits the use of AI services to generate or deliver individualized medical, legal, or financial advice directly to end users without qualified professional review and approval. The policy reflects Salesforce’s broader approach to human oversight in high-stakes domains, helping ensure AI systems are used to augment professional expertise rather than replace it.
Advancing accessible AI
We also highlight how Salesforce is advancing accessibility in AI through conformance to global accessibility standards, usability studies with people with disabilities, automated accessibility testing, and tools like the Accessibility Agent that help developers create and test more accessible code directly within their workflows. This year, Salesforce announced its commitment to WCAG 2.2 AA conformance.
Dive in to learn more
Beyond our products, we also illuminate our broader role in the AI ecosystem, from fostering employee success to engaging with customers, industry partners, and global government stakeholders to advance responsible AI.
Alongside the report, we are also launching a new website that brings together our work across Responsible AI, Policy, and Accessibility into one cohesive learning platform. The website is designed to help employees, customers, builders, and policymakers better understand how trusted AI is built and governed in practice.
By sharing our learnings, frameworks, and real-world examples, we hope to help advance a future where AI systems are not only innovative, but also ethical, safe, accessible, and trustworthy by design.