Chess knight on a blueprint with protractor, data cloud icon, cursor, and sparkle stars for how Salesforce governs AI responsibly.

How we govern AI responsibly

As AI moves from assistance to autonomous action, governance becomes essential. Salesforce defines clear policy boundaries, embeds oversight throughout the design and deployment of AI, and sets expectations for responsible use across teams.

Principles that guide ethical use policy

Within the Office of Ethical and Humane Use, policy frameworks translate ethical commitments into clear expectations for how customers may use Salesforce AI. The principles below shape how risks are evaluated, decisions are made, and safeguards are applied.

Scales of justice for the Human Rights principle.

Human rights

We design and deploy AI in alignment with internationally recognized human rights.

Lock with puzzle pieces for the Privacy principle.

Privacy

We build privacy by design into AI systems, facilitating responsible data use, appropriate controls, and protection of personal information.

Building blocks for the Safety principle.

Safety

We design AI to reduce harm, prevent misuse, and operate within defined safety boundaries.

Magnifying glass with eye for the Transparency principle.

Transparency

We provide clarity when AI is used and insight into how outputs are generated, enabling informed and accountable use.

Headset for the Inclusion principle.

Inclusion

We design AI to be accessible and equitable for people of all abilities.

From principles to policy

Governance only works if it is operational. Our Ethical Use Policy team runs a structured, repeatable process that translates our principles into tailored policies, use-case decisions, and safeguards and controls that protect our platforms and the people who rely on them.

Identification

Identify emerging risks, potential harms, and impacts on people, as well as gaps where new or refined policy guidance may be needed.

Analysis

Research and analyze industry standards, third-party policies, applicable laws and regulations related to the policy area, and internationally recognized human rights frameworks.

Stakeholder engagement

Gather input from across the company, customers, and external stakeholders, including civil society and domain experts, where appropriate.

Ethical use advisory council

Engage our Advisory Council to stress-test proposed policy direction and integrate wide-ranging, global feedback.

Recommendation to leadership

Present policy recommendations to senior leadership and refine direction based on executive input.

Implementation

Operationalize policies through enforcement mechanisms, contractual terms, product controls, and training.

Ongoing learning

Continuously evolve guidance based on emerging risks, technological change, and stakeholder feedback.

AI acceptable use policy

Clear rules define how customers can and cannot use AI across Salesforce products. Policies are developed and reviewed with cross-functional stakeholders and evolve as technologies and risks change. They are designed to ensure that every customer, end user, and person who interacts with our AI can trust that it is being used as intended. Below are examples of how these policies are applied; for full details, see the AI Acceptable Use Policy.

What's required

AI disclosure

People are informed when they are interacting with AI-generated content.

Qualified human review

High-impact decisions involve appropriate human oversight.

Responsible implementation

AI is used in alignment with applicable laws and regulations.

AI accountability: governance at every level

AI governance is reinforced through multiple layers of oversight, from day-to-day operational reviews to executive accountability and independent external input. These structures guide decision-making and help surface and address risk across AI systems.

Provides Board-level oversight of AI trust, cybersecurity, and privacy priorities. Receives regular updates from the Chief Ethical and Humane Use Officer and reviews key AI risk considerations.

Executive-level committee overseeing human rights impacts across Salesforce.

Cross-functional leads and subject matter experts from Legal, Ethical and Humane Use, Privacy, Employee Success, Equality, Sustainability, Procurement, and Government Affairs who operationalize human rights commitments across Salesforce.

An advisory body composed of internal and external experts guiding the Office of Ethical and Humane Use. Provides strategic guidance and counsel on policy and product recommendations. Includes representation from front-line employees, executives across the business, and external experts in academia and civil society.

Governing trusted AI FAQs

Policies are developed through a structured process that includes risk identification, research, multistakeholder input, advisory review, and leadership alignment. They are continuously updated as technologies, risks, and regulations evolve.

The AI Acceptable Use Policy, part of our broader Acceptable Use Policy for all technologies, defines clear boundaries for AI use. It outlines required practices, such as disclosure and human oversight, and restricts high-risk uses, such as fully automated, high-impact decisions.

High-risk AI use cases go through a structured review process that includes risk assessment, cross-functional input, and escalation to governance bodies when needed. These reviews evaluate potential impacts, required mitigations, and alignment with Salesforce’s AI policies before approval.

AI governance is a shared responsibility. Salesforce builds safeguards into the platform, including system-level guardrails, monitoring, and audit capabilities, and defines policies such as the AI Acceptable Use Policy that govern how our services can be used. Customers are responsible for configuring and using AI in their environments, including defining what the AI is allowed to do, setting access controls, reviewing outputs, and applying their own internal policies and compliance requirements. To support this, Salesforce provides tools that help customers monitor, control, and audit AI behavior as they manage risk and meet those requirements.
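
To make the shared-responsibility model concrete, the sketch below shows what customer-side guardrails can look like in practice: an allowlist of permitted actions, an impact threshold that routes high-impact decisions to human review, and an audit entry recorded for every decision. It is a minimal, hypothetical Python illustration; the action names, threshold, and log format are assumptions for this example and do not represent Salesforce APIs or product configuration.

    # Hypothetical sketch only: the action names, threshold, and log format are
    # assumptions for illustration, not Salesforce APIs or product configuration.
    import json
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class GuardrailPolicy:
        """Customer-defined limits on what an AI agent may do."""
        allowed_actions: frozenset = frozenset({"draft_reply", "summarize_case"})
        human_review_threshold: int = 3  # decisions at or above this impact level need review

    def evaluate_request(policy: GuardrailPolicy, action: str, impact_level: int) -> dict:
        """Apply customer guardrails to a proposed AI action and record an audit entry."""
        if action not in policy.allowed_actions:
            decision = "blocked"             # outside the configured allowlist
        elif impact_level >= policy.human_review_threshold:
            decision = "needs_human_review"  # high-impact: route to a qualified reviewer
        else:
            decision = "allowed"
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "impact_level": impact_level,
            "decision": decision,
        }
        print(json.dumps(entry))             # stand-in for writing to an audit trail
        return entry

    if __name__ == "__main__":
        policy = GuardrailPolicy()
        evaluate_request(policy, "draft_reply", impact_level=1)   # allowed
        evaluate_request(policy, "draft_reply", impact_level=4)   # routed to human review
        evaluate_request(policy, "issue_refund", impact_level=2)  # blocked: not on the allowlist

Running the sketch prints one audit record per request, mirroring the policy emphasis on qualified human review for high-impact decisions and on auditability after the fact.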

AI governance is supported by multiple layers of oversight, including executive leadership and cross-functional governance bodies. These groups guide decision-making, review risk, and hold teams accountable for how AI systems are designed, deployed, and used.

Governance continues after deployment through monitoring, audit trails, and ongoing review. Observability tools provide visibility into system behavior, and governance bodies review performance and emerging risks over time.

AI systems are evaluated through structured testing, including red teaming, benchmarking, and scenario-based validation. These processes help identify risks and improve system behavior before and after deployment.