Responsible AI in practice

Real examples of how responsible AI is applied across products. Each case shows the risk identified, what changed, and how those changes improved outcomes, informed by our trusted AI reviews and governance processes.

Education cloud

Student recruitment agent

AI-powered transfer credit evaluation to support faster, self-service student admissions workflows. Course articulation is the process by which an institution maps its courses or requirements to coursework completed at another institution. Students rely on course articulation to confirm that courses they have completed, or intend to complete, will not need to be repeated at the institution they are transferring to.

What we learned

  • Accuracy: Training data may include less complete articulation information for smaller institutions, which can reduce the accuracy of transfer credit evaluations.
  • Accuracy: Minor differences in course names, numbering, or format can lead to incorrect “no match” outcomes.
  • Empowerment: Students may misinterpret AI-generated estimates as final decisions, especially when they are unfamiliar with academic terminology.
What we changed

  • Improved matching accuracy by shifting from fuzzy matching to structured data models.
  • Added clear messaging that frames outputs as estimates and prompts formal human review.
  • Introduced confirmation steps and customizable glossaries to reduce confusion and support student understanding of the system’s results.
Outcome

More accurate credit evaluations, with clearer user understanding and better-informed decision-making as students consider transfer options.

Public sector

Complaint agent

AI-powered complaint intake and routing for public sector services.

What we learned

  • Accuracy: Inaccurate data extraction can lead to flawed case records and misdirected investigations.
  • Safety: Handling sensitive topics, such as reports of abuse, without proper guardrails can cause harm to users or produce inappropriate responses.
  • Empowerment: Default configurations risked being deployed without proper customization, limiting effectiveness for constituents.
What we changed

  • Implemented rigorous extraction testing using diverse and emotionally complex inputs to validate accuracy.
  • Introduced clear human handoff options at any point in the experience, including fallback paths for system limitations.
  • Restricted sensitive complaint types and removed predefined configurations to require intentional customer customization.
Outcome

Improved accuracy and safer handling of sensitive cases, with stronger protections for vulnerable users and more consistent, reliable routing to the appropriate human responders.

Regulated industries

Individualized advice policy

Policy to prevent AI from delivering regulated, individualized advice without qualified professional oversight.

What we learned

  • Safety: AI-generated medical, legal, or financial advice without oversight could lead to real-world harm.
  • Compliance: These domains require licensed professionals under established legal frameworks.
  • Trust: Users may over-rely on AI outputs in high-stakes situations without understanding limitations, especially when advice appears tailored to their situation.
What we changed

  • Established a policy prohibiting customers from using AI to generate individualized advice that would typically be provided by a qualified professional.
  • Defined clear boundaries for allowed use cases, such as summarization or support for professionals, rather than direct end-user advice.
  • Aligned policy with existing legal and regulatory standards across financial, medical, and legal domains.
Outcome

Reduced risk of harm in high-stakes use cases while enabling AI to support professional expertise, not replace it.

Agentforce accessibility

Co-creating AI with users with disabilities

User research and usability testing with people with disabilities to improve Agentforce Chat and Agentforce Voice experiences.

What we learned

  • Inclusion: Standard interaction patterns can create barriers for users with disabilities — especially across chat and voice interfaces, where users rely on different input methods, pacing, and feedback cues.
  • Usability: In both chat and voice experiences, response structure matters. Long, dense, or poorly paced responses increase cognitive load, while unclear turn-taking in voice interactions creates friction and uncertainty.
  • Trust: Users with disabilities quickly recognize when accessibility has been intentionally designed — and when it has not — reflected in how naturally and comfortably they can interact with the agent.
What we changed

  • Conducted in-depth usability studies with people using assistive technologies across Agentforce Chat and Voice, validating interaction patterns with real users in real-world scenarios.
  • Refined response design to improve clarity, structure, and pacing across both written and spoken interactions.
  • Introduced more intuitive voice interaction patterns, including clearer conversation flow and feedback, to support more natural, accessible conversations.
Outcome

More usable, accessible Agentforce Chat and Voice experiences that users with disabilities can recognize and navigate with confidence — improving outcomes for all users.

Development workflows

Accessibility agent

AI-powered tooling to help developers identify and fix accessibility issues during development.

What we learned

  • Inclusion: Accessibility issues are often identified too late in the development process, increasing effort and limiting impact.
  • Empowerment: Developers need integrated, real-time guidance to build accessible experiences effectively.
What we changed

  • Embedded accessibility checks directly into development workflows, enabling early detection and agent-guided resolution recommendations in real time.
  • Enabled accessibility detection across our automated testing suite, where an A11Y Agent can discover issues, recommend fixes, and deliver the recommendations to the right engineer for their approval.
Outcome

Accessibility becomes a built-in part of how products are developed, improving consistency and enabling more inclusive user experiences at scale.

Content safety

Slack content policy

Content safety enforcement and platform integrity operations across the Slack platform.

What we learned

  • Safety: Platforms at Slack's scale require proactive — not just reactive — detection of illegal content, including child sexual abuse material, to maintain platform integrity and protect users.
  • Policy and Product Strategy: When policy and safety operations are not developed alongside product features, platform safety and integrity can lag behind product innovation — especially with AI-powered capabilities.
  • Platform Integrity: Without dedicated enforcement infrastructure, policy commitments are aspirational rather than operational.
What we changed

  • Implemented proactive scanning for both known and AI-generated child sexual abuse material across Slack, with confirmed violations reported to the National Center for Missing and Exploited Children in compliance with federal law.
  • Built and maintained a dedicated safety and enforcement function that monitors for platform abuse, responds to user reports, and enforces Salesforce's Acceptable Use Policy at scale.