Beyond the Algorithm: Learn How to Build Trusted AI With Trailhead

With the new Responsible Creation of Artificial Intelligence module, we aim to empower developers, designers, researchers, writers, and product managers to build AI in a responsible and trusted way.

With the advent of low-code developer tools, people of varying skill levels and backgrounds — not just those with PhDs — are able not only to leverage the benefits of artificial intelligence (AI), but also to build intelligent systems.

As access to AI widens and the depth of its impact begins to reveal itself, we are faced with new questions about how to ensure the future of AI is responsible, accountable, and fair — even when it’s built by people without technical training or AI expertise. It’s critical that anyone building AI consider the domino effect that AI-driven outcomes can have on people and society, intended or not. The first step in building responsible AI solutions is to understand the biases that can lurk within its models and training data, a big challenge in its own right.

This is why I’m excited to announce that Salesforce has published the Responsible Creation of Artificial Intelligence module on Trailhead to empower anyone to skill up on how to build responsible and fair AI.

New Trailhead module: Responsible Creation of Artificial Intelligence

With Trailhead, Salesforce’s free online learning platform, more than 1.4 million people have already skilled up on the technology of tomorrow, earning 14 million badges on everything from blockchain to AI and beyond. With the new Responsible Creation of Artificial Intelligence module, we aim to empower developers, designers, researchers, writers, product managers — everyone involved in the creation of AI systems — to learn how to use and build AI in a responsible and trusted way, and to understand the impact it can have on end users, business, and society.

By helping people learn what AI bias is and how to avoid it, we can build a better future with AI.

After completing the module, people will be able to:

  • Explore the ethical and humane use of technology
  • Understand AI
  • Recognize bias in AI
  • Remove exclusion from your data and algorithms

Building a better future with responsible AI

The ability to build and deploy an AI model at scale in a matter of days, versus weeks or months, means that more people are able to access the benefits of AI faster than ever. For example, with Einstein Platform Services, any developer or admin can build a custom AI model in a matter of clicks to predict any business outcome. But we recognize that it is not enough only to democratize AI; we have a responsibility to ensure that the technology we deliver is used appropriately and responsibly.

We believe AI has great potential to improve the state of the world, and we are committed to making the benefits of AI accessible to everyone, including the tools necessary to use these innovations appropriately. Through the Responsible Creation of Artificial Intelligence module, people of all skill levels and backgrounds will be able to learn the fundamentals of using the power of AI for good. Education is a critical component of AI’s success, and as the technology continues to evolve, so will the tools and support we provide to our employees, customers, and partners to help them keep pace with innovation.

With the right tools, everyone can be a steward of responsible AI. Check out the new module to get started.

Kathy Baxter
Principal Architect, Ethical AI Practice

As principal architect of ethical AI practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Before Salesforce, she worked in user experience research at Google, eBay, and Oracle. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding Your Users," was published in May 2015. You can read about her current research at 
