Technology Ethics

How Salesforce Infuses Ethics into its AI

What happens if AI incorrectly identifies someone due to inaccuracies in facial recognition technology? Companies must recognize, prepare for, and mitigate such unintended effects.

Kathy Baxter

“I’m constantly thinking about what the potential intended and unintended consequences are of what we’re developing,” Kathy Baxter explains from her home office in Fremont, California. As Salesforce’s Architect of Ethical Artificial Intelligence Practice, overanalyzing is practically part of Baxter’s job description. Because while AI ranks atop the most promising technologies of the era, there are also plenty of ways it can go wrong.

“What happens when AI incorrectly identifies someone in a criminal investigation due to inaccuracies in AI-based facial recognition technology? Or when you lose a job because a hiring AI system believes you have low EQ? Or when voice recognition in a bank’s customer-service system can’t decipher a caller’s accent?” Baxter rattles off a string of pitfalls that can arise when AI lacks an ethical underpinning.

For all the good that AI can bring, responsible tech companies understand they must recognize, prepare for, and mitigate the potential unintended, harmful effects. That’s why Salesforce sees ethics as foundational to AI — and why we’re sharing a closer look at how we infuse an ethical process into our AI.

The backstory: ethical AI

Salesforce’s commitment to becoming an ethics-focused AI company started when CEO Marc Benioff stated his AI vision for the company. “Einstein, the first comprehensive AI solution for CRM, will give our customers the ability to experience the power of AI right inside the applications they use every day,” said Benioff. Part of that vision was also to build AI that customers could trust.

Baxter, who at the time worked as a Salesforce User Experience Researcher, was inspired by this vision and began helping Einstein product teams identify potential ethical risks. Recognizing the need for a dedicated team, in 2018 Baxter wrote a job description and soon found herself with a new role. By the end of the first year, the groundwork had been laid for what would become Salesforce’s Trusted AI Principles: a commitment to developing AI that’s responsible, accountable, transparent, empowering, and inclusive.

Expanding the ethics umbrella: The Office of Ethical and Humane Use

Before long, Salesforce broadened its focus on ethics, founding its Office of Ethical and Humane Use. In 2019, the company hired Paula Goldman as Chief Ethical and Humane Use Officer to focus on “Ethics by Design” – incorporating ethical principles into the process of designing, building, and selling all Salesforce software and services.

The Trusted AI Principles became the basis for building trusted AI around three pillars: employee engagement, product development, and empowering customers.

From academic research on topics like bias in AI, to principles and processes that infuse ethics into the product development lifecycle, this organizational shift has been a company-wide effort to generate trusted AI and foster a culture of cross-examination of Salesforce’s products and their impact on stakeholders.

Salesforce’s Trusted AI Principles

Here’s a look at how each pillar takes on the challenge of ethical AI:

1. Employee engagement to support ethical AI

This work began when Baxter started talking with Einstein teams that, like her, were passionate about building ethics into Salesforce products. Together, they worked to identify the risks of these products.

For example, one common insight was that customers needed to understand why the AI made a given recommendation or prediction before they would trust its verdict. Different user types, however, bring different levels of expertise. Einstein Discovery users are more likely to be data scientists or statisticians, so they want to see every factor used in a model, how strong each factor is, and in which direction it pushes the prediction.

An example of a prediction explained via Einstein Discovery Story Insights

Einstein for Sales Cloud, on the other hand, is often used by sales reps without a background in data science or statistics, who might be overwhelmed by the level of detail shown to Einstein Discovery users. For each product, teams needed to work out how to make predictions interpretable to these disparate user types, inspiring confidence rather than confusion.
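To make that contrast concrete, here is a minimal sketch of rendering the same model explanation at two levels of detail: full factor weights and directions for an analyst, and a one-line summary for a sales rep. This is illustrative only, not Salesforce or Einstein code; the model, field names, and data are assumptions.

```python
# Hypothetical sketch: one model explanation, two audiences.
# Not Salesforce/Einstein code; the model, field names, and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[5, 1, 0.2], [2, 0, 0.9], [8, 1, 0.1], [1, 0, 0.8]])
y = np.array([1, 0, 1, 0])  # 1 = opportunity won
features = ["meetings_held", "exec_sponsor", "days_since_contact"]

model = LogisticRegression().fit(X, y)

# Analyst view: every factor, its strength, and its direction.
for name, coef in zip(features, model.coef_[0]):
    direction = "increases" if coef > 0 else "decreases"
    print(f"{name}: weight={coef:+.2f} ({direction} win likelihood)")

# Sales-rep view: only the single strongest driver, in plain language.
top = max(zip(features, model.coef_[0]), key=lambda fc: abs(fc[1]))
print(f"Top factor: '{top[0]}' is the biggest driver of this prediction.")
```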

In order to build a culture in which employees have the right mindset to create ethical products, Salesforce offers programs to help employees put ethics at the core of their respective workflows. These programs are designed to empower the entire organization to think critically about every step of the process of building AI solutions.

Yoav Schlesinger, Principal in the Ethical AI Practice, agrees. “Having a sense of ethics embedded culturally across the organization enables our success. Developing what I call a ‘moral muscle memory’ allows more risk spotters, enables more people to engage in tough conversations with their teams, and shifts the responsibility for ethics from one central hub to the teams building our products.”

For example, a new-hire “bootcamp” trains employees to cultivate an ethics-by-design mindset from the very start of their Salesforce career. Salesforce also provides comprehensive employee resources, such as training based on the Institute of Electrical and Electronics Engineers (IEEE) publication Ethically Aligned Design, which focuses on the most important ethical concepts anyone who builds autonomous and intelligent systems should know.

Baxter notes that a key part of the employee-engagement strategy is involving internal and external experts in conversations, so that employees are challenged by a mix of perspectives. For example, before publishing potentially high-risk research and code, teams are encouraged to seek feedback from a range of external ethics and domain experts on whether it is safe to publish, whether all potential risks have been identified, and whether proper mitigation strategies are in place. The most recent example is the publication of Salesforce’s AI Economist research, for which Baxter engaged external experts prior to release.

2. Product development to support ethical AI

“We want our teams to ask themselves questions such as, ‘How do our products live in the world? How do they impact customers and society? How do we make it as easy as possible for our customers to do the right thing, and to make it as hard – or even impossible – for them to do the wrong thing?’” Baxter explains.

Sometimes it’s as simple as articulating a series of accountability questions at the beginning of the product cycle to keep ethics by design top of mind. Another tool is “consequence scanning,” an exercise that asks participants to envision potentially unintended outcomes of a new feature and ways to mitigate harm. The framework has been adopted across all product teams through the Office of Ethical and Humane Use’s ethics-by-design process, and it is applied to AI as well, helping teams think creatively about potential problems and how to mitigate risk for customers.
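As one way to picture the output of such an exercise, here is a hedged sketch of a simple record a team might fill in while consequence scanning. The structure and field names are hypothetical, not a Salesforce artifact.

```python
# Hypothetical template for capturing consequence-scanning output.
# Structure and field names are illustrative, not a Salesforce artifact.
from dataclasses import dataclass, field

@dataclass
class ConsequenceScan:
    feature: str
    intended: list[str] = field(default_factory=list)
    unintended: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

scan = ConsequenceScan(
    feature="AI-based lead scoring",
    intended=["Reps prioritize likely buyers"],
    unintended=["Model may deprioritize leads from underrepresented regions"],
    mitigations=["Audit score distribution by region before launch"],
)
print(scan)
```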

Baxter and Goldman discuss ethics with TechRepublic

Another ethics checkpoint is the dedicated Data Science Review Board (DSRB), which encourages and enforces best practices in data quality and model building across the organization. From prototype to production to product, the DSRB helps gauge whether teams are effectively removing bias from training data, understanding where any unintended biases may have crept in, and mitigating similar scenarios in the future. The review board, managed by leaders across research and data science, partners with teams to create transparency into how they collect the data used by machine-learning algorithms.
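To illustrate the kind of check a review board might request, here is a hedged sketch of a demographic-parity audit on training labels: it compares favorable-outcome rates across groups and flags large gaps. The data, column names, and the 0.8 cutoff (the common “four-fifths” rule of thumb) are assumptions for illustration, not documented DSRB practice.

```python
# Hypothetical sketch of a training-data bias check a review board might ask for.
# Column names and the four-fifths-rule threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "label": [1,   1,   0,   1,   0,   0,   0],  # 1 = favorable outcome
})

rates = df.groupby("group")["label"].mean()  # favorable-outcome rate per group
ratio = rates.min() / rates.max()            # disparate-impact ratio

print(rates)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Flag for review: selection-rate ratio {ratio:.2f} is below 0.8")
```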

In one case, the ethics teams worked with the Marketing Cloud Einstein team to move from demographic-based targeting to interest- and behavior-based targeting, to prevent bias against groups of people. For example, although more women than men may buy makeup, targeting makeup ads only to women is neither inclusive nor as accurate as it could be, because it excludes men who buy or wear makeup as well as transgender and non-binary customers.
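A hedged sketch of what that shift can look like in code: rather than filtering an audience on a demographic column, the segment is built from interest and behavior signals. All columns and data here are hypothetical, not Marketing Cloud functionality.

```python
# Hypothetical sketch: demographic vs. behavior-based audience selection.
# All column names and data are illustrative, not Marketing Cloud APIs.
import pandas as pd

contacts = pd.DataFrame({
    "id":                  [1, 2, 3, 4],
    "gender":              ["F", "M", "F", "X"],
    "viewed_makeup_pages": [3, 5, 0, 2],
    "bought_cosmetics":    [True, True, False, False],
})

# Demographic targeting: excludes interested customers outside one group.
demo_audience = contacts[contacts["gender"] == "F"]

# Interest/behavior targeting: includes anyone whose signals show interest.
behavior_audience = contacts[
    (contacts["viewed_makeup_pages"] >= 2) | contacts["bought_cosmetics"]
]
print(behavior_audience["id"].tolist())  # [1, 2, 4]: interest, not gender
```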

3. Empowering customers with ethical AI

Perhaps most importantly, the end user of the product must have all the tools needed to use AI responsibly. To that end, Salesforce builds a number of features into its products to help guide the customer toward ethical choices.

Einstein Discovery contains a feature called “sensitive fields” that allows an admin to flag fields such as age, race, or gender as “sensitive,” meaning there may be regulatory restrictions on their use or they may introduce bias into a model. Einstein then looks for fields that are highly correlated with the sensitive fields, called proxy variables, and flags those for the admin to review. Take ZIP code, for example, a factor that is often highly correlated with race in the US. While ZIP codes may be a helpful variable for a college admissions office with a mandate to admit in-state students, ZIP codes used by a bank to determine loan eligibility could result in bias against a particular race. The admin can then decide whether to exclude those sensitive fields and their proxies from the model.
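The proxy-detection idea can be illustrated with a small sketch: compute how strongly each remaining field correlates with a flagged sensitive field and surface anything above a threshold for the admin to review. This is a simplified stand-in for the behavior described above; the data, column names, and 0.7 cutoff are assumptions, and a production system would also handle categorical fields and more robust association measures.

```python
# Hypothetical sketch of flagging proxy variables for a sensitive field.
# Data, column names, and the 0.7 cutoff are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "race_group":   [0, 0, 1, 1, 0, 1],    # sensitive field (encoded)
    "zip_code_enc": [0, 0, 1, 1, 0, 1],    # strongly tracks race here
    "income":       [50, 62, 48, 51, 70, 49],
})

sensitive = "race_group"
for col in df.columns.drop(sensitive):
    corr = df[col].corr(df[sensitive])
    if abs(corr) >= 0.7:
        print(f"Possible proxy for {sensitive}: {col} (corr={corr:+.2f})")
```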

In addition to these features that identify and mitigate ethical risk, Trailhead learning modules help customers better understand the products they’re using, what trusted AI means, and how they can champion implementing it in an ethical way. The goal, says Baxter, is to be as transparent as possible about how an AI model was built so the end user has a better sense of the safeguards in place to minimize bias. Salesforce has also started publishing model cards for global models: models trained on data combined from multiple sources so that a single predictive model can serve many customers.
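Because a model card is essentially structured documentation, a minimal sketch of the fields one might record (loosely following the “Model Cards for Model Reporting” proposal by Mitchell et al.) could look like the following. Every value is a placeholder, not a description of any real Salesforce model.

```python
# Hypothetical minimal model card, loosely after Mitchell et al. (2019).
# Every value here is a placeholder, not a description of a real model.
model_card = {
    "model_details": {"name": "lead-scoring-global", "version": "0.1"},
    "intended_use": "Rank sales leads; not for employment or credit decisions",
    "training_data": "Pooled, anonymized CRM records from consenting orgs",
    "evaluation": {"metric": "AUC", "value": 0.81, "disaggregated_by": ["region"]},
    "ethical_considerations": "Sensitive fields excluded; proxies reviewed",
    "caveats": "Performance may degrade for orgs unlike the training pool",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```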

“As a B2B brand, we have an even greater responsibility because of the number of people our customers are also reaching,” says Schlesinger. “Safeguarding our customers’ customers’ privacy and ensuring their interests are also accounted for is a huge responsibility for us, and not one we can or should take lightly.”

Ethical AI in the COVID-19 era and beyond

As organizations turn to technology in their response to COVID-19, it’s more important than ever that we continue advancing AI that is safe and inclusive. A study in BMJ found that of 66 models designed to predict the risk of COVID-19 infection, all performed poorly, were at high risk of bias, or were too optimistic about expected performance.

“The use of AI to fight COVID-19 invites the question: Do organizations have the right resources and proper guardrails to ensure they’re using AI responsibly?” Baxter asks.

Salesforce’s own journey shows that infusing ethics into its AI is a work in progress and a non-linear process. It involves a cultural shift, design thinking, and engagement with employees, customers, stakeholders, and community.

“We all have a responsibility to ask not just, ‘Can we do this?’ but ‘Should we do this?’” Baxter explains. “By rooting our approach in our core values — trust, customer success, innovation, and equality — we hold ourselves accountable for the technology that we are creating and implementing at Salesforce and beyond.”

For a closer look at ethics across Salesforce, read How Salesforce is Building a Culture of Ethical Technology.
