
Business as a Platform for Change

How Salesforce Is Building a Culture of Responsible Technology — and Why It Matters


Learn how our Office of Ethical and Humane Use is approaching today’s most urgent questions around the responsibility of tech, and how we started on this journey.

What is ethical and humane use?

This was the question on our employees’ minds when our CEO and Chairman Marc Benioff announced a new department in 2018. The Office of Ethical and Humane Use, created as part of our Office of Equality, was formed with a mandate to ensure that our technology was used to help, not harm, society and uphold the basic human rights of every human being. As Benioff said at the time, “Our industry has reached an inflection point that must be supported by a strong set of guiding values… we know technology isn’t inherently good or bad; it’s what you do with it that matters. And that’s why we’re making the Ethical and Humane Use of technology a strategic focus at Salesforce.”

The inflection point he referred to included the way technology has become inextricably intertwined in our daily lives, and how technology companies were being called upon by consumers to prioritize the impact of their technology on society — as they continue to do today.

Shortly after the announcement, we hired our first-ever Chief Ethical and Humane Use Officer, Paula Goldman, who came to us from Omidyar Network, where she served as Vice President and Global Lead of the Tech and Society Solutions Lab. She shares, “When I arrived, there was already an incredible foundation created by the Equality team, including an ethical use framework shaped by our employees.” Goldman explains, “We have to remember that ethics is not a checklist, it’s a mindset — the prize is a culture in which everyone owns thinking through the consequences of our technology. And so we began on a path of scaling such a culture in the company.”

Why does this matter?

As Goldman said, ethics is a mindset, not a checklist. But how do we develop this muscle when most product organizations are trained to be “sprint-oriented”, with clear goals and checklists to help get there?

“Well first, we had to educate on the why — why does this matter?” Goldman explains. “To understand the why, we have to turn to history and learn from a pivotal moment in our past. We’ve been here before: the tech industry faced similar issues in the ’80s, an exciting time with the onset of personal computers and the consumer internet. But along with all the wonderful things that PCs and the internet brought came a proliferation of worms and viruses exploiting existing vulnerabilities, forcing the industry to rethink, with urgency, its security practices and its responsibility. The result was new standard protocols and best practices, like mandatory red-teaming,” said Goldman.

Today, the technology industry is in the middle of another pivotal moment. As a society, we’ve become increasingly connected, and the potential for technology to make our lives better is evident. However, that connection has also highlighted new potential harms. For example, it has come to the forefront that some of the algorithms on which key decisions or policies are built are biased. Incidents like voice recognition software that struggled to understand women, or an algorithm making loan decisions based on an applicant’s race and neighborhood, have illuminated the danger of creating technology without considering the potential consequences.

At Salesforce we anchor on the principle of “intention vs. impact” in the work that we do — meaning regardless of our intent, it’s the impact of our actions that truly matters. In order to hold ourselves accountable, maintain responsibility, and create trust, we create space for courageous, difficult conversations about the potential impact of our technologies while building thoughtful processes to scale.

Creating a trusted process

As Goldman says, “The question that often emerges when talking about ethical and humane use is — whose ethics?” As companies, we serve multiple stakeholders with many different perspectives — and we recognized early on that those stakeholders need to have a seat at the table to help guide us through these complex decisions.

Establishing an Advisory Council

With this in mind, one of the Office of Ethical and Humane Use’s first acts was to set up an advisory council composed of diverse frontline and executive employees, as well as academics, industry experts, and civil society leaders.

“The Council is a critical component of our office and our policy making process,” says Rachel Gillum, Director of Ethical and Humane Use, who helps manage the Council. “We meet regularly with this group to share key issues we are grappling with and receive feedback that makes our work better. Our Council engages in rigorous debate and detailed discussions that help us to consider a wide set of perspectives, catch deficiencies, and avoid unintended consequences of technological and policy decisions. They don’t always agree and that’s actually the beauty — it helps us make thoughtful, informed, measured, and inclusive decisions.”

Building a framework

The team also recognized they needed a framework to guide the process on difficult topics. “When we started down this journey, we realized that our company’s core values of trust, customer success, innovation, and equality — while incredibly important in our guidance — weren’t specific enough to help us navigate the difficult situations that arise.” Goldman shares, “We surveyed employees to understand how they view the ethical and humane use of technology, and worked closely with Salesforce leadership, our Advisory Council, and external consultations to develop our Guiding Principles: human rights, privacy, safety, honesty, and inclusion.”

“Our goal is to guide difficult decision making informed by research, data, and ethical principles.” – Paula Goldman, Chief Ethical and Humane Use Officer

When a question gets raised about an issue, the Office of Ethical and Humane Use process includes communicating with relevant stakeholders, such as members of the affected community and experts in the industry. The team then runs a structured analysis, which includes understanding the ethical framing, the customer use cases, multi-stakeholder learnings, and even counter-perspectives before arriving at a recommendation. It is this framework that has enabled Salesforce to create new policy safeguards — such as mandating that Salesforce bots cannot be used in a way that misleads end users into thinking they are interacting with human beings, and prohibiting the use of our e-commerce platform for the sale of assault weapons to private individuals.

All stakeholders and employees are able to report concerns through various channels, such as a confidential hotline, office hours, an internal social media page (“Chatter”), and a direct email handle. “This is part of fostering our culture of transparency and providing people the space to feel comfortable raising questions,” Goldman explains.

Putting the process into action: ethics by design

In pushing technology to be a force for good, it is essential to empower our engineering and product teams to consider the impact of what they create. This is where the principle of ethical design comes into play. “Teams apply the ethical use principles in how they design, develop, and deliver Salesforce products and services. This helps mitigate risk, but it also helps teams position our product as being built with ethics in mind from the ground up,” shares Rob Katz, Senior Director of the Office of Ethical & Humane Use. The Office partners with tech and product teams to foster ethics by design by focusing on three key processes:

1. Start with a risk assessment framework

Help teams think through the different types of risks that might be associated with a product. “We want teams to be able to apply a tool that could help them grade risk levels and decide, ‘There’s a risk this feature will lead to negative outcomes. We won’t ship the product until it’s been tweaked,’” says Senior User Experience Researcher Emily Witt.
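Salesforce hasn’t published the internal tool Witt describes, but a minimal sketch of a risk-grading rubric might look like the following. The dimensions mirror the Guiding Principles named earlier in this article; the scoring scale and the ship/review/block thresholds are illustrative assumptions, not the actual framework.

```python
# Illustrative sketch of a feature risk-grading rubric.
# The scoring scale and thresholds are hypothetical, not
# Salesforce's actual assessment framework.

RISK_DIMENSIONS = ["human_rights", "privacy", "safety", "honesty", "inclusion"]

def grade_feature(scores: dict) -> str:
    """Grade a feature from per-dimension risk scores (0 = none, 3 = severe)."""
    missing = set(RISK_DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    worst = max(scores.values())
    if worst >= 3:
        return "block: do not ship until mitigated"
    if worst == 2:
        return "review: escalate for ethics review before shipping"
    return "ship: document residual risk"

print(grade_feature({"human_rights": 0, "privacy": 2, "safety": 1,
                     "honesty": 0, "inclusion": 1}))
# -> review: escalate for ethics review before shipping
```

Grading on the worst single dimension, rather than an average, reflects the idea in Witt’s quote: one sufficiently risky outcome is enough to hold a feature back.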

2. Build training and awareness

Empower employees by incorporating concrete examples of what an ethics by design mindset is into trainings and onboardings. One of the ways Salesforce has done this is through Consequence Scanning, a process where participants are asked to consider all the potential consequences of their product or service on people and communities, and how to mitigate potential harms.

“This process is useful in prompting our teams to think about potential problems they might not be seeing,” says Senior User Experience Researcher Emily Witt. “That then helps them think creatively about solving those problems and building in a risk-mitigation process for our customers.”

3. Work deeply with product teams

Working deeply in consultation with product teams will allow you to anticipate and mitigate ethical questions before they become issues.

Ethics by Design processes have led to a number of feature-level breakthroughs that make it easier for Salesforce customers to use technology responsibly. For example, algorithms can perpetuate harmful stereotypes if left unchecked, which is why Salesforce Einstein, our artificial intelligence platform, has features baked in that help users identify and address bias in their predictions. “When Salesforce customers use Einstein Prediction Builder to build customized AI predictions, we encourage users to take an online course in ethical AI through our training platform,” explains Kathy Baxter, Architect, Ethical AI Practice.

Another example that emerged from this work is a feature in Salesforce’s Education Cloud that lets students choose what personal information they disclose to different instructors or administrators, so they can receive tailored support while maintaining their privacy. It reflects the power of designing for the end user with an inclusive lens.
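The mechanics of such a selective-disclosure feature can be sketched as a per-field consent filter. The field names and viewer roles below are hypothetical illustrations, not the actual Education Cloud data model.

```python
# Hypothetical sketch: show each viewer only the fields the student
# has consented to share with that viewer's role.

student_record = {
    "name": "Jordan",
    "accessibility_needs": "extended exam time",
    "financial_aid_status": "grant recipient",
}

# Per-field consent chosen by the student (illustrative roles).
consent = {
    "accessibility_needs": {"instructor", "disability_services"},
    "financial_aid_status": {"financial_aid_office"},
}

def visible_fields(record: dict, consent: dict, viewer_role: str) -> dict:
    """Return only the fields the viewer's role may see; 'name' is always shared."""
    return {
        field: value
        for field, value in record.items()
        if field == "name" or viewer_role in consent.get(field, set())
    }

print(visible_fields(student_record, consent, "instructor"))
# -> {'name': 'Jordan', 'accessibility_needs': 'extended exam time'}
```

The key design choice is that disclosure defaults to closed: a field is hidden unless the student has explicitly granted the viewer’s role access to it.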

How ethics guides us through crises

As the COVID-19 pandemic continues to permeate all aspects of our lives, it is more important than ever to create and build responsible technology, especially as we’re experiencing multiple intersecting crises. There’s the health crisis and economic crisis — both of which disproportionately impact marginalized communities. Salesforce, like many other companies, moved quickly to leverage technology to help the world recover through action.

Working in close partnership with the Privacy Legal and Product teams, the Ethical Use team extensively explored the responsible creation of technology in the context of a pandemic. Salesforce quickly developed technology to help communities and businesses stay resilient, build trust with customers and employees, and protect health and safety — and the teams also made sure they were thinking through the impact on vulnerable groups, embedding timely Privacy and Ethical Use principles into key product design decisions.

“We partnered with teams to identify risks and opportunities throughout the product development lifecycle, marrying speed with thoughtfulness and differentiating the product through the amount of ethical care and concern that went into its development,” Katz shares. “We designed safeguards against ways the platform could be unintentionally misused, for example, to discriminate against workers based on protected characteristics such as race, gender, and disability status. We also partitioned sensitive data to preserve employees’ privacy and keep access on a need-to-know basis.”
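The need-to-know partitioning Katz describes can be sketched as a field-level access check: sensitive fields are readable only by roles explicitly granted access. The field classifications and roles below are assumptions for illustration, not the actual product’s schema.

```python
# Illustrative need-to-know partition for sensitive employee data.
# Field classifications and roles are hypothetical.

SENSITIVE_FIELDS = {
    "health_status": {"wellness_admin"},
    "disability_status": {"wellness_admin"},
    "work_location": {"wellness_admin", "shift_manager"},
}

def read_field(record: dict, field: str, role: str):
    """Return a field's value, refusing access when the role lacks need-to-know."""
    allowed = SENSITIVE_FIELDS.get(field)
    if allowed is not None and role not in allowed:
        raise PermissionError(f"role {role!r} lacks need-to-know for {field!r}")
    return record[field]

employee = {"name": "Sam", "health_status": "cleared", "work_location": "HQ"}
print(read_field(employee, "work_location", "shift_manager"))  # -> HQ
```

A shift manager can see where an employee is working, but an attempt to read `health_status` with that same role raises `PermissionError` — the partition, not the caller’s discretion, enforces the boundary.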

How you can adopt ethical use in your organization

As we enter this next chapter in history, consider how your technology will work to make the world a better, more inclusive place for everyone. “It’s not about finding an immediate solution,” says Goldman, “but rather enabling individuals and teams to start asking the right questions, have courageous conversations, and practice this mindset in everything they do.”

Elevate diverse perspectives

These are uncharted times, and while tech has an increased responsibility and need to act quickly, we have to be thoughtful about who’s at the table when making decisions, designing, and reviewing our products.

Everything we create represents our values, experiences, and biases, and it’s important that an ethical culture begins with a diverse team. You should strive to gather a wide range of perspectives across experience, race, gender, religion, ability status, and other aspects of identity, and elevate them at every stage of development so that you can build solutions that are inclusive and beneficial for all.

Make room and encourage difficult conversations

Now more than ever we have to understand the unintended consequences of the products we put into the world, and this starts with asking hard questions and having difficult conversations. There are three questions the Office asks in their Consequence Scanning training sessions that tend to be particularly relevant:

  • What are the intended and unintended consequences of this product or feature?
  • What are the positive consequences we want to focus on?
  • And what are the consequences we want to mitigate?
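One lightweight way to put these three questions into practice is to capture them as a structured worksheet per feature. The structure below is a sketch of that idea — my assumption, not the Office’s actual Consequence Scanning template.

```python
from dataclasses import dataclass, field

# Sketch of a Consequence Scanning worksheet as a structured record.
# The fields mirror the three questions above; the format is illustrative.

@dataclass
class ConsequenceScan:
    feature: str
    intended: list = field(default_factory=list)     # consequences we designed for
    unintended: list = field(default_factory=list)   # consequences we might not be seeing
    to_amplify: list = field(default_factory=list)   # positives to focus on
    to_mitigate: list = field(default_factory=list)  # harms to design against

    def open_items(self) -> list:
        """Unintended consequences with no mitigation planned yet."""
        return [c for c in self.unintended if c not in self.to_mitigate]

scan = ConsequenceScan(
    feature="chat bot",
    intended=["faster support responses"],
    unintended=["users may think they are talking to a human"],
    to_mitigate=["users may think they are talking to a human"],
)
print(scan.open_items())  # -> []
```

An empty `open_items()` list is the goal of the session: every unintended consequence the group surfaced has a planned mitigation before the feature ships.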

Empower your employees

Empowering employees starts with having a strong “why,” guiding principles, and a framework. Also ensure that you’re able to collect feedback from stakeholders and employees through multiple channels, such as office hours or an anonymous hotline.

And our work to lead with ethical and humane use has only just begun. “We recognize that we’re on a journey,” Goldman explains. “And while we’re entering a new frontier in guiding our communities through recovery, we are committed to our responsibility of building technology that does no harm.” Join us on this path.