How Five Years of Tech Ethics Will Inform the Era of AI

The explosion of generative AI has captivated individuals and businesses, and is already transforming the way we live and work — including here at Salesforce.

In 2023, the possibilities and perils of generative AI have brought ethics into the limelight in exciting and unsettling ways. From killer robots to bias to hallucinations, we’re seeing topics once considered wonky and technical become part of mainstream discussion around AI.

While I don’t necessarily subscribe to all the hype — or hysteria — around AI, I do believe in AI’s transformative potential and I’m encouraged to see Trust become as central to the AI conversation as the technology itself. And I feel heartened that more and more of us think about the ethical implications of today’s most exciting innovations, and take steps today to ensure safe, trustworthy AI tomorrow.

I feel heartened that more and more of us think about the ethical implications of today’s most exciting innovations, and take steps today to ensure safe, trustworthy AI tomorrow.

Paula Goldman, Chief Ethical and Humane Use Officer

As we come up on the five-year anniversary of Salesforce’s Office of Ethical and Humane Use, I’m proud of the way our team has pioneered what responsible technology looks like in the enterprise — and that we’re doing it again for the era of AI. Here’s a look back at our Office’s first five years and what we’re focused on next.

Operationalizing ethical technology

Five years ago, tech ethics was definitely not mainstream. It didn’t even have a seat at the table.

Questions and controversies were beginning to emerge around companies’ use of technology. But as an industry, we had no established mechanism or discourse to deal with them. Our Chair and CEO, Marc Benioff, saw this gap emerging and knew we needed to act to continue building trust with our customers and communities. In August 2018, he announced Salesforce’s Office of Ethical and Humane Use and hired me to create a framework for ethical use of technology across the company.

Our office was the first of its kind for the enterprise, so there was no roadmap. But to be successful, we knew we had to lead with transparency and trust.

We first set a goal to define what “good” looks like for trusted tech in the enterprise. My team began by building an infrastructure for the responsible development and deployment of our technologies. We developed guidelines like our Trusted AI Principles to help our product teams understand and mitigate the risks of AI. And we stood up policies and processes to guide this work throughout the business, including new guardrails in our Acceptable Use Policy (AUP) and an Ethical Use Advisory Council to help us understand and examine ethical complexities.

Navigating AI innovation cycles

Over the years, our Office has learned and grown a lot. And we’ve helped guide Salesforce through some of the most significant moments in our industry — and our society. AI in particular has been — and continues to be — a huge focus for my team. We’re still in the early stages of the AI revolution, but our Office has already seen and helped navigate two distinct eras of AI innovation:

AI 1.0: Predictive AI

AI 1.0 took center stage in a commercial context with the emergence of predictive AI. For our Office, that meant examining the implications of AI — some for the first time — during one of the most dynamic periods in history. AI risks like data security and concerns about bias took on new meaning against the backdrop of the COVID-19 pandemic and the movement for racial justice. Our Office had to make sure that advances in AI-enabled personalization didn’t come at the expense of privacy — or worse, exacerbate bias.

During this era, trustworthy AI, data ethics, and inclusive technology were a huge focus for our team. We created guidelines and in-app features around ethical personalization and data privacy, and incorporated safeguards into our products to ensure that our AI was grounded in good data. We also guided responsible design of technology for pandemic response, made sure the data model in our products was inclusive of the diversity of our end users, and put protections in place so our AI products could not be used for facial recognition.

AI 2.0: Generative AI

AI 2.0 exploded onto the scene commercially late last year with the rise of generative AI. The speed and sophistication of GPT tools have captivated the world but they’ve also created a lot of anxiety about the risks of advanced AI. AI discussions over the past year have become quite polarized, divided between techno-optimists determined to accelerate AI innovation and critics cautioning against a range of risks, some near term and some existential.

Our Office has worked to ensure that trust keeps pace with this rapidly evolving technology — arming our customers with the guidelines and guardrails needed to feel comfortable adopting these new innovations. This year, we updated our AI Principles with 5 Guidelines for Responsible Generative AI to help organizations build and deploy AI safely. We also created a first-ever AI AUP with new protections, including safeguards and restrictions around high-risk decision-making such as medical or legal advice.

Salesforce’s top three priorities for ethical AI in 2024

These are the first of what we know will be many more phases of the AI revolution, and each innovation cycle will come with its own unique challenges. As the era of AI continues to unfold, here are three areas my team is focusing on to advance ethical AI in 2024:

  • Human at the Helm. Adoption of AI will depend on people trusting it. We know automation can unlock incredible efficiencies, but there will always be cases where human judgment is required. It’s important that we build AI so that people know what to trust — and when to take a second look.
  • Risk-Based Regulation. It’s been energizing to see governments start to take definitive action to build trustworthy AI. From the United States’ AI Executive Order to last week’s EU AI Act, governments are showing it’s possible to address immediate issues and put frameworks in place for more advanced systems in the future. As regulatory momentum continues, it’s important that we embrace risk-based frameworks that distinguish systems from models, and continue to encourage innovation.
  • Board Governance. The speed and scale of innovation has brought AI into the boardroom like never before. Whether companies are making or using AI, it’s critical that their board members understand how AI is being used, the high-risk use cases at play, and how to ensure data governance. Salesforce’s board has been discussing AI for nearly a decade but for many, this is entirely new territory and will require education.

Technology is always changing but the past five years have taught me that one thing remains consistent: trustworthy AI is as much about the people as it is about the technology. As we continue to explore this exciting technology, we must empower people to harness AI responsibly and consider the footprint of the AI revolution on our society.
