Salesforce Outlines 7 Opportunities to Deepen Trust in AI in Response to White House Executive Order

It has been a historic week for addressing the rapid development of artificial intelligence (AI), with proactive steps to mitigate risk. The White House released an AI Executive Order, marking the most significant action a government has taken on AI to date. In addition, the G7 also agreed this week to a landmark code of conduct, outlining how companies should mitigate risks as they develop advanced AI.

It’s energizing to see governments take definitive and coordinated action toward building trust in AI. From the EU’s AI Act, first proposed in 2021, to this week’s U.S. Executive Order, governments recognize that they have an essential role to play at the intersection of technology and society. Creating risk-based frameworks, pushing for commitments to ethical AI design and development, and convening multi-stakeholder groups are just a few key areas where policymakers must help lead the way.

How the Executive Order aligns with Salesforce’s AI approach

For years, Salesforce has understood and helped unlock the incredible potential of AI for the enterprise, but we have also seen a trust gap take shape. Our customers — including 90% of Fortune 100 companies — are enthusiastic about AI but concerned about risks like data privacy and data ethics. Businesses are eager for guardrails and guidance, and looking to government to create policies and standards that will help ensure trustworthy AI.

Salesforce has been active in creating guardrails in line with what the White House has proposed, including:

  • Privacy: Like the White House, we have long called for comprehensive data privacy legislation. This week’s Executive Order goes a step further, calling for privacy-related research, guidance to federal agencies, and preservation of privacy throughout AI systems training.
  • Safety: We are glad to see that the National Institute of Standards and Technology (NIST) — whose AI Framework informed much of the White House’s Executive Order — will be setting rigorous standards for red-team testing to ensure that AI systems are safe, secure, and trustworthy.
  • Equity: It’s great to see the Executive Order prioritize equity by addressing algorithmic bias and discrimination. The order will also provide guidance throughout the criminal justice system, federal benefits programs, and with federal contractors to ensure that AI is used safely and fairly.
  • Global Cooperation: We regularly provide guidance and expertise to governing bodies around the world at national and multilateral levels. The Executive Order reinforces the need to work with other nations and practitioners to advance safety and responsibility, as well as promote AI’s benefits.
  • Government Adoption: The Executive Order highlights that AI can help the government better serve its constituents, but it also outlines the need for guidance on usage to protect privacy and security, the need for AI talent, and the ability to procure technology efficiently. Salesforce has been working with government agencies to use AI to modernize public services.

Seven ways we can build trust, together

At Salesforce, Trust has always been our #1 value. We’ve spent over a decade investing in ethical AI, both in our business and with our customers. Our Office of Ethical & Humane Use has been guiding the responsible development and deployment of AI for years — first through our Trusted AI Principles and more recently with our Guidelines for Generative AI. We have in-house AI researchers, more than 300 AI patents, and are actively investing in AI startups through our $500 million ventures fund.

It’s not just about asking more of AI, it’s also about asking more of each other — our governments, businesses, and civil society — to come together and harness the power of AI in safe, responsible ways. We don’t have all the answers, but we know that leading with trust and transparency is the best path forward. In that spirit, we’re sharing seven ways we can build trust in AI, here at Salesforce and beyond.

1. Companies should protect people’s privacy. At Salesforce, we believe companies should not use any datasets that fail to respect privacy and consent. The AI revolution is a data revolution, and we need comprehensive privacy legislation to protect people’s data and help pave the way for other AI legislation.

2. Companies should let users know when they’re interacting with AI systems. That means helping users understand when and what AI is recommending, especially for high-risk or consequential decisions. We must ensure that end users have access to information about how AI-driven decisions are made.

3. Bigger is not always better. Smaller models can offer high-quality responses, especially for domain-specific purposes, and can be better for the planet. Governments should incentivize carbon footprint transparency and help scientists advance carbon efficiency for AI.

4. Policy should address AI systems, not just models. A lot of attention is being paid to models, but to address high-risk use cases, we must focus on the whole layer cake: data, models, and apps. Every entity in the AI value chain must play a role in responsible AI development and use.

5. AI is not one-size-fits-all. Governments should protect their citizens while encouraging inclusive innovation. This means creating and giving access to privacy-preserving datasets that are specific to their countries and cultures.

6. Responsibility today fosters safety tomorrow. Many talk about the risks of advanced AI as if they are separate or in conflict with addressing the risks that AI poses today. But solutions build on each other, providing us with technical know-how and muscle memory to handle new risks as they emerge. We shouldn’t let fears of the future stop us from taking action today.

7. Appropriate guardrails unlock innovation. The first question our customers ask about AI is always about trust and control of their data. Today’s businesses worry that AI may not be safe and secure, and they want governments to prioritize data privacy and create standards for AI systems transparency.

It is exciting to see governments and businesses from around the world working together to navigate a future that leverages the power of AI while ensuring trust is at the center. We hope this guidance will serve as a useful resource as these important efforts continue.

Go deeper: Learn more about trusted AI at Salesforce, including our policy advocacy and the tools we’re providing to our employees, customers, communities, and partners to develop and use AI safely and responsibly.

Paula Goldman, Chief Ethical and Humane Use Officer, Salesforce