Skip to Content
Skip to Footer

How Salesforce Develops Ethical Generative AI from the Start

Across countless verticals, companies are actively embracing generative AI to power business growth. In fact, generative AI may inject over $4 trilion annually into the world’s economy.

While 67% of senior IT leaders prioritize generative AI for their businesses — with more than 30% naming it as their key consideration — a lack of trust among their customers may pose a challenge. According to recent data, nearly 75% of consumers remain skeptical of AI-generated content’s accuracy and trustworthiness. Similarly, more than 60% of skilled professionals believe they lack the necessary skills to effectively and safely use AI.

Building trustworthy generative AI requires a firm foundation at the inception of AI development. Earlier this year, Salesforce published an overview of our five guidelines for the ethical development of generative AI that builds on our Trusted AI Principles and AI Acceptable Use Policy. The guidelines focus on accuracy, safety, transparency, empowerment, and sustainability — helping Salesforce AI engineers create ethical generative AI right from the start.

Powering generative AI model accuracy

Salesforce customers use generative AI models to handle tasks ranging from automating sales processes to delivering personalizing customer service interactions to creating customized buyer and merchant experiences — and that’s just the tip of the iceberg. Consequently, highly accurate model responses typically result in enhanced communications and improved customer experiences. To validate model accuracy, Salesforce uses the dual approach of grounding output in customer data points with continuous iterative improvement through customer input.

Dynamic grounding — the process of restricting the generative AI model to use factual, up-to-date data instead of older datasets — remains key for delivering highly relevant and accurate output. For example, by grounding a generative AI model with a particular customer’s data — such as their knowledge articles about service queries — the answers being generated are accurate and come from a trusted source.

Customer feedback also helps validate model accuracy. Oftentimes, Salesforce customers provide explicit qualitative and quantitative feedback on how they use or alter their AI outputs, which helps refine our product design and improve the end result — a process known as reinforcement learning.

Ensuring safety: Avoiding generative AI bias and preserving customer data privacy

According to a recent survey, 73% of IT leaders have concerns about the potential for bias in generative AI. To mitigate harmful bias, Salesforce performs rigorous AI testing in simulated environments.

To mitigate harmful bias, Salesforce performs rigorous AI testing in simulated environments.

We kicked off our efforts with predictive AI, which analyzes patterns of historical data to forecast a company’s future outcomes. We mitigate bias by testing these algorithms over time and developing quantitative measures of bias and fairness. Our AI researchers then use this information to correct any bias-related issues prior to launching an AI system. This helped AI researchers detect and correct any bias-related issues prior to market launch.

Tackling bias in generative AI remains a more complex problem, requiring new evaluation methods such as adversarial testing. With adversarial testing, AI researchers intentionally push generative AI models’ and applications’ boundaries to understand where the system may be weak or susceptible to bias, toxicity, or inaccuracy.

Adversarial testing introduces thousands of adversarial inputs to an AI model and trains it to recognize and reject those inputs so that models do not produce toxic, biased, unsafe, or inaccurate outputs. If the testing reveals patterns or issues, engineers retrain the algorithms to rectify anomalies.

Driving transparency in generative AI development and deployment

Salesforce prioritizes transparency throughout the development and deployment of our generative AI systems. This includes providing:

  • Disclaimers. Clarifying systems, such as popups, help set expectations — advising customers that generative AI continues to be an evolving technology.
  • Documentation. Model cards and product documentation provide a wealth of information to customers, sharing key information such as the purpose behind particular AI models, how Salesforce trains AI models, and any potential risks associated with using AI.
  • Responsible usage guidance. When using model builders, customers are notified when the system receives biased inputs — such as a zip code that could be correlated with race — preserving transparency in AI interactions.

Using generative AI to empower employees and users with limited AI experience

Salesforce’s generative AI is designed to support customers with augmentation and empowerment, helping people elevate their productivity and skill sets.

For instance, a Service Cloud customer recently piloted Salesforce’s new generative AI Einstein Service Replies product. It gave service reps a new tool to rapidly resolve issues that could otherwise be very time-consuming. It ended up being so effective that the reps were able to expand their horizons and explore new opportunities, using the tool to upsell and cross-sell new services for their customers.

Salesforce also ensures its AI technology remains user-friendly and inclusive — offering no-code and low-code products that democratize access to AI for people with various levels of expertise. For example, Prompt Studio enables users to explore prompt engineering — the art of creating prompts that a generative AI model understands — while providing them with guidance to use parameters and prompts for creating the best results.

Promoting sustainability in generative AI development

Environmental concerns loom large in generative AI development. Many models create a large carbon footprint, resulting from the significant data and compute required to build and run them. In fact, training a large language model (LLM) may require up to 10 gigawatt-hour electricity consumption, which equates to the annual power consumption of more than 1,000 U.S. households.

That’s why Salesforce develops generative AI technologies with sustainability in mind, ranging from right-sizing models to avoiding unnecessary compute to maximizing hardware efficiency and renewable energy supply. Additionally, our open source approach enables the AI ecosystem to reuse models and iterate on our work, removing the need to train new models from scratch. Adopting these strategies has enabled us to significantly reduce generative AI development emissions, reinforce our longstanding commitment to environmental leadership, and maintain net zero residual emissions across our entire value chain.

That’s why Salesforce develops generative AI technologies with sustainability in mind, ranging from right-sizing models to avoiding unnecessary compute to maximizing hardware efficiency and renewable energy supply.

To create generative AI that’s built on trust, Salesforce remains laser-focused on ensuring accuracy, safety, transparency, empowerment, and sustainability. Leveraging these five guidelines will help drive generative AI adoption across industries while preserving customer trust and environmental well-being.

Learn more

  • Learn about Einstein AI
  • Check out this piece on generative AI’s future, penned by Salesforce AI’s Chief Scientist
  • Read this report summary to learn more about the customer trust gap in AI
Paula Goldman Chief Ethical and Humane Use Officer, Salesforce More by Paula
Astro

Get the latest Salesforce News