5 Easy Steps to Build Trust in AI

Want to build trust in AI while leveraging it to its full potential? Here’s everything you need to know and how to achieve it.

It’s smart and creative, it brings our ideas to life, and it’s the #1 priority for CEOs. Everyone’s talking about generative AI. It can help you build stronger customer relationships, work more efficiently, and drive business growth. But you need to get buy-in from your staff and customers before you get started. So, how do you build trust in AI?

Studies show up to 62% of UK employees are worried they don’t have the right skills to use AI accurately and safely. They’re also worried it will introduce risks around privacy, data control, bias, and toxicity, and that it could generate false information, known as ‘hallucinations’.

While executives understand that adopting generative AI without the right governance and risk management tools could harm their reputation, 72% believe the technology can actually play a critical role in building and maintaining trust. The key is working together to solve the challenges of this emerging technology and demonstrating to your stakeholders that you’re in control of AI.

So, how do you put a framework in place to make sure AI is safe, inclusive, and trustworthy?

5 ways to build trust in AI

1. Define your principles and create guidelines

At Salesforce, we can help you start your AI journey safely and quickly. We’ve put together a set of guidelines to help you develop and use AI accurately and ethically. 

  • Accuracy: People need reassurance that AI generates results that are accurate and relevant. The best way to achieve that is to train models using your own data: the better the data you put in, the better the results you’ll get. Retraining should be an ongoing process as the amount of data grows and your business evolves over time. When you’re typing your prompt into the AI tool, include wording like this: “If you are unsure of the validity of the response or don’t have data to base it on, say you don’t know.” This is called a prompt defense guardrail (see the sketch after this list).

    Next, you need human input to check responses and foster trust in AI. If a reviewer isn’t sure about a generated answer, they can follow up in the prompt window: ask the AI to list the sources it used and explain why it gave that response. Always double-check that statistics are correct, and when you’re automating tasks, make sure your whole team knows that a member of staff needs to verify each task has been carried out correctly before moving on to the next step.
  • Safety: When you’re training AI models, make sure personally identifiable information (PII) is protected and doesn’t get submitted. To avoid bias, toxicity, and harmful output, carry out assessments and use red teaming: a team of ethical hackers who deliberately try to force the AI to produce harmful or offensive answers. Ideally, they won’t succeed, but if they do, they can advise on how to stop it from happening again. To be safe, ask developers to publish code to a secure sandbox environment for testing before it goes live.
  • Honesty: Make sure you get permission to use people’s data for training and evaluating your models. Be transparent when people are interacting with AI. For example, if you’re using a chatbot or posting AI-generated content, include a notification in the chat or a line at the end of your article explaining where you’ve used AI.
  • Empowerment: Know when to fully automate processes and when AI should play a supporting role. Sometimes human judgement is required, and it’s important to strike the right balance between supercharging productivity and prioritising staff expertise.
  • Sustainability: Think about how much energy is being used to run your AI models. Bigger models have a higher carbon footprint, and they’re not always better. Try to right-size models and train them more precisely. For example, if you’re training your model to answer common customer queries, feeding in knowledge articles and information on how similar cases were resolved before will get good results. You don’t need to feed in every non-sensitive piece of customer support information; doing so takes longer and uses more energy to process.
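
To make the accuracy guardrail above concrete, here’s a minimal sketch of how a prompt defense guardrail might be prepended to a question before it’s sent to a model. The helper name and message format are illustrative assumptions, not a specific vendor’s API; swap in your own provider’s client.

```python
# A minimal prompt defense guardrail sketch (hypothetical helper; pass the
# resulting messages to your own model client).

GUARDRAIL = (
    "Answer using only the context provided. If you are unsure of the "
    "validity of the response or don't have data to base it on, say you "
    "don't know."
)

def build_guarded_prompt(question: str, context: str) -> list:
    """Wrap a user question with the guardrail instruction and grounding context."""
    return [
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

# Example: the messages you'd send to your chosen chat-completion API.
messages = build_guarded_prompt(
    question="What is your delivery time to Australia?",
    context="Knowledge article: We deliver worldwide within 5-10 working days.",
)
print(messages)
```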

Guidelines for Building Trust in AI

Given the tremendous opportunities and challenges emerging in the space of Generative AI, we’re building on our Trusted AI Principles with a new set of guidelines focused on responsible development.

2. Build AI data privacy into your systems and purchasing guidelines

Data privacy requires different safeguards when it comes to AI. There are trust standards you can adopt to keep your data safe, and you should make sure any vendors you work with follow the same guidance.

Here’s how to keep data safe:

  • Dynamic grounding: Prevent hallucinations by making sure your AI uses the most recent and accurate information in its responses. If you only shipped to the UK in 2021 but started delivering worldwide in 2022, don’t ground your AI’s answers to customer support queries about delivery times in data from before 2022 (see the first sketch after this list).
  • Data masking: Replicate and anonymise sensitive data so you can use it without including PII or breaching regulations. Take out names, addresses, financial details, and other private information (a masking sketch follows this list).
  • Toxicity detection: Use machine learning to flag harmful content by scanning responses for toxicity and scoring them (a simple scoring sketch follows this list). For example, if the model generates content that always assumes a CEO is a man, flag that this isn’t inclusive and that it should refer to a CEO as ‘they’, not ‘he’.
  • Zero retention: Make sure customer data from prompts and outputs isn’t stored by the vendor or learnt by the AI model. Configure your systems so this type of data isn’t retained.
  • Auditing: Evaluate systems to make sure they’re working as expected, following your frameworks, and free of bias. Run tests regularly to check how your AI is performing and what type of content it’s generating.
  • Secure data retrieval: Bring the data you need to build prompts into your model securely. The easiest way to do that is to use a vendor you trust that publishes information on how its product works and which security protocols it follows. Salesforce’s Einstein, for example, uses the Einstein Trust Layer.
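
To illustrate dynamic grounding, here’s a minimal sketch that filters knowledge articles by date before they’re used to ground a prompt, so answers about delivery never draw on the pre-2022, UK-only data from the example above. The article structure and cut-off date are assumptions for illustration.

```python
from datetime import date

# Hypothetical knowledge articles; in practice these come from your CMS or CRM.
articles = [
    {"title": "UK delivery times", "updated": date(2021, 6, 1),
     "body": "We deliver to the UK within 3-5 working days."},
    {"title": "Worldwide delivery times", "updated": date(2022, 3, 15),
     "body": "We deliver worldwide within 5-10 working days."},
]

CUTOFF = date(2022, 1, 1)  # assumed launch date of worldwide delivery

def grounding_context(articles, cutoff):
    """Keep only articles updated on or after the cut-off for use in prompts."""
    current = [a for a in articles if a["updated"] >= cutoff]
    return "\n".join(f"{a['title']}: {a['body']}" for a in current)

print(grounding_context(articles, CUTOFF))
# -> Worldwide delivery times: We deliver worldwide within 5-10 working days.
```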
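
Here’s a similarly minimal data-masking sketch: regex substitutions that strip emails and phone numbers from a support transcript before it’s reused. Production masking covers far more PII types (names, addresses, card numbers); the two patterns below are illustrative assumptions.

```python
import re

# Illustrative patterns only; production masking needs much broader coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace each PII match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

transcript = "Customer jane.doe@example.com called from +44 20 7946 0958 about a refund."
print(mask_pii(transcript))
# -> Customer [EMAIL REMOVED] called from [PHONE REMOVED] about a refund.
```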
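
And a toxicity-detection sketch. Real systems use trained classifiers; the keyword scorer below is a deliberately crude stand-in that only shows the score-and-flag pattern, with an assumed word list and threshold.

```python
# Crude stand-in for a trained toxicity classifier: score each response,
# then flag anything over a threshold for human review.

BLOCKLIST = {"idiot", "stupid", "hate"}  # illustrative only
THRESHOLD = 0.1  # assumed review threshold

def toxicity_score(text: str) -> float:
    """Score a response as the fraction of words that hit the blocklist."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in BLOCKLIST for w in words) / len(words)

def review(response: str) -> str:
    return "flag for human review" if toxicity_score(response) >= THRESHOLD else "pass"

print(review("Thanks for your patience, your refund is on its way."))  # pass
print(review("Don't be stupid, read the manual."))  # flag for human review
```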

3. Set up a diverse team of experts to lead risk reviews and detect bias

There’s a risk that AI will replicate unconscious bias and reinforce harmful stereotypes, and you can’t detect bias without bringing together a diverse team that represents a broad range of people. Training your model on sets of clean, unbiased data will help you get the best output, and your team of experts should lead regular risk reviews.

There’s a free Trailhead module you can use to teach them about recognising bias in artificial intelligence. This covers how to identify different types of bias that can enter AI and where those entry points are.

4. Involve and educate your people

As with any transformation, the journey is smoother if you take your people with you. Your AI strategy should include onboarding and change management. Educate your staff on your goals, listen to their concerns, and check in with them once you’ve gone live.

People have different mindsets. Some can’t wait to get started, whereas others need more reassurance to trust AI. The best way to get them on board quickly is to show them how you intend to use it to make their lives better. Think about quick wins: start with a common pain point you can solve with generative AI and gradually expand from there.

When your staff are comfortable using generative AI, get them involved with spotting bias and reducing risks. This will show them that you’re being proactive and that keeping data safe is a company-wide effort.

Generative AI Basics

Discover the capabilities of generative AI and the technology that powers it. One of the many Trailhead modules that can help you in your journey.


5. Build trust in AI by being transparent with customers

According to research, customers have a high level of trust in generative AI: 74% trust content it writes, 67% believe it can give helpful medical advice, 53% would use it for financial planning, and 66% would trust it for relationship advice. While this is promising, it doesn’t mean you can rest on your laurels.

To keep building trust with customers, use generative AI to address their pain points or help them achieve their growth potential. Be ready to show them that you’ve implemented effective data governance and frameworks to keep data protected and AI-generated content accurate. Offer transparency around your AI model’s inputs, outputs, and potential biases. Reassure them that as a company, you’re proactively monitoring responses for risks.

AI Strategy Guide

Make Data + AI + CRM your Trusted Formula. Discover how to get ready for Generative AI.
