How To Create Impactful Customer Experiences With Ethical AI

AI can help businesses in many ways — but only if it’s built on a foundation of trust and transparency. Salesforce’s Rob Newell explains.

Rob Newell is Vice President, Solution Engineering and Cloud Sales, at Salesforce. He’s passionate about helping organisations throughout the Asia Pacific region surpass their business objectives by employing ethical AI and enterprise cloud technology.

The challenges around ethics, trust, and artificial intelligence (AI) are real. They can have a significant impact on customer experience and on the market’s perception of brands and businesses. Fortunately, there’s plenty businesses can do to create AI experiences that enhance trust and have a positive impact on customer experience.

We know from Salesforce research that top-performing sales teams are 2.7 times more likely to use AI to determine what action to take next. Compared to underperforming organisations, they’re also 2.4 times more likely to use AI to prioritise leads and 1.9 times more likely to use AI to manage admin tasks.

At the same time, there’s a crisis of trust in AI.

Why is this? Whenever AI delivers flawed outcomes, customers and employees lose trust. AI delivers erroneous outcomes for one of three main reasons:

  • Bias in the data
  • Bias in the algorithms
  • A lack of diversity in the teams responsible for managing the data and algorithms

Salesforce and ethical AI

At Salesforce, trust is our number one value, and we consider it our responsibility to develop ethical AI. We must also help guide organisations on the principles of building a trusted AI capability that enhances customer experience and enables customer success.

This begins with purpose and values, and covers five key areas. AI that builds trust must be:

  • Responsible: by safeguarding human rights and protecting the customer data the business has been entrusted with.
  • Accountable: by seeking and leveraging feedback from stakeholders — including customers, regulators, employees, and communities — to constantly evolve and improve the AI.
  • Transparent: by being clear about how the technology arrived at its predictions or recommendations, as well as how and what data was used to drive the outcome.
  • Empowering: by adding positively to the lives and economies of customers, communities, and societies.
  • Inclusive: by ensuring all teams and stakeholders bring a variety of perspectives.

AI drives impactful customer experiences. That’s something we all understand. To build trust with customers still wary of AI, businesses need to be transparent about how they use customer data to create seamless and personalised experiences.

Make AI ingredients clear

Independent bodies test new cars and give them safety ratings so customers can understand exactly what they’re paying for in terms of protection. Food products on supermarket shelves contain nutrition information to help the customer make healthy choices. Why shouldn’t AI engines that drive various customer outcomes be treated with the same level of transparency?

Now, at Salesforce, they are.

One of the developments driven by our Office of Ethical and Humane Use of Technology is what we call ‘Model Cards’. They communicate to customers and prospects the critical information behind a specific model: the training data used, the ethical considerations, and the performance metrics.

We publish this information so customers can clearly see how a particular piece of AI capability thinks and works. They can see how it was built and what they can expect as a customer. It’s a simple and effective way of driving transparency.
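To make the idea concrete, a model card can be thought of as a small, structured record published alongside a model. The sketch below is illustrative only: the field names and example values are assumptions, not Salesforce’s actual Model Card schema.

```python
from dataclasses import dataclass, field, asdict
import json

# Illustrative sketch of a model card record. Field names are
# assumptions, not Salesforce's actual Model Card schema.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str                      # description of data sources
    ethical_considerations: list = field(default_factory=list)
    performance_metrics: dict = field(default_factory=dict)

    def to_json(self) -> str:
        # Publishing the card as JSON makes it easy to surface
        # in documentation or a product UI.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example values for illustration only.
card = ModelCard(
    model_name="lead-scoring-v2",
    intended_use="Prioritise inbound sales leads",
    training_data="Anonymised CRM records, 2019-2021",
    ethical_considerations=["Protected attributes excluded from features"],
    performance_metrics={"auc": 0.87},
)
print(card.to_json())
```

Whatever the exact schema, the point is the same: the card travels with the model, so anyone evaluating it can see what went in and what to expect out.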

This sets a standard for the way our business wants to operate in the realms of AI. Being a leader in technology, we have an ability to influence the market around us. This is part of our vision for transparency and the ethical use of AI.

The danger of ignoring challenges around AI ethics

Numerous forces are pushing businesses towards ethical behaviour with AI: employee activism, increased regulation, rising customer expectations, and more.

When businesses fail to act on these influences, serious issues can arise.

First and foremost is privacy violations. If you’re not clear with a customer as to why you’re collecting data, how you’re using that data, or the benefits you expect to derive from it, that in itself is a privacy violation. It erodes trust very quickly.

Second, you’ll find there will be bias in the data, which could come from originally using an incorrect or undiversified training data set.

Third, you’ll see inequality. Your AI will not be pervasive and available to everyone, or it will treat certain individuals or groups differently.

Together, these issues erode customer loyalty, and those customers churn more quickly.
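The undiversified-training-data problem described above can often be caught with a simple first-pass check before any model is trained. The sketch below is a minimal illustration under assumed data: the records, the `region` attribute, and the threshold are all hypothetical, not part of any Salesforce tooling.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.2):
    """Return the groups whose share of the training data falls
    below min_share, mapped to their actual share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total
            for group, n in counts.items()
            if n / total < min_share}

# Hypothetical training set: one region is clearly under-represented.
training = [
    {"region": "APAC"}, {"region": "APAC"}, {"region": "APAC"},
    {"region": "EMEA"}, {"region": "EMEA"}, {"region": "EMEA"},
    {"region": "AMER"},
]
skewed = representation_report(training, "region", min_share=0.2)
print(skewed)  # flags AMER as under-represented
```

A check like this doesn’t prove a dataset is fair, but it surfaces obvious skews early, when rebalancing the data is still cheap.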

How do businesses ensure this doesn’t happen? They must be clear on what their purpose is and put purpose at the centre of everything they do. Customers want businesses to look beyond profit generation. They want to know what impact a business is having on society. If a business has a purpose, all of its actions will flow from that, including ethics in its AI models and their correct execution.

Next, it’s about the cultivation of an ethical mindset throughout the business. That involves putting the customer at the core of everything the business does and ensuring absolute transparency. This might involve the creation of diversity advisory boards and the like, which helps to drive accountability.

Finally, it involves the diversification of technology via the diversity of the teams that build technology.

Any technological advancement or transformation intended to create seamless, personalised customer experiences must contain these ingredients. Those that do will be well rewarded by the market.

Learn more about how businesses like yours are using ethical AI in the real world. Check out our customer success stories here.

Rob Newell

Rob joined Salesforce in 2012 and is currently serving as Area Vice President, ASEAN, for the Solution Engineering and Specialist Sales business units based in Singapore. Solution Engineering is charged with providing innovative solutions to help customers on their digital transformation journeys through Salesforce’s Customer 360 platform. Specialist Sales is charged with helping customers realise value and impact through the Salesforce Platform. The aim is to develop meaningful digital experiences faster on a trusted, intelligent platform and the Salesforce Service Cloud by allowing customers to quickly adapt and scale their customer service experiences.

Rob brings more than 20 years of strategy, consulting, and leadership experience across Asia Pacific to his role at Salesforce. Before Salesforce, Rob worked in the tech start-up community, helping seed and grow businesses developing data, payment, and delivery platforms. After his work in start-ups, Rob spent time as Chief Architect and Chief Technology Officer in the tech industry before a stint in mergers and acquisitions.

Rob is an avid wellness advocate and a firm believer in giving back to the community, with a passion for using technology to make the world a better place. A great example of this is the Early Detection of Autism App (ASDetect), a joint pro bono project between Salesforce and the University of Melbourne to help uncover the early signs of autism in young children. Rob holds a Bachelor of Computer Science and a Master of Business Administration.
