AI Is Everywhere — But Are You Building It Responsibly?

The biggest barrier to widespread artificial intelligence isn’t immature technology — it’s skepticism and lack of trust. Here’s what business leaders can do about it.

Artificial intelligence (AI) touches billions of people daily. It suggests recommended content on your favourite streaming service and helps you avoid traffic while you drive. AI can also help businesses predict how likely someone is to repay a loan or determine the most efficient routes for distributors to ship goods quickly and reliably. 

There is no doubt that these predictive capabilities have helped businesses scale rapidly. With applications in fields from retail services to logistics and personal finance, the global AI industry is projected to reach annual revenue of US$291.5 billion by 2026. In Asia Pacific, the AI market is also growing, estimated to be worth around US$450 million by 2025. But for all the good AI has contributed to the world, the technology isn’t perfect. Left unchecked, AI algorithms can cause many business and societal pitfalls.

AI is trained on a large amount of data that has been collected over time. If that data shows bias, or is not representative of the people the system will impact, the system can amplify those biases. For example, the recently launched BlenderBot 3, a conversational AI, perpetuated negative bias by generating unsafe and offensive remarks during a public demo. Research has also shown that many popular open-source benchmark training datasets — ones that many new machine learning models are measured against — are either not valid for the context in which they have been widely reused or contain data that is inaccurate or mislabeled.

Ways to combat bias in AI

As a result, governments around the world have begun drafting and implementing AI regulations. Singapore has a law focused on accountable and responsible development of AI, while Thailand has most AI initiatives embedded within policies and strategies to strengthen the development of AI-related technologies.

Regulation, when well-crafted and appropriately applied, helps ensure AI systems are ethical, inclusive, and unbiased.

In the meantime, business leaders need to pave the way to a more equitable AI infrastructure. Here’s how:

Be transparent

To increase trustworthiness, many regulations require businesses to be transparent about how they trained their AI model (a program trained on a set of data to recognise certain types of patterns), the factors used in the model, its intended and unintended uses, and any known bias. Policymakers usually request this in the form of data sheets or model cards, which act like nutrition labels for AI models. Salesforce, for example, publishes its model cards so customers and prospects can learn how the models were trained to make predictions.
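In spirit, a model card is just a structured record that travels with the model. The sketch below shows what such a record might contain; the field names and values are invented for illustration and are not Salesforce’s actual model-card schema:

```python
# A minimal, hypothetical model-card record for a loan-scoring model.
# Field names are illustrative, not any vendor's published template.
model_card = {
    "model_name": "loan_repayment_classifier",
    "training_data": "Historical loan applications, 2015-2020",
    "factors": ["income", "debt_to_income_ratio", "employment_length"],
    "intended_use": "Rank applications for human review",
    "out_of_scope_use": "Fully automated approval or rejection",
    "known_biases": ["Under-represents applicants with thin credit files"],
}

def render_card(card: dict) -> str:
    """Render the card as a plain-text 'nutrition label' for the model."""
    lines = [f"Model card: {card['model_name']}"]
    for key in ("training_data", "factors", "intended_use",
                "out_of_scope_use", "known_biases"):
        value = card[key]
        if isinstance(value, list):
            value = "; ".join(value)
        lines.append(f"  {key.replace('_', ' ')}: {value}")
    return "\n".join(lines)
```

The point of publishing such a record is that a customer can read the “known biases” and “out of scope use” fields before deciding whether the model fits their context.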

Make your AI interpretable

Why does an AI system make the recommendation or prediction it does? You might shrug when AI recommends you watch a new movie, but you’ll definitely want to know how AI weighed the pros and cons of your loan application. Those explanations need to be understood by the person receiving the information — such as a lender or loan officer — who then must decide how to act upon the recommendation an AI system is making.
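For a simple linear scoring model, an explanation a loan officer can act on might be nothing more than a ranked list of each factor’s contribution to the score. A minimal sketch, with feature names and weights invented purely for illustration:

```python
def explain_score(features: dict, weights: dict, bias: float = 0.0):
    """Break a linear model's score into per-feature contributions,
    ranked by absolute impact, so a reviewer can see what drove it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical applicant and hypothetical model weights.
features = {"income_k": 60, "debt_ratio": 0.4, "missed_payments": 2}
weights  = {"income_k": 0.02, "debt_ratio": -1.5, "missed_payments": -0.8}
score, ranked = explain_score(features, weights)
# 'ranked' puts missed_payments first: it hurt the score the most,
# which is the kind of fact a loan officer can weigh and act on.
```

Real systems use richer attribution methods than this, but the goal is the same: surface the reasons in terms the decision-maker already understands.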

That said, one study conducted by researchers at IBM Research AI, Cornell University, and Georgia Institute of Technology found that even people well versed in AI systems often over-relied on and misinterpreted the system’s results. Misunderstanding how the AI systems work can result in disastrous consequences in situations that require more human attention. The bottom line? More real-life testing needs to occur with the people using the AI to ensure they understand the explanations.

Keep a human in the loop

Some regulations call for a human to make the final decision about anything with legal or similarly significant effects, such as hiring, loans, school acceptance, and criminal justice recommendations. By requiring human review rather than automating the decision, regulators expect bias and harm to be more easily caught and mitigated. 

In some cases, humans may defer to AI recommendations rather than rely on their own judgment. Combine this tendency with the difficulty of grasping the rationale behind an AI decision, and humans don’t actually provide that safety mechanism against bias.

That doesn’t mean high-risk, critical decisions should simply be automated. It means we need to ensure that people can interpret AI explanations and are incentivised to flag a decision for bias or harm. For instance, the AI governance framework in Singapore, along with its personal data protection policy, allows AI to make recommendations for decisions with legal impact (for example, a loan approval or rejection) but requires a human to make the final decision. This not only promotes AI adoption, but also builds customer confidence and trust.

How to add ethical AI practices to your business

It’s understandable, then, that some executives may be reluctant to collect sensitive data — age, race, and gender, for example — to begin with. Some worry about inadvertently biasing their models or being unable to properly comply with privacy regulations. However, you cannot achieve fairness through inaction. Companies need to collect this data in order to analyse if there is disparate impact for different subpopulations. This sensitive data can be stored in a separate system where ethicists or auditors can access it for the purpose of bias and fairness analysis. 
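Keeping sensitive attributes in a separate, access-controlled store still lets an auditor run a fairness check against outcomes. One common test is the disparate-impact ratio, which compares favourable-outcome rates across groups; a ratio below roughly 0.8 (the “four-fifths rule”) is a widely used red flag. A minimal sketch with toy data:

```python
from collections import defaultdict

def disparate_impact_ratio(outcomes, groups):
    """Ratio of the lowest group's favourable-outcome rate to the
    highest group's rate.

    outcomes: list of 1 (favourable, e.g. loan approved) or 0.
    groups:   sensitive-attribute labels, aligned with outcomes.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        favourable[group] += outcome
    rates = {g: favourable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Toy audit: group A is approved 75% of the time, group B only 25%.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(outcomes, groups)
```

The threshold and this single metric are illustrative; a real audit would examine several fairness metrics, since they can conflict with one another.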

In a perfect world, AI systems would be built without bias, but this is not a perfect world. While some guardrails can minimise the effects of AI bias on society, no dataset or model can ever be truly bias free. Statistically speaking, bias means “error,” so a bias-free model must make perfect predictions or recommendations every time — and that just isn’t possible. 

There is a lot to understand to ensure companies create and implement AI responsibly and in compliance with regulations. This requires business leaders to build ethical AI practices.

Executives can start by hiring a diverse group of experts from many backgrounds in ethics, psychology, ethnography, critical race theory, and computer science. They can build on that by creating a company culture that rewards employees for flagging risks — empowering them to not only ask, “Can we do this?” but also, “Should we do this?” — and by implementing consequences when harms are ignored.


This post originally appeared on the U.S. version of the Salesforce blog.

Kathy Baxter

As an Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practice to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding your users," was published in May 2015. You can read about her current research at 
