
How Businesses Can Counter Bias in AI

As artificial intelligence (AI) becomes an increasingly pervasive part of everyday life, experts worry that AI systems can inadvertently absorb the biases of the people who build them. Research into facial recognition technology used by police, for example, has found that these systems can disproportionately target racial minorities because of that built-in bias.

According to AI researcher Timnit Gebru, there’s “a real risk of institutionalizing bias as we institutionalize AI.” Gebru — a research scientist at Google AI — has studied the ethical considerations of machine learning algorithms at Microsoft and elsewhere. After noticing she was one of only a handful of black women at a major AI conference, she co-founded the Black in AI social community in 2017. We spent time with Gebru discussing the future of AI and how companies can counter algorithmic bias in their AI decision-making.

Q: We’re seeing a great deal of debate right now about the impact that AI is having on our everyday lives. Amid all the discussion, it can sometimes be challenging to understand what’s real and what’s hype. What’s your take on this? How do you see AI making an impact on work and business?

It’s also hard for me to understand what’s real and what’s hype! One recent story — widely reported — was that Facebook had shut down certain chatbots after they began communicating with each other in their own language. That turned out to be completely untrue.

More relevant are the breakthroughs we’re seeing in areas like language translation. Researchers are now creating machine learning systems capable of translating from certain languages to others with a much higher degree of accuracy than before. Hitting human parity — translating sentences with the same accuracy and quality as a person — is something that machine translation experts have been working to achieve for decades. These results show we’re getting closer to that goal of parity in certain contexts, which is impressive.

The hope is that this will pave the way for translating texts with more complex or niche vocabulary, and for more accurate, natural-sounding translations in other languages. That’s exciting because in my experience, tools like Google Translate work well with Western languages like French, but far less well with non-Western ones like Amharic, even though it is spoken by tens of millions of people worldwide.

Q: We tend to assume that if a translation tool is AI-powered, it will provide an accurate version of a text. But as you point out, such tools can be fallible.

Yes, and that’s why tools like Facebook Translate should come with a disclaimer highlighting that translated texts may contain errors. If a tool is trained on an exceedingly narrow or incomplete dataset, as is often the case for non-Western languages, its translations can be far less accurate.

It’s important to highlight this because otherwise people can fall into “automation bias” — that is, they tend to accept a computer-generated answer as accurate even in the face of contradictory evidence and common sense. I’m sure we’re all familiar with those stories of people who have driven into rivers because they trusted their GPS systems so completely.

Q: What other issues do we need to consider as AI’s impact on our lives becomes more far-reaching?

As organizations invest in AI-related products, they should take steps to counter the risk of bias in AI systems more generally.

Companies use technologies such as deep learning to feed neural networks vast quantities of data so that they can recognize patterns quickly. But for all their enormous potential, these systems succeed by fitting themselves to the patterns in their historical training data. They cannot yet reason; there is still much work to do before we can train models to do that.

The problem is that this training data isn’t neutral — it can easily reflect the biases of the people who put it together. That means it can encode trends and patterns that reflect and perpetuate prejudice and harmful stereotypes.

This has led to voice recognition software that struggles to understand women, a crime prediction algorithm that targets black neighborhoods, and an online ad platform that is more likely to show men ads for highly paid executive jobs.
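
To make that dynamic concrete, here is a minimal sketch in Python of how a model trained on skewed historical decisions reproduces the skew. The data, feature names, and numbers are entirely synthetic and hypothetical:

```python
# Minimal sketch: a model trained on biased historical decisions
# reproduces that bias. All data and names here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A protected attribute (group 0 or 1) and a job-relevant skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels: past decision-makers favored group 1, so the
# "hired" outcome depends on group membership, not just skill.
hired = (skill + 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0.5

# Train on the biased history, protected attribute included.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# For applicants with identical skill, the model predicts very
# different hiring rates depending on group alone.
for g in (0, 1):
    X_equal_skill = np.column_stack([np.full(1000, g), np.zeros(1000)])
    rate = model.predict(X_equal_skill).mean()
    print(f"group {g}: predicted hire rate at equal skill = {rate:.2f}")
```

At equal skill, the model still favors one group, simply because the history it learned from did.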

Q: You’re saying that inadvertently biased datasets lead to biased AI.

Yes, absolutely. And in a world where computer science graduates are primarily white or Asian males, that means there is a real risk of institutionalizing bias as we institutionalize AI.

And unless we take explicit steps to counter that risk when we design algorithms, they will continue to be riddled with bias.

Q: How can we guard against bias, and ensure that AI is something that can give everyone more opportunities?

We’re seeing a kind of Wild West situation with AI and regulation right now. The scale at which businesses are adopting AI technologies isn’t matched by clear guidelines to regulate algorithms and help researchers avoid the pitfalls of bias in datasets.

We need to advocate for a better system of checks and balances to test AI for bias and fairness, and to help businesses determine whether certain use cases are even appropriate for this technology at the moment.
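
As one illustration of what such a check could look like, here is a minimal sketch that measures demographic parity, one common fairness metric. The function name, data, and 0.1 threshold are illustrative assumptions, not an established standard:

```python
# Minimal sketch of a bias check using demographic parity: whether
# positive predictions are distributed evenly across groups. The data
# and the 0.1 threshold below are illustrative, not a standard.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical binary model outputs for members of two groups.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:
    print("FAIL: disparity exceeds threshold; review before deployment")
```

A real audit would combine several such metrics, since no single number captures fairness on its own.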

I also think it should be standard for companies to be able to explain the rationale behind the automated decision-making processes of their AI systems. That might involve keeping documentation so you can demonstrate how you trained your AI — sharing the examples you provided and highlighting their relevance.
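
One way to keep that kind of documentation is a structured record stored alongside each trained model. Here is a minimal sketch; every field name and value is a hypothetical illustration:

```python
# Minimal sketch: a structured provenance record kept alongside a
# trained model so its decisions can be explained and audited later.
# Every field name and value below is a hypothetical illustration.
from dataclasses import dataclass, field

@dataclass
class TrainingRecord:
    model_name: str
    intended_use: str
    training_data_sources: list[str]
    known_gaps: list[str]  # populations or cases underrepresented in the data
    evaluation_notes: str
    reviewers: list[str] = field(default_factory=list)

record = TrainingRecord(
    model_name="loan-screening-v2",
    intended_use="Rank applications for human review, not final decisions",
    training_data_sources=["applications_2015_2020.csv"],
    known_gaps=["few applicants over 65", "one region dominates the data"],
    evaluation_notes="Approval-rate gaps between groups measured quarterly",
)
print(record)
```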

Q: What else can companies do to advocate for a less-biased, more ethical AI?

It’s crucial to make diversity a priority whenever and wherever you’re embedding AI. A lack of diversity in AI affects what kinds of research we think are important, and the direction we think AI should go. It can affect whether tackling the issue of bias is seen as a priority or not.

If people are motivated, willing to spend time and energy on it, and prepared to make sacrifices, this is an issue that could be turned around, and fast.

Put another way, we need diversity among the people who create technology, because otherwise we are not going to address challenges like bias that affect the majority of people in the world.

When problems don’t affect us, we don’t think they’re important, and we might not even know that these problems exist, because we’re not interacting with the people who are experiencing them. But it’s when we work for inclusion that the exponential benefits of AI can positively affect all of us.
