AI: The danger’s not the algorithm, it’s the human

I'm not concerned about AI superintelligence ‘going rogue’ and threatening the survival of the human race – that is science fiction, unsupported by any scientific research today. But I do believe we have to think about the unintended consequences of using this technology.

“While AI has the potential to do tremendous good, it can also have the potential for unknowingly harming individuals,” says Salesforce’s Ethical AI Practice Architect Kathy Baxter.

AI is not sentient; it is merely a tool, and it is morally neutral. The potentials Baxter mentions are always present, but whether they are realised – for good or for harm – depends on the criteria we humans apply when we develop and deploy the technology.

AI can amplify human bias

AI can cause harm when algorithms pick up the human biases embedded in the datasets that organisations collect. The effects of these biases can compound as the algorithms continually ‘learn’ from that data.

Let’s imagine, for example, that a bank wants to use an algorithm to predict whether it should give someone a loan. Let’s also imagine that, in the past, this particular bank hasn’t given as many loans to women or people from certain minorities.

That history will be reflected in the bank’s dataset – which could make it easier for an AI algorithm to conclude that women or people from minority groups are more likely to be credit risks and should therefore not be given loans.

In other words, the lack of data on loans to certain groups of people in the past could have an impact on how the bank’s AI program will treat their loan applications in the future. The system could pick up a bias and amplify it or, at the very least, perpetuate it.
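To make that concrete, here is a minimal sketch in Python – purely synthetic data, hypothetical column names, and not any real bank’s system – showing how a simple model trained on historically skewed approval decisions carries that skew into its predictions, even though the two groups are given identical underlying characteristics.

```python
# A minimal synthetic sketch (hypothetical data and column names, not any real
# bank's system) of how a model can learn a historical skew from training data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the same underlying income distribution...
group = rng.integers(0, 2, n)           # 0 = majority, 1 = underrepresented group
income = rng.normal(50, 12, n)          # annual income, in thousands

# ...but the historical approvals were skewed: at the same income level,
# group 1 applicants were approved far less often.
qualified = (income > 45).astype(float)
approval_prob = np.where(group == 1, 0.5, 0.9) * qualified
approved = (rng.random(n) < approval_prob).astype(int)

# Train on that history and look at what the model predicts going forward.
X = pd.DataFrame({"income": income, "group": group})
model = LogisticRegression().fit(X, approved)
predictions = model.predict(X)

for g in (0, 1):
    mask = (X["group"] == g).to_numpy()
    print(f"group {g}: predicted approval rate = {predictions[mask].mean():.2f}")
```

Nothing in the algorithm is ‘prejudiced’; it is simply reproducing the pattern it was given.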

Of course, AI algorithms could also be gamed by explicit prejudice, where someone curates data in an AI system in a way that excludes, say, women of colour being considered for loans.

Either way, AI is only as good as the data – specifically, the ‘training data’ – we give it. This means it’s vital for anyone involved with training AI programs to consider just how representative any training data they use actually is.

As Baxter put it, by simply plucking data from the internet to train AI programs, there’s a good chance that we will “magnify the stereotypes, the biases, and the false information that already exist”.

Mitigating the risk of bias

So how do we manage the threats of biased AI? We start by proactively identifying and managing any such bias – which includes training AI systems to identify it.

AI bias isn’t the result of the technology being flawed. Algorithms don’t become biased on their own – they learn that from us. So we have to take responsibility for helping to avoid any negative effects of the AI systems that we’ve created. For example, a bank could exclude gender and race from its dataset when setting up an AI system to score the viability of applicants for loans.

At the same time, companies also need to be aware of hidden biases. Say our imaginary bank removed gender and race from its AI model but left in other details that often act as a proxy for race or gender. Including postcodes in the algorithm’s assessment, for example, could still lead to biased predictions that discriminate against applicants who live in areas associated with racial groups underrepresented in the bank’s customer base – that is, in the dataset used for training.
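One common way to surface such hidden proxies – sketched below with purely synthetic data and hypothetical column names – is to check whether the remaining features can still predict the protected attribute that was removed; if they can, the bias can re-enter through the back door.

```python
# A rough check for proxy features (hypothetical data and column names): if the
# columns that remain after dropping 'group' can still predict it, the bias can
# re-enter the model through those columns.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5_000

group = rng.integers(0, 2, n)
# Postcode is strongly aligned with group membership, as can happen where
# housing patterns are segregated; income is unrelated to group here.
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)
income = rng.normal(50, 12, n)          # annual income, in thousands

remaining_features = pd.DataFrame({"postcode": postcode, "income": income})

# An AUC well above 0.5 means the dropped attribute is still recoverable,
# so deleting the 'group' column alone does not remove the bias.
proxy_auc = cross_val_score(
    LogisticRegression(), remaining_features, group, cv=5, scoring="roc_auc"
).mean()
print(f"AUC for predicting the dropped attribute from what's left: {proxy_auc:.2f}")
```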

To prevent AI systems from allowing bias to influence recommendations in any way at all, we need to rely on human understanding, not just technology.

So the people who are building AI systems must reflect a diverse set of perspectives. A homogenous group may only hear opinions that match their own – they’re limited to what they know, in other words, and are less likely to spot where bias might be present.

With such guidance, organisations can curate bias out of their training data to mitigate negative effects. This can also help to make AI systems more transparent and less inscrutable, which in turn makes it easier for companies to check for any errors that might be taking place.

Taking ethical AI to the next level

I’m seeing efforts to tackle bias currently moving in tandem with a broader global debate about AI ethics. That debate is being led by groups such as the Partnership on AI, a consortium of technology and education leaders from varied fields, including Salesforce.

These groups are joining forces to focus on the responsible use and development of AI technologies by advancing the understanding of AI, establishing best practice and harnessing AI to contribute to solutions for some of humanity’s most challenging problems.

Taking these debates to a wider stage is important because any process that involves the responsible use of AI should not be seen as simply another set of tasks. Instead, we need to fundamentally change people’s thinking and behaviour, and we need to do it now.

To put it another way, if AI does hold a mirror to humanity, it’s up to us to be accountable, now, and ensure what’s reflected shows our best face.

Richard Socher is Chief Scientist at Salesforce. He leads the company’s research efforts and works on bringing state of the art AI solutions to Salesforce. Read more from Richard Socher.

This article was first published by the World Economic Forum in conjunction with the 2019 World Economic Forum Annual Meeting in Davos, Switzerland.
