Machine Learning – Issues and Considerations

Does Machine Learning really pose a risk to how we go about our daily life and business, or can it be an ally on our path towards greater personalisation?

There are many apocalyptic tales of intelligent machines becoming our overlords. Whilst many of these stories are born of science fiction rather than current scientific fact, it is wise to consider the potential risks.

As Voltaire stated (or Spiderman’s Uncle Ben if you prefer) – “with great power comes great responsibility”. Machine Learning is a powerful tool – and it’s getting more powerful by the day. There is undoubtedly a lot of value Machine Learning can bring to our everyday existence – but there are issues too. The internet has been a powerful (and largely positive) force in our lives for decades, but it can cause harm, whether unintentionally or intentionally. The same is true of Machine Learning.

This is a large and very important topic – far too large to cover here in any detail. We will, though, look briefly at two main areas of concern: unintentional harm and intentional harm.

Unintentional Harm

By unintentional harm, we mean consequences of Machine Learning that are harmful but were never intended by its creators.

One important area where unintentional harm can arise is bias. It’s comforting to think that ‘machines’ would exhibit less bias than humans. Comforting – but not entirely accurate.

So, how does a ‘machine’ become biased? To answer this, let’s revisit something we discussed in post two – “Supervised Learning”. As you may recall, in Supervised Learning the model is trained by being ‘shown’ a large number of past examples of the outcome you want to predict. In our example, we discussed predicting a customer’s propensity to churn, based on large quantities of previous customer data (and whether each customer churned or not).

So far – so good. However, a model’s predictions are only as good as the historical data used to train it. In our ‘customer churn’ model there is perhaps limited scope for real harm – but what if we were predicting something different? What if we were predicting an outcome that affected a person’s finances – or even their liberty?

Financial institutions are increasingly using AI (Machine Learning models) to determine the outcome of bank loan applications. Supervised Learning models are trained on large quantities of previous loan applications and the outcomes of those applications, the premise being that the trained model can review new applications and predict whether each loan should be approved or rejected. If the historic data contained applications largely from white males, the model could correlate gender and ethnicity with positive loan outcomes – and formulate its recommendations on that basis. Left to run, it would disproportionately approve applications from white male applicants, and be further refined on those very decisions – thereby amplifying the historic bias.
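
To make that feedback loop concrete, here is a minimal sketch (in Python, using scikit-learn; the data, features and numbers are entirely invented for illustration – this is not how any real bank scores loans) of how a model trained on skewed historic decisions can learn the skew and then entrench it when retrained on its own approvals:

```python
# Illustrative only: synthetic data and an invented approval rule, showing how
# historic bias can be learned and then reinforced by a feedback loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_historic_loans(n=5000):
    """Synthetic 'historic' decisions: income matters, but group membership
    unfairly shifts the odds of approval."""
    group = rng.integers(0, 2, n)           # 0 = historically favoured group, 1 = not
    income = rng.normal(50, 15, n)          # a legitimate predictor
    logits = 0.08 * (income - 50) - 1.5 * group
    approved = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)
    return np.column_stack([income, group]), approved

X, y = make_historic_loans()
model = LogisticRegression().fit(X, y)

# The group feature now carries predictive weight – the historic bias is baked in.
print("learned weight on group feature:", model.coef_[0][1])

# Feedback loop: the model's own decisions become the 'ground truth' it is
# retrained on, so the skew tends to persist or sharpen rather than wash out.
for step in range(3):
    X_new, _ = make_historic_loans(2000)
    y_new = model.predict(X_new)            # model decisions treated as outcomes
    model = LogisticRegression().fit(X_new, y_new)
    rate_0 = y_new[X_new[:, 1] == 0].mean()
    rate_1 = y_new[X_new[:, 1] == 1].mean()
    print(f"round {step + 1}: approval rate, group 0 = {rate_0:.2f}, group 1 = {rate_1:.2f}")
```

The exact numbers are beside the point; the mechanism is what matters – a feature that should be irrelevant ends up driving decisions, and retraining on those decisions reinforces it.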

Think this is unlikely? In 2022 the UK financial regulator warned UK banks to ensure that their AI models do not worsen discrimination against minorities.

If the financial impact can be serious, the legal impact can be even greater.

In the US, Machine Learning models are used to identify crime hotspots. Police resources are diverted to these identified areas and arrests are made. Those arrests further reinforce the view that the areas are indeed crime hotspots (essentially a type of ‘confirmation bias’), and the cycle continues. More alarmingly, Machine Learning models are also being used in sentencing.

The outcome in the law enforcement example may differ from the loan application example, but the cause is the same: training a model on poor data can result not only in bias being present in the model, but in that bias actually being amplified.

Unintentional harm is not limited to Supervised Learning approaches. The apocalyptic tales we started this post with often reference physical machines overpowering human society. Whilst perhaps a little far-fetched, this does warrant some consideration. What if machines become more intelligent than us – and ‘refuse’ to be controlled by us?

How could this apocalyptic future become a reality? By the design of some evil megalomaniac? Perhaps – but more probably through a lack of design in some Machine Learning department.

In part two of this blog series, we looked at Reinforcement Learning. In this Machine Learning approach, a reward function is defined and the ‘machine’ tries to maximise the reward it receives by constantly evaluating and amending its approach to a problem. The idea is that a well-specified reward function will lead the ‘machine’, in seeking to maximise that reward, to perform optimally.

By defining a reward function, we are essentially telling the ‘machine’ what is important to us – but a key ingredient may be missing: context. In the context of the problem we are trying to solve, the reward is important to us – but not in the broader context of human existence. If a reward function rewards the making of ‘widgets’, the ‘machine’ will do everything in its power to maximise the production of widgets. Other things – the welfare of people, for example – are not important; widget production is. People could become collateral damage in the pursuit of maximum widget production (and, consequently, maximum reward).
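
Here is a toy, entirely invented sketch of that mechanism: a simple action-value learner whose reward counts only widgets. The ‘harm’ variable is never part of the reward, so the learner happily converges on the most aggressive setting regardless of the damage done along the way:

```python
# Illustrative only: an invented 'widget factory' with a mis-specified reward
# that counts widgets and nothing else.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 4                      # how aggressively to run the widget line (0-3)

def step(action):
    """Simulated factory: more aggressive settings make more widgets,
    but also cause more harm - which the reward function never sees."""
    widgets = action * 10 + rng.normal(0, 1)
    harm = action ** 2             # real-world cost, invisible to the learner
    reward = widgets               # <- the mis-specification: only widgets count
    return reward, harm

# Simple action-value learning: estimate each action's average reward and
# (mostly) pick whichever currently looks best.
q = np.zeros(n_actions)
counts = np.zeros(n_actions)
total_harm = 0.0

for t in range(2000):
    a = rng.integers(n_actions) if rng.random() < 0.1 else int(np.argmax(q))
    reward, harm = step(a)
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]
    total_harm += harm

print("learned action values:", np.round(q, 1))
print("preferred action:", int(np.argmax(q)))        # the most aggressive setting
print("accumulated harm the reward never saw:", round(total_harm, 1))
```

Nothing in that loop is malicious; the harm is simply not part of what the ‘machine’ has been told to care about.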

Think this is all a little far-fetched? In July 2022, a (you guessed it) chess-playing-AI-powered robot broke a seven-year-old’s finger. It did this because the child took his move too quickly – apparently the quick move “contravened the robot’s safety procedures”!

So what? We could just ‘turn it off’ – right? Not necessarily! If an intelligent ‘machine’ has been charged with maximising widget production (or chess victories), it is quite possible that the ‘machine’ will reason about obstacles to achieving that goal. Being turned off is a massive obstacle – so overriding the off switch looks like job number one for the ‘machine’.

This section on potential unintended harm raises an important point: care should be taken in the design phase of Machine Learning to minimise these unintended consequences. Just as with traditional software development, dealing with security and safety as an afterthought is not an optimal solution. We encountered Prof. Stuart Russell in the first blog in this series. Prof. Russell, a key proponent of AI safety, said, “We should stop thinking of this as an ethical issue or an AI safety issue – it’s just AI” – meaning that ethics and safety should be core to AI, not additional to it.

There is an excellent YouTube channel by Robert Miles on AI safety if you want to go into way more depth on this subject.

Intentional Harm

With most technologies, individuals and organisations find ways to exploit them for criminal purposes. The same is true of Machine Learning.

If Machine Learning models can legitimately be used to determine when to send an email to maximise its chances of being opened, they can also be used to maximise the effectiveness of a phishing attack. Rather than sending mass identical emails and hoping to catch a phish, criminals are using Machine Learning models to send targeted, and therefore more believable, communications to potential victims.
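
For a sense of what the legitimate version looks like, here is a minimal sketch (synthetic data, an invented open-rate pattern, and scikit-learn’s LogisticRegression) of a model that learns from historic engagement which send hour gives an email the best chance of being opened – exactly the kind of model that, pointed at the wrong goal, helps time and target a phishing campaign:

```python
# Illustrative only: a toy send-time optimisation model trained on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic history: one row per past email (hour it was sent, whether it was opened).
n = 10_000
send_hour = rng.integers(0, 24, n)
open_prob = 0.1 + 0.25 * np.exp(-((send_hour - 10) ** 2) / 8)   # invented mid-morning peak
opened = (rng.random(n) < open_prob).astype(int)

# One-hot encode the hour and fit a simple open-rate model.
X = np.eye(24)[send_hour]
model = LogisticRegression(max_iter=1000).fit(X, opened)

# Score every candidate send hour and pick the most promising one.
best_hour = int(np.argmax(model.predict_proba(np.eye(24))[:, 1]))
print("predicted best send hour:", best_hour)
```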

We’ve all heard about ‘fake news’. To avoid falling foul of it, statements attributed to a source should be examined for their veracity. That’s fine for the written word – did person X really say statement Y? We tend to trust graphical information more: “a picture paints a thousand words”. But what if the picture is fake? Surely we can trust video evidence? What if the video itself is fake? Machine Learning algorithms are getting so powerful that generating very believable fake photo and video content is not only possible – but simple.

With all these potential issues, unintentional or intentional, is Machine Learning too much of a risk to use? Well, the potential benefits are huge, but care needs to be taken to ensure the promise is realised without causing harm. That’s why, in 2018, Salesforce’s Marc Benioff announced the formation of “The Office of Ethical and Humane Use”.

Shortly after the announcement, Salesforce hired our industry’s first-ever Chief Ethical and Humane Use Officer, Paula Goldman. One of the many outputs from the Ethical and Humane Use group is an “AI Ethics Maturity Model”.

Martyn Doherty

Martyn is a Distinguished Solution Engineer in the UK Solution Engineering team at Salesforce, which he joined in 2010 after similar roles at various large technology companies. He has worked in industries as varied as Media/Entertainment, Healthcare & Life Sciences and Business Services, and enjoys the unique business challenges that different industries pose. Martyn has a passion for new and innovative technologies and how they can be used to solve those challenges.
