
Machine Learning – How did we get here?

Explore a brief history of Machine Learning in the first of five blog posts about the topic defining the Fourth Industrial Revolution.

In certain publications, we see a lot of mentions of industrial revolutions. You wait millennia for one to come along, then all of a sudden you are well into the fourth.

The first Industrial Revolution centred on the mechanisation of processes through steam power. The second revolved around electrification, as practical electrical power became widely available. The third involved digitisation, with the dawn of modern-day computing. Today, according to the World Economic Forum, we are in the midst of the fourth Industrial Revolution – one that brings together diverse technologies such as the Internet of Things (IoT), Augmented Reality (AR) and Artificial Intelligence (AI) in ways that augment human capability.

Why should we care? There is a clue in the name. These events were, and are, revolutionary for industry. The fourth Industrial Revolution is upon us, and history has shown that ignoring an Industrial Revolution is not a viable option. In 1770, during the first Industrial Revolution, James Hargreaves patented the spinning jenny, a machine that automated part of the textile-making process. The invention met fierce resistance from textile workers. The name of a later group of workers who famously smashed such machines – the Luddites – has become synonymous with backward thinking and a refusal to embrace the future.

The fourth Industrial Revolution is powered by a huge collection of independently developing technologies, far too broad to tackle in one blog series, so we will look at one significant component – Artificial Intelligence.


Is Artificial Intelligence all that important? Many notable minds seem to think so. The British scientist Stuart Russell OBE, Professor of Computer Science at the University of California, Berkeley, describes AI as potentially “the biggest event in human history”.

There is a lot of talk about Artificial Intelligence but, oddly, there is no real consensus on what constitutes it. Broadly, something like ‘machines doing things that previously could only be done by (human) brains’ will suffice. Given this definition, it is possible to argue that the simple electronic calculator, when it was introduced, was Artificial Intelligence. Arithmetic calculations could previously only be performed by human brains – and here was a machine that could perform the same function better than most humans. The electronic calculator could be considered an instance of Narrow Artificial Intelligence (Narrow AI), where the AI ‘machine’ is specialised to perform a single task – and only that task. Researchers are also actively pursuing Artificial General Intelligence (AGI), whose goal is to perform, through one ‘machine’, a wide range of tasks that were previously only possible with human intelligence.

Many would struggle with the idea of an electronic calculator as an example of Artificial Intelligence (‘Narrow’ or otherwise). That is because we have missed one important concept – Machine Learning. Machine Learning can be viewed as a sub-branch of Artificial Intelligence, one where ‘machines’ learn to perform a task without being explicitly programmed with the steps needed to perform it. If we are going to deliver true Artificial Intelligence, it is highly likely that Machine Learning will form part of it. An electronic calculator is not ‘learning’ how to perform a task – it has been specifically instructed how to perform it – so can we now discount it as a form of true AI?
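To make that distinction concrete, here is a minimal, hypothetical Python sketch (an illustrative example only, not anything from Salesforce or any product mentioned later): the first function is explicitly programmed, calculator-style, while the second infers a similar rule purely from past example data using a one-parameter least-squares fit.

def programmed_double(x):
    # Calculator-style: a human has hard-coded the rule (multiply by 2).
    return 2 * x

def learn_scale_factor(examples):
    # Machine-learning-style: infer a scale factor w, where output is roughly w * input,
    # from (input, output) pairs using a one-parameter least-squares fit.
    numerator = sum(x * y for x, y in examples)
    denominator = sum(x * x for x, _ in examples)
    return numerator / denominator

# "Past data": observed inputs and outputs - the rule itself is never written down.
history = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
w = learn_scale_factor(history)

print(programmed_double(5))  # 10   - behaviour written by a person
print(w * 5)                 # ~10  - behaviour inferred from the data

The learned version improves as more (and better) historical data is supplied – which is exactly why data, alongside computational power, matters so much in the ‘Why now?’ discussion below.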

Artificial Intelligence is a constituent part of the fourth Industrial Revolution (even in its ‘Narrow’ form). Yet the World Economic Forum only coined the term ‘Fourth Industrial Revolution’ around 2016, so it may be a surprise to learn that John McCarthy – a mathematics professor at Dartmouth College in the USA – first used the term Artificial Intelligence (in this context) in 1956.

So, why the 60-year gap? There were many false dawns and unrealistic expectations in that period. The early enthusiasm for AI was dampened by early failures. At the start of the 1970s, the British Science Research Council commissioned James Lighthill to conduct a review of AI. The Lighthill Report (as it became known) stated that “in no part of the field have the discoveries made so far produced the major impact that was then promised”. The report formed the basis for the British government’s decision to end support for AI research in most British universities. AI and the AI community came under intense scrutiny and had to regroup and refocus.

Today, Artificial Intelligence is all around us: in our voice-powered home automation devices, our online shopping recommendations and, increasingly, our business lives.

Why now?

During the decades from 1956 to the present, progress was undoubtedly made on the ‘algorithms’ that support Artificial Intelligence, but two big gaps remained. The missing ingredients were very large quantities of curated data and very powerful computational resources. Cloud Computing now delivers the powerful computational resources, while the growing adoption of digital services – from Social Networks to E-Commerce – provides the very large quantities of curated data. It should be little surprise, then, that Salesforce, a Cloud-native company that has been gathering large quantities of curated data for decades, should have an important view on Artificial Intelligence.

Salesforce has access to the computational power and large quantities of curated data – the last ingredient is the ‘algorithms’. As mentioned, these algorithms have progressed over time, and their number and complexity are growing at an astounding rate. To use them, practitioners typically need skills in advanced applied mathematics and statistics, plus knowledge of programming languages such as Python and R. Individuals with these skills are few and far between, and high demand combined with low supply means a hefty premium to hire such skilled personnel.

If only Salesforce, masters of low code/declarative, business-friendly solutions, had an answer to this challenge…

Machine Learning – Technical Background

Dive deeper into Machine Learning models and discover the differences between Supervised, Unsupervised, and Reinforcement Learning.

Martyn Doherty

Martyn is part of the UK Solution Engineering team at Salesforce. Having joined Salesforce in 2010 with experience gained in similar roles at various large technology companies, Martyn is a Distinguished Solution Engineer in the UK team. He has worked in industries as varied as Media/Entertainment, Healthcare & Life Sciences and Business Services, and enjoys the unique business challenges different industries pose. He has a passion for new and innovative technologies and how they can be used to solve those challenges.

