Machine Learning – The Future

Machine Learning is here to stay, but what does the future hold?

Some concerns from the First Industrial Revolution extend to the Fourth Industrial Revolution.

In the first blog in this series, we mentioned the Luddites, English textile workers who destroyed labour-saving machinery such as knitting frames and power looms during the First Industrial Revolution. They had genuine concerns about the impact of mechanisation on jobs. Those same concerns exist in the Fourth Industrial Revolution.

Will AI ‘steal’ our jobs? As with any prediction, opinions differ widely.

It’s hard to imagine Narrow Artificial Intelligence making us all unemployed – unless, perhaps, you play chess for a living! Also, the Machine Learning models we have described previously are great at predicting things: does this photo contain a cat? Is this a good sales lead? Which customers are likely to enjoy this product? So procedural types of jobs could potentially be replaced – but not creative ones, right?

Recent advances in ‘Generative’ Machine Learning models may make the creative folks amongst us less comfortable about their long-term employment prospects. We briefly looked at Generative model output in the fourth blog in this series when discussing ‘fake news’. If a model can create a fake photo or even video content, why not other ‘creative’ content?

Thomas Young, an English scientist who died in 1829, has been described as “the last man who knew everything”. Whilst this is almost certainly an exaggeration, there was far less of everything to know in the 1820s.

Recent Machine Learning models have been massive in scale. For example, ‘GPT-3’ (a Generative Pre-trained Transformer model), released in 2020, has 175 billion parameters. The 45 terabytes of data the model was trained on included a large proportion of the pages on the internet, a vast collection of books and all of Wikipedia. To put this into context: the entirety of Wikipedia represented just 3% of the GPT-3 training data – that’s a huge training dataset.

If Thomas Young can be considered the last person to know everything, GPT-3 can be considered one of the first Machine Learning models to know everything.

So, what do models like GPT-3 allow us to do?

Imagine being charged with writing some creative text for a new product launch. Models such as GPT-3 could take a few brief text prompts from you and create the entire content copy.

Imagine you wanted to create photo-realistic artwork to support your product launch – but the image you need does not exist. Write a few words describing the image you want and let the model create a photo-realistic image as you described.

Imagine you want to create an app to accompany your new product. You guessed it, write a few lines prompting the model as to what the app should do and let the model create the code for you.

Want some music for the product’s advertising campaign? Yep, get the model to create that too.
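
All four of these scenarios boil down to the same interaction pattern: you supply a short text prompt and a generative model produces the content. As a minimal sketch of that pattern, the Python snippet below uses the freely downloadable GPT-2 model via the Hugging Face transformers library – an illustrative stand-in chosen here for demonstration (the blog does not prescribe a toolkit, and GPT-3 itself is only available through OpenAI’s hosted API).

```python
# A minimal sketch of prompt-driven text generation. GPT-2 and the
# Hugging Face transformers library are illustrative stand-ins here,
# not the GPT-3 setup described in the blog.
from transformers import pipeline

# Load a small, freely downloadable generative language model.
generator = pipeline("text-generation", model="gpt2")

# A brief prompt describing the product copy we want drafted.
prompt = "Introducing our new smart water bottle, the perfect companion for"

# Ask the model to continue the prompt into fuller marketing copy.
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

The same prompt-in, content-out loop applies whether the output is copy, imagery, code or music – only the model and the output medium change.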

But we are still just looking at Narrow Artificial Intelligence. Even if a model as complex as GPT-3 can perform tasks with human-like ability, those tasks all come from a common, well-defined set. Models such as GPT-3 are great for tasks such as question-answering, text summarisation and machine translation – essentially transforming given text into something else.

What about achieving Artificial General Intelligence (AGI) – also sometimes referred to as “The Singularity”? We briefly discussed AGI in the first blog. According to Wikipedia, AGI is defined as “the ability of an intelligent agent to understand or learn any intellectual task that a human being can”. There is some ambiguity in this definition – “any intellectual task” should perhaps read “all intellectual tasks”. The key point here is that the machine can learn a task it has not previously been trained to do – and do it as well as (or better than) a human.

The answer to Artificial General Intelligence may not be ever-bigger Machine Learning models with more neural layers, more terabytes of training data and more billions of parameters. The solution may be conceptually simpler than that.

Machine Learning models are often described as being analogous to how the human brain works. In blog post two we discussed Supervised Learning by comparing it to how a human infant learns to identify images containing cats. But the human brain doesn’t really work like that.

The human brain is not one monster neural network. Neuroscientists have long known that different regions of the brain perform different functions. One region of the brain may well deal with image recognition, but the rest of the brain fulfils other functions. The structures of these regions don’t differ much from one another – but the tasks they perform do. Companies such as Google are actively researching this area. Assembling relatively small-scale, domain-specific neural networks to create multi-domain intelligence logically seems to get us closer to the human brain – and to AGI.
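
To make this idea concrete, here is an illustrative sketch of several small domain-specific networks combined by a learned router, in the spirit of ‘mixture-of-experts’ research. It assumes PyTorch, and every class name and layer size is a hypothetical choice for demonstration rather than anything from a published Google system.

```python
# Illustrative sketch only: small domain-specific networks behind a
# learned router, rather than one monolithic model. All names and
# sizes here are hypothetical choices for demonstration.
import torch
import torch.nn as nn

class DomainExpert(nn.Module):
    """A small network specialised for one domain (e.g. images or text)."""
    def __init__(self, input_dim: int, output_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, output_dim)
        )

    def forward(self, x):
        return self.net(x)

class MultiDomainModel(nn.Module):
    """Weights each expert's output per input, mixture-of-experts style."""
    def __init__(self, input_dim: int, output_dim: int, num_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            [DomainExpert(input_dim, output_dim) for _ in range(num_experts)]
        )
        # A learned router scores how relevant each expert is to an input.
        self.router = nn.Linear(input_dim, num_experts)

    def forward(self, x):
        weights = torch.softmax(self.router(x), dim=-1)       # (batch, experts)
        outputs = torch.stack([e(x) for e in self.experts])   # (experts, batch, out)
        # Combine expert outputs, weighted by the router's scores.
        return (weights.T.unsqueeze(-1) * outputs).sum(dim=0)

# Hypothetical usage: route a batch of 4 inputs through the experts.
model = MultiDomainModel(input_dim=128, output_dim=10)
logits = model(torch.randn(4, 128))  # -> shape (4, 10)
```

Each expert stays small and specialised, while the router decides, per input, whose expertise to trust – loosely mirroring the brain’s division of labour described above.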

Whilst all of this points to the potential for human-centric tasks to be completed by ‘intelligent’ machines, we humans still appear to hold the upper hand in emotions. Machines can’t come close to having human-style emotion – or can they?

In the summer of 2022, a Google engineer working on Google’s Conversational AI (LaMDA) made an incredible claim. Blake Lemoine, the engineer in question, claimed that Google’s Conversational AI was ‘sentient’. Essentially, Lemoine asserted that LaMDA “is sentient because it has feelings, emotions and subjective experience”.

Transcripts from Lemoine’s interactions with LaMDA are unnervingly human. There is little doubt these conversations easily pass the famous “Turing Test” – the “test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human” – but do they demonstrate true feelings? The jury is currently out on this one.

So, when should we worry about (or celebrate) no longer working?

As mentioned previously, predicting the future is tricky. A good option is to poll experts in the field of AI and see whether a consensus emerges. Many such polls have been conducted, but unsurprisingly no single date has emerged. In 2019, an organisation called Emerj surveyed 32 PhD researchers in the AI field and asked them about AGI. One question was “When will the singularity occur, if at all?” Interestingly, 45% of respondents predicted a date before 2060. This result is consistent with other similar surveys. In fact, results suggest that, if AGI is going to be achieved (and most respondents think it is inevitable), it should happen before 2060.

For now, we should embrace the assistance that Machine Learning can bring to many areas of our professional lives. Perhaps we can concentrate on the more interesting or innovative elements of our roles prior to our lives of leisure.

Keep up to date on what Salesforce is doing in the Artificial Intelligence research space. Salesforce’s research areas are broader than you might think, spanning diverse fields such as economics, protein generation and shark tracking – all with ethics front and centre.

Learn more about Artificial Intelligence through Trailhead

Learn how to use artificial intelligence to meet your business needs, target ads, identify new audiences, track performance, and improve customer service.
