This week, the UK’s AI Safety Summit is convening global leaders from business, policy, and academia to shape the future of AI. It’s an important moment for leaders to discuss how we can develop trusted AI while seizing the transformative opportunities it presents for individuals, organisations, and societies.

Pausing to contemplate the potential risks and benefits of AI, both now and in the future, will pay dividends. At Salesforce, we believe optimising this balance between responsibility and opportunity is crucial. Gathering varied perspectives and experiences, having open and honest discussions, and collaborating will help create a path forward.

The dawn of the AI revolution

Every UK business leader I talk to believes AI is essential to their business, with the majority (84%) of IT leaders expecting generative AI to play a prominent role at their organisations in the near future. We’re at the dawn of an AI revolution and already businesses are using AI to enhance productivity and customer experiences.

Leading brands are using Salesforce AI technology to personalise billions of customer interactions. For example, Heathrow Airport is using real-time data to improve the traveller experience and keep up with its more than 70 million annual passengers. Looking forward, its service agents will become even more efficient by using generative AI to draft personalised replies to service cases and to create case summaries.

Salesforce has been investing in safe, responsible AI for over a decade and introduced Einstein — the first generation of AI for CRM — in 2016. Today, we deliver over 1 trillion AI-powered predictions every week. Our AI is purpose-built for the enterprise, and we’re placing trust and safety at the centre of everything we do because the AI revolution is also a trust revolution.

Defining guardrails alongside innovation

As businesses roll out AI-driven strategies, they must do so collaboratively and responsibly. Privacy, bias, and toxicity all present very real challenges today, while some also worry about existential risks.

My colleague Eric Loeb, Executive Vice President of Global Government Affairs, highlights the importance of guardrails that do not stifle innovation:

“It’s not enough to deliver the technological capabilities of generative AI. We must prioritise responsible innovation to help guide how this transformative technology can and should be used. Organisations need to operate in an environment that both encourages innovation and has the necessary guardrails in place to ensure trust.”

He continued, “The principles developed by the G7 members around safety and trust in AI are an important step, especially alongside U.S. President Biden’s Executive Order. Now, the AI Safety Summit is providing a crucial forum for discussion between the private and public sector. We’re looking forward to seeing what comes out of this week’s conversations, and are ready to partner with government to work towards a risk-based regulatory framework.”

Preparing now for a digitally transforming economy

Whatever outcomes result from the Summit, business leaders can take action now to prepare for a future in which AI — like electricity — powers everyday interactions.

Addressing the digital skills crisis, a global challenge that is particularly acute in the UK, is an urgent need. Only 1 in 10 UK workers believe they have AI skills. Yet the success of AI will depend on having skilled people to implement and nurture it effectively, including in new roles such as AI ethicist, AI interaction designer, and AI solution architect.

Recent Salesforce research revealed that although over a third (38%) of UK workers are already using or planning to use generative AI at work, most (62%) say they lack the skills to do so effectively and safely. Worryingly, the majority (79%) said their employer does not currently provide any form of generative AI training, even though over half (55%) would like them to.

Businesses and governments need to commit now to training, reskilling, and upskilling to make sure we don’t create a new digital divide. Salesforce is committed to equipping people with the tools to take on the jobs our digitally transforming economy demands. I’m proud that our free online learning tool Trailhead, which can take someone with little technical knowledge into a Salesforce role in as little as six months, recently expanded to offer AI-specific skills training.

Building trusted AI inclusively

Technology is a reflection of how it is used, so ultimately we will get the AI we deserve. It’s incumbent on all of us to ensure AI is trusted and becomes a force for good.

Those collaborating on AI should represent diverse experiences and perspectives, which is why we created an Office of Ethical and Humane Use to guide us and pioneer the use of ethics in technology.

AI innovation is accelerating and unlocking new opportunities for citizens and customers. It’s essential that key players collaborate now on a common goal and support tailored, risk-based AI regulation — to protect individuals, build trust, and encourage innovation.

This week’s summit aims to provide a space to do just that. 

More information:

  • Go here for more news and stories about Salesforce AI.