August 2025 marked the moment key provisions of the EU AI Act came into force.
In particular, those covering general-purpose AI models and governance took effect – from specific rules on regulatory oversight to the establishment of consistent governance structures.
We sat down with Matt Tonner, Senior Director, AI Legal at Salesforce, to get the latest on the regulation, why it matters and what it means for your UK business. Here’s what he had to say.
Q: Thanks for joining me, Matt. I’d love to know, in your words: why does the EU AI Act exist?
A: European legislators have led the way on global privacy law with GDPR for several years now. It’s served as a model for many other jurisdictions implementing their own comprehensive data protection programmes.
However, there’s been a gap when it comes to effective AI regulation. That’s where the EU AI Act comes in.
AI is the next frontier that’s really going to alter the way people interact with technology. AI systems hold strong potential to bring economic growth, innovation, global competitiveness and societal benefits. However, in certain cases, they also pose risks.
The EU AI Act is an opportunity for comprehensive AI regulation to address those risks and prevent undesirable outcomes – recognising the potential for AI systems to affect the fundamental rights of citizens, which are the bedrock of EU law.
Q: Why do we need to regulate the use of AI?
A: Unlike traditional deterministic software, AI is probabilistic – meaning there will be variation in how it solves problems and provides answers, even if the input is the same. What we’re seeing is that AI capabilities hold a lot of potential, but the technology can make mistakes and have a negative impact.
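To make that distinction concrete, here’s a minimal Python sketch – purely illustrative, with a toy stand-in for a real generative model – contrasting deterministic software with sampling-based generation:

```python
import random

def apply_vat(amount: float) -> float:
    # Traditional deterministic software: the same input
    # always produces exactly the same output.
    return round(amount * 1.20, 2)

def generate_answer(prompt: str) -> str:
    # Toy stand-in for a generative model: sampling means
    # identical prompts can yield different answers.
    return random.choice([
        f"{prompt}: answer A",
        f"{prompt}: answer B",
        f"{prompt}: answer C",
    ])

print(apply_vat(100.0), apply_vat(100.0))        # 120.0 120.0 – always identical
print(generate_answer("Summarise this clause"))  # varies between runs
print(generate_answer("Summarise this clause"))
```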
When something is this transformative and simultaneously prone to errors, there’s a trust gap.
There are lots of ways to address the trust gap from the technology side, but it’s also important to get the regulatory framework right. Firstly, so that businesses have the right parameters and incentives to innovate safely. And secondly, so that consumers can be confident the industry has put the proper guardrails in place.
Regulation that’s designed to build trust – like the EU AI Act – is vitally important for managing innovative technologies so that everyone can benefit and potential harm can be minimised.
Q: When will the AI Act be fully applicable?
A: Portions of the Act are already in effect. Earlier this year, the prohibited-practices risk tier came into force, and then August saw the GPAI (general-purpose AI) provisions go live.
The EU AI Act takes a risk-based approach and captures broad categories, covering both model developers and businesses building on top of foundation models – AI system providers, for example.
August 2026 will see the high-risk tier come into effect. The European Commission (EC) is expected to release a round of implementing rules and guidelines to clarify many of the finer points and questions people have about the high-risk requirements.
The phased approach is something unique to the EU – elsewhere in the world we often see legislation come into immediate effect, with all obligations live simultaneously.
The EC has recognised that the market needs time to mature in order for organisations to comply effectively. It’s chosen to address the most serious risks first, and then work with industry representatives, the public and key stakeholders to provide guidance around the other risk categories.
When you’re dealing with a technology as complex and fast-evolving as AI, this phased approach allows time for dialogue between policymakers and model developers – and a lot of perspectives need to be factored into the guidance the EC produces.
Q: How will the AI Act be enforced?
A: The EU AI Act echoes GDPR to some extent, in that member states have local enforcement authorities for certain risk tiers. One exception is the GPAI rules, which will be enforced through a newly created central authority, the European Commission’s AI Office. If there’s an area that’s really going to affect your organisation, I’d recommend keeping an eye on developments from data protection authorities across the EU to get a sense of how they’re thinking about different issues – and getting ahead of any emerging consensus so you’re on the front foot with the latest updates.
Q: How can the new rules support innovation?
A: The industry is looking for clarity on how to invest and allocate resources across both long- and short-term product roadmaps. For us at Salesforce, the EU is a large market, and the AI Act sends a clear signal on how the region intends to regulate the technology for the foreseeable future.
Being able to absorb that guidance and start factoring it into our product pipeline gives us the confidence that the solutions we’re planning will be fit for the market, today and tomorrow.
But the same goes for our customers. We very much strive towards a shared-responsibility model, where customers have a lot of control over how they use our AI solutions. For example, our agent builder process – the way customers configure what their AI agents can and cannot do – varies greatly from customer to customer. While there’s certainly a lot of responsibility on the service-provider side – which requires a very robust compliance programme at the platform level – customers have a major role to play in their own implementation. Overall, it’s a mindset adjustment for both customers and service providers to tackle AI regulation together.
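As a purely hypothetical illustration of that shared-responsibility split – this is not Salesforce’s actual Agent Builder API, and every name here is invented – a customer-owned guardrail configuration might look something like this:

```python
# Hypothetical customer-side configuration: the platform enforces baseline
# guardrails, while the customer decides what its own agents may do.
agent_config = {
    "name": "order-support-agent",
    "allowed_actions": ["look_up_order", "issue_refund_under_50"],
    "blocked_actions": ["delete_customer_record", "change_pricing"],
    "escalate_to_human": ["refund_over_50", "legal_question"],
}

def is_action_allowed(config: dict, action: str) -> bool:
    # Allow-list check: only actions the customer explicitly permitted run.
    return action in config["allowed_actions"]

print(is_action_allowed(agent_config, "issue_refund_under_50"))   # True
print(is_action_allowed(agent_config, "delete_customer_record"))  # False
```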
Another way the new rules support innovation is the open dialogue with regulators. As the Act becomes established, there will be nuances and questions over how it applies to different businesses. This ongoing conversation means we can be sure the regulation is fit to drive innovation forward safely, not stifle it.
Q: Is the EU AI Act likely to stick around for the long haul?
A: What’s interesting about the EU AI Act is that, even though it’s already in effect, the European Commission has built in mechanisms for the regulation to respond to new developments – allowing it to adapt to future technological change.
For instance, the high-risk tier captures a number of use cases listed in one of the annexes to the Act (e.g. candidate evaluation in recruitment, management of critical infrastructure). While that list was intended to be comprehensive at initial publication, there’s a process in place for the EC to update the use cases in that annex as technology evolves – so the list is likely to change over time. It’s this flexibility that will enable the EU AI Act to stick around for the foreseeable future.
Find out more about the EU AI Act and why staying compliant matters.