AI – the UK’s current and future regulatory regime

Published on 10 October 2024

The difference between the EU’s and the UK’s approaches to regulating AI has been stark. The EU has implemented uniform, AI-specific legislation. The UK’s approach has been more diverse, with government, institutions and regulators all grappling with the emerging issues from their own perspectives.

Whilst this should allow the UK to take a nimble approach to such a fast-moving topic, it also makes the applicable rules and guidance harder to navigate.

The EU’s approach

The EU’s flagship AI legislation is the AI Act. It entered into force on 1 August 2024 and its requirements take effect in stages over the next three years. It has extra-territorial effect in that it applies to providers and deployers of AI systems that are based in the EU, or whose AI systems are placed on the market or used in the EU.

The AI Act divides AI systems into four risk categories:

  • prohibited AI practices;

  • high-risk AI systems;

  • limited risk AI systems; and

  • minimal/no risk AI systems.

It also sets out specific rules for general purpose AI models.

The majority of AI systems are likely to fall into the minimal/no risk category and, as such, will attract no obligations under the AI Act save for the general obligation around AI literacy.

For other systems, the applicable obligations will depend on the system’s risk category and the role of the operator supplying or using it.

Sir Patrick Vallance’s report

Impetus for changing the UK’s regulatory environment as it applies to AI came from Sir Patrick Vallance’s “Pro-innovation Regulation of Technologies Review: Digital Technologies”, published in March 2023.

This report focuses on technologies and applications that require a distinct regulatory approach. In respect of AI, its key recommendations were:

  • that the government should work with regulators to develop a multi-regulator sandbox for AI;

  • the need for a clear policy position on the relationship between intellectual property law and generative AI. In its response to the report, the government tasked the IPO with developing a code of practice to address the issue; and

  • that the ICO should update its guidance to clarify when an organisation is a controller, joint controller or processor for processing activities relating to AI as a service. We have covered this separately in our article: “ICO consults on controllership across the GenAI supply chain”.

The AI regulation whitepaper

Since February 2024, the Department for Science, Innovation and Technology (DSIT) has had the role of overseeing the implementation of the AI-related aspects of the UK’s industrial strategy.

It has incorporated the former Office for Artificial Intelligence and experts from the Government Digital Service, the Central Digital and Data Office and the Incubator for AI, with the aim of making it the “digital centre of government”.

Its key publication to date has been the “A pro-innovation approach to AI regulation” whitepaper, published for consultation in March 2023. The whitepaper’s intent was to put in place an AI regulatory framework (which would not initially be on a statutory footing) underpinned by five principles:

  • safety, security and robustness;

  • appropriate transparency and explainability;

  • fairness;

  • accountability and governance; and

  • contestability and redress.

However, the government reserved the right to introduce a statutory duty on regulators to have “due regard” to the application of the principles following a review of the initial period of their non-statutory implementation.

Following the whitepaper, a number of regulators have delivered work in line with the government’s approach:

  • the CMA published a review of foundation models to understand the opportunities and risks for competition and consumer protection;

  • the ICO updated its guidance on how data protection law applies to AI systems;

  • the DRCF (the Digital Regulation Cooperation Forum, a multi-regulator forum made up of the ICO, Ofcom, the CMA and the FCA) is delivering a pilot AI and Digital Hub, and the FCA is running its own Digital Sandbox and Regulatory Sandbox; and

  • regulators such as Ofgem and the CAA are working on their AI strategies, which will build on similar work undertaken by the MHRA on the requirements for software and AI used in medical devices.

By the time of the government’s response to the whitepaper in February 2024, it was clear that no effective voluntary code on intellectual property could be agreed. Accordingly, no specific UK rules currently apply to this issue.

The new government’s approach

Quite how this approach will change under the new Labour government is still unclear. 

The King’s Speech indicated that the government would “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

This is in line with Labour’s manifesto pledge to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes”. However, the King’s Speech did not name any specific bill that would be introduced.

It would, therefore, seem likely that specific legislation will be introduced to deal with the highest-risk AI models. However:

  • its scope will likely be far narrower than that of the EU’s AI Act, concentrating on a handful of key developers rather than deployers; and

  • it is unlikely to be introduced any time soon.

That type of specific legislation is unlikely to replace the work of the UK’s regulators.

Accordingly, whilst the EU progresses its heavyweight, AI-specific regulatory regime, the UK is likely, for the short to medium term, to continue its decentralised approach to the topic.

Beyond this, there is little clarity as to the scope and timing of any legislation. We will continue to review and report on developments as they occur.