The UK’s AI Regulation Bill: A New Direction in AI Governance?

November 28, 2023


On the 23rd of November, Lord Chris Holmes introduced the Artificial Intelligence (Regulation) Bill in the UK House of Lords. Currently awaiting its second reading in the House, the Bill can be considered a step towards implementing the White Paper on AI, as well as realising the commitments the UK made in the Bletchley Declaration published as part of the recent AI Safety Summit.

The AI Authority

In its current form, the Bill provides a general framework rather than governing each specific issue related to AI regulation in detail. The Bill mandates the Secretary of State to establish a dedicated AI Authority, which is envisaged to be responsible for the coordination and enforcement of the regulation.

The Authority has a set of quite broadly drafted functions including, but not limited to:

  • Ensuring that other regulators take account of AI,
  • Undertaking a regulatory gap analysis for AI-related issues,
  • Monitoring the AI-related risks in the economy,
  • Promoting interoperability with international regulatory frameworks.

Definition of Artificial Intelligence

The Bill defines “artificial intelligence” or “AI” as “technology enabling the programming or training of a device or software to— (a) perceive environments through the use of data; (b) interpret data using automated processing designed to approximate cognitive abilities; and (c) make recommendations, predictions or decisions; with a view to achieving a specific objective.”

This definition shares common elements with other AI definitions in that it frames the purpose of these systems as "making recommendations or predictions" with "a view to achieving a specific objective". However, it also uses different terminology, such as "perceiving environments" and "approximating cognitive abilities", which may lead to different interpretations.

Regulatory Principles

The Authority’s broad functions are subject to principles such as safety, explainability, and fairness under Section 2 of the Bill. Most of these principles overlap with those provided for high-risk AI systems under the EU AI Act, or for all AI systems in the European Parliament’s negotiating position. In addition, the Bill provides two further sets of principles: one for businesses developing, deploying, or using AI, and one for AI and its applications. All the principles provided under the Bill are as follows:

Regulatory principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Principles for businesses developing, deploying, or using AI:

  • Transparency
  • Thorough and transparent testing
  • Compliance with applicable laws

Requirements for AI and its applications:

  • Compliance with equalities legislation
  • Being inclusive by design
  • Neither discriminating unlawfully among individuals nor perpetuating unlawful discrimination arising in input data
  • Meeting the needs of those from lower socio-economic groups
  • Generating data that are findable, accessible, interoperable, and reusable

Regulatory Sandboxes

The Bill provides that the Authority must collaborate with relevant sectoral regulators to create sandboxes to bolster AI innovation.

According to the Bill, a regulatory sandbox is an arrangement which:

  1. allows businesses to test innovative propositions in the market with real consumers;
  2. is open to authorised firms, unauthorised firms that require authorisation and technology firms partnering with, or providing services to, UK firms doing regulated activities;
  3. provides firms with support in identifying appropriate consumer protection safeguards;
  4. requires tests to have a clear objective and to be conducted on a small scale;
  5. requires firms which want to test products or services which are regulated activities to be authorised by or registered with the relevant regulator before starting the test.

AI Responsible Officers

The Bill proposes that the Secretary of State, after consulting the Authority and by regulations, must require any business which develops, deploys, or uses AI to designate an AI Responsible Officer, responsible for ensuring the safe, ethical, unbiased, and non-discriminatory use of AI, as well as ensuring that the data the business uses is unbiased.

The designation of AI responsible officers would certainly give regulators a clearer point of contact within AI businesses for enforcement purposes. However, the qualifications of these officers, their appointment procedure, and the sanctions applicable to them in case of non-compliance remain uncertain under the current proposal.

Transparency, IP Obligations, and Labelling

The Bill proposes that the Secretary of State, after consulting the Authority and by regulations, must provide that any person involved in training AI may be required to supply to the Authority a record of all third-party data and IP used in the training, and to ensure that such data and IP are used with informed consent.

This provision likely seeks to address concerns about the unlawful processing of digital content and copyrighted material to train AI models and applications, and follows a similar approach to the European Parliament’s negotiating position on the EU AI Act. Given the nature of current data collection practices, the myriad of different data sources, and the prevalence of automated data processing techniques, enforcing this provision could prove challenging should the Bill pass.

Will the Bill be adopted?

Given the absence of any signal of support during either the AI Safety Summit or the King’s Speech on the 7th of November (despite the latter’s explicit reference to the safe development of AI), the likelihood of the Bill being adopted may be considered low.

On the other hand, the Bill may enrich the discussion on how to approach AI regulation, since its approach seemingly differs from the risk-based approach of the EU AI Act. It should not be overlooked, however, that the Bill grants the Authority and the Secretary of State sufficient powers to adopt a risk-based approach in practice, which may be a more suitable mechanism for addressing the dynamic nature of the field and the problems therein.

AI Regulation is Gaining Momentum Worldwide

Whether or not the Bill is adopted, AI regulations are spreading around the world, and it is crucial to prioritise the development of AI systems that promote ethical principles, such as fairness and harm mitigation, right from the outset.

At Holistic AI, we have pioneered the field of AI ethics and have carried out over 1000 risk mitigations. Using our interdisciplinary approach that combines expertise from computer science, law, policy, ethics, and social science, we take a comprehensive approach to AI governance, risk, and compliance, ensuring that we understand both the technology and the context in which it is used.

To find out more about how Holistic AI can help you, schedule a call with one of our experts.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
