On 23 November, Lord Chris Holmes introduced the Artificial Intelligence (Regulation) Bill in the UK House of Lords. Currently awaiting its second reading in the House, the Bill can be considered a step towards implementing the White Paper on AI, as well as realising the commitments made by the UK in the Bletchley Declaration, published as part of the recent AI Safety Summit.
In its current form, the Bill provides a general framework rather than governing each specific aspect of AI regulation in detail. It requires the Secretary of State to establish a dedicated AI Authority, envisaged to be responsible for coordinating and enforcing the regulation.
The Authority has a set of broadly drafted functions including, but not limited to:
The Bill defines “artificial intelligence” or “AI” as “technology enabling the programming or training of a device or software to— (a) perceive environments through the use of data; (b) interpret data using automated processing designed to approximate cognitive abilities; and (c) make recommendations, predictions or decisions; with a view to achieving a specific objective.”
This definition shares common elements with other AI definitions in that it frames the purpose of such systems as “making recommendations or predictions” with “a view to achieving a specific objective”. However, it also uses different terminology, such as “perceiving environments” and “approximating cognitive abilities”, which may lead to different interpretations.
The Authority’s broad functions are subject to principles such as safety, explainability, and fairness under Section 2 of the Bill. Most of these principles overlap with those provided for high-risk AI systems under the EU AI Act, or for all AI systems in the European Parliament’s position. In addition, the Bill provides two further sets of principles: one for businesses developing, deploying, or using AI, and one for all AI applications. All the principles provided under the Bill are as follows:
The Bill provides that the Authority must collaborate with relevant sectoral regulators to create sandboxes to bolster AI innovation.
According to the Bill, a regulatory sandbox is an arrangement which:
The Bill proposes that the Secretary of State, after consulting the Authority, must by regulations require any business which develops, deploys, or uses AI to designate an AI Officer responsible for ensuring the safe, ethical, unbiased, and non-discriminatory use of AI, as well as ensuring that the data it uses is unbiased.
The designation of responsible AI officers would certainly make AI businesses easier to reach and target for regulatory purposes. However, the current proposal leaves open the qualifications of these officers, the procedure for their appointment, and the sanctions applicable to them in case of non-compliance.
The Bill also proposes that the Secretary of State, after consulting the Authority, must by regulations require any person involved in training AI to supply the Authority with a record of all third-party data and intellectual property (IP) used in the training, and to ensure that such data and IP are used with informed consent.
This provision likely seeks to address concerns about the unlawful processing of digital content and copyrighted material to train AI models and applications, and follows a similar approach to the European Parliament’s negotiating position on the EU AI Act. Given current data collection practices and the myriad data sources and automated data processing techniques involved, enforcing this provision could prove challenging should the Bill pass.
Given the lack of any signal of support either during the AI Safety Summit or during the King’s Speech on 7 November, despite the latter’s explicit reference to the safe development of AI, the likelihood of the Bill being adopted may be considered low.
On the other hand, the Bill may advance the discussion on how to approach AI regulation, as its approach seemingly differs from the risk-based approach of the EU AI Act. It must not be overlooked, however, that the Bill grants the Authority and the Secretary of State sufficient powers to adopt a risk-based approach in practice, which may be a more suitable mechanism for addressing the dynamic nature of the field and the problems within it.
Whether or not the Bill is adopted, AI regulations are spreading around the world, and it is crucial to prioritise the development of AI systems that promote ethical principles, such as fairness and harm mitigation, right from the outset.
At Holistic AI, we have pioneered the field of AI ethics and have carried out over 1000 risk mitigations. Using our interdisciplinary approach that combines expertise from computer science, law, policy, ethics, and social science, we take a comprehensive approach to AI governance, risk, and compliance, ensuring that we understand both the technology and the context in which it is used.
To find out more about how Holistic AI can help you, schedule a call with one of our experts.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.