Regulation of artificial intelligence (AI) is emerging around the globe, particularly in the US and EU, where laws have been proposed and adopted to manage the risks that AI can pose. However, the UK government has yet to propose any AI-specific regulation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI within the UK. This blog post summarises these key publications and their main contributions to the AI regulatory ecosystem.
The Information Commissioner’s Office (ICO) was among the first to issue guidance on the use of AI with its publication of draft guidance on the AI auditing framework on 14th February 2020. The draft, which was open for consultation, aimed to provide an auditing methodology for AI applications, both for those responsible for compliance (data protection officers, risk managers, ICO auditors, etc.) and for technology specialists. The auditing framework frames risks in terms of their impact on individuals’ rights and freedoms and offers strategies to mitigate them.
Non-technical responsibilities involve structuring governance measures: ensuring that there are appropriate documentation and record-keeping practices, and that there is accountability for the system and its outputs. There may also be trade-offs between desirable outcomes, such as privacy vs statistical accuracy, statistical accuracy vs discrimination, explainability vs statistical accuracy, and privacy vs explainability.
Technical responsibilities, on the other hand, concern the ‘controller’, who makes decisions about the collection and use of personal data, the target output of the model, feature selection, the type of algorithm to use, model parameters, evaluation metrics, and how models will be tested and updated. Risk mitigation strategies can be applied to these key decisions, for example using data minimisation and privacy-preserving techniques, ensuring that representative, high-quality data is used, and applying post-processing modifications.
Following its Auditing Framework, the ICO published guidance on Explaining decisions made with AI on 20th May 2020, in collaboration with the Alan Turing Institute. The guidance provides enterprises with a framework for selecting the appropriate explainability strategy based on the specific use case and sector, choosing an appropriately explainable model, and using tools to extract explanations from less interpretable models.
The guidance also contains checklists to support organisations on their journey towards making AI more explainable, which are divided into five tasks:
Following up these efforts with a third publication, the ICO published guidance on AI and data protection on 30th July 2020, aimed both at those focused on compliance and at technology specialists. The significant contribution of this publication was an AI Toolkit, which offers a way to assess the effects AI might have on the fundamental rights and freedoms of individuals, from the initial design of a system through to deployment and monitoring.
The privacy and data protection risks posed by an AI system are ranked from low risk to high risk before and after any action is taken, termed inherent and residual risk, respectively. The tool also offers suggestions for controls that can be implemented to reduce risk and practical steps that can be taken, along with a bank of ICO guidance.
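The inherent/residual distinction described above is, at its core, a risk register: each risk is scored before and after controls are applied. A minimal sketch of such a register is shown below; the class names, risk descriptions, and controls are hypothetical illustrations, not taken from the ICO Toolkit itself.

```python
from dataclasses import dataclass
from enum import IntEnum

class RiskLevel(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskAssessment:
    """One entry in a privacy/data-protection risk register."""
    risk: str
    inherent: RiskLevel   # risk level before any action is taken
    controls: list        # mitigations put in place
    residual: RiskLevel   # risk level remaining after controls

# Hypothetical example entry
register = [
    RiskAssessment(
        risk="Training data contains more personal data than necessary",
        inherent=RiskLevel.HIGH,
        controls=["data minimisation", "pseudonymisation"],
        residual=RiskLevel.LOW,
    ),
]

for entry in register:
    print(f"{entry.risk}: inherent={entry.inherent.name}, "
          f"residual={entry.residual.name}")
```

In practice, a tool like the ICO’s would also suggest candidate controls and link each entry to relevant guidance; the sketch only captures the before/after scoring structure.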
Following these early efforts, the Department of Culture, Media, and Sport (DCMS) published a National Data Strategy on 9th December 2020 to outline best practices for the use of personal data, both within the government and beyond, based on four core pillars:
Targeting AI specifically, the Office for Artificial Intelligence published a National AI Strategy jointly with the DCMS and the Department for Business, Energy & Industrial Strategy on 22nd September 2021. The strategy outlines how the UK government aims to invest in and plan for the long-term requirements of the national AI ecosystem, support the adoption of and innovation in AI across UK sectors and regions, and ensure that there is appropriate national and international governance to support innovation and investment while adequately protecting the public from harm.
The strategy can be read as a vision for innovation and opportunity, underpinned by a framework for trust. Key takeaways are:
Although the National Data Strategy was not explicitly targeted towards AI, the Central Digital and Data Office expanded this strategy by releasing the Algorithmic Transparency Standard on 29th November 2021. The standard aims to support public sector organisations in being more transparent about the algorithmic tools they are using, how they support decisions, and why they are using them. It provides them with a template for mapping this.
The Standard signals that the UK government is pushing forward with the AI standards agenda and ensuring that those standards benefit from empirical, practitioner-led experience, enabling coherent, widespread adoption. The two-tier approach of the Algorithmic Transparency Standard encourages transparency across distinct audiences: tier 1 information is non-technical, whereas tier 2 information provides detailed technical information.
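The two-tier split can be pictured as a simple record with a non-technical and a technical part. The sketch below uses hypothetical field names chosen for illustration; they are not the actual fields of the Standard’s template.

```python
from dataclasses import dataclass

@dataclass
class Tier1Record:
    """Non-technical summary for a general audience."""
    tool_name: str
    what_it_does: str
    why_it_is_used: str

@dataclass
class Tier2Record:
    """Detailed technical information for specialist audiences."""
    model_type: str
    training_data: str
    human_oversight: str

@dataclass
class TransparencyReport:
    tier1: Tier1Record
    tier2: Tier2Record

# Hypothetical example of a completed report
report = TransparencyReport(
    tier1=Tier1Record(
        tool_name="Casework triage assistant",
        what_it_does="Prioritises incoming cases in the work queue",
        why_it_is_used="Reduces average waiting times",
    ),
    tier2=Tier2Record(
        model_type="Gradient-boosted decision trees",
        training_data="Historical case records",
        human_oversight="Caseworkers review every recommendation",
    ),
)
```

The design point is that both tiers describe the same tool, but a member of the public only needs the tier 1 record, while an auditor or researcher can drill into tier 2.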
Joining the UK’s publication efforts, the Centre for Data Ethics and Innovation (CDEI) released a Roadmap to an effective AI assurance ecosystem on 8th December 2021, which formed part of a ten-year plan set forth by the National AI Strategy. The roadmap outlines the CDEI’s vision of what a mature AI assurance ecosystem would look like, including the introduction of new legislation, AI-related education and accreditation, and the creation of a professional service for the management and implementation of trustworthy AI systems to benefit the UK economy. Echoing the sentiment of the ICO’s Auditing Framework, one of the components of the ecosystem is AI auditing, including examining risk, bias, compliance, and performance with a view to certification.
The roadmap also outlines six key activities for the maturation of the ecosystem:
Following the roadmap, the DCMS, Department for Business, Energy & Industrial Strategy, and Office for Artificial Intelligence jointly released a policy paper on 18th July 2022 titled Establishing a pro-innovation approach to regulating AI. Under this framework, AI regulation in the UK will be context-specific and based on the use and impact of the technology, with responsibility for developing enforcement strategies delegated to the relevant regulator(s). The government will broadly define AI to give regulators some direction – adopting fundamental principles relating to transparency, fairness, safety, security and privacy, accountability, and mechanisms for redress or contestability – but will ultimately allow regulators to define AI according to the relevant domains or sectors.
Four principles underpin the framework:
Following on from the National AI Strategy, the Department for Business, Energy and Industrial Strategy, DCMS, and Office for Artificial Intelligence jointly published an AI Action Plan on 18th July 2022. Based on the three pillars of investing in the long-term needs of the AI ecosystem, ensuring AI benefits all sectors and regions, and governing AI effectively, the plan outlines the progress government has made towards fulfilling the goals of the AI strategy throughout 2022. These actions include making funding for postgraduate AI studies available, publishing reports and research, and the government’s participation in global AI forums.
The government has made it clear that these initiatives are only the beginning, and over the next ten years, it will aim to ‘cement the UK’s role as an AI superpower’. Doing this will require cooperation between government departments to move the regulatory agenda forward, as well as consultation with technical experts, investment in infrastructure and education, and a dynamic and adaptable approach.
To learn more about how Holistic AI can help you get ahead of this and adopt AI with greater confidence, get in touch with us at email@example.com.
Written by Airlie Hilliard, Senior Researcher at Holistic AI.