The UK’s AI Regulation: From Guidance to Strategies

November 25, 2022

Regulation of artificial intelligence (AI) is emerging around the globe, particularly in the US and EU, where laws have been proposed and adopted to manage the risks that AI can pose. The UK government, however, has yet to propose any AI-specific regulation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI within the UK. This blog post summarises these key publications and their main contributions to the AI regulatory ecosystem.

UK’s AI Regulation Timeline

Draft guidance on the UK's AI auditing framework

The Information Commissioner’s Office (ICO) was among the first to issue guidance on the use of AI with its publication of draft guidance on the AI auditing framework on 14th February 2020. The draft, which was open for consultation, aimed to provide a methodology for auditing AI applications, intended for both those responsible for compliance (data protection officers, risk managers, ICO auditors etc.) and technology specialists. The auditing framework frames risks in terms of their impact on the rights and freedoms of individuals and offers strategies to mitigate them.

Non-technical responsibilities centre on structuring governance measures: ensuring that there are appropriate documentation and record-keeping practices, and that there is accountability for the system and its outputs. There may also be trade-offs between desirable outcomes, such as privacy vs statistical accuracy, statistical accuracy vs discrimination, explainability vs statistical accuracy, and privacy vs explainability.

On the other hand, technical responsibilities concern the ‘controller’ who makes decisions about the collection and use of personal data, the target output of the model, feature selection, the type of algorithm to use, model parameters, evaluation metrics, and how models will be tested and updated. Risk mitigation strategies can concern key decisions, for example, using data minimisation and privacy-preserving techniques, ensuring representative and high-quality data is used, and post-processing modifications.
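As a toy illustration of one such mitigation, data minimisation, the sketch below drops any fields that are not on an approved feature list before a record reaches a model. The feature names and record structure are invented for illustration and are not drawn from the ICO guidance:

```python
# Illustrative data-minimisation step: retain only the features a model
# actually needs. Feature names here are hypothetical examples.

REQUIRED_FEATURES = {"age_band", "tenure_months", "payment_history"}

def minimise(record: dict) -> dict:
    """Drop any personal-data fields not on the approved feature list."""
    return {k: v for k, v in record.items() if k in REQUIRED_FEATURES}

raw = {
    "age_band": "30-39",
    "tenure_months": 24,
    "payment_history": "good",
    "full_name": "Jane Example",    # not needed for the model
    "home_address": "1 Example St", # not needed for the model
}

print(minimise(raw))
```

In practice the approved list would come from a documented lawful-basis and necessity assessment, not a hard-coded constant.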

Explainability of decisions made with AI

Following their Auditing Framework, the ICO published guidance on 20th May 2020 on Explaining decisions made with AI in collaboration with the Alan Turing Institute. The guidance provides enterprises with a framework for selecting the appropriate explainability strategy based on the specific use case and sector, choosing a suitably explainable model, and using tools to extract explanations from less interpretable models.

The guidance also contains checklists to support organisations on their journey towards making AI more explainable, which are divided into six tasks:

  • Task 1: Select priority explanation types by considering the domain, use case and impact on the individuals
  • Task 2: Collect and pre-process your data in an explanation-aware manner
  • Task 3: Build your system to ensure you can extract relevant information for a range of explanation types
  • Task 4: Translate the rationale of your system’s results into usable and easily understandable reasons
  • Task 5: Prepare implementers to deploy your AI system
  • Task 6: Consider how to build and present your explanation

Guidance on AI and data protection

Following up their efforts with a third publication, the ICO published guidance on AI and data protection on 30th July 2020, aimed at both those focused on compliance and technology specialists. The significant contribution of this publication was an AI Toolkit, which offers a way to assess the effects an AI system might have on the fundamental rights and freedoms of individuals, from the initial design of a system through to deployment and monitoring.

The privacy and data protection risks posed by an AI system are ranked from low risk to high risk before and after any action is taken, termed inherent and residual risk, respectively. The tool also offers suggestions for controls that can be implemented to reduce risk and practical steps that can be taken, along with a bank of ICO guidance.
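The inherent-versus-residual distinction can be sketched in a few lines: a risk is ranked before controls are applied, and each effective control steps it down. The levels and the one-step-per-control rule below are invented for illustration and are not taken from the ICO tool:

```python
# Toy model of inherent vs residual risk: rank a risk before and after
# mitigating controls. Levels and control effects are illustrative only.

LEVELS = ["low", "medium", "high"]

def residual_risk(inherent: str, effective_controls: int) -> str:
    """Each effective control steps the risk down one level (floored at 'low')."""
    idx = LEVELS.index(inherent)
    return LEVELS[max(0, idx - effective_controls)]

print(residual_risk("high", 1))  # medium
print(residual_risk("high", 3))  # low
```

A real assessment would weight controls by how well they address the specific risk, rather than counting them uniformly.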

National Data Strategy

Following these early efforts, the Department for Digital, Culture, Media and Sport (DCMS) published a National Data Strategy on 9th December 2020 to outline best practices for the use of personal data, both within government and beyond, based on four core pillars:

  • Data foundations – improved data quality ensures more effective use and better insights and outcomes.
  • Data skills – data skills should be strengthened through the education system, and individuals should be offered opportunities to develop these skills throughout their lives.
  • Data availability – coordination of, access to, and sharing of high-quality data, together with appropriate protections for data flows, are vital for ensuring that data has the most effective impact.
  • Responsible data – data should be used responsibly, lawfully, securely, fairly, ethically, and in a sustainable and accountable way that supports innovation and research.

National AI Strategy

Targeting AI specifically, the Office for Artificial Intelligence published a National AI Strategy jointly with the DCMS and the Department for Business, Energy & Industrial Strategy on 22nd September 2021. The strategy outlines how the UK government aims to invest and plan for the long-term requirements of the national AI ecosystem, support the adoption and innovation of AI in the UK across sectors and regions, and ensure that there is appropriate national and international governance to support innovation and investment and adequately protect the public from harm.

The strategy can be read as a vision for innovation and opportunity, underpinned by a trust framework. Key takeaways are:

  • Innovation First - a clear signal is that innovation is at the forefront of the UK’s data priorities.
  • Alternative Ecosystem of Trust - whether the UK’s regulatory-market norms become the preferred ecosystem depends on the regulatory system and delivery frameworks that are put in place.
  • Defence, Security and Risk - security and risk are discussed regarding the utilisation of AI and governance practices.
  • Revision of Data Protection - the signal is that the UK is indeed seeking to position itself as less stringent regarding data protection and necessary documentation.
  • EU Disalignment (Atlanticism?) - questions are raised regarding a step back in terms of data protection rights.

Algorithmic Transparency Standard

Although the National Data Strategy was not explicitly targeted towards AI, the Central Digital and Data Office expanded this strategy by releasing the Algorithmic Transparency Standard on 29th November 2021. The standard aims to support public sector organisations in being more transparent about the algorithmic tools they are using, how they support decisions, and why they are using them. It provides them with a template for mapping this.

The Standard signals that the UK government is pushing forward with the AI standards agenda and ensuring that those standards benefit from empirical, practitioner-led experience, enabling coherent, widespread adoption. Its two-tier approach encourages transparency across distinct audiences: tier 1 covers non-technical information for a general audience, while tier 2 covers detailed technical information.
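A record following the two-tier split might look something like the sketch below. The field names and example values are hypothetical, chosen only to show the non-technical/technical divide, and are not the Standard’s actual template fields:

```python
# Hypothetical shape of a two-tier transparency record, loosely following
# the Standard's split between non-technical (tier 1) and technical
# (tier 2) information. All field names and values are illustrative.

record = {
    "tier_1": {  # non-technical, for a general audience
        "tool_name": "Example triage tool",
        "purpose": "Prioritise incoming casework",
        "how_it_supports_decisions": "Suggests an ordering; staff decide",
    },
    "tier_2": {  # detailed technical information
        "model_type": "gradient-boosted trees",
        "training_data": "historical casework records, 2018-2021",
        "human_oversight": "all suggestions reviewed by a caseworker",
    },
}

# Tier 1 can be published on its own for non-technical readers.
print(sorted(record["tier_1"]))
```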

The roadmap to an Effective AI assurance ecosystem

Joining the UK’s publication efforts, the Centre for Data Ethics and Innovation (CDEI) released a Roadmap to an effective AI assurance ecosystem on 8th December 2021, which formed part of a ten-year plan set forth by the National AI Strategy. The roadmap outlines the CDEI’s vision of what a mature AI assurance ecosystem would look like, including the introduction of new legislation, AI-related education and accreditation, and the creation of a professional service for the management and implementation of trustworthy AI systems to benefit the UK economy. Echoing the sentiment of the ICO’s Auditing Framework, one of the components of the ecosystem is AI auditing, including examining risk, bias, compliance, and performance with a view to certification.

The roadmap also outlines six key activities for the maturation of the ecosystem:

  • Generating demand for assurance
  • Supporting the demand for assurance
  • Development of standards
  • Professionalisation, specialised skills, and certification
  • Regulatory oversight
  • Independent research that highlights gaps in regulatory regimes

Establishing a Pro-Innovation approach to regulating AI

Following the roadmap, the DCMS, Department for Business, Energy & Industrial Strategy, and Office for Artificial Intelligence jointly released a policy paper on 18th July 2022 titled Establishing a pro-innovation approach to regulating AI. Under this framework, UK AI regulation will be context-specific and based on the use and impact of the technology, with responsibility for developing appropriate enforcement strategies delegated to the relevant regulator(s). The government will broadly define AI to provide regulators with some direction – adopting fundamental principles relating to transparency, fairness, safety, security and privacy, accountability, and mechanisms for redress or contestability. However, the government will ultimately allow regulators to define AI according to their domains or sectors.

Four principles underpin the framework:

  • Context-specific – AI should be regulated based on its use and impact, with responsibility for designing and implementing proportionate regulatory responses delegated to regulators.
  • Pro-innovation and risk-based – regulators will focus on high-risk concerns over hypothetical or low risks to encourage innovation and limit barriers.
  • Coherence – a set of cross-sectoral principles tailored to the characteristics of AI will be established, and regulators will interpret, prioritise and implement them within their sectors and domains.
  • Proportionate and adaptable – cross-sectoral principles will initially be set out on a non-statutory basis to allow for a dynamic approach to regulation.

AI action plan

Following on from the National AI Strategy, the Department for Business, Energy and Industrial Strategy, DCMS, and Office for Artificial Intelligence jointly published an AI Action Plan on 18th July 2022. Based on the three pillars of investing in the long-term needs of the AI ecosystem, ensuring AI benefits all sectors and regions, and governing AI effectively, the plan outlines the progress government has made towards fulfilling the goals of the AI strategy throughout 2022. These actions include making funding for postgraduate AI studies available, publishing reports and research, and the government’s participation in global AI forums.

What’s next for the UK?

The government has made it clear that these initiatives are only the beginning, and over the next ten years, it will aim to ‘cement the UK’s role as an AI superpower’. Doing this will require cooperation between government departments to move the regulatory agenda forward, as well as consultation with technical experts, investment in infrastructure and education, and a dynamic and adaptable approach.

To learn more about how Holistic AI can help you get ahead of this and adopt AI with greater confidence, get in touch with us.

Written by Airlie Hilliard, Senior Researcher at Holistic AI.
