The UK’s AI Regulation: From Guidance to Strategies

July 17, 2023
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI

Regulation of artificial intelligence (AI) is emerging around the globe, particularly in the US and EU, where laws have been proposed and adopted to manage the risks that AI can pose. However, the UK government is yet to propose any AI-specific regulation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI within the UK. This blog post summarises these key publications and their main contributions to the AI regulatory ecosystem.

UK’s AI Regulation Timeline

Draft guidance on the UK's AI auditing framework

The Information Commissioner’s Office (ICO) was among the first to issue recommendations on the use of AI with its publication of draft guidance on the AI auditing framework on 14 February 2020. The draft, which was open for consultation, aimed to provide a methodology for auditing AI applications, targeted at both those responsible for compliance (data protection officers, risk managers, ICO auditors, etc.) and technology specialists. The framework frames risks in terms of their impact on rights and freedoms and offers strategies to mitigate them.

On the non-technical side, the guidance recommends structuring governance measures by ensuring appropriate documentation and record-keeping practices and establishing accountability for the system and its outputs. It also notes that there may be trade-offs between desirable outcomes, such as privacy vs statistical accuracy, statistical accuracy vs discrimination, explainability vs statistical accuracy, and privacy vs explainability.

On the other hand, technical responsibilities concern the ‘controller’ who makes decisions about the collection and use of personal data, the target output of the model, feature selection, the type of algorithm to use, model parameters, evaluation metrics, and how models will be tested and updated. Risk mitigation strategies can concern key decisions, for example, using data minimisation and privacy-preserving techniques, ensuring representative and high-quality data is used, and post-processing modifications.

Explainability of decisions made with AI

Following their Auditing Framework, the ICO published guidance on 20th May 2020 on explaining decisions made with AI in collaboration with the Alan Turing Institute. The guidance provides enterprises with a framework for selecting the appropriate explainability strategy based on the specific use case and sector, choosing an appropriately explainable model, and how tools can be used to extract an explanation from less interpretable models.

The guidance also contains checklists to support organisations on their journey towards making AI more explainable, which are divided into six tasks:

  • Task 1: Select priority explanation types by considering the domain, use case and impact on the individuals
  • Task 2: Collect and pre-process your data in an explanation-aware manner
  • Task 3: Build your system to ensure you can extract relevant information for a range of explanation types
  • Task 4: Translate the rationale of your system’s results into usable and easily understandable reasons
  • Task 5: Prepare implementers to deploy your AI system
  • Task 6: Consider how to build and present your explanation

Guidance on AI and data protection

Following up its efforts with a third publication, the ICO published guidance on AI and data protection on 30 July 2020, aimed at both those focused on compliance and technology specialists. The significant contribution of this publication was an AI Toolkit, which offers a way to assess the effects AI might have on the fundamental rights and freedoms of individuals, from the initial design of a system through to deployment and monitoring.

The privacy and data protection risks posed by an AI system are ranked from low risk to high risk before and after any action is taken, termed inherent and residual risk, respectively. The tool also offers suggestions for controls that can be implemented to reduce risk and practical steps that can be taken, along with a bank of ICO guidance.

National Data Strategy

Following these early efforts, the Department for Digital, Culture, Media and Sport (DCMS) published a National Data Strategy on 9 December 2020 to outline best practices for the use of personal data, both within the government and beyond, based on four core pillars:

  • Data foundations – improved data quality ensures more effective use and better insights and outcomes.
  • Data skills – data skills should be strengthened through the education system, and individuals should be offered opportunities to develop these skills throughout their lives.
  • Data availability – coordination of, access to, and sharing of high-quality data, along with appropriate data flow protections, are vital for ensuring that data has the most effective impact.
  • Responsible data – data should be used responsibly, lawfully, securely, fairly, ethically, and in a sustainable and accountable way that supports innovation and research.

National AI Strategy

Targeting AI specifically, the Office for Artificial Intelligence published a National AI Strategy jointly with the DCMS and the Department for Business, Energy & Industrial Strategy on 22 September 2021. The strategy outlines how the UK government aims to invest and plan for the long-term requirements of the national AI ecosystem, support the adoption and innovation of AI in the UK across sectors and regions, and ensure that there is appropriate national and international governance to support innovation and investment and adequately protect the public from harm.

The strategy can be read as a vision for AI innovation and opportunity, underpinned by a framework for trust. Key takeaways are:

  • Innovation first - a clear signal is that innovation is at the forefront of the UK’s data priorities.
  • Alternative ecosystem of trust - whether the UK’s regulatory-market norms become the preferred ecosystem depends on the regulatory system and delivery frameworks put in place.
  • Defence, security and risk - security and risk are discussed regarding the utilisation of AI and governance practices.
  • Revision of data protection - the signal is that the UK is indeed seeking to position itself as less stringent regarding data protection and necessary documentation.
  • EU Disalignment—Atlanticism? - questions are raised regarding a step back in terms of data protection rights.

Algorithmic Transparency Standard

Although the National Data Strategy was not explicitly targeted towards AI, the Central Digital and Data Office expanded this strategy by releasing the Algorithmic Transparency Standard on 29th November 2021. The standard aims to support public sector organisations in being more transparent about the algorithmic tools they are using, how they support decisions, and why they are using them. It provides them with a template for mapping this.

The Standard signals that the UK government is pushing forward with the AI standards agenda and ensuring that those standards benefit from empirical, practitioner-led experience, enabling coherent, widespread adoption. The Standard’s two-tier approach encourages transparency across distinct audiences: tier 1 information is non-technical, while tier 2 contains detailed technical information.

The roadmap to an effective AI assurance ecosystem

Joining the UK’s publication efforts, the Centre for Data Ethics and Innovation (CDEI) released a roadmap to an effective AI assurance ecosystem on 8 December 2021, which formed part of a ten-year plan set forth by the National AI Strategy. The roadmap outlines the CDEI’s vision of what a mature AI assurance ecosystem would look like, including the introduction of new legislation, AI-related education and accreditation, and the creation of a professional service for the management and implementation of trustworthy AI systems to benefit the UK economy. Echoing the sentiment of the ICO’s Auditing Framework, one of the components of the ecosystem is AI auditing, including examining risk, bias, compliance, and performance with a view to certification.

The roadmap also outlines six key activities for the maturation of the ecosystem:

  • Generating demand for assurance
  • Supporting the demand for assurance
  • Development of standards
  • Professionalisation, specialised skills, and certification
  • Regulatory oversight
  • Independent research that highlights gaps in regulatory regimes

Policy paper on establishing a pro-innovation approach to regulating AI

Following the roadmap, the DCMS, Department for Business, Energy & Industrial Strategy, and Office for Artificial Intelligence jointly released a policy paper on 18 July 2022 establishing a pro-innovation approach to regulating AI. Under this framework, in the UK, AI regulation will be context-specific and based on the use and impact of the technology, with responsibility for developing appropriate enforcement strategies delegated to the appropriate regulator(s). The government will broadly define AI to provide regulators with some direction – adopting fundamental principles relating to transparency, fairness, safety, security and privacy, accountability, and mechanisms for redress or contestability. However, the government will ultimately allow regulators to define AI according to the relevant domains or sectors.

Four principles underpin the framework:

  • Context-specific – AI should be regulated based on its use and impact, with responsibility for designing and implementing proportionate regulatory responses delegated to regulators.
  • Pro-innovation and risk-based – regulators will focus on high-risk concerns over hypothetical or low risks to encourage innovation and limit barriers.
  • Coherence – a set of cross-sectoral principles tailored to the characteristics of AI will be established, and regulators will interpret, prioritise and implement them within their sectors and domains.
  • Proportionate and adaptable – cross-sectoral principles will initially be set out on a non-statutory basis to allow for a dynamic approach to regulation.

AI action plan

Following on from the National AI Strategy, the Department for Business, Energy and Industrial Strategy, DCMS, and Office for Artificial Intelligence jointly published an AI Action Plan on 18 July 2022. Based on the three pillars of investing in the long-term needs of the AI ecosystem, ensuring AI benefits all sectors and regions, and governing AI effectively, the plan outlines the progress the government has made towards fulfilling the goals of the AI Strategy throughout 2022. These actions include making funding for postgraduate AI studies available, publishing reports and research, and the government’s participation in global AI forums.

Pro-innovation approach to AI regulation white paper

On 7 February 2023, it was announced that a new Department for Science, Innovation and Technology (DSIT) had been created to support the UK’s efforts to be at the forefront of science and technology innovation. Shortly after, on 29 March 2023, DSIT, along with the Office for Artificial Intelligence, published a white paper on the UK’s pro-innovation approach to AI regulation. The paper highlights the UK government’s aim to take a sector-specific approach involving a number of regulators, guided by five overarching principles:

  • Safety, security, and robustness – systems should function reliably and have safeguards to prevent security threats and minimise safety risks
  • Transparency and explainability – relevant information about the system’s inputs and outputs should be communicated to the appropriate stakeholders and technical standards should provide guidance on how to assess, design, and improve AI transparency
  • Fairness – the development of AI across sectors should be fair and equitable to ensure that systems do not discriminate against individuals, exacerbate societal inequalities, or create unfair commercial outcomes.
  • Accountability and governance – responsibility for AI systems throughout their lifecycle should be clearly established to create clear accountability for the system and its outputs. Clear regulatory guidance would be required to ensure that governance practices support regulatory compliance.
  • Contestability and redress – there should be mechanisms in place for individuals to dispute algorithmic outcomes and seek redress in instances of harm to build public trust in such systems and ensure responsible AI development.

Intended as part of an interactive and iterative process, the white paper follows on from the 2022 policy paper and features questions for consultation throughout. The consultation was open from publication to 22 June 2023, allowing various stakeholders to give feedback on the principles proposed by the white paper.

CDEI portfolio of AI assurance techniques

Following the publication of the white paper, the CDEI announced the launch of its Portfolio of AI Assurance Techniques on 7 June 2023. Similar to the OECD’s Catalogue of AI Tools and Metrics, the Portfolio was developed with TechUK and features a range of techniques to support AI assurance, or the evaluation of whether AI systems meet regulatory requirements, relevant standards, ethical guidelines, and organisational values. In particular, the Portfolio captures solutions that can be used across the lifecycle of AI systems, namely impact assessments, impact evaluations, bias audits, compliance audits, certification, conformity assessments, performance testing, and formal verification.

What’s next for the UK?

The UK government is yet to take any decisive action in terms of proposing regulation. While there are signals that it intends to propose regulation in the future, the government has moved more slowly than the US and EU. It has, however, made clear that these initiatives are only the beginning and that, over the next 10 years, it aims to "cement the UK's role as an AI superpower".

Doing this will require cooperation between government departments to move the regulatory agenda forward, as well as consultation with technical experts, investment in infrastructure and education, and a dynamic and adaptable approach.

To learn more about how Holistic AI can help you get ahead of this and adopt AI with greater confidence, get in touch with us.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
