The UK House of Commons Committee on Science, Innovation and Technology has published an interim report on the governance of artificial intelligence (AI).
Presented on 31 August 2023, the findings were based on an inquiry conducted in consultation with over 100 AI experts, including researchers, businesses, and civil society representatives.
The Committee – appointed by the House of Commons to examine expenditure, administration, and policy for the newly established Department for Science, Innovation and Technology – focused on the applications of AI in the context of education, healthcare and medicine, exploring the benefits and potential risks in such sensitive use cases.
UK calls for AI legislation
The UK government is currently taking a light-touch approach to AI regulation, having previously published a white paper outlining its pro-innovation, sector-specific stance. However, the Committee recognised that no legislation is likely to be enacted before the end of 2025 unless an AI bill is introduced in the new session of Parliament, ahead of the next general election.
According to the Committee, this could see the UK left behind by the EU and US, both of which have already made significant legislative progress towards regulating AI. The report, therefore, recommends that an AI bill be introduced into Parliament in the coming months to support the UK’s aspirations of becoming an AI governance leader. It argues that the bill should establish ‘due regard’ duties for existing regulators.
12 key challenges of AI governance
The report’s key contribution is the identification of 12 key challenges to AI governance which should be considered by policymakers when developing AI frameworks.
- Bias – Given that AI can introduce or amplify existing societal biases, AI governance frameworks must outline mechanisms to prevent or mitigate this.
- Privacy – Innovation and privacy must be balanced to prevent AI from being used to identify individuals or use their personal information in unacceptable ways.
- Misrepresentation – Safeguards should be put in place to prevent AI from being used to generate material to deliberately misrepresent individuals’ behaviour, opinions, or character.
- Access to data – Consideration should be given to the fact that powerful AI systems require large datasets, which few organisations have access to.
- Access to compute – In addition to data, powerful AI systems require significant computing power, which similarly few organisations have access to.
- Black box – Explainability and transparency should be maximised so that the way systems produce particular results can be communicated and understood.
- Open-source – Requirements to make code publicly available can promote transparency and innovation, reducing opportunities for market monopolies, but can also make it easier for bad faith actors to cause harm.
- Intellectual property and copyright – AI models can be trained on copyrighted material and complicate the enforcement of rights held by the original creators of content.
- Liability – AI models may not always be developed and deployed by the same entity, meaning that liability for harms is complicated and often unclear.
- Employment – Disruption to the job market must be anticipated and managed to avoid overreliance on AI and disproportionate job displacement.
- International coordination – Given that AI is deployed globally, governance frameworks must consider how international coordination can be implemented.
- Existential challenges – National security must be protected, and concerns about AI as a threat to human life must be managed.
Time to legislate?
While the UK has yet to take any legislative action to enforce its pro-innovation, sector-specific approach to AI regulation, the Committee's call for the government to introduce a bill in the coming months could accelerate the process.
Schedule a call with our expert governance, risk and compliance team to find out how Holistic AI can help your organisation with existing and emerging proposals to regulate AI both in the UK and beyond.