What is the FCA’s Approach to AI Regulation?

September 11, 2023
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI

The Financial Conduct Authority (FCA) is the regulator for over 50,000 financial services firms and markets in the UK. It is responsible for promoting competition between financial service providers while also protecting consumers’ interests.

While financial services is already a highly regulated sector, the increasing adoption of artificial intelligence (AI) and machine learning within the industry poses unique challenges that existing regulatory frameworks may not account for. Indeed, the Bank of England estimates that 72% of financial services firms in the UK use or develop machine learning applications, with the median number of machine learning applications predicted to increase 3.5 times over the next few years. This proliferation has highlighted the pressing need to ensure that the risks of AI are managed.

Having already spoken in 2017 about how regulators can harness the power of AI to recognise patterns in data, helping them identify bad actors and estimate demand, and having provided firms using AI with access to regulatory sandboxes from that year, the FCA has become increasingly vocal about the need to regulate AI in the financial services sector and has collaborated with the Bank of England (BoE) on a number of initiatives. In this blog post, we provide a summary of the action taken by the FCA so far towards regulating AI.

Key takeaways:

  • The Artificial Intelligence Public-Private Forum was launched in collaboration with the Bank of England to improve understanding of how AI was being used in the financial services sector.
  • In the Forum meetings, the importance of algorithm auditing and governance frameworks was particularly emphasised.
  • A discussion paper with the Prudential Regulation Authority and Bank of England was published in October 2022, with consultation open until February 2023 to understand the risks of AI and the role of regulation and standards in promoting safe and responsible AI in financial services.
  • A series of speeches by FCA representatives has been published, highlighting the need for high-quality data, exploring the use of synthetic data, and announcing the launch of the digital sandbox to support the development and testing of AI models.
  • The FCA is yet to take any concrete action, favouring a light-touch approach, but continues to consult with stakeholders as it consolidates its position.

Launch of the Artificial Intelligence Public-Private Forum

In January 2020, the FCA and BoE jointly announced the launch of the Artificial Intelligence Public-Private Forum to better understand how AI is being used in the financial services sector and how guidance and regulation could help to support its safe adoption, with the first meeting held in October 2020.

Of particular interest in the meeting minutes is the discussion of AI risk management frameworks: the dynamic nature of AI models and their associated datasets, as well as privacy and security risks, were flagged as areas of concern, and there was a consensus that any risk management framework introduced should be proportionate to the complexity of the AI system and its use case. There were also calls to consider the auditing of algorithms as they become increasingly important in the financial sector.

The second meeting of the Forum took place in February 2021 and the third in June 2021. Across these meetings, there were growing calls for AI auditing to increase trust and confidence – in particular, for a framework that establishes how comprehensive audits should be, whether they should focus on the model, the data, or both, and what the target of audits should be.

The fourth meeting, held in October 2021 ahead of the publication of the final report in February 2022, focused on the adaptation of governance frameworks, including automating documentation and ensuring that there is a human in the loop to review controls and alerts. The need for fluid frameworks and standards that keep up with the pace of AI development was also discussed, with effective governance said to require both strategy-level governance, which develops standards for entire firms, and execution-level governance, which applies those standards on a case-by-case basis.

Discussion paper on safe and responsible AI adoption

Following the Public-Private Forum, the FCA published Discussion Paper 22/4 in October 2022 jointly with the BoE and the Prudential Regulation Authority (PRA). The paper sought views on safe and responsible AI adoption – including on the role of policy and regulation – with the feedback period open until February 2023.

With a view to addressing some of the challenges of regulating AI highlighted by the UK Government’s policy paper on Establishing a Pro-Innovation Approach to Regulating AI, the discussion paper poses several issues for debate:

  • Whether sector-specific definitions of AI are required to support sector-specific regulation.
  • Which risks and benefits should be prioritised, and which metrics should be used to measure them.
  • How risks and benefits might change and be mitigated.
  • How existing regulation could be updated to support the safe adoption of AI in the financial services sector.
  • Whether existing governance structures are sufficient to address AI and how they could be adapted.
  • The role regulators could play in promoting safe AI.
  • Whether existing industry standards can support the safe and responsible adoption of AI.

A feedback statement based on the consultation is expected to be published by the end of 2023.

Publication of speeches

While the FCA is yet to implement any regulatory initiatives to support the safe and responsible adoption of AI following its Forum and discussion paper, it has published a series of speeches delivered by its representatives between November 2022 and July 2023.

Regulation and risk management of AI in financial services

In November 2022, the FCA published a speech by its Chief Data, Information and Intelligence Officer, Jessica Rusu, on the regulation and risk management of AI in financial services. The speech focused on the machine learning survey, in which data bias, data representativeness, and explainability were indicated to be the biggest risks of using AI in financial services. Rusu emphasised the importance of using high-quality data to train AI models, supported by data quality assessments, as well as the need to comply with existing legislation, highlighting the role of good governance practices in promoting responsibility, explainability, community building, and knowledge exchange.
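
To make the idea of a data quality assessment concrete, the sketch below shows one way a firm might check whether its training data is representative of a benchmark population – one of the risks the speech highlights. The group column, benchmark shares, tolerance, and function name are illustrative assumptions, not anything prescribed by the FCA.

```python
# A minimal sketch of a representativeness check, assuming a tabular dataset
# with a demographic/segment column. Thresholds and benchmarks are illustrative.
import pandas as pd


def assess_representativeness(df: pd.DataFrame, group_col: str,
                              population_shares: dict[str, float],
                              tolerance: float = 0.05) -> dict[str, bool]:
    """Return True for each group whose share of the training data is within
    `tolerance` of its benchmark (e.g. census) share, False otherwise."""
    sample_shares = df[group_col].value_counts(normalize=True)
    return {
        group: abs(sample_shares.get(group, 0.0) - expected) <= tolerance
        for group, expected in population_shares.items()
    }


# Illustrative usage with made-up applicant data and hypothetical benchmarks.
data = pd.DataFrame({"region": ["London"] * 70 + ["Scotland"] * 10 + ["Wales"] * 20})
benchmark = {"London": 0.40, "Scotland": 0.30, "Wales": 0.30}
print(assess_representativeness(data, "region", benchmark))
# {'London': False, 'Scotland': False, 'Wales': False} -> the dataset is skewed
```

A real assessment would cover far more (missingness, label quality, drift over time), but even a simple check like this makes skews visible before a model is trained on them.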

Building better foundations in AI

Subsequently, in January 2023, the FCA published a speech by Rusu on building better foundations in AI. Rusu spoke about the importance of stakeholder consultation in the development of regulation, as well as of regulatory sandboxes that allow new technologies to be tested with access to high-quality synthetic data. To support this, the FCA has launched a number of initiatives to explore the use of synthetic data, as well as challenges related to the access and sharing of data. These initiatives aim to promote responsible AI innovation that benefits both consumers and markets.

Introduction of new digital sandbox

Continuing the theme of sandboxes, Rusu’s speech in April 2023 announced the launch of a new digital sandbox offering access to payments, transactions, consumer, and Companies House data among the 200-plus datasets available to sandbox participants. The digital sandbox provides an environment to develop, prototype, and test solutions with access to academics, government bodies, and venture capitalists. It was launched alongside a three-month TechSprint in May 2023 focused on the Financial Services Register, exploring how Register data can be used by third parties to promote innovation and inform investment decisions.

The FCA’s emerging regulatory approach to AI and Big Tech

Most recently, in July 2023, the FCA published a speech by Chief Executive Nikhil Rathi on its emerging regulatory approach to AI and Big Tech, highlighting the shifts brought about by the popularity of generative AI and the risks it can pose – hallucinations, for example. Rathi also clarified that the FCA is not responsible for regulating AI and other technologies themselves, but rather for regulating the use and effect of technology in financial services.

What are the FCA’s plans?

For now, like other UK regulators, the FCA is taking a light-touch approach, providing resources to test data and models but not imposing strict rules on the use of AI in financial services. It is likely to follow the lead of the UK Government and develop specific rules under the pro-innovation approach as these efforts mature.

It is clear that the FCA is eager to ensure that the views of relevant stakeholders are considered as it consolidates its approach, and several consultations are likely to be held before it introduces any concrete action.

Nevertheless, the Forum emphasised the importance of algorithm auditing, governance mechanisms, and risk management to promote the safe and responsible adoption of AI. Voluntarily taking these steps can provide a competitive edge and mitigate the risk of potential harms, increasing trust and reducing liability.

Holistic AI are world leaders in AI governance, risk and compliance, renowned for our algorithm auditing expertise. Schedule a call with one of our specialists to find out more about how we can help your organisation.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
