Regulating Foundation Models and Generative AI: The EU AI Act Approach

August 16, 2023
Authored by
Siddhant Chatterjee
Public Policy Strategist at Holistic AI

Foundation models and generative AI occupy centre stage in the public discourse around artificial intelligence today. Touted to usher in a new era of computing, they enable instant automation, rapid access to information, pattern identification, and the generation of images, audio and video. They serve as building blocks for developing sophisticated single-purpose models that have the potential to bring exponential benefits across a variety of use cases, from content creation and commerce to cancer research and climate change.

However, their unchecked proliferation may also bring about potential risks, harms and hazards – such as the seamless generation of mis/disinformation, creation of dangerous content, copyright infringement, hallucinatory outputs, biased results and the harvesting of large quantities of personal data without informed consent. Harms could also extend beyond the digital environment, with increasing concerns over such models replacing human labour and the large carbon footprints associated with their development and deployment.

Indeed, concerns over these negative consequences have been voiced by a range of stakeholders, spanning civil society, academia, and industry. Scholarly research increasingly highlights the potential harm caused by biased outputs, while global coalitions are raising alarms about AI's capacity to drive human extinction and advocating for a moratorium on the development of such technologies.

EU AI Act: Framework for Governing Foundation Models and Generative AI

With the growing imperative to regulate foundation models, policymakers around the globe are embracing a range of strategies. Leading this charge is the European Union through the EU AI Act, which has recently been revised to include provisions specifically addressing the use of foundation models within the EU single market. Given the legislation’s wide-reaching influence due to the ‘Brussels Effect’, this blog explores the EU's approach to regulating foundation models and generative AI.


Key takeaways:

  • Initial versions of the EU AI Act lacked obligations on foundation models due to their novelty, and its use-case approach proved ineffective in reflecting the multi-purpose and dynamic nature of foundation models.
  • The newly proposed Article 28 b outlines rules for foundation models and generative AI, necessitating compliance with safety, ethics, and transparency requirements.
  • Foundation models, defined in Recital 60 e, are AI models designed for generality and versatility of output, capable of a wide range of tasks and trained on a broad range of data sources.
  • Generative AI is defined in Article 28 b (4) as AI systems that generate, with varying levels of autonomy, complex content such as text, images, audio or video.
  • Obligations for foundation model providers include risk reduction, data governance safeguards, expert consultation, compliance with standards, energy efficiency, transparency, and cooperation with downstream operators.
  • Providers of generative AI must additionally comply with transparency obligations, safeguard against generated content that violates EU law, and publish summaries of copyright-protected training data.

How has the EU AI Act been updated to address foundation models?

Initial versions of the EU AI Act proposed by the European Commission did not include obligations on foundation models, partly due to the novelty and relative lack of awareness of the technology. More importantly, the existing structure of the EU AI Act – focused on regulating specific use-cases of technology – proved to be counterproductive for foundation models that have the capability to be flexibly deployed across diverse contexts.

As domain experts have pointed out, limiting these models to specific use-cases that are High-Risk (Annex III) or Prohibited (Article 5) would have been too static an approach, rendering the legislation replete with limitations and discrepancies even before its enforcement. Such concerns, coupled with the rising popularity of foundation models like ChatGPT and Bard, and growing public discourse around their many use-cases, implications and potential risks, prompted the EU (particularly the Parliament) to draft rules to explicitly cover these models.

On 14 June 2023, Members of the European Parliament (MEPs) passed the latest version of the EU AI Act and introduced a new section, Article 28 b, to govern foundation models and generative AI. Currently progressing through the trilogue stages between the EU Commission, Parliament and the Council, the legislation now mandates a set of nine ex-ante obligations on providers of foundation models to ensure they are safe, secure, ethical and transparent. Significantly, the EU AI Act is cognisant of the many use-cases for which these models can be adapted and in doing so, targets players across the AI value-chain, covering models that can be made available through open-source and licensing channels and can be used in several downstream applications.

How are foundation models and generative AI defined by the AI Act?

Recital 60 e, which was also added in the latest version of the Act’s text, defines foundation models as:

“AI models are developed from algorithms designed to optimize for generality and versatility of output. [These] models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained.”

The legislation further clarifies this definition in Recital 60 g, stating that pre-trained models designed for "narrower, less general, more limited set of applications” should not be considered foundation models due to their greater interpretability and predictability.

The EU AI Act also defines Generative AI in Article 28 b (4) as:

“AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video.”

Acknowledging the many complexities and uncertainties pervading the foundation model ecosystem – including the unclear roles of actors in the AI value chain, the lack of expertise in conducting conformity assessments for these models, and the absence of standardised third-party audit and assurance mechanisms – the Commission and the proposed AI Office have been tasked with periodically monitoring and assessing the legislative and governance framework around these systems.

Obligations for providers of foundation models

Under Article 28 b, providers of foundation models are required to ensure compliance before placing their products on the EU market through the following mechanisms:

  1. Ensure the identification, detection and reduction of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy during a model’s development through appropriate design, testing and analysis.
  2. Incorporate only those datasets in training models that are subject to appropriate data governance safeguards, thereby mitigating potential risks arising from low data quality, biases and suitability of data sources.
  3. Employ expert consultation, documentation mechanisms and extensive testing methods in a model’s design and development stages to ensure appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity are maintained throughout the model’s lifecycle.
  4. Following the notification of harmonised standards mentioned in Article 40 of the EU AI Act, comply with relevant standards to reduce energy use, increase energy efficiency, and minimise waste. Additionally, providers are required to design foundation models with capabilities to measure energy and resource consumption, and, where technically feasible, provide in-product metrics on environmental impact across a model’s lifecycle (a minimal sketch of such instrumentation follows this list).
  5. Facilitate the compliance of downstream providers with obligations under Articles 16 and 28(1), by providing extensive technical documentation and user-friendly instructions for use.
  6. Similar to High-Risk Systems, establish a Quality Management System (QMS) to ensure and document compliance.
  7. Register foundation models in an EU Database provided under Article 60, following instructions provided in Annex VIII of the AI Act.
  8. Maintain technical documentation for a period of 10 years from when a foundation model was placed on the market and keep a copy at the disposal of national competent authorities.
  9. Additional obligations for providers of generative AI systems:
    a. Comply with transparency obligations under Article 52 (1).
    b. Embed adequate safeguards in the training, design and development of generative AI systems to ensure that content generated by such systems does not violate EU Law.
    c. Publish a detailed summary of the use of training data protected under copyright law (one possible summary format is sketched further below).
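
The Act does not prescribe how the energy and resource measurement in obligation 4 should be implemented, and the harmonised standards referenced in Article 40 are yet to be notified. Purely as an illustration, the Python sketch below shows one way a provider might instrument a training loop to accumulate an energy estimate, assuming a single NVIDIA GPU and the nvidia-smi utility; the sampling interval, loop structure and all names are hypothetical rather than drawn from the Act.

```python
import subprocess
import time

def sample_gpu_power_watts() -> float:
    """Read the GPU's instantaneous power draw (watts) via nvidia-smi.

    Assumes a single NVIDIA GPU and nvidia-smi on the PATH.
    """
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip().splitlines()[0])

class EnergyMeter:
    """Accumulate an energy estimate (kWh) from periodic power samples."""

    def __init__(self, interval_s: float = 5.0):
        self.interval_s = interval_s  # seconds between samples
        self.kwh = 0.0

    def sample(self) -> None:
        watts = sample_gpu_power_watts()
        # energy (J) = power (W) x time (s); 1 kWh = 3.6e6 J
        self.kwh += watts * self.interval_s / 3.6e6

meter = EnergyMeter(interval_s=5.0)
for step in range(3):
    # ... one training step would run here ...
    time.sleep(meter.interval_s)  # stand-in for real work
    meter.sample()

print(f"Estimated training energy: {meter.kwh:.4f} kWh")
```

In practice, lifecycle reporting would likely also need to account for multi-accelerator clusters, data-centre overheads and the emissions profile of the local energy grid, which a single-device sample like this does not capture.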

Further, providers of foundation models are expected to cooperate with downstream operators on regulatory compliance throughout the system’s lifecycle if the model in question has been provided as a service, such as through Application Programming Interface (API) access. However, if the provider fully transfers the training model along with detailed information on the datasets and development process, or restricts the service in such a way that downstream operators are able to fully comply on their own, no further support is required (Recital 60 f).
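
Similarly, the Act leaves open what the ‘detailed summary’ of copyright-protected training data required of generative AI providers (obligation 9 (c) above) should look like. The sketch below, which assumes a provider already records per-source licensing metadata, shows one minimal way such a summary could be aggregated; the DataSource fields and output schema are hypothetical, not a prescribed format.

```python
import json
from dataclasses import dataclass

@dataclass
class DataSource:
    """Hypothetical per-source metadata a provider might already track."""
    name: str
    records: int
    licence: str        # e.g. "CC-BY-4.0", "proprietary", "public-domain"
    copyrighted: bool   # whether the source contains protected works

def copyright_summary(sources: list[DataSource]) -> dict:
    """Aggregate training-data metadata into a publishable summary."""
    protected = [s for s in sources if s.copyrighted]
    records_by_licence: dict[str, int] = {}
    for s in protected:
        records_by_licence[s.licence] = (
            records_by_licence.get(s.licence, 0) + s.records
        )
    return {
        "total_sources": len(sources),
        "copyright_protected_sources": sorted(s.name for s in protected),
        "protected_records_by_licence": records_by_licence,
    }

sources = [
    DataSource("news-archive", 1_200_000, "proprietary", True),
    DataSource("gov-reports", 300_000, "public-domain", False),
    DataSource("photo-corpus", 450_000, "CC-BY-4.0", True),
]
print(json.dumps(copyright_summary(sources), indent=2))
```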

What’s next

The EU AI Act is one of many emerging endeavours to govern foundation models and generative AI. Regulatory momentum to legislate these technologies is building across the world – and companies seeking to develop and deploy such models must proactively ensure they fulfil a growing list of compliance obligations.


Holistic AI takes a comprehensive, interdisciplinary approach to responsible AI. We combine technical expertise with ethical analysis to assess systems from multiple angles. Evaluating AI in context, we identify key issues early, considering both technological factors and real-world impact to advance the safe and responsible development and use of AI. To find out more about how Holistic AI can help you, schedule a call with our expert team.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
