Following the EU AI Act, organizations should be on the lookout for an increasing number of international AI regulations. One of the most prominent frameworks for international regulation was actually set in motion well before the EU AI Act (and even before the launch of ChatGPT). In 2019, the Council of Europe (“CoE”) granted a mandate to the Ad Hoc Committee on Artificial Intelligence (“CAHAI”).
Today, that ad hoc committee has become the standing Committee on Artificial Intelligence (“CAI”), which on 18 December 2023 joined the host of other organizations publishing legislative drafts at the end of the year.
The Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (“DFC”) was the result of several years of research, and it is now a guiding international framework providing principles and norms for artificial intelligence as it relates to human rights, democracy, and the rule of law.
The Holistic AI team is honored to have been consulted in the drafting of the framework and would love to share our insights and what organizations need to know about the DFC in this guide.
The CoE has a long history of impactful regulations drawing on the collective strength of its 46 Member States. A number of the world’s most important treaties, including the landmark European Convention on Human Rights, were initiated by the CoE. In short, any organization working in the EU should pay attention to regulations proposed by the CoE.
Additionally, the CoE has one of the longest track records of leading AI policy and governance. The CoE adopted the first European Ethical Charter on the use of artificial intelligence in judicial systems in 2018. In 2019, through the consultative committee for Convention 108 (a CoE treaty and one of the most important international instruments in the field of data protection), the CoE published the Guidelines on Artificial Intelligence and Data Protection. This resource identified a set of baseline measures for governments, AI developers, manufacturers, and service providers to ensure that AI applications do not undermine the fundamental right to privacy and data protection rules.
By fostering a balance between technological innovation and ethical governance, the CoE continues to be a key influencer in the international effort to ensure AI develops in a manner that is respectful of human rights, democratic values, and the rule of law.
The CAHAI (Ad hoc Committee on Artificial Intelligence) and the CAI (Committee on Artificial Intelligence) are significant entities established by the CoE to address the challenges and opportunities presented by artificial intelligence in the context of human rights, democracy, and the rule of law.
The foundations of the CAHAI were laid down during the 1346th meeting of the Committee of Ministers, the decision-making body of the CoE, in Helsinki in May 2019. During this meeting, the Committee of Ministers recognized the need to analyze the adequacy of existing European standards in the face of the rapid advancement of artificial intelligence. The objective was to identify any gaps and to develop sector-specific recommendations, guidelines, and codes of conduct. Additionally, there was an acknowledgment of the potential need for other instruments to govern artificial intelligence. The Committee of Ministers instructed its deputies to explore the feasibility of a legal framework tailored for the development, design, and application of AI. This framework was to be anchored in the Council of Europe's foundational standards of human rights, democracy, and the rule of law.
Subsequently, in September 2019, at its 1353rd meeting, the Committee of Ministers formally established the CAHAI. The mandate of the CAHAI was to examine the feasibility and essential components of a legal framework for AI. This examination was to be based on broad multi-stakeholder consultations, ensuring that the diverse perspectives and concerns related to AI were comprehensively addressed. The framework aimed to align AI development and application with the core principles upheld by the CoE.
The CAHAI was assigned a mandate until 31 December 2021. Following the completion of its term, it was succeeded by the CAI.
The DFC is a draft for an international convention, which is an instrument of public international law. The nature of the DFC, as with other international conventions, is distinct from national or EU-wide legislation.
In principle, international conventions do not directly confer rights upon or impose obligations on natural persons or private organizations. Instead, their primary function is to bind States and other international organizations that become parties to the convention. Upon ratification or accession to such a convention, these parties – typically States – are obligated to integrate the rules and principles of the convention into their own domestic legal and regulatory frameworks.
The Framework Convention, therefore, will act as a guiding framework at an international level, setting out principles and norms for artificial intelligence that are in line with human rights, democracy, and the rule of law. Its impact will be realized when the parties to the convention translate these international norms into concrete actions, laws, and regulations within their own jurisdictions. This process is crucial for ensuring that the global development and deployment of artificial intelligence technologies are conducted in a manner that is consistent with these shared values and principles.
The main purpose of the Framework Convention is to ensure that AI systems align with and uphold the principles of human rights, democracy, and the rule of law throughout their entire lifecycle, as stated in Article 1(1) of the DFC.
The DFC defines an AI system in alignment with the definition provided by the Organisation for Economic Co-operation and Development (OECD). According to this definition, an AI system is:
“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments”. Adopting the OECD's updated definition ensures consistency and coherence in the international understanding of what constitutes an AI system.
This definition encapsulates a broad spectrum of AI applications and functionalities, recognizing the diverse ways in which AI systems operate and interact with the world. By focusing on the system's ability to infer and generate influential outputs from received inputs, the definition encompasses a wide range of AI technologies, from simple automated decision-making systems to more complex and sophisticated machine learning models.
In its current form, the DFC shall apply to “activities within the lifecycle of [AI systems] that have potential to interfere with human rights, democracy and the rule of law”. However, there are exceptions to this broad scope, which are subject to debate, similar to the case with the EU AI Act.
The three primary areas where exceptions are introduced in the DFC are:
There are multiple options for the wording of such exceptions in the current version of the DFC. Recognizing the risk of a broad interpretation of these terms, the DFC overrides the R&D exception if the systems are developed, used, tested, or decommissioned in ways that have the potential to interfere with human rights, democracy, and the rule of law.
The DFC imposes two broadly drafted general obligations on the parties. Accordingly, each party shall adopt and maintain measures (1) to ensure that activities of AI systems are compatible with national and international human rights obligations and (2) to protect participation in democratic processes.
In addition to these two general obligations, the DFC provides eight general principles for the parties to implement in their own domestic legal systems:
The DFC allocates a specific chapter to risk assessment and mitigation. Pursuant to the newly added Article 16 of the DFC, parties are required to “take measures for the identification, assessment, prevention and mitigation of risks and impacts to human rights, democracy and the rule of law arising from the design, development, use and decommissioning of artificial intelligence systems” within the scope of the DFC.
The DFC provides that these risk identification, assessment, and mitigation measures shall pursue a risk-based approach and meet eight requirements that overlap, to a certain extent, with the requirements for high-risk systems under the EU AI Act:
Despite the reference to the risk-based approach, the DFC, unlike the EU AI Act, does not classify specific uses of AI systems as prohibited or high-risk. Instead, it handles this issue at the level of scope: it covers all AI systems “that have potential to interfere with human rights, democracy and the rule of law” and requires appropriate risk assessment and mitigation measures to be implemented for all of them.
The DFC does not provide a separate list of prohibited AI systems either. Instead, Article 16(3) empowers parties to enact moratoriums, bans, or other measures on certain uses of AI systems deemed incompatible with human rights, democracy, and the rule of law, leaving the decision to ban specific AI applications to the discretion of the contracting States.
The enforcement of the DFC is multifaceted and involves a combination of national implementation, international cooperation, and a follow-up mechanism for oversight and consultation. Since the DFC is a draft and not yet legally binding, its future enforcement will depend on how it is ratified and integrated into the domestic laws of the parties. Nevertheless, the DFC has already outlined some enforcement mechanisms for when it does become legally binding:
The EU does not have a direct organic connection with the CoE. The CoE, which is not to be confused with the Council of the EU (a major EU institution), is an international organization focused on promoting human rights, democracy, and the rule of law. The EU, on the other hand, is a political and economic union. However, despite their structural differences, both the CoE and the EU share the same fundamental values.
All Member States of the EU are Member States of the CoE and signatories to major CoE treaties as well. In fact, the EU itself, possessing a legal personality separate from its Member States, signs some of the CoE treaties as an individual signatory. Thus, when finalized, all EU Member States, and potentially the EU itself, may sign and ratify the Framework Convention. Indeed, recognizing this inter-organizational cooperation and the unique decision-making allocation between the EU and its Member States, the DFC contains specific provisions for the EU and its Member States and explicitly stipulates that the Framework Convention shall be open for signature for the EU as well.
The finalization and signature of the DFC by the EU and its Member States would not jeopardize or create a conflict with the expected EU AI Act. From the perspective of international law, the EU AI Act will form part of the domestic law of the EU and its Member States, to which the DFC provisions refer in many instances. Additionally, despite divergences in the details, the EU AI Act and the DFC are based on the same fundamental values and principles, and hence the two instruments complement each other.
When finalized, the DFC will be one of the Council of Europe’s treaties. However, by nature, its finalization will not automatically bring it into force or make it binding on CoE member states. Once finalized, the DFC will be opened for signature and will enter into force only after the quantitative signature threshold is met.
According to the current text of the DFC, it “shall enter into force on the first day of the month following the expiration of a period of three months after the date on which five Signatories, including at least three member States of the Council of Europe, have expressed their consent to be bound by the Convention”.
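The date arithmetic in that clause can be sketched in a few lines of code. Note that `entry_into_force` is a hypothetical illustrative helper, not part of any official tooling, and it assumes the clause is read as: three calendar months run from the date of the fifth qualifying consent, and the Convention enters into force on the first day of the month after that period expires.

```python
from datetime import date

def entry_into_force(fifth_consent: date) -> date:
    """Illustrative reading of the DFC clause: the Convention enters
    into force on the first day of the month following the expiration
    of three months after the fifth qualifying consent."""
    # Advance three calendar months from the consent date.
    month = fifth_consent.month + 3
    year = fifth_consent.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # Entry into force falls on the first day of the *following* month.
    month += 1
    if month > 12:
        month = 1
        year += 1
    return date(year, month, 1)

# Example: a fifth consent expressed on 15 March 2025 would see the
# three-month period expire on 15 June 2025, so entry into force
# would be 1 July 2025.
print(entry_into_force(date(2025, 3, 15)))  # 2025-07-01
```

The sketch only models the calendar rule; in practice, which consents count (at least three of the five must come from CoE member states) is a legal question the code does not capture.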
The Council of Europe, along with other national and international regulators and lawmakers, is ramping up its efforts to regulate AI. The principles and risk assessment requirements to be provided by the Convention are universal in nature and similar to those introduced in other jurisdictions’ AI governance rules.
Companies developing and deploying AI will soon have a wave of legal requirements to navigate regardless of where they are located, and when finalized, the Framework Convention will accelerate this process as a first-of-its-kind international convention.
Getting started early is the best way to maximize alignment with emerging and existing laws. Make sure you are equipped to navigate existing and emerging legislation with Holistic AI, and put appropriate risk assessment, mitigation, and prevention tools in place.
Schedule a call with our experts to find out how Holistic AI can help you with our visionary AI Governance, Risk Management, and Compliance Platform, as well as our suite of AI audit solutions.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.