Artificial Intelligence (AI) has emerged as a transformative force, revolutionising industries and societies worldwide. However, alongside its potential for positive impact, AI also poses significant risks that necessitate robust regulation. As a result, governments across the globe have ramped up efforts to ensure the responsible development and deployment of AI systems. In this blog, we provide an in-depth overview of AI regulations in three key regions: the European Union (EU), the United States, and Canada.
We will primarily concentrate on the requirements imposed on “High-Risk Systems,” highlighting the sectors in which these systems apply and the regulatory mechanisms implemented to address potential harm, ensure transparency, and protect fundamental rights. It is important to note that different regulations define AI systems differently; we refer to this emerging group of AI applications as ‘High Risk’ in a broad sense.
Key takeaways:
Seeking to establish global leadership in governing Artificial Intelligence, the EU AI Act lays down a risk-based regulatory framework in which AI systems are classified as posing minimal risk, limited risk, high risk, or unacceptable risk, with obligations proportional to the level of risk posed. Systems posing unacceptable risk are prohibited outright, while the legislation places stringent obligations on High-Risk AI Systems (HRAIS), transparency requirements on systems with limited risk, and no obligations on systems with minimal risk.
A system is considered a HRAIS if it is covered under Annex III of the EU AI Act and poses a significant risk of harm to an individual’s health, safety, or fundamental rights. This second condition was added in the Act’s latest compromise text, which was adopted by the leading committees in the European Parliament on 11 May 2023 [update: the text was passed by the European Parliament on 14 June 2023]. Notably, the Act allows providers who deem that their system does not pose significant risks to notify the supervisory authorities, who then have three months to review and object.
The EU AI Act designates eight broad use cases as High-Risk:

Requirements for HRAIS:
First proposed in 2019 and touted as a key step towards greater AI transparency and accountability in the United States, the Algorithmic Accountability Act (AAA) seeks to mandate that companies identify and resolve biases in their AI systems. If enacted, the 2022 version of the legislation would be enforced by the Federal Trade Commission (FTC), empowering it to develop reporting guidelines and assessments, publish annual aggregated trends on the data it receives, and audit AI systems developed by vendors and deployed by organisations to facilitate decision-making.
The AAA governs Automated Decision Systems (ADS) and places stricter obligations on ADS used to make Critical Decisions. Categorised as Augmented Critical Decision Processes (ACDPs), these are automated processes that may have a legal, material, or similarly significant effect on an individual’s life, and cover the following categories:

Requirements for ADS/ACDPs:
Introduced in 2021, the Stop Discrimination by Algorithms Act (SDAA) seeks to prohibit organisations operating in Washington DC from deploying algorithms that make decisions based on protected characteristics such as race, religion, colour, sexual orientation, and income level, among others. Enforceable by DC’s Office of the Attorney General (OAG-DC), the SDAA would mandate audits and specific transparency requirements, with fines of up to $10,000 per violation. The proposed legislation was reintroduced in February 2023 and covers algorithmic processes used to determine access to ‘important life opportunities’ in the following domains:

Requirements:
Like Washington DC, California is at the forefront of state-level AI regulation, aiming to enhance safety and fairness by proposing legislation to regulate automated tools used to make consequential life decisions. Assembly Bill 331, introduced in January 2023, seeks to prohibit the use of Automated Decision Tools (ADTs) that contribute to algorithmic discrimination, defined as differential treatment or impact that disfavours people based on their actual or perceived race, colour, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.
The Bill establishes distinct obligations for Developers (entities that design, code, or substantially modify ADTs) and Deployers (entities that use them) when ADTs are used to facilitate 'Consequential Decisions' in the following domains:

Requirements for ADTs:
Following in the footsteps of the EU AI Act, the Artificial Intelligence and Data Act (AIDA) envisages a risk-based regulatory approach to promote safety, fairness, and transparency in AI systems developed and used in Canada. To that end, the proposed act establishes ‘High-Impact Systems’, encompassing AI systems that may adversely affect human rights or pose risks of harm to health and safety. While the specific criteria for identifying a system as High-Impact are yet to be delineated in future regulations, the current text targets AI systems that may produce biased output and cause serious harm, that are created using unlawfully obtained personal data, or that are used to intentionally defraud the Canadian public. Once in force, the legislation will be administered by an Artificial Intelligence and Data Commissioner and will penalise those found to be in contravention of the AIDA’s provisions.
Requirements for High-Impact Systems:
There is a pressing need to develop trustworthy AI systems that embed ethical principles of fairness and harm mitigation from the outset. With regulatory efforts on AI gaining momentum globally, businesses of all sizes will need to act early and proactively to remain compliant.
At Holistic AI, we have pioneered the fields of AI ethics and AI risk management and have carried out over 1000 risk mitigations. Using our interdisciplinary approach that combines expertise from computer science, law, policy, philosophy, ethics, and social science, we take a comprehensive approach to AI governance, risk, and compliance, ensuring that we understand both the technology and the context it is used in.
To find out more about how Holistic AI can help you get compliant with upcoming AI regulations, schedule a demo with us.