The adoption of artificial intelligence (AI) is now ubiquitous across sectors. While this can bring many benefits, the use of these systems can pose novel risks if left unchecked. High-profile cases of harm have brought these risks into focus: a glitch in Knight Capital’s trading algorithm lost $440 million USD in 30 minutes; the State of Michigan reached a $20 million USD settlement with residents wrongly accused of fraud by an automated system used by the state; and in the Dutch Tax Authority scandal, tens of thousands of lives were ruined after an algorithm was used to detect suspected benefits fraud. Precipitated by such cases, industry, public and regulatory concern has increased, along with an impetus to manage the risk.
The European Union (EU) is leading the way with its proposed AI Act, which seeks to ensure that AI systems placed on the EU market are safe and do not pose a risk to the fundamental rights of citizens. Expected to come into force within the next two years, the EU AI Act (EU AIA) is already shaping industry practice and informing the formation of new rules and standards, positioning it to become the de facto global standard. The Act proposes a “risk-based approach” to regulating AI systems, under which systems are classed as posing (1) low or minimal risk, (2) limited risk, (3) high risk, or (4) unacceptable risk.
Systems classed as ‘high-risk’ must meet the requirements in Chapter 2 of the AI Act. These broad, stringent requirements relate to the development, deployment and use of a high-risk AI system across its lifecycle. It is important to note that Chapter 2 is not the only source of obligations relating to high-risk systems. Specifically, Chapter 2 sets out the “legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security”. However, other pertinent and interconnected requirements are stipulated throughout the Act.
This paper, though, is not intended as an exhaustive guide to the Act. Instead, we focus primarily on the Chapter 2 requirements, aiming to explain them and to highlight an apparent gap between what is necessary for compliance and what will be considered sufficient to avoid liability. Whether a high-risk AI system complies with the requirements of the EU AI Act is an important question with many practical implications for everyone involved in the lifecycle of an AI system, and for providers above all.