What Enterprises Need to Know About the EU’s AI Liability Directive

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Oct 6, 2022

Key takeaways

  • The European Commission recently published proposals for an ‘AI Liability Directive’.
  • The EU AI Act and the AI Liability Directive are two sides of the same coin. The AI Act is designed to prevent harm caused by AI, whereas the AI Liability Directive is designed to ensure that victims are fairly compensated if harm occurs.
  • The Directive consists of measures which ease the burden of proof for victims in AI-related liability cases.
  • Enterprises should prepare by establishing comprehensive AI risk management frameworks for the development and deployment of high-risk AI.

What is the EU AI Liability Directive?

The AI Liability Directive is the EU’s proposed new law to make it easier to prove liability in cases where AI systems cause harm.

The proposal was published by the European Commission on 28 September 2022.

This legislation, which reinforces the EU AI Act, would update national civil liability rules across Europe, making it easier for victims of AI-induced harm to prove who is liable and to receive compensation for damages.

How does the AI Liability Directive impact enterprises?

The new law has wide-ranging implications for enterprises which develop and/or deploy AI systems in the EU.

Enterprises should be aware that they may be held liable for harm caused by the outputs of their AI systems. In addition to paying damages, enterprises may be required by courts to disclose sensitive information about their AI systems.

It is therefore vital for enterprises to establish robust AI risk management processes and to prepare for compliance with the EU AI Act, to minimise legal, reputational and commercial risks.

What is the purpose of the AI Liability Directive?

The AI Liability Directive is designed to ensure that victims who suffer harm or damage caused by AI systems enjoy equivalent levels of protection, under European civil liability rules, to victims who suffer harm caused by traditional technologies or products.

The European Commission’s position is that existing product liability rules are inadequate for addressing AI-related harm, given the difficulties in proving a causal link between the harmful output of an AI system and the fault or negligence of an individual or organisation.

By updating civil liability rules to reflect the unique properties of AI systems (i.e., opacity, ‘black box’ decision-making and autonomy), the Directive aims to boost trust in the use of AI and promote the safe adoption of AI technologies.

How does the AI Liability Directive relate to the EU AI Act?

The AI Act and the AI Liability Directive are two sides of the same coin.

The AI Act is designed to prevent harm caused by AI, whereas the AI Liability Directive is designed to ensure that victims are fairly compensated if harm occurs.

The Artificial Intelligence Act proposes a series of mandatory requirements for the ‘providers’ and ‘users’ of high-risk AI systems, such as those used in HR or banking contexts.

The mandatory requirements include establishing risk management frameworks, conducting quality assurance testing, and maintaining technical documentation and record logs about the system’s functioning.

By exposing enterprises to the possibility of being held liable for harm caused by their AI systems, and by directly linking non-compliance with the AI Act to liability for AI-induced harm, the Directive incentivises compliance with the AI Act.

What are the AI Liability Directive’s core proposals?

The European Commission is proposing two core measures.

They both serve to ease the burden of proof for victims attempting to prove who is responsible for the harm that an AI system has caused them:

  1. Empowering courts to order the disclosure of evidence from organisations regarding their high-risk AI systems
  2. Enabling courts to presume a causal link between non-compliance with relevant laws (e.g., the AI Act) and AI-induced harm or damage

Five years after the Directive has been implemented by EU member states, the European Commission will conduct a review to establish whether these reforms are sufficient to protect victims of AI-induced harm.

The review will consider whether more robust liability measures should be introduced, such as mandatory insurance for certain high-risk AI systems or strict liability provisions (i.e., where an individual or entity is liable without the claimant having to prove fault or negligence).

Existing liability rules require victims to prove a wrongful action or negligence by the person or organisation that caused the damage, placing the burden of proof firmly on the claimant. The Directive seeks to address this issue in the AI context.

When will courts be empowered to order the disclosure of evidence regarding AI systems?

The Directive empowers courts across Europe to order enterprises to disclose relevant information about their AI systems in legal proceedings.

This information will assist claimants in proving that defendants are liable for the harm.

Disclosure of evidence can be ordered where the high-risk AI system in question is suspected of causing damage and the claimant has taken “all proportionate attempts” to gather that evidence from the defendant.

Courts may only order the disclosure and preservation of evidence that is “necessary and proportionate” to support the claim for damages. Courts must also consider whether trade secrets would be disclosed and take steps to maintain the confidentiality of that information.

Under these proposals, enterprises may be obliged to disclose information relating to:

  • A description of the AI risk management framework
  • The intended purpose of the system
  • The AI system’s design specifications and architecture
  • Monitoring and oversight of the AI system

What does the ‘presumption of causality’ between non-compliance and AI-induced harm mean?

To lessen the burden of proof falling on victims of AI-induced harm, the Directive introduces the ‘presumption of a causal link’ between non-compliance with relevant laws and the damage which the AI system has caused.

This means that where the AI ‘provider’ or ‘user’ did not comply with a law intended to prevent the harm or damage that occurred, such as certain provisions of the AI Act, courts will presume a causal link between that non-compliance and the harm, unless the defendant can rebut the presumption.

For the ‘presumption of causality’ to apply, the following conditions must all be met:

  • there is evidence of non-compliance with an EU or member state law which is directly intended to protect against the damage that occurred;
  • it can be considered “reasonably likely” that this non-compliance has influenced the AI system’s output (or lack of an output), which led to harm; and
  • the claimant demonstrates that the AI system’s output (or lack thereof) caused the damage.

In these situations, the burden of proof falls on the defendant to demonstrate to the court that their non-compliance did not cause the harm.

Who can bring claims against defendants?

Claims can be brought by the injured individual, or by an individual or entity that has assumed another party’s legal rights to claim damages, such as an insurance company or the heirs of a deceased person.

How should enterprises prepare?

The AI Liability Directive will likely become law within the next two years, while the EU plans to adopt the AI Act within the next year.

These proposals oblige enterprises to establish comprehensive AI risk management frameworks for the development and deployment of high-risk AI. They also make it easier for enterprises to be held liable, and required to pay damages, for AI-induced harm, especially where they have not fully complied with the AI Act’s provisions.

Given the spotlight being shone on AI risk by European legislators, and the vast GDPR-style fines or damages enterprises may have to pay if they fall foul of EU rules, forward-thinking organisations should act now to establish robust AI risk management systems to ensure that their AI risks are detected, minimised, monitored and prevented.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
