EU AI Act: Leading Academics Call for Strengthened Fundamental Rights Impact Assessment and Audit Mechanisms

September 14, 2023

An appeal signed by over 110 AI, data governance, and civil rights academics has called for the EU AI Act to include requirements for all AI systems to undergo Fundamental Rights Impact Assessments (FRIAs).

Published on 12 September by the Brussels Privacy Hub, the appeal rests on the signatories' conviction that the profound risks posed by deployed AI systems can be effectively mitigated through comprehensive ex-ante safeguards such as FRIAs and audits, and that these mechanisms should therefore be integrated more firmly into the AI Act.

Proposed by the European Commission in 2021 to regulate all AI systems placed on the EU market, the AI Act is a landmark piece of horizontal legislation envisaged to establish global regulatory leadership on AI governance.

Under the AI Act’s risk-based approach, systems are classified as posing minimal, limited, high, or unacceptable risk, with obligations proportionate to the system’s classification.

Passed by the European Parliament with a large majority on 14 June 2023, the AI Act is currently progressing through the final Trilogue stage between the EU Parliament, Council, and Commission. It is expected to be finalised by the end of the year.

The Brussels Privacy Hub’s appeals to EU legislators

The Brussels Privacy Hub’s appeal letter calls for the final version of the Act to ensure the following:

  • Development of well-defined criteria for evaluating how AI affects fundamental rights.
  • Transparency on the results of FRIAs, by providing easily understandable summaries to the public.
  • Participation and active involvement of end-users in the policy development process, particularly those who belong to vulnerable groups.
  • Involvement of independent public authorities in the impact assessment process, as well as in proposed auditing mechanisms.

Seeking to develop regulatory alignment on AI within the EU market, the letter has also demanded that tools like FRIAs be coordinated with the existing impact assessments mandated by other EU regulations.

Finally, signatories have urged EU institutions to ensure that the FRIA process is ‘transparent, participatory and multidisciplinary’ by convening diverse stakeholders from civil society, academia, technical communities and marginalised groups in order to enhance the legitimacy and efficacy of the regulatory framework.

Prepare early

With penalties of up to €40 million or 7% of global turnover (whichever is higher), the consequences of non-compliance with the Act will be profound for organisations using AI in their business.
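As an illustrative sketch only (not legal guidance), the "whichever is higher" penalty ceiling described above can be expressed as a simple comparison between the fixed amount and the turnover-based amount:

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative penalty ceiling under the proposed regime:
    the greater of a fixed EUR 40 million or 7% of global annual turnover.
    """
    return max(40_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion in global turnover,
# 7% (EUR 70 million) exceeds the fixed EUR 40 million floor:
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```

For smaller organisations, the fixed €40 million figure dominates: 7% of €100 million is only €7 million, so the ceiling remains €40 million. The actual fine imposed in any case would, of course, depend on the final text of the Act and the circumstances of the infringement.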

Early preparation is the best way to secure compliance with the EU AI legislation and ensure that preventable harm does not occur.

At Holistic AI, we specialise in Governance, Risk, and Compliance. Schedule a call with a member of our expert team to learn how we can help you prepare for the AI Act.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.

