The European Union’s (EU’s) proposed AI Act aims to harmonise requirements for AI systems across the EU with its risk-based approach. Ahead of this, some countries, like the Netherlands, are pressing ahead with specific national requirements.
The Dutch approach is shaped by the scandal around a biased algorithm that the tax office used to assess benefits claims. The tax office deployed the system in 2013 and, after civil society raised concerns, two formal investigations in 2020 and 2021 uncovered systematic bias affecting 1.4 million people.
Amnesty International’s report on the scandal documents the harms people suffered as a result: some lost their homes and life savings, and suffered stress-related ill health. In 2021, then Prime Minister Mark Rutte issued a formal apology and his entire Cabinet resigned over the scandal.
In response, the Dutch government is ramping up algorithm oversight. It has committed to a new statutory regime to ensure that systems are checked for transparency and discrimination. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, the AP) will receive funding to create a new supervisory function. Alexandra van Huffelen, the State Secretary for Digitisation, updated parliament on the new approach in December 2022.
The AP will receive an extra €1 million in 2023 for algorithm supervision, rising to €3.6 million by 2026. This funding is separate from the EU AI Act’s requirement to designate a national AI authority in future. The AP will initially focus on:
The specific approach is still in development, but the direction of travel is clear. The Dutch government wants more transparency about the AI systems deployed in the public sector. Proposals include:
The Dutch tax authority scandal, together with public debate in mid-2020 about automated decisions in the Dutch COVID-19 notification app, prompted the Netherlands Court of Audit to intervene. The court audited a selection of automated systems then in use and published a report in early 2021.
In its report, the court focused on algorithms with a predictive or prescriptive function and a “substantial impact on government behaviour, or on decisions made about specific cases, citizens or businesses”. It developed and tested an audit framework with five components: (1) accountability, (2) model and data, (3) privacy, (4) IT controls and (5) ethics. The framework closely aligns with the recommendations of the EU’s High-Level Expert Group on AI and mirrors our approach to AI auditing.
The court found that:
The court concluded that “in many cases no action is taken to limit ethical risks such as biases in the selected data”. Auditing exposes gaps and builds trust by delivering recommendations for improvement.
The proposals currently apply to the public sector. However, we think they will impact business in two important ways:
Businesses should consider how they can demonstrate that their systems are fair, robust and explainable. We believe that AI assurance can provide that proof. Auditing can help businesses comply with their obligation under the GDPR to show that processing is fair, and to get ahead of the risk assessment requirements in the upcoming EU AI Act.
Written by Marcus Grazette, Policy Lead at Holistic AI.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.