The EU Artificial Intelligence (AI) Act aims to lead the world in the governance of AI, requiring impact assessments to identify the risks associated with the use of AI systems, as well as continuous management and mitigation of those risks. One of the major contributions of the proposed EU AI Act is the creation of regulatory sandboxes, which give participants the opportunity to undertake controlled experiments and testing of their products under the supervision of the relevant authorities. The sandboxes therefore provide an environment in which providers can test the compliance of their products before they are launched on the market.
The Act gives small and medium-sized enterprises (SMEs), including start-ups, priority access to the sandboxes, with the aim of removing some of the barriers they may face when launching their products. However, participants in a sandbox remain liable for any harm inflicted on third parties as a result of experiments taking place within it, meaning that not all risk is removed.
The Act encourages member states to develop regulatory sandboxes to facilitate the EU's vision, and calls for common rules to be established to promote more standardised approaches across member states and to facilitate cooperation between overseeing authorities. To this end, Spain has recently announced that it will pilot a regulatory sandbox aimed at testing the requirements of the legislation, as well as how conformity assessments and post-market activities may be overseen.
Deliverables from this pilot include documentation of the obligations and how they can be implemented, as well as methods for control and follow-up that can be applied by the national supervisory authorities responsible for implementing the regulation.
Reflecting the cross-state approach envisaged by the Act, other member states will be able to follow or join the pilot. The pilot is expected to begin in October 2022, with results published by the end of 2023. With a budget of 4.3 million euros, the pilot will be financed by the Spanish government's recovery and resilience funds as part of the Spanish National AI Strategy.
Sandboxes are expected to benefit both businesses and providers, who can develop and test their products in a realistic environment to ensure that they meet the applicable regulations, and regulators, who gain a better understanding of the products they oversee. This can shorten time to market, and the products that are released are potentially safer for consumers.
However, sandboxes also have limitations, including the risk of abuse. For example, actors with harmful intentions could exploit the less stringent regulatory requirements that apply within sandboxes, potentially resulting in harmful products being released onto the market. Further, sandboxes could delay innovation by private actors if regulators' interventions disrupt their research and development processes.
Notwithstanding these concerns, the presentation of this first regulatory sandbox likely signals what is to come: other member states may develop and pilot their own sandboxes, or this initiative may become pan-European and serve as the default sandbox for providers and regulators.
Proponents believe that the AI sandbox provision in the European Commission's proposed Artificial Intelligence Act, which aims to enable controlled experimentation between AI developers and regulators, will benefit AI regulation in Europe by facilitating responsible AI development across EU member states. By providing controlled environments for companies to test compliance with emerging legal frameworks, it is argued, the Act would give developers the chance to refine AI systems under real-world conditions, collaborating with authorities to demonstrate compliance. Some commentators, however, have suggested that there should be more interplay between AI sandboxes and the regulatory frameworks set out by data protection authorities.
As AI regulation accelerates across the EU and globally, the focus on fairness and harm reduction is intensifying. These are more than buzzwords at Holistic AI – they're the core of what we do. By merging technical expertise with a deep understanding of policy, we ensure a comprehensive grasp of both the technology and its application in various settings.
Schedule a call with our specialised team to explore tailored solutions that align with your specific needs, within the context of the proposed AI Act and beyond.
Updated by Adam Williams on 8 August 2023.
Written by Airlie Hilliard, Senior Researcher at Holistic AI. Follow her on LinkedIn.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.