Impact assessments can be used to determine how risky or harmful something is, typically in terms of low, medium, or high impact. Impact assessments are neither novel nor exclusive to AI: they are used in planning applications to determine the environmental impact of a build, in policy and regulation to establish best practices, and in decisions about public services to determine the impact on equality. Increasingly, however, they are being used to determine the impact that AI systems might have on users and other relevant stakeholders.
Most relevant to automated systems are Algorithmic Impact Assessments (AIAs) and Data Protection Impact Assessments (DPIAs), both of which are required by the proposed Workplace Technology Accountability Act in California. The proposed EU AI Act also requires impact assessments to determine whether a system is high-risk and therefore subject to additional regulation. This blog post gives an overview of AIAs and DPIAs, the differences between them, and some examples of legislation that requires them.
The purpose of an AIA is to determine the risks associated with the use of an algorithmic or automated decision system. AIAs typically take the form of a questionnaire and consider factors such as the type of system and its capabilities, whether it affects external parties, and the type and source of the data being used.
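To make the questionnaire approach concrete, here is a minimal sketch of how answers to such questions could be scored into an impact tier. The factor names, weights, and thresholds are purely illustrative assumptions for this post, not any regulator's actual methodology.

```python
# Hypothetical sketch of questionnaire-style AIA scoring.
# Factors, weights, and thresholds are illustrative assumptions only.

RISK_FACTORS = {
    "affects_external_parties": 3,  # decisions about people outside the organisation
    "uses_sensitive_data": 3,       # e.g., health or biometric data
    "fully_automated": 2,           # no human review of individual decisions
    "novel_system_type": 1,         # limited track record for this class of system
}

def assess_impact(answers: dict) -> str:
    """Map yes/no questionnaire answers to a low/medium/high impact tier."""
    score = sum(weight for factor, weight in RISK_FACTORS.items()
                if answers.get(factor))
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A recruitment tool deciding about external candidates using sensitive data:
print(assess_impact({"affects_external_parties": True,
                     "uses_sensitive_data": True}))  # high
```

In practice, real AIA frameworks (such as Canada's AIA tool) use far richer question sets, but the underlying pattern of weighted answers mapping to an impact tier is similar.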
Some applications of algorithms are inherently riskier, or higher impact, than others. For example, algorithms used in a healthcare or recruitment context can have serious implications for an individual’s life chances, while algorithms used in a context like spam classification have little impact on users’ lives.
Determining the impact of a system can inform the steps required to mitigate the risks associated with its use. Systems with a higher impact are subject to tighter regulations and stricter requirements than those with a low impact. For example, an algorithmic recruitment tool could be required to undergo continuous and rigorous testing for issues such as bias and inaccuracy, while a low-impact system, such as an inventory management tool, has limited effects outside the context in which it operates and would be subject to far more lenient requirements that may even be voluntary.
Although algorithms rely on data, impact assessments of an algorithmic system and of its use of data are separate exercises, though they can be undertaken in combination. This is because algorithms present unique challenges that go beyond the use of data, and even a low-impact algorithmic system can pose data protection risks if the data it uses is not managed appropriately. DPIAs address areas of data management such as the use of identifying and sensitive data, data security (including provisions to prevent data breaches), the origins of data, and the accuracy of data, allowing appropriate data management strategies to be designed and implemented.
Although Data Protection Impact Assessments have already been codified in legislation such as the GDPR, we are only starting to see progress towards AIAs being legally required. Indeed, the proposed EU AI Act heavily references the need for impact assessments, classifying systems as posing minimal, limited, high, or unacceptable risk. Systems that meet the criteria for high risk require ongoing and continuous monitoring and mitigation of risk; those with the lowest level of risk can undergo this process voluntarily; and those posing unacceptable risk (e.g., real-time biometric identification systems) are prohibited from being used within the EU.
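The tier-to-obligation structure described above can be sketched as a simple lookup. This is a deliberate simplification for exposition (the Act's actual obligations are far more detailed), and the one-line summaries are our paraphrases, not legal text.

```python
# Simplified illustration of the EU AI Act's risk tiers and the
# obligations attached to each; paraphrased summaries, not legal guidance.

OBLIGATIONS = {
    "unacceptable": "prohibited from use within the EU",
    "high": "mandatory ongoing monitoring and risk mitigation",
    "limited": "transparency obligations, e.g., disclosing that AI is in use",
    "minimal": "voluntary adherence to codes of conduct",
}

def obligation_for(tier: str) -> str:
    """Return the summarised obligation for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligation_for("high"))  # mandatory ongoing monitoring and risk mitigation
```

The key point the mapping illustrates is proportionality: obligations scale with the assessed risk, from outright prohibition down to purely voluntary measures.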
Likewise, the proposed US Algorithmic Accountability Act requires impact assessments of automated decision tools; these must be documented and submitted to the Federal Trade Commission, which will report annually on the lessons learned from them and provide additional guidance. Taking a more context-dependent approach, California's Workplace Technology Accountability Act requires AIAs of automated decision tools used to make decisions about workers and DPIAs of worker information systems that hold data about workers, thereby distinguishing between the risks associated with the use of algorithms and those associated with the use of data. Outside of the EU and US, the Canadian Government has developed its own AIA tool, which complements its recently proposed Artificial Intelligence and Data Act.
Since these laws are still at the proposal stage, AIAs are yet to be fully codified in law, and performing them remains voluntary. However, carrying out a voluntary impact assessment gives organisations greater oversight and control of their algorithmic systems, and increases the opportunity for innovative applications of AI while managing harm and taking steps towards algorithmic assurance.
To find out how Holistic AI can help you identify and mitigate the risks associated with your algorithmic systems, schedule a demo with a member of our team.
Written by Airlie Hilliard, Senior Researcher at Holistic AI. Follow her on Linkedin.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.