In response to concerns about harm that can result from the use of AI, and the calls for greater governance of AI systems by the AI ethics movement, legislation addressing the use of these technologies has begun to emerge. Regulation of AI typically takes one of two approaches:
- Horizontal regulation – applies to all applications of AI across all sectors. Regulatory authority for this typically lies with the central government.
- Vertical regulation – applies only to a specific application of AI or a specific sector. Regulatory authority for this may be delegated to an industry body.
Among other things, the two approaches can be compared in terms of their flexibility, standardisation and coordination, and each has its pros and cons.
- Flexibility – while the horizontal model is uniform and stable, the vertical model is flexible, allowing industry bodies to make provision for differences across sectors and to optimise regulation to meet the specific needs of each industry.
- Standardisation – the horizontal model enables the central government to guarantee that all sectors meet standards that are in the public and national interest. It prevents enterprises from ‘shopping around’ between sectors, ensuring that all players conform to the same standards. Horizontal regulation also gives citizens a stable set of assurances and rights in their interactions with AI, whereas rights can vary between applications under a vertical approach. On the other hand, the specificity of the vertical approach provides greater certainty to members of an industry, where horizontal regulation must necessarily be vague in order to be broad in its application.
- Coordination – under a vertical approach there may be significant overlap between regulations, resulting in a multiplicity of responsible agencies and undue burden on regulated parties; a single party may be required to report on the same matter to multiple regulators. The horizontal approach, on the other hand, centralises reporting, removing the need to submit multiple reports to various bodies, although its general reporting requirements may be easier to comply with in some sectors than in others.
Examples of horizontal regulation
- EU AI Act – proposes a horizontal regulatory framework for the European Union, in which AI systems across all sectors are subject to the same risk-assessment criteria and legal requirements (discussed in the following section). Under the AI Act, limited-risk systems are subject to transparency requirements, high-risk systems are subject to more stringent compliance measures, and a further category of systems is prohibited altogether. We discuss the EU AI Act more here, and the updated presidency compromise text here.
- US Algorithmic Accountability Act – proposed legislation that adopts a horizontal approach by directing the Federal Trade Commission (FTC) to require impact assessments of AI systems across sectors (subject to the size and reach of the enterprise).
- UK’s National AI Strategy – can be read as signalling a (potential) horizontal approach to regulation, given the sections of the strategy that relate to AI governance. Indeed, one of the text’s three pillars is ‘Governing AI effectively’, in which two imperatives stand out: stimulating innovation and enterprise, and designing standards and a regulatory regime that reflect this innovation agenda. We discuss this further here.
Examples of vertical regulation
- NYC bias audit mandate – applies to the use of automated employment decision tools, therefore only regulating the recruitment sector. Under this legislation, employers are required to commission a third-party audit of their systems to identify and mitigate bias. Check out our full-length paper, FAQs, and compliance/exemption quiz to find out more about this legislation.
- Illinois Artificial Intelligence Video Interview Act – applies to the use of artificial intelligence to evaluate video interviews conducted during the hiring process, therefore regulating only a specific activity within the recruitment sector. The legislation requires employers to give notice of the use of AI, obtain consent, and report demographic data. Check out our summary of this here.
- California Workplace Technology Accountability Act – limits workplace monitoring and the use of automated decision-making systems, also requiring algorithmic impact assessments (and data protection impact assessments). Check out our full-length paper on this here.
Written by Airlie Hilliard, Senior Researcher at Holistic AI. Follow her on LinkedIn.