In the fast-paced world of talent management, where artificial intelligence and automation are reshaping the HR landscape, ensuring consistent compliance with an expanding and fluctuating set of regulations has become a top priority.
Two groundbreaking pieces of legislation, NYC Local Law 144 and the EU AI Act, are among the most notable regulations to have emerged in this area in recent years. Both laws have profound implications for the HR industry in their respective jurisdictions.
Enforcement of Local Law 144 officially began on 5 July 2023, while the AI Act has entered the final stage of the legislative process and is expected to become law by 2024.
Local Law 144 applies to employers or employment agencies using automated employment decision tools (AEDTs) for candidate evaluation or employee promotion in New York City. It specifically targets computational processes derived from machine learning, statistical modeling, data analytics, or AI that result in simplified outputs used to assist or replace discretionary decision-making in employment.
The AI Act, meanwhile, will, if passed into law as expected, apply to providers of AI systems established in the European Union, as well as to providers outside the bloc whose systems are used within it. In contrast to Local Law 144, which zeroes in specifically on HR tech, the AI Act covers a broad range of AI systems. But the Act's comprehensive approach means it too applies to systems used in employment-related decision-making. Both laws aim to ensure fairness, transparency, and accountability in their respective jurisdictions.
All organisations within the scope of Local Law 144 are required to obtain an independent, impartial bias audit of their AEDTs, with the aim of mitigating the risk that pre-existing biases are perpetuated by machine learning-powered employment processes. They are also required to make a Summary of Results publicly available on their website and to notify candidates or employees when AEDTs are used in their evaluation. Unlike the AI Act, the law does not explicitly prohibit certain practices, although the text does specify that instances of discrimination can be referred to the Commission on Human Rights.
The AI Act outlines four categories of risk, with systems deemed to fall within the highest category, Unacceptable Risk, banned entirely. There is a sliding scale of obligations for systems in the remaining categories. Employment decision tools are categorised as High-Risk and have a number of associated obligations, including assessment, risk management, transparency, and documentation/notification requirements.
Here, you can find comprehensive breakdowns of the requirements for employers and employment agencies under Local Law 144 and the AI Act.
In terms of financial penalties, non-compliance with Local Law 144 carries an initial fine of $500 for a first default (and for each additional default on the same day), rising to fines of up to $1,500 for each subsequent default. Each day an AEDT is used without an audit counts as a separate default, as does failure to provide public notice of the audit. It is important to note that exact fines are determined by the relevant enforcement body or court, and that the law distinguishes between 'defaults' and 'violations', which carry slightly different penalties. Full details can be found here.
Non-compliance with the AI Act, whose requirements are more comprehensive than Local Law 144's, can meanwhile lead to penalties of up to €40 million or 7% of global turnover, whichever figure is higher.
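To illustrate how the two penalty structures above scale, here is a minimal, hypothetical sketch. The figures are illustrative only (actual fines are set by the relevant enforcement body or court), and the function names are our own:

```python
def ll144_fine_estimate(defaults: int) -> int:
    """Illustrative Local Law 144 accumulation: $500 for the first
    default, then up to $1,500 for each subsequent default. Each day
    of unaudited AEDT use counts as a separate default."""
    if defaults <= 0:
        return 0
    return 500 + 1_500 * (defaults - 1)

def ai_act_max_penalty(global_turnover_eur: float) -> float:
    """Illustrative AI Act ceiling: EUR 40m or 7% of global
    turnover, whichever is higher."""
    return max(40_000_000, 0.07 * global_turnover_eur)

# Example: 30 days of unaudited AEDT use
print(ll144_fine_estimate(30))            # 500 + 29 * 1,500 = 44000
# Example: a company with EUR 2bn global turnover
print(ai_act_max_penalty(2_000_000_000))  # 7% of 2bn = 140,000,000
```

As the second example shows, for large companies the 7%-of-turnover arm of the AI Act ceiling quickly dwarfs the €40 million floor.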
Besides the legal and financial penalties carried by both pieces of legislation, companies using AI in their employment or promotion practices are also liable to suffer reputational and operational damage if they fall foul of the law. If, for example, a company becomes embroiled in a civil lawsuit because an individual or group believes they have been adversely impacted by its non-compliant use of an AEDT under Local Law 144, the organisation's image and brand will likely be tarnished. Human rights actions are also a possibility, as the law does not limit the authority of the New York Commission on Human Rights to enforce its directives. Companies risk the same reputational harm if they violate the AI Act. On an operational level, non-compliant organisations run the risk of eroding customer trust and disrupting overall business activities. It is crucial for companies to prioritise compliance efforts to mitigate these risks and safeguard their long-term success.
Holistic AI, a renowned governance, risk, and compliance scale-up, has been recognised by CB Insights as one of the world's most promising AI companies in its esteemed AI 100 list.
Discover how our expertise in AI auditing can assist your organisation in navigating the complexities of Local Law 144 and the EU AI Act. Schedule a call to learn more about our industry-leading solutions.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.