From the east coast to the west, policymakers are beginning to prioritize regulating automated employment decision tools and systems. Illinois was the first, enacting the Artificial Intelligence Video Interview Act in 2020. The Act requires employers to inform job applicants that AI will be used to evaluate their video interviews and to disclose the characteristics it will consider. New York City then passed legislation, effective 1 January 2023, mandating bias audits of automated employment decision tools. California has proposed amendments to existing legislation and introduced new legislation to regulate the use of AI in the workplace. Below, we give an overview of both jurisdictions' rules and compare California's proposed laws with the New York City legislation.
To regulate the use of AI in recruitment, the New York City Council passed legislation (Local Law 144) requiring employers to commission independent, impartial bias audits of their automated employment decision tools (AEDTs) before using them to evaluate candidates for employment or employees for promotion within the city limits. A summary of the results of the bias audit, including the distribution date of the tool to which the audit applies, must also be made publicly available. Recently proposed rules would also require that this summary remain on the employer's website for at least 6 months after the tool is last used.
Alongside the bias audits, Local Law 144 requires that employers notify candidates or employees, at least 10 business days before the tool is used, that an automated tool will be used to evaluate them, the characteristics it will assess, and that they may request an alternative selection procedure or accommodation. If the information is not publicly available on the website of the employer or employment agency, employers must also provide, upon written request, the type of data collected, the source of that data, and the company's data retention policy.
Taking a less prescriptive approach, California has proposed modifications to its employment regulations regarding automated-decision systems (ADSs) to make it unlawful to use automated tools that discriminate on the basis of protected characteristics, unless the discriminatory criteria are consistent with a business necessity. The use of selection tools to inquire about a candidate's criminal history before an offer is made is also prohibited under the act.
The draft revisions expand the liability exposure and obligations of employers and third-party vendors that use, sell, or administer employment-screening tools that automate decision-making. The modifications also outline record-keeping requirements for anyone who advertises, sells, provides, or uses a selection tool, including automated tools; records of the assessment criteria used by each system for each employer must be maintained for at least 4 years after the system is last used. This includes the datasets used to train the algorithm, data provided by individual applicants or employees, data about individual applicants and employees that has been analyzed by the algorithm, and data produced by the operation of an ADS.
While the above regulations focus on hiring or promotion decisions, California has also proposed legislation to regulate the more day-to-day use of technology in the workplace. The proposed Workplace Technology Accountability Act seeks to restrict an employer's ability to process worker data to purposes that have a strict business necessity, such as: allowing a worker to accomplish an essential job function; monitoring production processes or quality; assessing worker performance; ensuring compliance with employment, labor, or other relevant laws; protecting the health, safety, or security of workers; and administering wages and benefits.
The law also seeks to restrict electronic monitoring of employees, stipulating that monitoring is prohibited in private areas such as bathrooms and should be limited to the smallest number of workers possible. The monitoring should also take the least invasive form possible and should collect as little data as possible to achieve its specified aim.
In addition to these restrictions, the law would require algorithmic impact assessments of automated decision systems used to make employment-related decisions, and data protection impact assessments of worker information systems, both conducted by an independent assessor with relevant experience. The purpose of these assessments is to identify risks associated with the system, including bias or discrimination, and to inform mitigation and ongoing monitoring procedures.
While all three regulations aim to reduce potential harms associated with HR tech tools, and use nearly identical definitions for the tools themselves, albeit with slightly different terminology (automated employment decision tool v. automated-decision system), there are notable contrasts between them, including who is in scope and the level of due diligence required to identify and mitigate risks.
California's Proposed Modifications apply to employers with five or more employees. According to the California Fair Employment & Housing Council (FEHC), employees outside of California count toward that threshold but are not covered by the modifications' protections if the prohibited activity occurred outside of California. In contrast, the Workplace Technology Accountability Act and the NYC bias audit legislation do not outline exemptions for certain employers.
While NYC's Local Law 144 and the Proposed Modifications focus on isolated decisions that result from the use of AEDTs, California's Workplace Technology Accountability Act is broader in scope. It offers workers greater protection from everyday automated decisions that may cause harm or discriminatory impact, placing stricter safeguards on employees' workplace privacy rights rather than addressing only unjust hiring.
The Workplace Technology Accountability Act and the New York City legislation require independent assessments by a third-party auditor of automated tools used in hiring, assessment, and promotion. The Act specifies that employers and vendors must conduct an algorithmic impact assessment, which tests for bias or discriminatory outcomes, along with other factors such as errors and potential privacy harms, and informs mitigation strategies for any risks identified. In contrast, New York City’s Local Law only requires an impartial bias audit before a tool is used and does not prescribe additional requirements should bias actually be found in a system.
California and NYC diverge in their approach to liability. Local Law 144 places the responsibility to comply on employers. California's compliance obligations are spread more widely: both vendors and agents acting on behalf of an employer are considered employers under the proposed laws, so they share liability equally and must comply. Under New York law, employers and employment agencies could incur penalties of up to $1,500 per violation per day for noncompliance. Similarly, failure to meet California's strict record-keeping and data-collection requirements can result in hefty fines.
Strict notification, collection, and data retention requirements are at the core of all three pieces of legislation for employers and vendors wishing to deploy an automated decision tool. Most critically, the Proposed Modifications prohibit any discrimination on the basis of protected characteristics unless the employer can demonstrate business necessity, a difficult standard to meet.
To stay ahead of emerging regulations from coast to coast and manage risk, employers and vendors embedding artificial intelligence in their products, services, processes, and decision-making should consider adopting reliable systems of governance and auditing to avoid discrimination in their use of AI employment tools.