New York City has been leading efforts to regulate automated employment decision tools (AEDTs) used to evaluate applicants for a position or employees for promotion within the City, enacting Local Law 144 in November 2021. Originally due to come into effect on 1 January 2023, the law's enforcement date was first postponed to 15 April 2023 and finally to 5 July 2023 while the Department of Consumer and Worker Protection developed rules clarifying the law, adopting the final version in April 2023. Following suit, New Jersey proposed Assembly Bill 4909 in December 2022, requiring yearly bias audits of AEDTs before they can be made available for sale in the state.
In January 2023, the New York State Assembly launched its own effort to regulate AEDTs, introducing Assembly Bill A00567 to require annual disparate impact analyses (in other words, bias audits) of AEDTs. In this blog post, we compare New York City Local Law 144 and New York State Assembly Bill A00567, which would amend the New York Labor Law by adding a new section 203-f.
Under NYC LL144, an automated employment decision tool (AEDT) is defined as:
“any computational process, derived from machine learning, statistical modelling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”
This does not include tools that do not automate, support, substantially assist, or replace discretionary decision-making processes, or that do not impact natural persons, such as junk email filters and firewalls.
Under the adopted rules, machine learning, statistical modelling, data analytics, or artificial intelligence is a group of mathematical, computer-based techniques:
“i. that generate a prediction, meaning an expected outcome for an observation, such as an assessment of a candidate’s fit or likelihood of success, or that generate a classification, meaning an assignment of an observation to a group, such as categorizations based on skill sets or aptitude; and
ii. for which a computer at least in part identifies the inputs, the relative importance placed on those inputs, and, if applicable, other parameters for the models in order to improve the accuracy of the prediction or classification.”
And a simplified output is:
“a prediction or classification as specified in the definition for “machine learning, statistical modelling, data analytics, or artificial intelligence.” A simplified output may take the form of a score (e.g., rating a candidate’s estimated technical skills), tag or categorization (e.g., categorizing a candidate’s resume based on key words, assigning a skill or trait to a candidate), recommendation (e.g., whether a candidate should be given an interview), or ranking (e.g., arranging a list of candidates based on how well their cover letters match the job description).”
This does not include the output of tools that translate or transcribe text, such as converting a resume from a PDF or tools used to transcribe interviews.
According to A00567, an AEDT is:
“any system used to filter employment candidates or prospective candidates for hire in a way that establishes a preferred candidate or candidates without relying on candidate-specific assessments by individual decision-makers.”
This includes tests of cognitive ability or personality, resume scoring systems, and any other system whose function is governed by statistical theory or whose parameters are defined by such systems, including inferential methodologies, linear regression, neural networks, decision trees, random forests and other artificial intelligence or machine learning algorithms.
Excluded from this are tools that do not automate, support, substantially assist or replace discretionary decision-making processes and that do not materially impact natural persons.
According to Local Law 144, a bias audit is:
“an impartial evaluation by an independent auditor. Such bias audit shall include but not be limited to the testing of an automated employment decision tool to assess the tool’s disparate impact on persons of any component 1 category required to be reported by employers pursuant to subsection (c) of section 2000e-8 of title 42 of the United States code as specified in part 1602.7 of title 29 of the code of federal regulations.”
The component 1 categories are sex/gender (male, female, and optionally other) and race/ethnicity (Hispanic or Latino, White, Black or African American, Native Hawaiian or Pacific Islander, Asian, Native American or Alaska Native, and two or more races).
Further, the adopted rules clarify that an independent auditor is not someone who:
“i. is or was involved in using, developing, or distributing the AEDT;
ii. at any point during the bias audit, has an employment relationship with an employer or employment agency that seeks to use or continue to use the AEDT or with a vendor that developed or distributes the AEDT; or
iii. at any point during the bias audit, has a direct financial interest or a material indirect financial interest in an employer or employment agency that seeks to use or continue to use the AEDT or in a vendor that developed or distributed the AEDT.”
Such audits must be carried out annually and, when the system makes categorical selections, must calculate bias using the selection rate impact ratio:

Impact Ratio = selection rate for a category / selection rate of the most selected category
For continuous systems, the scoring rate impact ratio is used instead, where a category's scoring rate binarizes scores based on whether they are above or below the sample median:

Impact Ratio = scoring rate for a category / scoring rate of the highest scoring category
These impact ratios should be calculated for both standalone and intersectional groups using historical data where available, although test data can be used instead. Under the adopted rules, groups representing less than 2% of the data can be excluded from the analysis.
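To make these metrics concrete, the two impact ratio calculations can be sketched in a few lines of Python. This is an illustrative sketch only: the function names and data layout are our own, and are not prescribed by the law or the adopted rules.

```python
from statistics import median

def selection_impact_ratios(selected, totals):
    """Impact ratios for a categorical (selection) system.

    selected: dict mapping category -> number selected
    totals:   dict mapping category -> number of applicants
    """
    rates = {cat: selected[cat] / totals[cat] for cat in totals}
    best = max(rates.values())  # selection rate of the most selected category
    return {cat: rate / best for cat, rate in rates.items()}

def scoring_impact_ratios(scores_by_category):
    """Impact ratios for a continuous (scoring) system.

    Scores are binarized against the median of all scores pooled
    across categories, so a category's scoring rate is the share
    of its members scoring above the sample median.
    """
    all_scores = [s for scores in scores_by_category.values() for s in scores]
    cutoff = median(all_scores)
    rates = {
        cat: sum(s > cutoff for s in scores) / len(scores)
        for cat, scores in scores_by_category.items()
    }
    best = max(rates.values())  # scoring rate of the highest scoring category
    return {cat: rate / best for cat, rate in rates.items()}
```

For example, if 50 of 100 applicants in one category are selected against 30 of 100 in another, the second category's impact ratio is 0.3 / 0.5 = 0.6. The same functions could be applied to intersectional groups by keying the dictionaries on combined categories (e.g. sex crossed with race/ethnicity).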
According to A00567, a disparate impact analysis (in other words, a bias audit) is:
"an impartial analysis, including but not limited to testing of the extent to which use of an automated employment decision tool is likely to result in an adverse impact to the detriment of any group on the basis of sex, race, ethnicity, or other protected class under article fifteen of the executive law.”
The results of the analysis must be reported to the employer implementing or using the AEDT and should differentiate between candidates who were and were not selected by the tool.
The disparate impact analysis should follow that specified in the Equal Employment Opportunity Commission’s Uniform Guidelines on Employee Selection Procedures. According to these guidelines, adverse impact can be said to be occurring when the selection rate for any race, sex, or ethnic group is less than four-fifths of the rate for the group with the highest rate.
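As an illustration, the four-fifths rule from the Uniform Guidelines can be expressed in a few lines of Python. The function name and data layout here are our own, hypothetical choices, not part of the guidelines themselves.

```python
def adverse_impact(selection_rates, threshold=0.8):
    """Apply the EEOC four-fifths rule.

    selection_rates: dict mapping group -> selection rate (0 to 1)
    Returns the set of groups whose selection rate is below
    four-fifths of the highest group's rate, indicating
    possible adverse impact.
    """
    best = max(selection_rates.values())
    return {g for g, r in selection_rates.items() if r < threshold * best}
```

For instance, with selection rates of 0.5 for one group and 0.35 for another, the threshold is 0.8 × 0.5 = 0.4, so the second group (0.35 < 0.4) would be flagged.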
According to Local Law 144, an employment decision means to:
“screen candidates for employment or employees for promotion within the city.”
Covering only employment and not promotions, A00567 defines employment decision as:
“to screen candidates for employment.”
Under Local Law 144, it will be unlawful from 5 July 2023 for employers or employment agencies to use an AEDT to screen candidates for employment or employees for promotion unless the tool has been audited for bias no more than one year prior to the use of the tool, and a summary of the results of the most recent bias audit is publicly available on the website of the employer or employment agency prior to its use.
The summary of results must include the date of the audit, the distribution date of the tool, the source and explanation of data used, the number of applicants in each category and scoring/selection rate of a category, and the impact ratio.
If test data is used, the summary of results must explain why historical data was not used and describe how the test data used was generated and obtained. Where groups representing less than 2% are excluded, the summary must also include justification for the exclusion, the number of applicants in the category, and the scoring rate or selection rate for the excluded category.
Under A00567, a disparate impact analysis must be conducted annually for AEDTs used by an employer to select candidates for jobs within the state. This analysis must be provided to the employer but is not publicly filed. Further, a summary of the most recent disparate impact analysis and the distribution date of the tool must be made publicly available on the website of the employer or employment agency prior to the implementation or use of the tool. Employers must also provide the summary to the Department of Labor annually.
As well as publishing the summary of results, employers and employment agencies using AEDTs in NYC must also comply with notification requirements. At least 10 business days before the tool is used, they must notify candidates or employees that the tool will be used, disclose the job qualifications and characteristics it will assess, and allow them to request an accommodation or alternative selection process. If information about the type of data collected, the source of the data, and the data retention policy is not available on the website, candidates can make a written request for this information. Employers and employment agencies must comply with such requests within 30 days of receipt, except where doing so would violate local, state, or federal law or interfere with a law enforcement investigation.
Currently, A00567 does not outline any notification requirements.
Under Local Law 144, civil penalties for non-compliance start at $500 for the first violation and each additional violation occurring on the same day, rising to between $500 and $1,500 for each subsequent violation. The failure to provide notice and the failure to commission a bias audit are separate violations.
Although no specific penalties are outlined in A00567, the attorney general or commissioner may initiate an investigation if there is evidence of a violation, including from the summary of results. Proceedings can be initiated in any court of competent jurisdiction to correct any violation.
Initially due to go into effect on 1 January 2023, the enforcement date of Local Law 144 was postponed to 15 April 2023 before being postponed again to 5 July 2023 due to the adoption of the final rules. In contrast, the New York State bill would take effect immediately once enacted.
Both NYC Local Law 144 and NY AB A00567 require impartial bias audits, although they use different language (bias audit vs disparate impact analysis), and both impose transparency requirements. Nevertheless, there are a number of notable differences between the two laws. Under LL144, auditors must be independent, having no connection to the development, design, or deployment of the AEDT being audited, while under A00567, auditors must only be impartial. A00567 could therefore permit internal analyses, provided that the specified metric is used. However, this would not lead to as robust an analysis as one conducted by an independent entity, since bad faith actors could cherry-pick the data they analyse and present to achieve favourable results.
Further, the scope of LL144 is wider than that of A00567, given that it covers both systems used in hiring decisions and those used in promotional decisions, whereas A00567 only applies to systems used in hiring. Additionally, LL144 imposes notice requirements: candidates or employees must be informed of the use of the tool and its capabilities, and be able to request an alternative selection process or accommodation. A00567, by contrast, does not impose any notification requirements on employers, resulting in less transparency about the use of the tool.
On the other hand, A00567 requires a summary of the results of the analysis to be shared with the Department of Labor, while LL144 does not require sharing with any specific entity, only that the results are publicly available. Nevertheless, Local Law 144 carries greater repercussions for employers that do not comply, with penalties ranging from $500 to $1,500, while A00567 does not currently outline any specific penalties.
The NYC bias audit law is already showing a knock-on effect with both New Jersey and New York proposing similar laws. In the US, several other laws have also been proposed to regulate HR Tech, and wider laws such as the EU AI Act will also have implications for HR Tech.
With this sector increasingly being regulated, on top of already existing employment regulations, taking steps early is the best way to make sure you are compliant before these laws come into effect. At Holistic AI, we combine expertise in business psychology, computer science, law, policy, and ethics to manage the risks of AI and facilitate legal compliance. To find out how we can help you, get in touch at firstname.lastname@example.org.
Written by Airlie Hilliard, Senior Researcher at Holistic AI.