Talent management is becoming increasingly automated. From CV scanners to AI-analysed video interviews, employers around the world are capitalising on the range of AI-driven and automated talent management tools available on the market. These tools promise to save time, improve the candidate experience, and make the process more convenient and flexible, with some even claiming to make the workforce more diverse. However, it is no secret that these tools do not always deliver on their promises, particularly with regard to increasing diversity. While automated tools can overcome human biases, which are notoriously difficult to stamp out, such tools should be extensively tested for bias, and the social implications of their use, such as accessibility, should be examined to ensure they do not simply pose different risks.
Although these novel tools are covered under existing equal opportunity legislation, policymakers have begun to codify requirements to ensure that automated employment tools do not result in intentional or unintentional harm, particularly in relation to bias. California policymakers have been particularly active here, proposing modifications to the state's employment regulations to address automated-decision systems (ADSs). The modifications were first proposed for public workshop in March 2022 and were then updated in July 2022 and February 2023. Overall, their goal is to protect employees and applicants from unlawful discrimination based on protected characteristics. In addition to adding new definitions related to automated-decision systems, many of the modifications make explicit that discrimination based on these characteristics is unlawful even when decisions are made using an ADS. In this blog post, we highlight the key changes made with each iteration of the modifications.
Throughout the iterations of the proposed modifications, many of the provisions have broadly remained the same, although there have been some clarifications made to the language used and additional definitions added.
In the first version of the modifications, an automated decision system was defined as:
“A computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants”
This definition remained the same in the second version of the modifications, but was reworded for the most recent version:
“A computational process that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts applicants or employees. An Automated-Decision System may be derived from and/or use machine-learning, algorithms, statistics, and/or other data processing or artificial intelligence techniques.”
The components of the definition, namely the tool's uses and the technology it is derived from, are the same across all three versions, just rephrased in the most recent definition.
Under the first version of the modifications, ADSs include algorithms used to: i) screen resumes for particular terms or patterns; ii) analyse facial expressions, word choice, and voices using face and/or voice recognition; iii) make predictive assessments about an employee or applicant’s dexterity, reaction time, or physical or mental abilities through gamified testing including questions, puzzles, or other challenges; and iv) employ online tests to measure personality, aptitude, cognitive ability, and/or cultural fit.
The scope was broadened in the second version and the wording was refined. In particular, the second version of the modifications specified that ADSs may additionally be used to direct job advertisements or other recruitment materials to specific groups. In addition, it was specified that the use of ADSs to analyse facial expressions, word choice or voices pertained to online interviews only.
The most recent version of the modifications further refined this, specifying that ADSs do not include word processing software, spreadsheet software, and map navigation systems, even if they are used in the recruitment process.
Consistent across the first and second versions of the proposed modifications, an algorithm is defined as:
“A process or set of rules or instructions, typically used by a computer, to make a calculation, solve a problem, or render a decision.”
The most recent version of the modifications, however, gives more specific examples of the applications of algorithms:
“A set of rules or instructions a computer follows to perform calculations or other problem-solving operations. Algorithms can, for example, detect patterns in datasets and automate decision making based on those patterns and datasets.”
Artificial intelligence was not defined in the first version of the modifications; the second version defines it as:
“A machine learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence may include machine learning.”
The most recent version of the modifications, in comparison, has a shorter definition of AI, although it still includes the same key elements:
“A machine-learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.”
Having included machine learning in the definition of AI, the modifications naturally define machine learning, although using slightly different terms. The first version of the modifications define machine learning algorithms as:
“Algorithms that identify patterns in existing datasets and use those patterns to analyze and assess new information, and revise the algorithms themselves based upon their operations.”
The definition of machine learning in the second version of the modifications echoes this sentiment but is more succinct as it draws on the previous definition of AI:
“An application of Artificial Intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.”
With an even more succinct definition, which does not reference AI, the most recent version of the modifications defines machine learning as:
“The ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
A common theme with the evolution of these definitions is a refinement in the language to make the most recent version more succinct and to the point. This is something that can help to reduce ambiguity and make the scope of the regulation clearer.
While the data associated with ADSs is defined in all three versions of the proposed modifications, it was referred to as machine-learning data in the first version before being renamed to automated-decision system data in the second version.
In the first version, machine-learning data is defined as:
“All data used in the process of developing and/or applying machine-learning algorithms that are utilized as part of an automated-decision system.”
This includes, but is not limited to, datasets used to train the machine learning algorithm used for an ADS; data provided by or about individual applicants or employees that is analysed by an ADS; and data produced by the application of an ADS.
With a slightly different definition, the second version of the modifications specifies that ADS data is:
“All data used in the process of developing and/or applying machine-learning, algorithms, and/or artificial intelligence that is utilized as part of an automated-decision system.”
The examples given, however, are the same as in the first version.
The definition in the most recent version of the modifications is largely consistent, as are the examples of different types of data:
“Any data used in the process of developing and/or applying machine-learning, algorithms, and/or artificial intelligence that is utilized as a part of an automated-decision system.”
Defined only in the most recent version of the modifications, adverse impact means:
“The use of a facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by the Act.”
Given that ADSs are typically designed, developed, and used by multidisciplinary teams, which can all have their own definitions of unfair outcomes, the modifications importantly specify that adverse impact is synonymous with disparate impact, a term that is more commonly used in the machine learning field.
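As an illustration (not something prescribed by the modifications themselves), disparate impact is commonly quantified as the ratio of selection rates between groups, with the EEOC's "four-fifths" rule of thumb flagging ratios below 0.8 for closer scrutiny. The group names and counts below are hypothetical:

```python
# Illustrative sketch: quantifying adverse/disparate impact as a
# selection-rate ratio, using the EEOC's "four-fifths" rule of thumb.
# All group names and counts below are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def disparate_impact_ratio(rate_disadvantaged: float, rate_advantaged: float) -> float:
    """Ratio of the disadvantaged group's selection rate to the advantaged group's."""
    return rate_disadvantaged / rate_advantaged

# Hypothetical outcomes produced by an automated screening tool
rate_group_a = selection_rate(48, 100)  # 0.48
rate_group_b = selection_rate(30, 100)  # 0.30

ratio = disparate_impact_ratio(rate_group_b, rate_group_a)  # 0.625

# Under the four-fifths rule of thumb, a ratio below 0.8 is a common
# signal that the tool warrants further bias testing.
flagged = ratio < 0.8  # True
```

Note that the four-fifths rule is only a screening heuristic; a full bias audit would typically also consider statistical significance and sample sizes.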
In addition, the second and most recent version of the modifications define proxy characteristics, which are important to consider to identify and mitigate unintentional bias. In the second version of the modifications, a proxy is defined as:
“A facially-neutral characteristic that is correlated with having one or more characteristics protected by the Act.”
The third and most recent version of the modifications revises this definition to prevent any ambiguity around the phrase “facially-neutral”, which could be misread as applying only to the use of facial recognition, defining a proxy as:
“A technically neutral characteristic or category correlated with a basis protected by the Act.”
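To make the idea of a proxy concrete, one simple (and purely illustrative) check is to measure the correlation between a neutral-seeming feature and a protected characteristic; a strongly correlated feature can let a model discriminate indirectly. The data below is synthetic:

```python
# Illustrative sketch: a "proxy" is a neutral-seeming characteristic
# correlated with a protected one. Here, a hypothetical binary feature
# (e.g. residence in a particular postcode) closely tracks group
# membership, so a model that uses it can discriminate indirectly.
# All data below is synthetic.

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# 1 = member of a protected group, 0 = not (synthetic labels)
protected = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
# Facially neutral feature that happens to track the protected label
neutral_feature = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

r = pearson_correlation(neutral_feature, protected)
# A high |r| flags the feature as a potential proxy worth auditing.
```

In practice, auditors would use larger samples and measures suited to the feature types (e.g. mutual information for categorical features), but the underlying idea is the same.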
Given that the modifications are being made to existing employment regulations rather than an entirely new law being introduced – as with New York City Local Law 144 – the major requirements and prohibitions are largely the same across all three versions of the modifications. Overall, the employment regulations prohibit discrimination based on characteristics including race, national origin, gender, accent, English proficiency, immigration status, driver's license status, citizenship, height or weight, sex, pregnancy or perceived pregnancy, religion, and age, unless the criteria used are shown to be job-related for the position in question and consistent with business necessity.
The modifications typically make explicit that these prohibitions also apply to the use of ADSs. However, the most recent version introduces some novel obligations, including providing evidence of bias testing and other similar efforts to avoid unlawful discrimination. This includes information on the quality, recency, and scope of these efforts, as well as the results of the testing and any responses to them. This indicates that bias audits could play a key role in ensuring that employment practices, particularly those that make use of an ADS, are not discriminatory and that appropriate safeguards are in place to prevent future risks of discrimination.
Further, the original employment regulations require those who advertise, sell, provide, or use a selection tool to maintain records of the assessment criteria used by the ADS, for each employer or covered entity the ADS is provided to, for at least two years from the date of the employment decision. All three versions of the modifications require that this data, including ADS data, be kept for at least four years.
The most recent version of the modifications, however, extends these requirements: records of the training set, modelling, assessment criteria, and outputs of the ADS must be kept for at least four years from the last date the ADS was used by the employer or other covered entity.
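As a rough sketch of what such record-keeping might look like in practice, the structure below covers the retention items named in the most recent modifications (training set, modelling, assessment criteria, and outputs) with a four-year retention window. The field names and references are hypothetical, not prescribed by the regulations, and a year is approximated as 365 days:

```python
# Illustrative sketch of an ADS record-retention entry. Field names,
# dataset references, and the 365-day year approximation are all
# assumptions for the example, not requirements of the regulations.
from datetime import date, timedelta

RETENTION_YEARS = 4

def retention_deadline(last_used: date) -> date:
    """Earliest date the record may be deleted, counting four years
    (approximated as 365 days each) from the last date of use."""
    return last_used + timedelta(days=365 * RETENTION_YEARS)

ads_record = {
    "training_dataset_ref": "s3://example-bucket/train-v3.csv",  # hypothetical
    "model_version": "resume-screen-2.1",                        # hypothetical
    "assessment_criteria": ["aptitude_score", "experience_years"],
    "outputs_ref": "s3://example-bucket/decisions-2024.csv",     # hypothetical
    "last_used": date(2024, 6, 30),
}
ads_record["retain_until"] = retention_deadline(ads_record["last_used"])
```

A production system would anchor the deadline to the exact statutory requirements and account for leap years; the point here is simply that the retention clock runs from the last date of use, not the date of each decision.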
The modifications also all specify that ADSs can bring about unintentional discrimination against those with disabilities if they measure attributes such as reaction time, for example. This is something the Equal Employment Opportunity Commission (EEOC) has issued guidance on. As such, it is important that employers ensure that the ADSs they use to evaluate candidates or employees are consistent with business necessity, measure job-related characteristics, and are appropriately validated.
California modifying its employment regulations is a prime example of how existing regulations also apply to automated and AI-driven tools. California is simply explicating that the existing employment laws also apply to the use of ADSs, and adding some additional obligations to deal with the novel risks that come with these tools. Indeed, the EEOC, along with other federal agencies, recently issued a statement reiterating that existing regulations apply to these tools, and the lawsuit brought against ATS provider Workday for alleged discrimination highlights this.
Given the wave of regulations that will soon target HR Tech, it is important to take action early to remain compliant. Schedule a demo to find out how Holistic AI can help you with this.
Written by Airlie Hilliard, Senior Researcher at Holistic AI
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.