In the wake of New York City’s bias audit mandate for automated employment decision tools (Local Law 144), states like California have introduced legislation to address the potential harms that can result from using artificial intelligence or automated systems in hiring. The legislation targets the use of automated systems or tools in workplace and employment-related decisions, where bias or discrimination against protected classes is a particular concern. To address these concerns, California has proposed amendments to its employment regulations that extend non-discrimination requirements to automated-decision systems.
Employers with five or more employees are subject to this regulation, including those whose employees are outside of California, although out-of-state employees are not covered by the act’s protections unless the prohibited activity occurred in California. Vendors or agents acting on behalf of an employer are also considered employers under this regulation.
An automated-decision system (ADS) is a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorises, recommends, or makes or facilitates employment-related decisions. This includes systems used to target job adverts, screen resumes, analyse facial expressions, word choice, and voices in video interviews, administer computer-based tests and game-based assessments, and measure constructs such as personality, aptitude, cognitive ability, or cultural fit through automated tests.
Automated-decision system (ADS) data is data used to develop or apply machine learning, algorithms, or artificial intelligence as part of an ADS. This includes training data, data provided by applicants or employees, information about applicants or employees that has been analysed by an ADS, and data produced by an ADS.
Under the proposed amendments to California’s employment legislation, it is prohibited to use automated-decision systems that limit, express a preference for, or screen out applicants based on protected characteristics or proxies of characteristics unless there is an affirmative defence for using this criterion.
Artificial intelligence is a machine learning system that can make predictions, recommendations, or decisions that influence real or virtual environments when given a set of human-defined objectives. Typically, the developer relies partly on the computer’s analysis of data to determine the criteria to use to make decisions.
Machine learning is an application of artificial intelligence where a system can automatically learn and improve based on data or experience without the need for explicit programming.
An applicant must be notified if an employer plans to withdraw an employment offer based on the applicant’s criminal history and the decision to withdraw the offer involves using an ADS. The applicant must be provided with a copy or description of any report or information from the operation of the automated-decision system, related data, and assessment criteria used as part of the automated-decision system that resulted in the withdrawn employment offer.
Before an offer is extended to an applicant, procedures to conduct a medical or psychological exam, including by using an ADS, are not permitted. This includes using tests of optimism, emotional stability, extraversion, intensity, and tests of mental ability to make a medical or psychological enquiry.
California’s employment legislation prohibits discrimination based on characteristics including race, national origin, gender, accent, English proficiency, immigration status, driver’s license status, citizenship, height or weight, sex, pregnancy or perceived pregnancy, religion, and age, unless the selection criteria are shown to be job-related for the position in question and consistent with business necessity.
Anyone involved in the advertisement, sale, provision, or use of a selection tool, including an ADS, must retain records of the assessment criteria used for each employer or entity that is provided with the tool for at least four years after the tool is last used.
While this legislation does not require employers to conduct bias audits of their automated-decision systems, as New York City’s legislation does, many companies are opting to audit their automated-decision systems to identify bias and to assess and manage risks. Conducting an audit and taking steps to mitigate any bias it reveals can help reduce harm and legal risk.
Schedule a demo with us to find out how Holistic AI can help you manage the risks of your automated-decision systems.
Written by Ayesha Gulley, Public Policy Associate at Holistic AI & Airlie Hilliard, Senior Researcher at Holistic AI.