
Key Takeaways from the EEOC’s Navigating Employment Discrimination in AI and Automated Systems Hearing

Authored by Ayesha Gulley, Policy Product Manager at Holistic AI
Published on Feb 1, 2023

Artificial intelligence (“AI”) is undeniably transforming the workplace, though many implications remain unknown. Employers increasingly rely on algorithms to determine who gets interviewed, hired, promoted, rejected, or fired. Notwithstanding its positive impacts, AI poses new employment discrimination issues, especially when designed or used improperly.

As part of its initiative on AI and Algorithmic Fairness, on 10 January 2023, the Equal Employment Opportunity Commission (“EEOC”) published for public comment a draft Strategic Enforcement Plan (“SEP”) for 2023-2027. The SEP aims to focus and coordinate the agency’s work to produce “a sustained impact in advancing equal employment opportunity”. The enforcement plan is the first to address the use of automated systems for hiring, and it focuses on how AI and machine learning (“ML”) systems may be used to intentionally exclude or adversely impact protected groups.

Continuing its efforts, on 31 January 2023, the EEOC held a public hearing on how the use of automated systems, including artificial intelligence, in employment decisions can comply with the federal civil rights laws the EEOC enforces. The hearing drew a large audience and featured a panel discussion on the civil rights implications of AI and other automated systems for US employees and job candidates. Twelve individuals presented oral comments, representing multiple disciplines including law, data science, civil rights advocacy, and industrial and organizational psychology. The hearing explored the ways in which emerging technologies could further promote diversity, equity, inclusion, and accessibility.

In this blog article, we outline the key takeaways from the hearing.

The four-fifths rule is an insufficient metric

While the EEOC’s Uniform Guidelines on Employee Selection Procedures provide a federal framework for analysing whether employment selection procedures could violate antidiscrimination laws, concerns were raised that this guidance no longer reflects current legal guidelines or professional standards.

Several participants raised concerns about the four-fifths rule as a metric for determining adverse impact; under the rule, the hiring rate of one subgroup should not be less than four-fifths of the hiring rate of the subgroup with the highest rate. A large body of research has shown that the metric can be unreliable for determining bias, particularly when sample sizes are small. Other metrics, such as the two standard deviations rule or Cohen’s d, can be more suitable, and metrics from computer science can be applied to algorithmic systems. Since many employers and vendors of algorithmic tools look to EEOC guidance and follow it closely, there was general agreement that further guidance and education are needed on the limitations of these measurements and the effects they have for different groups. The technical community and vendors can also play an important role in building understanding of which metrics are most appropriate in different contexts.
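To make the comparison concrete, here is a minimal Python sketch of the four-fifths rule alongside the two standard deviations test; the function names and the toy hiring counts are illustrative assumptions, not figures from the hearing.

```python
# Minimal sketch, assuming two groups with (selected, total) hiring counts;
# illustrative only, not the EEOC's or any vendor's implementation.
from math import sqrt

def four_fifths_ratio(selected_a, total_a, selected_b, total_b):
    """Adverse impact ratio: the lower group selection rate divided by the
    higher one. A ratio below 0.8 flags adverse impact under the rule."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def two_sd_z(selected_a, total_a, selected_b, total_b):
    """Two standard deviations rule: a two-proportion z-statistic on the
    difference in selection rates. |z| > 2 suggests the disparity is
    unlikely to be due to chance."""
    pooled = (selected_a + selected_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (selected_a / total_a - selected_b / total_b) / se

# With tiny samples the two tests can disagree: 3 of 5 vs 4 of 5 hired.
print(four_fifths_ratio(3, 5, 4, 5))  # 0.75 -> flagged by the 4/5 rule
print(two_sd_z(3, 5, 4, 5))           # |z| ~= 0.69 -> not significant
```

With only five applicants per group, the four-fifths rule flags adverse impact while the z-test finds nothing statistically reliable, which is exactly the small-sample fragility participants described.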

Speakers also raised concerns about misalignment between the Uniform Guidelines and guidance for industrial-organisational psychologists (e.g., the Principles for the Validation and Use of Personnel Selection Procedures). The Uniform Guidelines do not require a user to conduct validity studies of selection procedures where no adverse impact results; a user is only required to validate when there is evidence of adverse impact, and validity evidence is trifurcated into criterion-related, content, and construct validity. The Principles, by contrast, recommend that all selection assessments be validated and that validity evidence come from multiple sources, with no trifurcation.

Additionally, although the Uniform Guidelines require that selection assessments be informed by a job analysis identifying the constructs to be measured, speakers argued that this is difficult to do with algorithmic assessments. The huge number of predictors makes it hard to justify how each relates to the construct or to job performance, which can allow irrelevant predictors to influence outcomes (e.g., items in the background of video interviews can influence scores). As a result, speakers called for additional guidance on justifying the predictors used in algorithmic models, and on whether evidence beyond simple correlations must be presented to justify the use of predictors and constructs; the sketch below illustrates why correlations alone are weak evidence.
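As a hedged illustration of that concern, the following sketch screens a large set of synthetic pure-noise “predictors” against a synthetic job-performance outcome; the sample size, threshold, and data are all assumptions made for the demonstration.

```python
# Illustrative only: with enough candidate predictors, some irrelevant ones
# will correlate with the outcome purely by chance.
import random
import statistics

random.seed(0)
n = 50  # candidates
job_performance = [random.gauss(0, 1) for _ in range(n)]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

# 1,000 pure-noise predictors (think: pixel statistics from a video
# interview background) screened by correlation with performance alone.
noise = [[random.gauss(0, 1) for _ in range(n)] for _ in range(1000)]
passing = [p for p in noise if abs(corr(p, job_performance)) > 0.3]
print(f"{len(passing)} of 1000 noise predictors pass an |r| > 0.3 screen")
```

With 50 candidates, roughly three to four percent of pure-noise predictors clear an |r| > 0.3 screen by chance alone, so a correlation by itself cannot show that a predictor measures a job-relevant construct.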

Auditing is essential in mitigating potential biases

Overwhelmingly, panellists agreed that audits are necessary to mitigate intentional or unintentional biases against protected characteristics. However, there was some disagreement about who should conduct these audits: government or third parties. A government-led audit could potentially stifle innovation, while third-party auditors often lack consensus on which metrics to use in testing.

Many stakeholders voiced the importance of taking protected characteristics into account when auditing, and of assessing on a continuous basis to eliminate proxies. Proposals included going beyond assessing only the disparate impact of a tool to also consider its transparency and efficacy, and requiring that audit results be made public. Setting standards for vendors and requiring companies to disclose which hiring tools they are using were also discussed. One simple proxy check is sketched below.
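As one hedged sketch of what such a proxy check could look like (the feature names, data, and threshold below are hypothetical), a standardised mean difference between groups can flag features from which a model could reconstruct a protected attribute even when that attribute is never an input.

```python
# Hypothetical proxy screen: flag features whose distribution differs
# sharply between protected groups (a simplified Cohen's-d-style statistic).
import statistics

def standardised_diff(feature, group):
    """Mean difference between groups 0 and 1, scaled by the overall SD
    (a simplification of Cohen's d using the combined-sample SD)."""
    a = [f for f, g in zip(feature, group) if g == 0]
    b = [f for f, g in zip(feature, group) if g == 1]
    return (statistics.mean(a) - statistics.mean(b)) / statistics.pstdev(a + b)

# Toy data: 'zip_code_income' tracks group membership; 'typing_speed' does not.
group = [0, 0, 0, 0, 1, 1, 1, 1]
features = {
    "zip_code_income": [30, 32, 31, 29, 55, 58, 54, 57],
    "typing_speed":    [60, 72, 65, 70, 63, 71, 66, 69],
}
for name, values in features.items():
    d = standardised_diff(values, group)
    status = "possible proxy" if abs(d) > 0.8 else "ok"
    print(f"{name}: d = {d:.2f} ({status})")
```

In practice an auditor would use proper within-group pooled variance, significance testing, and far more data; the point is only that continuous, characteristic-aware checks of this kind are what eliminating proxies entails.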

Multiple panellists encouraged the EEOC to continue pursuing opportunities to increase enforcement, including strategically selected targets, to ensure accountability in the use of these systems, alongside transparency mechanisms for enforcement.

The scope of Title VII liability should be updated to align with technological advancements

Several stakeholders commented on the scope of Title VII liability under the Civil Rights Act of 1964. Notably, Title VII prohibits the use of neutral policies and procedures that disproportionately adversely impact (or, here, screen out) a group protected under the Act because of race, color, religion, sex, national origin, or another protected trait. Developed fifty-nine years ago with human decision-makers in mind, the Act does not adequately align with current trends, and industry guidance is noticeably more up to date. Moreover, amongst other things, the ambiguity over whether an AI vendor could meet the definition of an employment agency under Title VII deserves further clarification.

Experts suggested that the law be clarified now to address the risks posed by automated systems and to make clear who is in scope for legal compliance.

What’s next for employers and vendors

Algorithmic discrimination has attracted substantial attention from US federal, state, and local authorities. The EEOC is taking a strong stance in leading the way for federal agencies to ensure that AI does not perpetuate bias or present barriers to employment opportunities. This hearing represents a clear statement of the agency’s enforcement priorities; as such, employers should be aware of and manage the risks associated with using AI for employee recruitment and selection, in light of the enforcement actions likely to follow.

As stated by EEOC Chair Charlotte Burrows, “we must work to ensure that these new technologies do not become a high-tech pathway to discrimination.” Auditing current systems (and new ones before deployment) will be increasingly important to keep both regulators and plaintiffs at bay. Vendors and employers should treat equal employment opportunity not just as a compliance issue but as a value.

To find out how Holistic AI can audit your automated recruitment tools to ensure that they comply with equal opportunity legislation, get in touch at we@holisticai.com.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
