The EEOC Releases a Joint Statement on AI and Automated Systems

April 27, 2023

On April 25, 2023, a press release from the Equal Employment Opportunity Commission (EEOC) announced the publication of a joint statement on artificial intelligence (AI) and automated systems with the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ), and the Federal Trade Commission (FTC). Here, automated systems are defined as software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.

The statement expresses a commitment from the four agencies to ensure that the use of AI does not violate the core principles of fairness, equality, and justice embedded in US federal laws. It reiterates that existing laws apply to AI and automated systems, and that a lack of transparency about a system, or a limited understanding of how it works, is not a valid excuse for violating the laws relevant to the context in which the system is used.

EEOC’s efforts to address AI in hiring technologies

This joint statement is not the first action the EEOC has taken in relation to AI: in October 2021, it launched an initiative on AI and algorithmic fairness, which aimed to form an internal working group on AI, create opportunities for stakeholder consultation, identify best practices, and issue technical guidance on using AI in employment decisions.

More recently, in May 2022 the EEOC published guidance on how AI-driven assessments can have implications under the Americans with Disabilities Act, and in January 2023 it published a draft strategic enforcement plan for 2023–2027 in which AI was highlighted as a priority that will shape the Commission’s actions over the next four years. Also in January 2023, the EEOC held a public hearing on how the use of automated systems, including artificial intelligence, in employment decisions can comply with the federal civil rights laws the EEOC enforces, and Commissioner Keith Sonderling spoke at the SHRM/SIOP event on Exploring the Legal and Practical Implications of Using AI-Based Assessments in Hiring.

The joint statement from EEOC Chair Burrows and officials from the DOJ, CFPB, and FTC is the latest of these efforts surrounding the use of AI.

Agency enforcement actions

In addition to the EEOC’s publications and its lawsuit against iTutorGroup for algorithm-driven age discrimination, the other enforcement agencies have also shown how their authority applies to AI and automated systems:

  • The Consumer Financial Protection Bureau (CFPB), which enforces numerous federal consumer financial laws, has published a circular confirming that consumer financial laws and adverse action requirements apply regardless of the technology being used, and that a lack of transparency about how credit decisions are made is not a defense for a violation.
  • The Department of Justice’s Civil Rights Division enforces non-discrimination rules and has recently filed a statement of interest in federal court stating that the Fair Housing Act also applies to algorithm-based tenant screening.
  • The Federal Trade Commission (FTC) enforces the FTC Act to protect consumers from deceptive or unfair business practices. It has issued a report on the use and impact of AI in combatting online harms identified by Congress, warned that the FTC Act could be violated if market participants use automated tools with discriminatory impacts or deploy systems before risks are identified and mitigated, and required the destruction of algorithms trained on data that should not have been collected.

How AI and automated systems can violate non-discrimination laws

The main aim of the statement is to reiterate that AI and automated systems are covered by existing laws and that using technology does not create a loophole around compliance. The statement ends by identifying some sources of unlawful discrimination, or bias, in the use of automated systems:

  • Training data – automated systems trained on unrepresentative, imbalanced, biased, or erroneous data can produce skewed, discriminatory outcomes, particularly if the data acts as a proxy for protected classes (see the sketch after this list)
  • Lack of transparency – automated systems are often black boxes, making it difficult for developers and other entities to know whether a system is fair, since its inner workings are unknown
  • Design and use – developers may fail to consider the social context in which a system will be used, meaning systems can be designed and developed on the basis of flawed assumptions about users and societal impact
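
To make the training-data point concrete, the sketch below (in Python, with entirely hypothetical group labels and outcomes) shows the kind of selection-rate audit that can surface skewed outcomes. It applies the four-fifths rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the highest group’s rate, that is treated as initial evidence of adverse impact warranting closer review, not as proof of discrimination.

    from collections import Counter

    # Hypothetical screening results: (applicant group, passed automated screen).
    # The groups and outcomes here are illustrative only.
    results = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    applicants = Counter(group for group, _ in results)
    passed = Counter(group for group, ok in results if ok)

    # Selection rate per group: share of that group's applicants who passed.
    rates = {g: passed[g] / applicants[g] for g in applicants}

    # Impact ratio: each group's rate relative to the highest-rate group.
    # Under the four-fifths rule of thumb, a ratio below 0.8 is a red flag.
    best = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / best
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")

Running this on the hypothetical data above flags group_b (selection rate 0.25, impact ratio 0.33), illustrating how a system trained on skewed data can fail even a simple first-pass audit.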

To find out how Holistic AI can help you make your algorithms legally compliant, get in touch at we@holisticai.com.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
