Policy Hour
Online Webinar

Workplace Relations in the Age of AI: Practices, Regulations and the Future of Equal Opportunity Laws

Thursday, 18 May, 11:30 am ET / 4:30 pm BST

We recently held a riveting webinar discussing the intersection of AI and HR, a rapidly evolving field. Expert perspectives were shared on how AI and technology are reshaping the employee lifecycle and transforming workplace relations.

Keith Sonderling, Commissioner of the EEOC, provided valuable insights on practices, regulations, and the future of equal opportunity laws in this AI-dominated era. Co-moderated by Adriano Koshiyama, Co-founder at Holistic AI, and Jonathan Kestenbaum, Managing Director of Technology Strategy & Partnerships at AMS, this engaging session offered a forward-thinking view into the future of HR.

Key Discussions

Throughout the discussion, our experts shed light on several pivotal issues, including:

  • The role of AI and technology in reshaping the employee lifecycle
  • Understanding and navigating the NYC Bias Audit Law
  • Strategies for balancing fairness and efficiency in the hiring process
  • A comprehensive outlook on the future of HR and technology regulations, presented by Commissioner Sonderling

Post-Event Q&A

Our insightful webinar sparked a host of questions from our engaged audience. Although we couldn't answer them all live due to time limits, we've put together a post-event Q&A to cover all remaining inquiries.

In case you missed the session or need to revisit any points, we've provided a link to the full recording of the webinar below.


On 18 May 2023, the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency charged with administering federal civil rights laws (including Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA)), released a "technical assistance document" titled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964."

The technical assistance guidance includes several key takeaways for employers using selection tools that incorporate or are driven by AI under Title VII:

  • Liability: Employers are accountable for disproportionate impacts caused by AI selection tools, regardless of whether an external vendor developed them. The EEOC advises employers to evaluate a vendor's measures to identify potential adverse effects before outsourcing the administration of such tools. The guidance emphasizes that an employer may still be held liable even if a vendor's assessment is incorrect (such as wrongly indicating that the tool does not have an adverse impact when it actually does).
  • Self-Auditing: Employers are advised to regularly self-audit selection tools to assess any adverse impact on protected groups. If adverse impact is found, the EEOC encourages employers to modify the tool to minimize such effects.
  • The four-fifths rule does not rule out disparate impact: While the four-fifths rule is a useful rule of thumb, satisfying it does not guarantee the absence of disparate impact. Smaller differences in selection rates may still indicate adverse impact where the tool is used for a substantial number of selections or where the employer discourages certain applicants from applying. Even if a tool meets the four-fifths test, it can still be deemed to have an unlawful adverse impact if it produces a statistically significant difference in selection rates.
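To make the four-fifths rule of thumb concrete, the sketch below computes selection rates and the impact ratio for two hypothetical applicant pools (the counts and group names are illustrative, not from the source):

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical applicant pools (numbers are illustrative only)
rate_a = selection_rate(48, 80)   # 0.60, the highest-rate (reference) group
rate_b = selection_rate(24, 60)   # 0.40, the comparison group

# Impact ratio: comparison group's rate relative to the reference group's rate
impact_ratio = rate_b / rate_a    # 0.40 / 0.60 ~ 0.667
flagged = impact_ratio < 0.8      # below the four-fifths threshold
```

Here the ratio falls below 0.8, so the rule of thumb would flag potential adverse impact; per the EEOC's guidance, a ratio above 0.8 would not by itself establish the absence of adverse impact.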

Algorithms can perpetuate and amplify existing biases if appropriate safeguards are not used. For example, a model trained on biased human judgements is also likely to be biased. However, unlike unconscious bias, which is difficult to fully overcome, the use of AI and algorithms allows technical approaches to mitigating bias to be applied alongside any non-technical approaches. This means that even when the data used to train a model is biased, the resulting system's outputs need not be. Check out our open-source library to see some of the mitigation techniques that can be used.
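As a minimal sketch of one well-known pre-processing mitigation technique (reweighing), the snippet below computes per-example training weights so that group membership and the label become statistically independent in the weighted data; the toy data and function name are assumptions for illustration, not the source's library API:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight w(g, y) = P(g) * P(y) / P(g, y), so that applying these
    weights makes group membership independent of the label."""
    n = len(labels)
    p_g = Counter(groups)            # counts per group
    p_y = Counter(labels)            # counts per label
    p_gy = Counter(zip(groups, labels))  # joint counts
    return {
        (g, y): (p_g[g] * p_y[y]) / (n * p_gy[(g, y)])
        for (g, y) in p_gy
    }

# Toy data: group "a" has 2 of 3 positive labels, group "b" only 1 of 3
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)
# weights[("b", 1)] > 1 up-weights the under-represented positive cases
```

Training a model on the reweighted examples reduces the association between the protected attribute and the outcome without altering the underlying records.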

AI and automation pose novel risks and require additional safeguards to mitigate them. The AI ethics movement has also driven greater transparency and explainability, and legislation specifically targeting AI and automation often imposes additional transparency requirements on top of enforcing equal opportunity.

Currently, most targeted regulations have more specific and narrower requirements than broader regulations such as the EU AI Act, which will impose additional obligations such as risk management frameworks and requirements surrounding the data used by these systems. Both broad and targeted legislation will, therefore, play an important role in making HR Tech tools safer and fairer.

AI systems should undergo continuous monitoring throughout their lifecycle, from design and development through to deployment, but particularly following any major changes to the system. Outputs should be monitored for bias and any unequal outcomes should be mitigated. Holistic AI’s governance, risk management, and compliance platform is an excellent way to ensure compliance and mitigate risks before they cause harm.
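As a hedged sketch of what monitoring outputs for unequal outcomes might look like in practice (the batch data, threshold, and function name are illustrative assumptions, not a description of any specific platform), the snippet below flags any group whose selection rate in a batch of decisions falls below four-fifths of the best group's rate:

```python
def flag_groups(outcomes, threshold=0.8):
    """outcomes: iterable of (group, was_selected) pairs. Returns the
    groups whose selection rate is below `threshold` times the best
    group's rate in this batch."""
    counts = {}
    for group, selected in outcomes:
        sel, total = counts.get(group, (0, 0))
        counts[group] = (sel + int(selected), total + 1)
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())
    return sorted(g for g, rate in rates.items() if rate < threshold * best)

# One hypothetical batch of selection decisions
batch = [("a", True), ("a", True), ("a", False),
         ("b", True), ("b", False), ("b", False), ("b", False)]
flagged = flag_groups(batch)  # group "b" falls below the threshold here
```

Running such a check on every batch of decisions, and re-running it after any major change to the system, is one simple way to operationalise the lifecycle monitoring described above.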

The AI regulatory landscape is constantly evolving. Currently, there are no proposals for a law similar to NYC Local Law 144 at the federal level, but comparable laws have been proposed in New Jersey and New York State. At the federal level, there have been a number of initiatives, such as the Blueprint for an AI Bill of Rights and the Algorithmic Accountability Act, as well as the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST).

The Uniform Guidelines on Employee Selection Procedures were published in 1978 and provide guidance on issues such as adverse impact analysis and validation of selection procedures, as well as proposing the four-fifths rule of thumb as a metric for assessing adverse impact. The EEOC has not announced any plans to update these guidelines to address AI-driven or automated tools, but several regulations targeting AI have been proposed in the US.

Many employers acquire their AEDTs from third-party vendors, meaning that they are not always in control of how the system is designed. When this is the case, it is more important than ever that employers do their due diligence by requesting information about:

  • Validation studies that have been conducted for the tool
  • The results of these validation studies
  • Whether adverse impact was tested during the development of the tool and the metrics used to assess this
  • Whether the vendor conducts adverse impact analysis of the tool once deployed, how often, and what metrics are used
  • Whether the results of any adverse impact analyses are shared with clients
  • What happens if adverse impact is identified
  • Any other risk management practices of the vendor

Based on this information, employers can consider the validity of the tool, its potential for adverse impact, the remedies available if adverse impact is identified, and whether the tool aligns with their values and priorities. It is also important to consult legal counsel before acquiring any selection tools and to follow guidance issued by relevant bodies, such as the Society for Industrial and Organizational Psychology's Principles for the Validation and Use of Personnel Selection Procedures. It is essential to keep in mind that it is typically employers, not vendors, that are liable for any violations of equal employment laws.

There is a lack of alignment on how AI is defined, with different definitions varying in how comprehensive they are. For an overview of some of the different ways that AI is defined, check out our blog post “Lost in translation: Differing definitions of AI”.

AMS, the global talent solutions business, has recently announced a partnership with Holistic AI, which will result in HR Tech tools on the AMS platform being given a score indicating each tool’s level of AI sophistication and risk, the first rating system of its kind in the HR Tech space.

Making decisions based on protected characteristics, such as race and gender, is prohibited under many equal employment laws. It is important that systems do not disproportionately impact any subgroups, so that all individuals have a fair chance of being selected based on their qualifications and other relevant criteria.

In the context of NYC Local Law 144, auditors are not independent if they are involved in the development or deployment of the AEDT being audited, have an employment relationship during the audit with the employer/employment agency or vendor whose tool is being audited, or have a direct or material indirect financial interest in the employer/employment agency or vendor. While auditors are not required to hold a formal qualification, there is an understanding that they must have the expertise needed to understand the system, its capabilities, and its outputs, and be capable of performing the required analysis.

At Holistic AI, we combine expertise in computer science, law, policy, and business psychology to holistically understand a system and the context it is used in, having audited over 100 projects. Schedule a demo to find out more.

Discover how we can help your company

Schedule a call with one of our experts
