We recently held a riveting webinar discussing the intersection of AI and HR, a rapidly evolving field. Expert perspectives were shared on how AI and technology are reshaping the employee lifecycle and transforming workplace relations.
Keith Sonderling, Commissioner of the EEOC, provided valuable insights on practices, regulations, and the future of equal opportunity laws in this AI-dominated era. Co-moderated by Adriano Koshiyama, Co-founder at Holistic AI, and Jonathan Kestenbaum, Managing Director of Technology Strategy & Partnerships at AMS, this engaging session offered a forward-thinking view into the future of HR.
Key Discussions
Throughout the discussion, our experts shed light on several pivotal issues, including:
Post-Event Q&A
Our insightful webinar sparked a host of questions from our engaged audience. Although we couldn't answer them all live due to time limits, we've put together a post-event Q&A to cover all remaining inquiries.
In case you missed the session or need to revisit any points, we've provided a link to the full recording of the webinar below.
On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency charged with administering federal civil rights laws (including Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA)), released a "technical assistance document" titled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964."
The technical assistance guidance includes several key takeaways for employers using selection tools that incorporate or are driven by AI under Title VII:

Algorithms can perpetuate and amplify existing biases if appropriate safeguards are not used. For example, a model trained on biased human judgments is also likely to be biased. However, unlike unconscious bias, which is difficult to fully overcome, the use of AI and algorithms allows technical approaches to mitigating bias to be applied alongside any non-technical approaches. This means that even when the data used to train a model is biased, the resulting algorithm need not be. Check out our open-source library to see some of the mitigation techniques that can be used.
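To illustrate one such technical approach, here is a minimal sketch of reweighing (Kamiran & Calders), a well-known pre-processing technique that assigns each training example a weight so that group membership and outcome are statistically independent in the weighted data. The data and function name below are illustrative, not taken from any particular library's API.

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute one weight per example so that, in the weighted data,
    group membership and label are statistically independent.
    weight(g, y) = P(g) * P(y) / P(g, y), estimated from counts."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    pair_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" received the favorable label (1) more often than "b".
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
# Over-represented pairs like ("a", 1) get weights below 1;
# under-represented pairs like ("b", 1) get weights above 1.
```

Training a model with these sample weights downweights the biased patterns in the historical labels; it is one of several mitigation strategies, and the right choice depends on the system and its context.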
AI and automation pose novel risks and require additional safeguards to mitigate them. The AI ethics movement has also driven a push towards greater transparency and explainability, and legislation specifically targeting AI and automation often imposes additional transparency requirements on top of enforcing equal opportunity.
Currently, most targeted regulations have more specific and narrower requirements than wider regulations such as the EU AI Act, which will impose additional requirements such as risk management frameworks and obligations surrounding the data used by the systems. Both wider and targeted legislation will, therefore, play an important role in making HR Tech tools safer and fairer.
AI systems should undergo continuous monitoring throughout their lifecycle, from design and development through to deployment, but particularly following any major changes to the system. Outputs should be monitored for bias and any unequal outcomes should be mitigated. Holistic AI’s governance, risk management, and compliance platform is an excellent way to ensure compliance and mitigate risks before they cause harm.
The AI regulatory landscape is constantly evolving. Currently, there are no proposals for a similar law to NYC Local Law 144 at the federal level, but similar laws have been proposed in New Jersey and New York State. At the federal level, there have been a number of initiatives such as the Blueprint for an AI Bill of Rights and the Algorithmic Accountability Act, as well as the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST).
The Uniform Guidelines on Employee Selection Procedures were published in 1978 and provide guidance on issues such as adverse impact analysis and validation of selection procedures, as well as proposing the four-fifths rule of thumb as a metric for assessing adverse impact. The EEOC has not announced any plans to update these rules to address AI-driven or automated tools, but there are several regulations that have been proposed in the US to regulate AI.
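The four-fifths rule of thumb can be computed directly: the selection rate of one group is divided by that of the group with the highest rate, and a ratio below 0.8 is taken as evidence of potential adverse impact. A short sketch, using hypothetical selection counts purely for illustration:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical numbers: 48 of 80 men selected vs. 24 of 60 women.
rate_men = selection_rate(48, 80)      # 0.60
rate_women = selection_rate(24, 60)    # 0.40
ratio = impact_ratio(rate_men, rate_women)

print(f"Impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below four-fifths: potential adverse impact; investigate further")
```

Here the ratio is 0.40 / 0.60 ≈ 0.67, below the four-fifths threshold. Note that the Guidelines present this as a rule of thumb, not a legal bright line; small samples and statistical significance also matter when interpreting the result.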
Many employers acquire their AEDTs from third-party vendors, meaning that they are not always in control of how the system is designed. When this is the case, it is more important than ever that employers do their due diligence by requesting information about:
Based on this information, employers can consider the validity of the tool, its potential for adverse impact, resolutions if adverse impact is identified, and whether the tool aligns with their values and priorities. It is also important to consult counsel before acquiring any selection tools and to follow guidance issued by relevant bodies, such as the Society for Industrial and Organizational Psychology's Principles for the Validation and Use of Personnel Selection Procedures. It is essential to keep in mind that it is typically employers that are liable for any violations of equal employment laws, not the vendors.
There is a lack of alignment on how AI is defined, with different definitions varying in how comprehensive they are. For an overview of some of the different ways that AI is defined, check out our blog post “Lost in translation: Differing definitions of AI”.
AMS, the global talent solutions business, has recently announced a partnership with Holistic AI, which will result in HR Tech tools on the AMS platform being given a score indicating each tool’s level of AI sophistication and risk, the first rating system of its kind in the HR Tech space.
Making decisions based on protected characteristics, such as race and gender, is prohibited under many equal employment laws. It is important that systems do not disproportionately impact any subgroups, so that all individuals have a fair chance of being selected based on their qualifications and other relevant criteria.
In the context of NYC Local Law 144, auditors are not independent if they are involved in the development or deployment of the AEDT being audited, have an employment relationship during the audit with the employer/employment agency or vendor whose tool is being audited, or have a direct or material indirect financial interest in the employer/employment agency or vendor. While there is no requirement for auditors to hold a formal qualification, there is an understanding that auditors must have the expertise required to understand the system, its capabilities, and its outputs, and must be capable of performing the required analysis.
At Holistic AI, we combine expertise in computer science, law, policy, and business psychology to holistically understand a system and the context it is used in, having audited over 100 projects. Schedule a demo to find out more.
Schedule a call with one of our experts
DISCLAIMER: The information provided on this website does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this site are for general informational purposes only. Information on this website may not constitute the most up-to-date legal or other information. This website contains links to other third-party websites. Such links are only for the convenience of the reader, user or browser; Holistic AI does not recommend or endorse the contents of the third-party sites.
Readers of this website should contact their attorney to obtain advice with respect to any particular legal matter. No reader, user, or browser of this site should act or refrain from acting on the basis of information on this site without first seeking legal advice from counsel in the relevant jurisdiction. Only your individual attorney can provide assurances that the information contained herein – and your interpretation of it – is applicable or appropriate to your particular situation. Use of, and access to, this website or any of the links or resources contained within the site do not create an attorney-client relationship between the reader, user, or browser and website authors, contributors, contributing law firms, or committee members and their respective employers.
The views expressed at, or through, this site are those of the individual authors writing in their individual capacities only – not those of their respective employers, Holistic AI, or committee/task force as a whole. All liability with respect to actions taken or not taken based on the contents of this site is hereby expressly disclaimed. The content on this posting is provided "as is"; no representations are made that the content is error-free.