SIOP Publishes Guidelines on AI-Based Employee Selection Assessments

January 27, 2023

Following the launch of its Task Force on Artificial Intelligence (AI)-Based Assessments in October 2021, the Society for Industrial and Organizational Psychology (SIOP) has published guidelines on the validation and use of AI-based assessment in employee selection.

Summary of SIOP Guidelines

Building on its statement on the use of AI in hiring published in January 2022, SIOP’s guidelines, released in January 2023, are based on five key principles:

  1. The scores from AI-based assessments should accurately predict future job performance (or other relevant outcomes);
  2. Scores from AI-based assessments should be consistent upon re-test and measure job-related characteristics;
  3. AI-based assessments should produce fair and unbiased scores;
  4. AI-based assessments should be used appropriately with operational considerations in mind; and
  5. Decision-making related to AI-driven assessments should be adequately documented to facilitate verification and external auditing.

To comply with these principles, key requirements include collating the appropriate evidence to validate the tool (including convergent and discriminant validity and generalisability), ensuring equitable treatment of different groups, and identifying and mitigating predictive bias and measurement bias. Employers should also ensure that they are using approaches informed by research and industry best practices, communicate the system specifications to users in a comprehensible way, and document information such as data sources, validation efforts, details about the algorithm, and any technical requirements in sufficient detail to enable meaningful external audits.
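As an illustration of the first requirement, criterion-related validity evidence is often summarised as the correlation between assessment scores and a later outcome such as supervisor performance ratings. The sketch below uses entirely hypothetical data and a hand-rolled Pearson correlation; SIOP's guidelines do not prescribe any particular statistic or tooling.

```python
# Minimal sketch (hypothetical data): estimating criterion-related
# validity as the Pearson correlation between AI assessment scores
# and job-performance ratings collected after hiring.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical validation sample.
scores = [62, 71, 55, 80, 68, 74, 59, 77]          # assessment scores
ratings = [3.1, 3.8, 2.9, 4.4, 3.5, 4.0, 3.0, 4.2]  # later performance ratings

validity = pearson(scores, ratings)
print(round(validity, 2))
```

In practice a validation study would use a much larger sample and also examine convergent and discriminant validity, but the core evidence takes this shape: scores should meaningfully relate to the outcome they claim to predict.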

Towards Transparency and Fairness

Key areas of interest are the recommendations to increase the transparency of AI-driven assessments by ensuring that their purpose and functionality are communicated to candidates, and the distinction between fair practices (e.g., making constructs and assessments as accessible as possible) and unbiased practices (e.g., testing how the model functions across different groups to identify predictive bias).
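One common way to test for predictive bias, in the spirit of the Cleary model, is to fit a single prediction line for the pooled sample and then check whether any group's performance is systematically under- or over-predicted. The sketch below uses hypothetical data; the guidelines themselves do not mandate this specific method.

```python
# Minimal sketch (hypothetical data): a simple predictive-bias check.
# Fit one regression line on the pooled sample, then compare mean
# residuals by group. A group with a consistently nonzero mean
# residual is being systematically mis-predicted.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b

# Hypothetical validation data: (group, assessment score, performance).
sample = [
    ("A", 60, 3.0), ("A", 70, 3.6), ("A", 80, 4.2), ("A", 75, 3.9),
    ("B", 60, 2.8), ("B", 70, 3.4), ("B", 80, 4.0), ("B", 75, 3.7),
]

a, b = fit_line([s for _, s, _ in sample], [p for _, _, p in sample])

for group in ("A", "B"):
    residuals = [p - (a + b * s) for g, s, p in sample if g == group]
    print(group, round(sum(residuals) / len(residuals), 3))
```

Here the pooled line under-predicts group A and over-predicts group B by the same margin, which is exactly the pattern a bias audit would flag for investigation and mitigation.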

Auditing of AI-Driven Selection Assessments

In addition, SIOP recommends that decision-making processes pertinent to the creation and validation of the tool and associated algorithms be documented to support external audits. This could be achieved, for example, by accompanying every AI-driven assessment with a technical manual detailing the design, development, and deployment of the tool.

Such efforts will also support compliance with New York City Local Law 144, which requires independent, impartial bias audits of automated employment decision tools used to evaluate candidates for employment or employees for promotion in New York City.

To find out more about the guidance for automated recruitment tools and other relevant guidance, download our HR Tech Policy Pack.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
