Following the launch of its Task Force on Artificial Intelligence (AI)-Based Assessments in October 2021, the Society for Industrial and Organizational Psychology (SIOP) has published guidelines on the validation and use of AI-based assessments in employee selection.
Summary of SIOP Guidelines
Building on its statement on the use of AI in hiring published in January 2022, SIOP’s guidelines, released in January 2023, are based on five key principles:
- The scores from AI-based assessments should accurately predict future job performance (or other relevant outcomes);
- Scores from AI-based assessments should be consistent upon re-test and measure job-related characteristics;
- AI-based assessments should produce fair and unbiased scores;
- AI-based assessments should be used appropriately with operational considerations in mind; and
- Decision-making related to AI-driven assessments should be adequately documented to facilitate verification and external auditing.
To comply with these principles, key requirements include collating the appropriate evidence to validate the tool (including convergent and discriminant validity and generalisability), ensuring equitable treatment of different groups, and identifying and mitigating predictive bias and measurement bias. Employers should also ensure that they are using approaches informed by research and industry best practices, communicate the system specifications to users in a comprehensible way, and document information such as data sources, validation efforts, details about the algorithm, and any technical requirements in sufficient detail for external audits to be meaningful.
Towards Transparency and Fairness
Key areas of interest are the recommendations to increase the transparency of AI-driven assessments by ensuring that their purpose and functionality are communicated to candidates, and the distinction between fair practices (e.g., ensuring constructs and assessments are as accessible as possible) and unbiased practices (e.g., testing the model functioning for different groups to identify predictive bias).
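One common way to test model functioning for different groups is a regression-based predictive-bias check in the spirit of the Cleary model: fit a regression of the job-performance criterion on assessment scores separately for each group and compare slopes and intercepts. The sketch below illustrates the idea with entirely fabricated data; the group labels, numbers, and thresholds are assumptions for illustration, not part of the SIOP guidelines.

```python
# Hypothetical predictive-bias check (Cleary-style regression comparison).
# All candidate data below are invented for illustration only.
import numpy as np

def fit_line(scores, criterion):
    """Least-squares slope and intercept of the criterion on scores."""
    slope, intercept = np.polyfit(scores, criterion, deg=1)
    return slope, intercept

# Fabricated assessment scores and later performance ratings per group.
group_a_scores = np.array([55.0, 60.0, 65.0, 70.0, 75.0, 80.0])
group_a_perf   = np.array([3.0, 3.2, 3.5, 3.8, 4.0, 4.3])
group_b_scores = np.array([55.0, 60.0, 65.0, 70.0, 75.0, 80.0])
group_b_perf   = np.array([2.5, 2.7, 3.0, 3.3, 3.5, 3.8])

slope_a, int_a = fit_line(group_a_scores, group_a_perf)
slope_b, int_b = fit_line(group_b_scores, group_b_perf)

# Equal slopes but a large intercept gap would suggest that the same
# score predicts different performance levels for each group, i.e.
# evidence of predictive bias that warrants investigation.
print(f"slope gap:     {abs(slope_a - slope_b):.3f}")
print(f"intercept gap: {abs(int_a - int_b):.3f}")
```

In practice such comparisons are done with moderated multiple regression and formal significance tests on much larger samples; this sketch only shows the quantities being compared.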
Auditing of AI-Driven Selection Assessments
In addition, SIOP recommends that decision-making processes pertinent to the creation and validation of the tool and associated algorithms be documented to support external audits. For example, this could be achieved by ensuring that all AI-driven assessments are accompanied by a technical manual detailing the design, development, and deployment of the tool.
Such efforts will also support compliance with New York City Local Law 144, which requires independent impartial bias audits of automated employment decision tools being used to evaluate candidates for employment or employees for promotion in New York City.
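Bias audits under Local Law 144 centre on impact ratios: each category's selection rate divided by the selection rate of the most-selected category. The minimal sketch below illustrates that calculation; the group names and candidate counts are invented, and a real audit must follow the categories and procedures in the New York City Department of Consumer and Worker Protection's rules.

```python
# Illustrative impact-ratio calculation of the kind reported in a
# Local Law 144 bias audit. All counts below are fabricated.
selections = {"group_x": 40, "group_y": 24, "group_z": 18}
applicants = {"group_x": 100, "group_y": 80, "group_z": 90}

# Selection rate per category: selected candidates / total candidates.
rates = {g: selections[g] / applicants[g] for g in selections}

# Impact ratio: each category's rate relative to the highest rate.
best = max(rates.values())
impact_ratios = {g: rate / best for g, rate in rates.items()}

for group, ratio in sorted(impact_ratios.items()):
    print(f"{group}: impact ratio {ratio:.2f}")
```

An impact ratio well below 1.0 for a category flags a disparity the audit must report, although the law itself sets no pass/fail threshold.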
To find out more about the guidance for automated recruitment tools and other relevant guidance, download our HR Tech Policy Pack.