New legislation requires employers in New York City that use “automated employment decision tools” to commission a third-party bias audit of their tool, beginning 1 January 2023 (update: enforcement of Local Law 144 has since been postponed to 5 July 2023). This means that any employer using machine learning, statistical modeling, data analytics or artificial intelligence to make or aid decisions about recruitment and employment must have their technology audited to ensure it is fair.
One of the companies in this position was Hired, a job search marketplace that uses AI to grant prospective candidates access to the Hired platform and to rank candidates based on their suitability for a given role. Aware of the NYC legislation, Chief Technology Officer Dave Walters began vetting AI audit providers and concluded that Holistic AI was the right fit. In his interview with Protocol, Dave outlines the criteria he used in his search for a partner:
Holistic AI ticked all the boxes. Established at the start of 2020, Holistic AI was an early adopter of AI auditing and assurance. We have reviewed over 100 AI projects, with our secure AI risk management platform covering over 10,000 algorithms across more than 20 jurisdictions. We currently work with clients including Unilever, MindBridge and Starling Bank. With a team of expert AI auditors capable of examining the data and code that power automated systems, as well as the policies and processes governing their use, we can support clients with annual audits and forge long-term partnerships.
Our expert auditors identified the risks associated with each of the algorithms used by Hired in terms of explainability, bias and privacy, and analysed the outputs of these models for bias based on age, gender and race. In line with the requirements of the legislation, the results of this audit will soon be made public.
Auditing and assurance is just one of the services we offer at Holistic AI; our scalable AI risk management platform can be used to catalogue AI systems in use across a company, identify risks, and create a bespoke risk management framework, enabling continuous monitoring of AI systems. Our team of researchers also tracks AI-relevant legislation, such as the upcoming EU AI Act and the Colorado legislation that prohibits insurance providers from using algorithms and data that result in unfair discrimination. This research informs our risk management platform, empowering enterprises to adopt AI with confidence.
To find out if you need to comply with the NYC mandatory bias audit law or if you are exempt, take our quiz or schedule a demo with a member of our team to see how we can help you manage the risks of your AI systems.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts