
Policy Hour: Deep Dive Into the NYC Bias Audit Law


Last Wednesday, 15 March, we took a deep dive into the NYC Bias Audit Law in our inaugural Policy Hour webinar. Our host, Holistic AI’s Co-Founder & CEO Dr Adriano Koshiyama, was delighted to be joined by an esteemed panel of industry experts. Jacob Canter, Lawyer at Crowell & Moring LLP, and Airlie Hillard, Senior Researcher at Holistic AI, provided insights into the impending law, set to come into force on 15 April 2023, and what it will mean for NYC businesses using automated employment decision tools (AEDTs) in their hiring processes.

During the discussion, we received many questions from the audience that we were unfortunately unable to get through during the event, so we have put together a Q&A.

Below we have included the full recording of the event, as well as the slides, Q&A, and poll results.

Q&A


1. Does Local Law 144 include any exemptions, for example based on company size or revenue?

No – Local Law 144 does not outline any exemptions based on factors such as the number of employees or revenue. For more, check our NYC Bias Audit solution page.

2. Beyond legal compliance, what are the benefits of AI audits?

As well as supporting legal compliance, AI audits can help to increase trust in AI tools, benefitting both enterprises, which can have greater confidence in their tools, and candidates, who can make more informed decisions about their interactions with AI tools and what the outcomes could mean for them. AI audits and AI risk management can also make AI systems safer and reduce the risk of potential harms.

3. Who is liable for compliance – the employer or the vendor?

Under Local Law 144, employers are ultimately liable for compliance. However, many employers procure their AEDTs from third-party vendors and are likely to look to them for support or expect them to commission the audit on their behalf. Therefore, legal liability lies with the employer, but the auditing process is likely to be collaborative.

4. Who can conduct a bias audit to comply with Local Law 144?

Under the rules proposed by the Department of Consumer and Worker Protection (DCWP), the audit must be carried out by an independent auditor. An auditor is not considered independent if they:

  1. Is or was involved in the use, development or distribution of the AEDT
  2. Has an employment relationship with the employer (or employment agency) or vendor at any point during the audit
  3. Has a direct or indirect financial interest in the employer (or employment agency) or vendor
Therefore, neither the employer nor the vendor can conduct the audit to comply with Local Law 144. Audits must be conducted by third parties.

5. Which groups must impact ratios be calculated for, and how should small sample sizes be handled?

The Equal Employment Opportunity Commission’s (EEOC) clarifications on the Uniform Guidelines require that employers report selection rates for groups constituting more than 2% of the workforce. However, the rules proposed by the DCWP show example calculations using groups representing as little as 1.5% of the workforce, so impact ratios should be calculated for all groups represented in the data.

While calculations based on small sample sizes might not be meaningful and could lead to inconclusive results, there are multiple ways to address small samples, including ensuring that these groups do not form the denominator in impact ratio calculations and using an asterisk to flag calculations based on small samples that might be less meaningful. A minimal sketch of this approach is shown below.
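
The following Python sketch illustrates an impact ratio calculation with small-sample handling. The group names, counts, and the 30-applicant cut-off are hypothetical values chosen for illustration; neither the law nor the proposed rules prescribe a specific cut-off.

```python
from typing import Dict, Tuple

# Hypothetical selection data: group -> (number selected, number of applicants).
counts: Dict[str, Tuple[int, int]] = {
    "Group A": (120, 400),  # 30% selection rate
    "Group B": (45, 180),   # 25% selection rate
    "Group C": (3, 11),     # small sample, flagged with an asterisk below
}

SMALL_SAMPLE = 30  # assumed cut-off for flagging small groups

def impact_ratios(counts: Dict[str, Tuple[int, int]]) -> Dict[str, str]:
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    # Benchmark against the highest selection rate among adequately sized
    # groups, so that a small group never forms the denominator.
    benchmark = max(r for g, r in rates.items() if counts[g][1] >= SMALL_SAMPLE)
    return {
        g: f"{r / benchmark:.3f}" + ("*" if counts[g][1] < SMALL_SAMPLE else "")
        for g, r in rates.items()
    }

for group, ratio in impact_ratios(counts).items():
    print(f"{group}: impact ratio = {ratio}")
# Group A: impact ratio = 1.000
# Group B: impact ratio = 0.833
# Group C: impact ratio = 0.909*
```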

6. How can bias be addressed throughout the AI lifecycle?

An ethical and responsible approach to AI can be taken throughout the lifecycle:

  • Design – consider accessibility, for example by using dyslexia-friendly fonts (Arial or other sans serif typefaces) or colours that are compatible with colour blindness (usually blue and orange).
  • Model training – ensure the training data is as representative as possible; try to ensure that any human judgements used to train the model are as unbiased as possible (although this can be difficult); and check for proxy variables, considering removing or lessening the importance of those that are associated with protected attributes.
  • Deployment – check outcomes for bias both during validation and after deployment, and monitor this continuously, especially after updates to the system or assessment.
  • Technical approaches to mitigation – borrow technical approaches from computer science, for example creating an intermediate version of the training data in which protected attributes are no longer encoded, using statistical transformations (see our open-source library for more mitigation techniques). A minimal sketch of this idea follows this list.
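
As an illustration of the last point, here is a minimal Python sketch of one simple statistical transformation: residualising each feature against a binary protected attribute so the attribute is no longer linearly encoded in the training data. The data is synthetic and the transform is a generic illustration, not the specific method implemented in our library.

```python
import numpy as np

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
n = 500
protected = rng.integers(0, 2, size=n).astype(float)  # binary group membership
X = rng.normal(size=(n, 3))
X[:, 0] += 0.8 * protected  # feature 0 acts as a proxy for the attribute

def decorrelate(X: np.ndarray, attr: np.ndarray) -> np.ndarray:
    """Remove the linear component of each feature predictable from attr."""
    a = attr - attr.mean()
    X_out = X.copy()
    for j in range(X.shape[1]):
        beta = (a @ X[:, j]) / (a @ a)  # least-squares slope of feature on attr
        X_out[:, j] -= beta * a
    return X_out

X_fair = decorrelate(X, protected)

# Correlation with the protected attribute before and after the transform.
for j in range(X.shape[1]):
    before = np.corrcoef(protected, X[:, j])[0, 1]
    after = np.corrcoef(protected, X_fair[:, j])[0, 1]
    print(f"feature {j}: corr before = {before:+.3f}, corr after = {after:+.3f}")
```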

7. What must the summary of results include?

At a minimum, the summary of results must include the distribution date of the tool, the date of the bias audit, impact ratios for all categories, and the source of, and an explanation of, the data used to conduct the audit. Additional transparency can be introduced by providing as much detail as possible about the system, including the characteristics it considers and how the output will be used.

8. What resources are available to help with compliance?

Holistic AI has several resources to help with compliance. Check out some of them here.

9. Does the audit only cover a system’s outputs?

While Local Law 144 targets the outputs of the system, audits can also be expanded to consider the model itself and even the training data. Biased outputs may also indicate biased algorithms, so identifying biased outcomes could provide an opportunity to work backwards to improve the model and make it fairer.

10. Does the outcome a tool measures determine whether it is in scope?

The scope of the law is not determined by the output being measured but by the technical specifications of the tool. Tools derived from machine learning, statistical modelling, data analytics, or AI that issue a simplified output are in scope. Check with your counsel on whether the tool you are using falls within the scope of the legislation.

11. How can the efficacy of bias mitigation be tested?

The efficacy of bias mitigation can be tested by comparing model performance and impact ratios before and after the mitigation is applied. If the impact ratios improve without an unacceptable loss of performance, this indicates the mitigation was effective. A sketch of such a comparison is given below.
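
For illustration, a before-and-after comparison might look like the following Python sketch. The labels, groups, and predictions are synthetic, and min_impact_ratio is a hypothetical helper rather than part of any particular library.

```python
import numpy as np

def min_impact_ratio(pred: np.ndarray, group: np.ndarray) -> float:
    """Lowest selection rate across groups divided by the highest."""
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Synthetic labels, group membership, and model predictions for illustration.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
pred_before = np.where(group == 1, rng.random(1000) < 0.20, rng.random(1000) < 0.35)
pred_after = rng.random(1000) < 0.28  # mitigated model with roughly equal rates

for name, pred in [("before", pred_before), ("after", pred_after)]:
    accuracy = (pred == y).mean()
    print(f"{name}: accuracy = {accuracy:.3f}, "
          f"min impact ratio = {min_impact_ratio(pred, group):.3f}")
```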

12. Which characteristics must be tested in the audit?

At a minimum, impact ratios must be calculated based on sex/gender and race/ethnicity. However, the scope of the audit can be increased by testing other risk verticals, examining the training data, or looking at the features used by the model, for example.

13. What qualifications must auditors have?

Currently, there are no qualifying criteria for auditors other than that they are independent and impartial (see question 4). However, to ensure that audits are effective and correct, auditors will need an interdisciplinary and holistic approach to understand both the technical specifications of the system and the context it is used in.

At Holistic AI, we combine expertise in computer science and business psychology with insights from law, policy, and ethics to ensure we understand the system and how it is used.

Holistic AI uses our proprietary Bias Audit Platform to conduct audits, maximising independence and impartiality.

14. What impact ratio threshold indicates bias?

The EEOC Uniform Guidelines specify that adverse impact is occurring when the hiring rate of one group is less than four-fifths (0.80) of the hiring rate of the group with the highest rate. The rules proposed by the DCWP take inspiration from this calculation but do not specify that bias is occurring when the impact ratio falls below 0.80. Auditors can therefore establish an appropriate threshold for bias themselves and may adopt the four-fifths threshold as evidence of potential bias.
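
To illustrate with hypothetical numbers: if the highest-selected group has a selection rate of 30% and another group a rate of 21%, the impact ratio is 0.21 / 0.30 = 0.70, which falls below the four-fifths threshold and could be treated as evidence of potential bias.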

15. Which metrics must be used to evaluate AEDTs for bias?

The impact ratio metrics that must be used to evaluate AEDTs for bias can be found here.

16. How much does a bias audit cost?

The cost of an audit depends on the number of systems audited and on individual requirements. Schedule a demo to find out more.

17. How do the proposed rules define machine learning, statistical modelling, data analytics, and artificial intelligence?

Machine learning, statistical modelling, data analytics, and artificial intelligence are defined by the proposed rules as a group of computer-based mathematical techniques that generate a prediction of a candidate’s fit or likelihood of success, or a classification based on skills or aptitude. The inputs, predictor importance, and parameters of the model are identified by a computer to improve model accuracy or performance, and are refined through cross-validation or by using a train/test split. Check with your general counsel on whether your tools are covered by this definition.

18. Do the revised rules narrow the scope of the law?

The revised proposed rules state that the AEDTs covered by the legislation are those that substantially assist or replace decision-making, in that they are relied on solely, are weighted more heavily than any other criterion, or are used to overrule other decisions. This does indeed narrow the scope of the legislation. However, a key point of contention at the second DCWP hearing was that this definition is too narrow and does not capture the spirit of the law, and there were calls for it to be updated. Currently, we are waiting to see whether the definition will revert to its broader previous form.

19. Which protected characteristics are covered, and is there a minimum sample size?

A list of the protected characteristics can be found here. Minimum sample sizes are not specified. See question 5 for what to do when sample sizes are small.

20. What data can be used to conduct the bias audit?

The DCWP’s revised rules specify that either historical or test data can be used. Historical data is data collected during the real-world use of a tool, and it is not specified that this data must have been collected in the US. If the tool is used by multiple employers, historical data from any of the employers or employment agencies that have used the AEDT can be drawn on, provided that the employer or employment agency relying on the audit also contributes its own data or has never used the tool. If historical data is not available and test or synthetic data is used instead, this must be specified in the summary of results.

21. How are candidates who do not identify as male or female categorised?

Currently, the sex/gender categories that are required to be tested are male and female, with an optional ‘other’ category. How candidates who do not identify with any of these categories are coded depends on individual employers.

Poll Results:

During our deep dive webinar, we asked attendees whether they are prepared for the introduction of Local Law 144. See below for the results:

Are you ready for Local Law 144?

How we can help

The NYC Bias Audit Law (Local Law 144) is set to come into force on 15 April 2023. If your organisation will fall under the legislation and still requires an independent bias audit of its automated employment decision tools, get in touch to organise a preview of our bias audit platform and learn how Holistic AI can help you comply with the law.

Discover how we can help your company

Schedule a call with one of our experts
