The Washington DC Algorithms Law: Stop Discrimination

November 18, 2022

As part of US efforts to regulate AI and manage the risks that algorithmic systems can pose, the District of Columbia has proposed the Stop Discrimination by Algorithms Act, which would prohibit organizations from using algorithms that make decisions based on protected characteristics.

Washington DC Algorithms Law

The Act was introduced in 2021, with one of its primary purposes being to ensure that individuals, particularly those from vulnerable communities, are not restricted from accessing important life opportunities, such as employment or housing, because of biased algorithms.

The proposed legislation would make it illegal for both non-profit and for-profit organizations to use algorithms that make decisions based on protected characteristics. Specifically, the Act refers to the attributes protected under the DC Human Rights Act, which outlines 23 characteristics, including race, sex, gender, disability, religion, and age. In this blog, we outline the main contributions and requirements of the legislation.

Key takeaways

  • Introduced in 2021, Washington DC’s Stop Discrimination by Algorithms Act would make it illegal for non-profit and for-profit organizations to use algorithms that make decisions based on protected characteristics.
  • The legislation would mandate annual audits and specific transparency requirements; failure to comply could result in fines of $10,000 per individual violation.
  • If the Act is passed, it will be enacted without a grace period, and compliance will be expected.
  • The Act has received both support and criticism, with the strongest objections coming from industry groups.

Three-pronged approach

The legislation takes a three-pronged approach to mitigating harm caused by algorithmic bias, backed by penalties for non-compliance:

  • Prohibition: Companies and organizations would be prohibited from using algorithms that produce biased or unfair results.
  • Annual audits: Companies and organizations would be mandated to perform yearly audits to ensure that their algorithms and data processing practices neither directly discriminate nor have a disparate impact on certain groups. They would also have to document and share with the Office of the Attorney General how their algorithms are built, how they make decisions, the decisions made, and their audit results.
  • Transparency: For consumer transparency, companies and organizations must make easy-to-understand disclosures about the personal information being collected and how their algorithms reach decisions. They would also have to provide in-depth explanations to consumers when an algorithm makes an unfavorable decision and allow consumers to request corrections.
  • Penalties: The penalties outlined would be $10,000 per individual violation, enforceable through either individual or civil actions.

What does this mean for your business?

Once passed, the legislation would apply to Washington DC businesses and organizations that use algorithms in this manner, whether knowingly or unknowingly, and would cover entities that:

  • Possess or control personal information on more than 25,000 Washington DC residents.
  • Have greater than $15 million in average annualized gross receipts.
  • Are a data broker that processes personal information.
  • Are a service provider.

Effective immediately

Unlike legislation such as the NYC Bias Audit Law, which allowed a year between enactment and taking effect to give businesses time to comply, covered entities in DC would be expected to comply as soon as the legislation is passed. Although a public hearing on the legislation was held in September 2022, it was announced in November that the Act would not move forward this council session. Council members have expressed a commitment to trying again in the first quarter of 2023.

Contentions remain

The Act has received a mixed reception, drawing both support and criticism from industry groups.

Policymakers, lawmakers, and academics support the Act as a tangible way to address the bias and discrimination that algorithms can perpetuate if left unregulated. However, certain industry groups, such as credit trade groups, have criticized the legislation, arguing that compliance burdens may result in decreased credit access and higher-cost loans.

A national precedent

Nationally, the Stop Discrimination by Algorithms Act has set a precedent that regulators and policymakers are increasingly mirroring. For example, the recently published Blueprint for an AI Bill of Rights borrowed heavily from the Stop Discrimination by Algorithms Act. The commitment in the United States to addressing the additional layer of social inequity propagated by algorithms, whether knowingly or unknowingly, is strong. As such, businesses should be prepared to comply.

Taking steps early is the best way to get ahead of this and other global AI regulations. At Holistic AI, we have a team of experts who, informed by relevant policies, can help you manage the risks of your AI. Reach out to us at we@holisticai.com to learn more about how we can help you embrace your AI confidently.

Written by Ashyana-Jasmine Kachra, Public Policy Intern at Holistic AI. Follow her on LinkedIn.

