As part of US efforts to regulate AI and manage the risks that algorithmic systems can pose, the District of Columbia has proposed the Stop Discrimination by Algorithms Act, which would prohibit organizations from using algorithms that make decisions based on protected characteristics.
The Act was introduced in 2021 with the primary purpose of ensuring that individuals, particularly those from vulnerable communities, are not restricted from accessing important life opportunities, such as employment or housing, because of biased algorithms.
The proposed legislation would make it illegal for both non-profit and for-profit organizations to use algorithms that make decisions based on protected characteristics. Specifically, the Act refers to the attributes protected under the DC Human Rights Act, which outlines 23 characteristics, including race, sex, gender, disability, religion, and age. In this blog post, we outline the main contributions and requirements of the legislation.
The legislation takes a three-pronged approach to mitigating harm caused by algorithmic bias and issuing penalties.
Once passed, the legislation would apply to Washington DC businesses and organizations that use algorithms in this manner, whether knowingly or unknowingly, and would cover entities that:
Unlike legislation such as the NYC Bias Audit Law, which gave businesses a year between enactment and the law coming into effect to comply, covered entities in DC would be expected to comply as soon as the legislation is passed. Although a public hearing on the legislation was held in September 2022, it was announced in November that the Act would not move forward that council session. Council members expressed a commitment to try again in the first quarter of 2023. Update: the Act was reintroduced in February 2023.
The Act has received both support and criticism.
Policymakers, lawmakers, and academics support the Act as a tangible way to address the bias and discrimination that algorithms can perpetuate if left unregulated. However, certain industry groups, such as credit trade groups, have criticized the legislation, arguing that its compliance burdens may result in decreased credit access and higher-cost loans.
Nationally, the Stop Discrimination by Algorithms Act has set a precedent that regulators and policymakers closely mirror. For example, the recently published Blueprint for an AI Bill of Rights borrowed heavily from the Stop Discrimination by Algorithms Act. The commitment in the United States to addressing the additional layer of social inequity that algorithms can knowingly or unknowingly propagate is strong. As such, businesses should be prepared to comply.
Taking steps early is the best way to get ahead of this and other global AI regulations. At Holistic AI, we have a team of experts who, informed by relevant policies, can help you manage the risks of your AI. Reach out to us at email@example.com to learn more about how we can help you embrace your AI confidently.
Written by Ashyana-Jasmine Kachra, Public Policy Intern at Holistic AI. Follow her on LinkedIn.