Do Existing Laws Apply to AI?

November 9, 2023

The integration of artificial intelligence (AI) into business is expanding across the globe, with over a third of companies using AI and an additional 42% exploring how it can be used to support business practices.

While this can bring considerable benefits to both businesses and consumers by removing the burden of tedious and repetitive tasks, streamlining processes, and allowing greater personalisation, the widespread use of AI comes with risks, particularly if appropriate safeguards are not implemented.

As such, AI is being increasingly targeted by lawmakers around the world, including in the US at the state, federal, and local levels. The same is true in the EU, where the EU AI Act is set to become the global gold standard for AI regulation through its risk-based approach. Laws have also been proposed to regulate AI and automation on a sectoral basis, with industries like HR Tech and insurance targeted.

Notwithstanding this, it is important to recognise that AI systems are still within the scope of existing laws – automation does not create a loophole for compliance.

This has been repeatedly reiterated by regulators, including the Equal Employment Opportunity Commission (EEOC), Financial Conduct Authority (FCA), Federal Trade Commission (FTC), and Consumer Financial Protection Bureau (CFPB). Indeed, a number of lawsuits have already been brought against companies that deployed AI without appropriate safeguards or considerations and, in doing so, broke existing laws.

In this blog post, we explore how some of these existing laws have been enforced against the misuse of AI.

Existing non-discrimination laws enforced against AI

One of the most widely acknowledged risks of AI is bias, which can be introduced by multiple sources, including system design and training data. Many of these biases stem from existing societal prejudices, which AI systems can reflect and perpetuate.

Unlike human biases, which are notoriously difficult to alleviate, bias in AI models can potentially be mitigated through both social and technical approaches. Nevertheless, there have been multiple instances of algorithmic discrimination across sectors, resulting in several high-profile harms and lawsuits.
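To make one such technical approach concrete, a common bias check is the EEOC's four-fifths (80%) rule of thumb, which compares selection rates across groups. The Python sketch below is purely illustrative, using hypothetical numbers rather than figures from any of the lawsuits discussed here:

```python
# A minimal sketch of the four-fifths (80%) rule of thumb used by the EEOC
# to flag adverse impact in selection procedures. All numbers below are
# hypothetical and for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate relative to the reference group's."""
    return protected_rate / reference_rate

# Hypothetical screening outcomes for two applicant groups.
protected_rate = selection_rate(selected=12, applicants=100)  # 0.12
reference_rate = selection_rate(selected=25, applicants=100)  # 0.25

ratio = adverse_impact_ratio(protected_rate, reference_rate)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.48

# Under the four-fifths rule, a ratio below 0.8 is generally treated as
# evidence of adverse impact and a signal to investigate the system.
if ratio < 0.8:
    print("Potential adverse impact - review the screening process.")
```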

In the HR Tech sector, a lawsuit was brought against ATS provider Workday for alleged age, disability, and racial discrimination, in violation of Title VII of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008.

Plaintiff Derek Mobley, a black disabled man over 40, claims that he has applied for up to 100 jobs at companies he believes use Workday without obtaining a single position, despite holding a bachelor's degree in finance and an associate degree in network systems administration.

As such, Mobley filed lawsuit 4:23-cv-00770 on behalf of himself and others in similar situations, namely African American applicants, candidates over 40, and disabled applicants. The lawsuit addresses the allegedly discriminatory screening process that, from 3 June 2019 to the present, has prevented these individuals from being referred or permanently hired for employment.

The case is still open and, once resolved, could set a precedent for AI-based discrimination. Meanwhile, the EEOC recently settled an age discrimination lawsuit with iTutorGroup for $365,000 over the automatic rejection of applicants based on age.

Outside of employment decisions, a discrimination lawsuit has also been brought against insurance provider State Farm on the basis that the company discriminates against black policyholders, thereby violating the Fair Housing Act (42 U.S. Code § 3604(a)-(b) and 42 U.S. Code § 3605).

The class action 1:22-cv-07014 was filed by State Farm policyholder Jacqueline Huskey and is supported by a study from the Center on Race, Inequality and the Law at the NYU School of Law. The study surveyed 800 black and white homeowners and found disparities in how claims from white and black policyholders are handled.

Black policyholders experienced prolonged delays in communication with State Farm agents and required more correspondence than other policyholders. Additionally, their claims were met with greater suspicion than those of their white counterparts.

The lawsuit alleges that this disparate treatment is the result of the algorithms and tools State Farm deploys from third-party vendors to automate its claims processing. In particular, the lawsuit identifies Duck Creek Technologies, a provider of claims management and fraud-detection tools, as a potential source of the alleged discrimination. The use of natural language processing is alleged to have resulted in negative biases in voice analytics for black versus white policyholders.

Like the Workday lawsuit, the State Farm lawsuit is still open, but these cases highlight the fact that existing non-discrimination laws can and will be applied to AI and automated decision systems.

Lawsuits brought against AI under existing biometric and data protection laws

It is not only the outcomes of AI systems that fall under existing laws; the data used to train these models is also subject to them. Consequently, multiple lawsuits have been initiated against companies that unlawfully use biometric data in connection with their AI systems.

One company that has been subject to legal action in multiple countries is Clearview AI, which scrapes images from the internet and social media to build a database of facial images that it then provides to law enforcement.

Since the company did not inform individuals that it was collecting facial images or specify any storage period, it violated data protection laws in multiple countries. For example, Italy's data protection authority (Garante per la Protezione dei Dati Personali) fined the company €20 million under the GDPR, banned it from monitoring, storing, and processing the biometric information of individuals in Italy, and ordered it to delete all existing data belonging to Italians. Similar action was brought against the company in Illinois by the American Civil Liberties Union for violating Illinois' Biometric Information Privacy Act (BIPA).

Also in Illinois, a case was brought against Prisma Labs Inc. by Jack Flora, Nathan Matson, Courtney Owens, and D.J. for failing to disclose the collection and storage of biometric data on facial geometry.

Prisma Labs develops mobile apps for editing and stylising digital images and videos. Its Lensa app is designed for retouching facial images, and to train the algorithms used by the app, Prisma collects the facial geometry of uploaded images. The plaintiffs claim that Prisma has not informed users in writing that this biometric data is collected and stored by Lensa, and that the language used in the privacy policy is too vague to clearly disclose the collection and storage of data. As such, lawsuit 3:23-cv-00680 asserts that Prisma's lack of disclosure violates sections 15(a) to 15(d) of BIPA, and damages of up to $5 million are being sought.

In the insurance sector, Lemonade Inc. has had a lawsuit brought against it for the unlawful collection of data points from policyholders, particularly in relation to facial recognition. Lemonade uses AI chatbots for many of its insurance processes, extracting 1,600 data points through 13 questions.

Although Lemonade's Privacy Pledge claims that the company does not collect, require, or share policyholders' biometric information, a now-deleted tweet from the company claimed that its AI technology can extract non-verbal cues from videos submitted as part of claims evidence, suggesting that the company relies on facial recognition for fraud detection. As such, claimant Mark Pruden brought a case against Lemonade for violating New York's Deceptive Trade Practices Act, with lawsuit 1:21-cv-07070 being settled in 2022 for $4 million in damages.

Lawsuits brought against AI for copyright infringements

Finally, the proliferation of generative AI in the past year has resulted in a slew of lawsuits against the developers of these tools, who use vast amounts of data to train complex models.

For example, ChatGPT developer OpenAI has been involved in several lawsuits over claims of copyright infringement in the training of its models. Most recently, the Authors Guild filed a lawsuit claiming that OpenAI used its members' works of fiction to train its AI models without permission or compensation. A similar lawsuit was brought against OpenAI earlier in 2023 by Paul Tremblay and Mona Awad, who also assert that OpenAI used their books to train ChatGPT without their permission, thus violating copyright laws.

However, OpenAI is not the only provider of generative AI models to be targeted by legal action. Stability AI, developer of AI image generator Stable Diffusion, has also been subject to copyright lawsuits. For example, Getty Images has filed a complaint against Stability AI for using more than 12 million copyrighted Getty photos without permission or compensation. Similarly, California resident Sarah Andersen, author of a webcomic, has, alongside other artists, sued Stability AI over its use of copyrighted images to train its generative models.

Conversely, a DC court has ruled that outputs generated by AI systems cannot be granted copyright protection, reserving this protection solely for works produced by humans. Accordingly, the Copyright Office rejected a copyright application from computer scientist Stephen Thaler for an artwork generated autonomously by his Creativity Machine system. Likewise, applications for copyright on other AI-generated artworks have also been rejected in the US.

Prioritise compliance

It is clear that courts are increasingly cracking down on the illegal use of AI under current laws, emphasising the need for ongoing risk management and compliance when using the technology.

With the wave of upcoming AI regulation, it is more important than ever to ensure compliance with both new and existing laws to avoid legal action and heavy penalties – up to 7% of annual worldwide turnover in the case of the EU AI Act, for example.

To find out how Holistic AI can help you with AI governance, risk, and compliance, get in touch at we@holisticai.com.

Written by Airlie Hilliard, Senior Researcher at Holistic AI.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
