Generative AI sits at the peak of the AI hype cycle and is being used for a wide range of applications. Trained on vast amounts of data, generative AI models can produce content such as text, music, images, and even video in response to other inputs.
Because these models require vast amounts of training data, that data is often scraped from the internet, where anything publicly available can be treated as free to use. Recently, a slew of copyright lawsuits has sought to challenge the legality of training these models, particularly on copyrighted content such as books.
In fact, our AI event tracker notes over 30 lawsuits against makers of generative AI foundation models. Add to this emerging regulations governing specific uses, such as generative AI in hiring and HR contexts, and the near future of generative AI use is set to be much more regulated.
So, which AI lawsuits should your organization be tracking? In short, any that apply to your location, industry, or use type (vendor, employer, or user). As a brief catch-up, we’ve highlighted the highest-profile cases below:
Getty Images has filed a complaint against Stability AI for using almost 12 million of its images without consent. The images were used to train the image-generation tool Stable Diffusion, which delivers generated images in response to prompts.
Plaintiffs allege that Google takes data shared on the internet, including copyrighted materials, to train its chatbot, Bard, and other AI products. Google’s updated policy states that Google is allowed to collect information that is publicly available online.
The plaintiffs have filed a copyright infringement lawsuit against Shein, accusing the company of using an algorithm to produce, distribute, and sell copies of copyrighted designs for profit. The algorithm identifies fashion trends and designs with commercial potential, and Shein sells copies on its website.
The Authors Guild, along with 17 authors, filed a class action lawsuit against OpenAI, claiming copyright infringement. The authors allege that OpenAI used their copyrighted material without consent or compensation to train its models. Authors joining the lawsuit include David Baldacci, Mary Bly, Michael Connelly, Sylvia Day, Jonathan Franzen, John Grisham, Elin Hilderbrand, Christina Baker Kline, Maya Shanbhag Lang, Victor LaValle, George R.R. Martin, Jodi Picoult, Douglas Preston, Roxana Robinson, George Saunders, Scott Turow, and Rachel Vail.
Music publishers allege that Anthropic, the defendant, infringed their copyrights by using lyrics from their musical compositions without permission to train its AI model, Claude, which responds to end-user prompts.
Mike Huckabee (former Arkansas Governor), along with a group of authors, has brought a class action suit against Meta, Microsoft, Bloomberg, and the EleutherAI Institute on the grounds that their AI tools were trained on Books3, a dataset compiled by independent contractors that included copyrighted materials.
A group of visual artists have filed a class action suit against Stability AI and Midjourney. The artists allege that Stability AI has utilized their work to train its Generative AI system without any compensation or consent.
Microsoft and OpenAI’s GenAI tools rely on large language models (LLMs) that are alleged to have been built using millions of The Times’ copyrighted articles. The defendants are invoking the “fair use” exception, arguing that training their models serves a transformative purpose.
First and foremost, existing laws apply to AI use. This means protected input data doesn’t lose its protections when fed into generative AI as training data. This stance has been reiterated by the vast majority of jurisdictions that have produced AI legislation and guidance.
Secondly, some industries and use cases are more likely to use generative AI at scale. Hiring, recruitment, and human resources, for example, are targeted by several vertical-specific laws across the US.
Finally, it’s worth noting that some jurisdictions are drafting legislation aimed at the makers of AI products, while others legislate AI users as well. For example, bias audit laws in several US jurisdictions require annual audits of AI systems used in hiring, whether built in-house or purchased from a vendor.
Regardless of whether your organization uses generative AI in high-risk ways, the growth in lawsuits targeting high-risk AI tools points to a clear trend: organizations must be able to track the evolving AI regulatory landscape.
Want to explore how you and your team can gain global visibility of the AI landscape to guide your organization? Schedule a call with our team to learn more about our AI Tracker today.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.