Policy Hour
panel & networking
Online Webinar

US AI Policy: A Comprehensive Overview

Thursday, 16 May 2024 | 9am PDT / 12pm EDT / 5pm BST
Register for this event

In the May 2024 edition of our monthly Policy Hour webinar, Holistic AI’s Nikitha Anand and Ella Shoup discussed the US AI policy landscape. They covered federal and state legislation, regulation, guidance, and legal action, diving into both horizontal and vertical initiatives.

Key applications of AI targeted by vertical legislation include generative AI, HR Tech, insurtech, autonomous vehicles, and online platforms. Within these laws, key themes include protection from unsafe or ineffective systems, transparency, accountability, data privacy, and discrimination.

We also covered activity by federal agencies, including joint statements, international guidance, and action by the EEOC against automated employment decision tools, including a settlement with iTutorGroup for $365,000 in 2023.

Couldn’t make it to the webinar or want a refresher? We’ve included the slides below and the recording from the webinar at the top of the page.

Interested in the activity across the pond? Join us in Brussels on 3 June 2024 from 6pm for our Policy Connect event, where we will be joined by Ashley Casovan (Managing Director, IAPP AI Governance Centre), Kai Zenner (Head of Office for MEP Axel Voss, EU Parliament), Elinor Wahal (Legal and Policy Officer, DG-CNECT, EU Commission) and Gabriele Mazzini (Team Leader, AI Act, EU Commission) for our panel EU AI Act: Hear from the Experts.

Q&A


There are a number of entities targeted by US AI laws introduced at both the state and local levels. For example, Colorado’s recently passed SB24-205, which provides consumer protections around AI, targets both developers and deployers of AI systems used to make critical decisions. New Jersey’s S1588 imposes bias audit requirements on vendors of automated employment decision tools, while Pennsylvania’s SB1729 imposes similar requirements on employers and employment agencies using these tools, similar to NYC Local Law 144.
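To illustrate the kind of analysis a bias audit involves, the sketch below computes impact ratios in the style prescribed under NYC Local Law 144: each category’s selection rate is divided by the selection rate of the most-selected category. The category names and counts are hypothetical, and a real audit involves additional requirements (e.g. intersectional categories and public reporting).

```python
# Minimal sketch of an impact-ratio calculation in the style of a
# NYC Local Law 144 bias audit. All data and category names are hypothetical.

def impact_ratios(selected: dict, total: dict) -> dict:
    """Selection rate per category divided by the highest selection rate."""
    rates = {cat: selected[cat] / total[cat] for cat in total}
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

# Hypothetical screening outcomes by demographic category
selected = {"Group A": 80, "Group B": 60, "Group C": 30}
total = {"Group A": 200, "Group B": 200, "Group C": 150}

for cat, ratio in sorted(impact_ratios(selected, total).items()):
    print(f"{cat}: impact ratio {ratio:.2f}")
```

In this hypothetical data, Group A’s selection rate (0.40) is the highest, so its impact ratio is 1.00, while Group C’s (0.20 / 0.40 = 0.50) would fall below the 80% threshold the EEOC’s four-fifths rule uses as a rough indicator of adverse impact.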

Federal agencies are also targeted by emerging AI legislation, such as the federal Artificial Intelligence Risk Management Act, which requires federal agencies to follow NIST’s AI Risk Management Framework. Indeed, many AI laws target public sector bodies. Several bills proposed at the federal level also target private companies, such as the Stop Spying Bosses Act (S262).

While many laws seek to prevent harm or discrimination, they typically do so for broad groups based on protected attributes such as race/ethnicity. However, the EEOC has released guidance on the implications of AI-driven recruitment tools for disabled applicants and how to avoid violating the Americans with Disabilities Act through the use of these tools. Moreover, several online safety proposals specifically focus on protecting children on online platforms, such as the Kids Online Safety Act, the EARN IT Act, and the Children’s Online Privacy Protection Rule.
NIST’s AI RMF is a non-binding, flexible framework that can be adapted to the context in which an AI system is deployed and to jurisdictional requirements. As such, the RMF and similar frameworks can be used around the world to support AI governance.

Moreover, laws such as NYC Local Law 144 have an extraterritorial scope given that they govern the use of automated employment decision tools to evaluate candidates in New York City, meaning that organizations headquartered around the world could be in scope. Additionally, the proposed federal Protect Victims of Digital Exploitation and Manipulation Act (HR7567) also establishes extraterritorial federal jurisdiction in certain cases.

The definition of AI varies by entity and legal framework, although some progress is being made towards greater convergence on how it is defined. Consequently, there is also some divergence in how related terms such as machine learning and automated employment decision systems are defined. However, given that NYC Local Law 144 influenced several other bias audit laws, many of these laws use the same or similar definitions.

Organizations should consult with their legal teams to determine whether their tools meet the definition of AI and AEDTs under different laws and frameworks.

Regulating electronic monitoring in the workplace has increasingly become a priority for policymakers. California previously introduced the Workplace Technology Accountability Act, which prompted Massachusetts to introduce an almost identical bill and Vermont a similar one.

New York’s A09315 (same as S07623) restricts employers’ use of electronic monitoring and prohibits employers from selling, transferring, or disclosing employee data collected via electronic monitoring tools unless mandated by law or necessary for compliance with a bias audit of an automated employment decision tool. Meanwhile, A08328 restricts the use of electronic monitoring or of an automated employment decision tool to screen a candidate or employee for an employment decision unless the tool has been audited for bias.

At the federal level, the Exploitative Workplace Surveillance and Technologies Task Force Act sought to create a task force to study the prevalence and types of workplace surveillance used across industries, how data collected through monitoring is used, and the impact of surveillance and automated decisions on compensation, schedules, promotions, duties, health and safety, and termination. Moreover, the Stop Spying Bosses Act sought to require employers to disclose the use of AI in worker surveillance and data collection, enforce data privacy, and ensure the fairness of automated decision systems.

