The European Commission's proposed harmonised rules on artificial intelligence, the EU AI Act, aim to regulate AI systems on the EU market to create greater trust. Spain has not only contributed to the development of the Act but has also launched the first regulatory sandbox to experiment with its obligations. Beyond this, Spain has published a National AI Strategy, announced Europe's first AI Supervisory Agency, and targeted platform workers' rights with its Rider Law.
In an attempt to regulate the use of artificial intelligence (AI) by state agencies, Connecticut lawmakers have taken decisive action and proposed SB-1103, An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy. The proposed Bill aims to establish an Office of Artificial Intelligence and a task force to study artificial intelligence and develop an AI Bill of Rights.
While many regulatory efforts surrounding the use of AI are focused on business applications, the increasing use of AI within the public sector has led to an increased focus on regulating governmental use of AI. In this blog post, we give a high-level overview of some of these initiatives, focusing on the US, EU, and UK.
The European Union’s (EU’s) proposed AI Act aims to harmonise requirements for AI systems across the EU with its risk-based approach. Ahead of this, some countries, like the Netherlands, are pressing ahead with specific national requirements. The Dutch approach is shaped by the scandal around a biased algorithm that the Dutch tax office used to assess benefits claims. The tax office implemented the system in 2013 and, after civil society raised concerns, two formal investigations in 2020 and 2021 uncovered systematic bias affecting 1.4 million people.
In this blog post, we compare New York City Local Law 144, which will require independent bias audits of automated employment decision tools from 15 April 2023, and New Jersey Assembly Bill 4909, which was recently introduced and would impose similar requirements if passed.
In 2022, China passed and enforced three distinct regulatory measures at the national, regional and local levels. This momentum has carried into 2023: in January alone, China has already cracked down on deepfake technology through national-level legislation. In this blog, we survey China’s AI regulatory endeavours, highlight notable enforcement mechanisms, and comment on China’s approach to AI governance.
On 31 January 2023, the Equal Employment Opportunity Commission (EEOC) held a public hearing on how the use of automated systems, including artificial intelligence, in employment decisions can comply with the federal civil rights laws the EEOC enforces. Check out our latest blog article on the key takeaways from the hearing, including how to promote diversity, equity and inclusion.
In recent years, the fairness of automated employment decision tools (AEDTs) has received increasing attention. In November 2021, the New York City Council passed Local Law 144, which mandates bias audits of these systems.
Under Local Law 144, employers and employment agencies are required to commission independent, impartial bias audits of their tools, where, under the latest version of the Department of Consumer and Worker Protection’s (DCWP) proposed rules, bias should be determined using impact ratios based on outcomes for different subgroups. In this blog post, we outline the metrics required to conduct the bias audit, how small sample sizes can pose issues, and how they can be dealt with when carrying out audits.
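To make the impact-ratio metric concrete, here is a minimal sketch of the calculation described above: each subgroup's selection rate divided by the selection rate of the most-selected subgroup. The function names and all figures are our own invented examples, not part of the DCWP's rules; the deliberately small third subgroup hints at the sample-size instability discussed in the post.

```python
def selection_rate(selected, total):
    """Fraction of applicants in a subgroup who were selected."""
    return selected / total

def impact_ratios(counts):
    """counts: {subgroup: (selected, total)} -> {subgroup: impact ratio}.

    The impact ratio divides each subgroup's selection rate by the
    highest subgroup selection rate, so the most-selected group is 1.0.
    """
    rates = {g: selection_rate(s, t) for g, (s, t) in counts.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented numbers: group_c has only 12 applicants, so a single
# selection more or less would swing its ratio substantially.
data = {"group_a": (40, 100), "group_b": (18, 60), "group_c": (4, 12)}
print(impact_ratios(data))  # group_a: 1.0, group_b: 0.75, group_c: ~0.83
```

A low impact ratio for a tiny subgroup like `group_c` may reflect noise rather than bias, which is why the post discusses how small sample sizes are handled in practice.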
The New York City Council took decisive action to mandate bias audits of automated employment decision tools (AEDTs) used to evaluate employees for promotion or candidates for employment in New York City, signaling that the risks of Artificial Intelligence (AI) are becoming an increasing regulatory concern. Local Law 144, also known as the NYC Bias Audit Law, is the first of its kind to codify independent, impartial bias audits in law.
The Digital Markets Act (DMA) came into effect on 1 November 2022. It regulates how online platforms operate with respect to fair competition and consumer choice, reducing the bottlenecks that so-called gatekeepers create by monopolising the digital economy.
First proposed on 21 April 2021, the European Commission’s proposed Harmonised Rules on Artificial Intelligence, colloquially known as the EU AI Act, seeks to lead the world in AI regulation. Likely to become the global gold standard for AI regulation, much like the General Data Protection Regulation (GDPR) did for privacy regulation, the rules aim to create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI.
In January 2019, the New York Department of Financial Services published a circular letter to all insurers authorized to write life insurance in New York State. The letter makes it clear that insurers should not use an external data source, algorithm or predictive model in underwriting or rating unless it has been determined by the insurer (not just the vendor) that the system does not collect or use prohibited criteria.
The Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act) was signed into law by President Biden in October 2022. The Act is premised on education and training to inform procurement and facilitate the adoption of AI for services at the Federal Level.
The latest and final compromise text of the EU AI Act (released on 6 December 2022) marks the EU ministers' official greenlight to adopt a general approach to the AI Act.
The Digital Services Act (DSA) is a lengthy (300 pages) and horizontal (cross-sector) piece of legislation with composite rules and legal obligations for technology companies. Notably, there is a focus on social media, user-oriented communities, and online services with an advertising-driven business model.
The UK government has not yet proposed any AI-specific regulation but has published several policy papers, frameworks, standards, and strategies. This blog post outlines the major AI regulations in the UK.
The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with the proposed EU AI Act. This article explores the proposed penalties of the EU AI Act for organisations that are non-compliant with the Act.
Washington DC has proposed the Stop Discrimination by Algorithms Act, which aims to prevent algorithmic discrimination against protected classes.
Canada has proposed the Digital Charter Implementation Act (Bill C-27), a trio of laws designed to strengthen trust and privacy. Among them, the Artificial Intelligence and Data Act will regulate AI systems.
Spain’s Royal Decree-Law 9/2021, known as the Rider Law, gives platform delivery workers employment rights and imposes algorithmic transparency obligations.
California has proposed a Workplace Technology Accountability Act and modifications to its employment regulations to address automated decision systems. In this blog, we compare these proposals to the proposed EU AI Act.
Following their proposed rules for the NYC Bias Audit legislation, the Department of Consumer and Worker Protection held a public hearing. In this blog, we summarise the key takeaways from this session.
The blog compares the NYC bias audit law with California’s proposed Workplace Technology Accountability Act and Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems.
Originally due to come into effect on 1 January 2023, the NYC Bias Audit Law (Local Law 144) has had its enforcement date pushed back to 15 April 2023. Here’s what you need to know.
California has proposed amendments to its employment regulations to extend non-discrimination practices to automated decision tools. Here are a few things you need to know about the proposed modifications.
The California Workplace Technology Accountability Act aims to increase accountability surrounding the use of technology in the workplace and reduce potential harm. Here are 10 things that you need to know about the proposed Act.
Local Law 144 takes effect on 1 January 2023. It mandates independent bias audits of automated tools used to make or support decisions about hiring candidates or promoting employees within New York City. Learn how RHR International, a global leadership consulting firm, is working with Holistic AI to prepare for the upcoming deadline.
The White House recently published a Blueprint for an AI Bill of Rights, signalling the intent of the U.S. government to regulate AI.
The AI Liability Directive is the EU’s proposed new law to make it easier to prove liability in cases where AI systems cause harm.
The EU AI Act was first proposed by the European Commission in April 2021. It will be the first law worldwide which regulates the development and use of AI in a comprehensive way.
The New York City (NYC) Department of Consumer and Worker Protection (DCWP) published proposed amendments to Local Law 144, which mandates independent bias audits of ‘automated employment decision tools’ used by employers or employment agencies.
Colorado’s General Assembly enacted legislation last year that restricts insurers’ use of ‘external consumer data’, prohibits data, algorithms, or predictive models from unfairly discriminating, and requires insurers to test their systems and demonstrate that they are not biased.
In response to concerns about harm that can result from the use of AI and the calls for greater governance of AI systems by the AI ethics movement, legislation addressing the use of these technologies has begun to emerge.
Under a new UK framework, AI regulation will be context-specific and based on the use and impact of the technology, with responsibility for developing appropriate enforcement strategies delegated to the relevant regulator(s).
The EU Artificial Intelligence (AI) Act aims to lead the world in the governance of AI, requiring impact assessments to identify the risk associated with the use of AI systems and continuous management and mitigation of this risk.
New York City’s legislation requiring bias audits of automated employment decision tools is coming into effect in just a few months (1st January 2023) and has left many people wondering – do I need an audit, or am I exempt?
Recruitment tools driven by artificial intelligence (AI) algorithms – including game- or image-based assessments and algorithmically analysed video interviews – are becoming more mainstream, with uptake accelerated by the pandemic. The growing adoption of these tools has led to concerns about how they can be applied ethically and without discrimination.
The Artificial Intelligence Video Interview Act came into effect in Illinois on 1st January 2020, affecting employers who use artificial intelligence to analyse video interviews of job applicants. Here are the 5 things you need to know about this legislation.