The Digital Markets Act (DMA) entered into force on 1 November 2022. It regulates how online platforms operate with respect to fair competition and consumer choice, reducing the bottlenecks that so-called gatekeepers create by monopolising the digital economy.
First proposed on 21 April 2021, the European Commission’s Harmonised Rules on Artificial Intelligence, colloquially known as the EU AI Act, seek to lead the world in AI regulation. Likely to become the global gold standard for AI regulation, much as the General Data Protection Regulation (GDPR) did for privacy regulation, the rules aim to create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI.
The latest compromise text of the EU AI Act, released on 6 December 2022, marks the EU ministers' official green light to adopt a general approach to the AI Act.
The Digital Services Act (DSA) is a lengthy (roughly 300 pages) and horizontal (cross-sector) piece of legislation that sets out a range of rules and legal obligations for technology companies. Notably, it focuses on social media, user-oriented communities, and online services with advertising-driven business models.
The UK government has not yet proposed any AI-specific regulation but has published several policy papers, frameworks, standards, and strategies. This blog post outlines the major AI-related regulatory developments in the UK.
The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with the proposed EU AI Act. This article explores the penalties the Act would impose on non-compliant organisations.
Spain’s Royal Decree-Law 9/2021, known as the ‘Rider Law’, gives platform delivery workers employment rights and imposes algorithmic transparency obligations on platforms.
California has proposed a Workplace Technology Accountability Act and modifications to its employment regulations to address automated decision systems. In this blog, we compare these proposals to the proposed EU AI Act.
The AI Liability Directive is the EU’s proposed new law to make it easier to prove liability in cases where AI systems cause harm.
The EU AI Act was first proposed by the European Commission in April 2021. It will be the first law worldwide to regulate the development and use of AI in a comprehensive way.
In response to concerns about the harms that can result from the use of AI, and to the AI ethics movement’s calls for greater governance of AI systems, legislation addressing the use of these technologies has begun to emerge.
Under a new framework, AI regulation in the UK will be context-specific, based on the use and impact of the technology, with responsibility for developing enforcement strategies delegated to the relevant regulator(s).