AI Regulation in 2026: Navigating an Uncertain Landscape

The AI regulation landscape heading into 2026 is complex, not just because of the sheer volume of laws to navigate, but because of the patchwork of overlapping requirements. Adding to this complexity is the uncertainty introduced at the end of 2025.

If you missed it, President Trump signed an Executive Order in December 2025 to block state-level AI laws that are flagged as incompatible with a minimally burdensome national policy framework for AI. A bill has since been introduced to block the Executive Order, but the uncertainty doesn’t end there. The European Commission published its Digital Omnibus on AI proposal in November 2025, aiming to simplify the EU AI Act and delay the application date for high-risk AI systems.

The proposal must be approved by the European Parliament before any simplification or delays can be implemented. The deadline for this process is August 2026, leaving providers of high-risk AI systems in limbo.

What we do know

While the EU and US are still deciding how they will regulate AI in 2026, other countries around the world are moving forward: Korea’s Basic AI Act and Vietnam’s first dedicated AI law are both set to take effect in 2026.

Although each jurisdiction takes a different approach, we can draw several parallels in what they are regulating and how.

HR Technologies continue to be a key focus

Some of the earliest AI-specific laws targeted HR tech. What started with consent requirements for AI-driven video interviews has expanded into:

  • Mandatory annual bias audits 
  • Public or employee-facing disclosure obligations 
  • Impact assessments focused on discrimination and fairness 
  • Restrictions on fully automated employment decisions

Existing laws and regulations have also been updated to specifically address automated tools, and there are even proposals to penalise employers for AI-driven employee displacement. 2026 will undoubtedly see AI systems used for employment decisions remain a priority around the world, whether through new legislation or the enforcement of existing laws.

Dynamic pricing is becoming a new compliance risk

2026 will see algorithms used for personalization increasingly come under scrutiny. From wages and rent to ticket pricing, the use of AI to create what could be a detrimentally personalized experience is now on the radar of policymakers and society alike.

Risk-based regulation is proliferating

The EU AI Act is the OG risk-based AI law, but even with the uncertainty currently surrounding it, it’s not the only one. Korea, Kazakhstan, Vietnam, and Brazil have all passed AI laws that use a similar risk-based classification, where use cases such as employment, education, and essential services (including insurance and financial services) are subject to strict obligations across all of the frameworks.

Several pending bills in the US also target many of the systems considered high-risk by these laws.

Which AI systems count as high-risk, and the specific obligations attached to them, vary from law to law, making compliance more challenging. However, the frameworks converge on common expectations for high-risk systems:

  • Risk and impact assessments
  • Data governance controls
  • Oversight mechanisms
  • Documentation and post-deployment monitoring

Generative AI is being targeted from multiple angles

Multiple applications of generative AI are being targeted by policymakers. Given the sheer volume of bills introduced in the US, much of this activity is centered there. However, India, the UK, and Denmark have all taken steps to tackle harmful deepfake material. Broadly, these efforts aim to prevent non-consensual deepfakes of people or art, particularly when it comes to intimate imagery.

The UK, EU, and US Bar Associations have also issued multiple pieces of guidance on the use of AI in judicial services. Unlike laws, this guidance is not affected by the uncertainty in the US, and while it may not carry the same weight as legislation, those who fail to follow the appropriate guidelines can incur heavy sanctions.

Proactive AI governance is the solution

Complying with AI regulation is not just something for legal teams to be concerned about; it takes whole-company commitment and cooperation. With the US and EU both at a crossroads, proactive AI governance has never been more important.

You cannot pause or suspend your governance on the off chance that a specific law will be revoked, simplified, or have delayed deadlines. Policymakers in the rest of the world are still very much active, so even if a couple of jurisdictions slow down, you will likely be affected by AI legislation relevant to your organization in the near future. Moreover, whether or not AI-specific laws are enforced, existing laws still apply to AI, so governance remains essential.

More importantly, remember that compliance is only a small part of AI governance – proactive governance creates trust, supports your AI transformation, and helps you actually deliver ROI.

Want more detail?

For a deeper, structured view of how AI regulation applies across specific use cases, sectors, and jurisdictions, download our State of AI Regulations 2026 eBook. Compiled by our policy team, the eBook outlines the key activity you should have on your radar in 2026 to help you navigate the uncertainty.
