
If Compliance Is Solely Your North Star, You've Already Lost

Regulating technology has never been easy. While society benefits tremendously when brilliant people build bold new tools that advance science, lighten administrative burdens, strengthen defense, accelerate discovery, and make life easier, the innovators closest to the breakthroughs often see the limitless possibilities far more clearly than the unintended harms and risks. This is what makes them such great inventors.

This tension is not new.

Watching the movie Oppenheimer, I was struck by how intoxicating the pursuit of the unprecedented can be. You can feel the momentum of invention, the thrill of doing something novel. And yet, when the field of vision widens to reveal the broader consequences, the mood can shift from exhilaration to fear. This raises the question: did anyone fully understand what they were unleashing? And if they had, what would they have changed?

It’s worth asking: Would regulation have slowed or prevented the creation of the bomb? It’s unlikely. In most transformative moments, what shapes outcomes isn’t the regulatory environment, but rather the judgment, values, and moral clarity (or lack thereof) of the humans driving the technology forward.

Which brings us to this week’s headlines.

The European Commission signaled it may delay or soften parts of the EU AI Act, offering providers a grace period and postponing enforcement of some obligations. The reactions were immediate: relief from some industry quarters, concern from others, and speculation about whether Europe is retreating from its leadership position in responsible AI and caving to pressure from the US and global technology giants.

If the EU delays implementation of the EU AI Act, what changes? Legally, a few dates. Possibly, a few requirements. Strategically? Almost nothing. I’ll explain:

The possible delay in the EU AI Act is not the existential problem commentators claim. If anything, the bigger risk is that companies interpret the delay as permission to slow down or defer their governance efforts.

Because when compliance becomes the sole driving force, we lose sight of the real purpose, which is safety.

Compliance ≠ Governance

Regulations are important. They set the baseline, create clarity, and enforce accountability. But compliance with the various regional regulations is not the same as governance. Governance is broader, deeper, and more fundamental to both safety and business success. It is about:


None of that is achieved simply by meeting regulatory deadlines or avoiding fines.

This is a point I argued in SiliconANGLE last year: when safety becomes subject to political cycles or regulatory timetables, companies lose sight of what’s at stake. Corporate reputation and public trust will play a far greater role in determining winners and losers in AI than any regulator. In fact, in the 2025 Financial Services Industry Outlook, Deloitte identifies trust as a cornerstone of business resilience and growth.

The EU AI Act delay doesn’t change that.

If anything, it clarifies it.

The Real Work Can’t Wait

AI systems will still make consequential decisions about people’s lives. Companies deploying AI must still manage reputational, operational, and ethical risks. Consumers still expect, and deserve, safe, transparent, high-quality systems. AI projects will continue to stall or fail because of the lack of appropriate guardrails. And history shows that investors increasingly reward companies that don’t gamble with trust.

That’s why the most forward-thinking enterprises didn’t wait for regulation to incorporate AI governance into the fabric of their businesses.

For example, Unilever introduced a Responsible AI Framework years before the EU AI Act, embedding fairness, transparency, and accountability into its global operations.

MAPFRE, one of Spain’s largest insurers, created its “Humanistic, Ethical & Responsible AI Manifesto” before any regulatory deadline loomed, emphasizing people-centric governance and public trust.

These companies understood something essential. Defining and then standing by their own operating principles, regardless of the various technology transitions and regulations, creates an identity that stakeholders can understand and support through the good and the bad. In this way, governance becomes a competitive advantage by ensuring that internal teams remain committed to the company's values, which in turn builds trust.

Proactive Governance Builds More Trust than Regulatory Compliance

Compliance does not equal safety. Each AI system is unique, and reducing risk requires more than just checking off the boxes on a list of blanket requirements.

As history reminds us, as Oppenheimer taught us, human judgment is what determines whether powerful technologies are used wisely.

Regulators can guide, incentivize, and penalize. But regulators cannot replace:


These are obligations that exist regardless of legislative timelines.

Governance Is a Partner to Innovation, Not a Barrier

At Holistic AI, we advocate for a model where safety and innovation are mutually reinforcing. Governance is not the drag on innovation many fear. Done properly, it is the engine that:


In fact, companies that wait for regulation before acting are the ones that will move slowest and bear the highest risk.

A Delay Should Trigger Action, Not Complacency

If the EU provides more time, companies should use it to inventory every AI system in use across the company, map risks, identify ownership, and define accountability structures. It is also a great time to embed AI literacy across teams, strengthen documentation, and ensure transparency. Don't forget to stress-test systems with rigorous red teaming and jailbreaking at regular intervals as AI learns, grows, and drifts. Most of all, ensure that your company has a set of principles or operating procedures that outlines its commitment to its stakeholders.
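To make the inventory-and-ownership step concrete, here is a minimal, purely illustrative Python sketch of what an AI system registry might look like. Every name, field, and record below is hypothetical, not a prescribed schema or a Holistic AI product feature; the point is simply that each system gets a named owner, known risks, and a record of when it was last stress-tested.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative schema, not a standard)."""
    name: str
    owner: str                                 # accountable individual or team
    purpose: str                               # what the system decides or produces
    risks: list = field(default_factory=list)  # known risk categories
    last_red_team: str = "never"               # date of the last adversarial stress test

# Hypothetical inventory entries
inventory = [
    AISystemRecord("resume-screener", "HR Analytics", "rank job applicants",
                   risks=["bias", "explainability"], last_red_team="2024-11-02"),
    AISystemRecord("support-chatbot", "Customer Ops", "answer customer queries",
                   risks=["hallucination"]),
]

# Flag systems that have never been red-teamed
needs_attention = [s.name for s in inventory if s.last_red_team == "never"]
print(needs_attention)  # ['support-chatbot']
```

Even a simple register like this forces the governance questions the paragraph above raises: who owns each system, what could go wrong, and when it was last tested.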

This delay is not a reprieve; it is an opportunity to lead.

The Bottom Line

Regulation matters, but it must never be the primary driver of trusted AI. Compliance with regulations ensures legality, not that your customers will trust your AI systems. AI governance ensures safety, and safety is what protects people, builds trust, and speeds innovation.

When compliance becomes the sole North Star, we lose sight of what truly matters. When you align innovation with governance, you build a future where businesses can flourish safely and people can engage with the systems shaping their lives with confidence.

That is the work in front of us. And that work cannot wait.
