What to Pause, and What to Push: Cutting Through the Noise on the EU AI Act


By Dr. Petar Tsankov, CEO and Co-Founder, LatticeFlow AI

Over the past few days, we've seen a wave of calls to pause the EU AI Act.

It began with the “Stop the Clock” open letter from the EU AI Champions Initiative, signed by leaders from major European companies urging a two-year delay for both general-purpose AI (GPAI) and high-risk AI system obligations.

Then came a spot-on analysis by Bloomberg’s Parmy Olson: “Europe’s AI Law needs a smart pause – not a full stop.”

And just days ago, the European Commission responded: enforcement of the AI Act will go ahead as planned.

These perspectives reflect a real tension, as parts of the Act remain unclear. But in all the noise, one critical point is being missed: not everything should be paused.

Before diving into what to pause and what to push, Europe must remember what it stands for: steady, values-driven progress. In contrast, we have seen Big Tech shift positions 180 degrees depending on strategic interests – from urging a pause on training large AI models (when lagging behind) to pushing a 10-year ban on US states regulating AI. Investors, too, have shifted with the winds – at times championing AI safety by publicly endorsing Responsible AI principles to align with their LPs, only to quietly remove those commitments as narratives changed.

Europe must stop chasing noise and narrative manipulation. The current state of AI in Europe has zero connection to the EU AI Act – it’s premature and false to blame the Act. Any expert working with real-world AI deployments will confirm that clear, actionable rules are a prerequisite for scaling AI.

And the evidence is already here: mission-critical industries – banks, insurers, and healthcare providers – have been proactively implementing their own AI governance and risk frameworks, not to delay AI use but to enable production-grade AI deployments. These market dynamics are not driven by regulators. They're driven by operational necessity: ensuring AI systems are performant, secure, and robust when going live.

Let’s cut through the noise.

Two Timelines, Two Realities

The EU AI Act draws a line between general-purpose AI (GPAI) and high-risk AI. But recent calls to pause the Act blur this distinction – and that’s where the real risk lies.

On the GPAI side, some of the concerns are valid. The concept of “systemic risk” still lacks a clear scientific foundation. Requirements to mitigate long-term AI safety risks raise open research questions that remain unresolved. More practically, key implementation tools – like the Code of Practice – are still missing. In this context, enforcing the rules by August 2025 feels premature. A short delay could give industry, policymakers, and AI researchers the space to define what responsible oversight should actually look like.

But high-risk AI is an entirely different case. These systems directly impact people, making decisions in healthcare, banking, employment, and public safety. The risks are not theoretical. The use cases, associated harms, and appropriate mitigations are well understood. In fact, companies in these sectors have been proactively adopting similar AI governance measures. And unlike GPAI, the AI Act provides a concrete framework of obligations – enabling a level playing field, building trust, and ultimately accelerating AI deployments.

Delaying both tracks equally sends the wrong message: that the entire system is unworkable. In reality, one part needs refinement. The other needs to move forward.

The message shouldn’t be “pause everything.” It should be: pause what lacks clarity, and push forward where structure already exists.

A Science-Driven Path Forward

When it comes to general-purpose AI (GPAI), progress hinges on science. We still lack clarity on core research questions: How do we identify and evaluate emergent or intrinsic capabilities in frontier models – like deceptive behavior, instrumental convergence, or goal misgeneralization? Until those questions are answered, trying to regulate these systems without scientific grounding risks doing more harm than good.

In high-risk use cases, that clarity is already emerging. At LatticeFlow AI, we've worked with ETH Zurich and INSAIT to develop COMPL-AI, the first open-source framework for evaluating AI models against the well-understood risks outlined in the EU AI Act before they are deployed in high-risk AI systems. It enables companies to assess whether their AI systems meet regulatory expectations – not in theory, but in practice.

This is the kind of science-based infrastructure Europe needs more of: clear definitions, measurable criteria, and implementation pathways that support innovation instead of stalling it.

The longer companies operate without clarity, the harder it becomes to build trust, ensure accountability, and scale AI deployments.

What we need now isn’t more time. We need precision. Structure doesn’t kill innovation. Structure builds trust. Trust drives adoption. And adoption is what turns AI pilots into impact at scale.

From Uncertainty to Leadership

Europe stands at a critical juncture. The pressure to delay is understandable, but the real risk lies in overcorrecting. Now is not the time to set precedents that raise uncertainty.

Pausing everything won’t solve the problem. It will blur the lines between what’s vague and what’s ready, slow down responsible adoption, and erode the confidence of those trying to do the right thing.

The path forward is not about buying time; it's about using it wisely.

  • Pause what lacks scientific grounding.
  • Push what’s already understood and implementable.
  • And equip companies with the tools they need to act now.

This is how Europe leads. Not by hesitating, but by acting with focus, clarity, and direction.

We don’t need to stop the clock. We need to get the timeline right.