EU Delays AI Act Enforcement 21 Months, Then Bans Nudify Apps

Key Points

  • Compliance deadlines pushed to December 2027 for high-risk AI, August 2028 for some sectors
  • Nudify app ban approved in same parliamentary session as delay vote
  • 462 MEPs voted for delay; industry lobbying credited with influencing outcome
  • Watermarking requirements also delayed, leaving 2026-2027 elections exposed
  • Consumer groups warn delays undermine Act's core protections
  • Original deadlines already provided nearly three years of preparation time

References (1)
  1. EU Parliament delays AI Act compliance, bans nude apps — The Verge AI

21 months. That's how much longer Europe's AI industry just secured to comply with the EU AI Act—while simultaneously watching Brussels ban a single category of applications. The European Parliament's decision last week to push key compliance deadlines to December 2027, with some high-risk sectors getting until August 2028, reveals a law that talks ambition but walks backward.

The timing is revealing. Parliament approved these delays by a large majority—462 votes in favor—while also greenlighting a ban on "nudify" apps that create non-consensual intimate images. On the surface, both moves appear decisive. But examined together, they expose a deep incoherence in Brussels' approach to AI governance: it can ban one harmful application category, yet it cannot enforce rules on the systems already making consequential decisions about people's lives.

Industry lobbied hard for these extensions. Tech giants and European AI companies argued that compliance infrastructure needed more time to build. The EU AI Office, the bloc's dedicated AI regulatory body, fielded months of pressure from major developers. Their thesis was straightforward: rushed compliance produces fragile systems, and Europe cannot afford to fall behind American competitors while sorting out paperwork.

This argument has surface appeal. Nobody wants half-baked compliance. But the premise collapses under scrutiny. The original deadlines already provided nearly three years of runway from the Act's passage. Companies had ample warning. The firms crying loudest about needing more time are precisely those that spent that warning period lobbying for delays rather than building compliance frameworks.

The harm falls on people, not on algorithms. High-risk AI systems—facial recognition in public spaces, automated hiring tools, credit-scoring systems—make decisions that shape employment, liberty, and access to services. December 2027 is not an abstract deadline. It is the date when oversight begins in earnest. Every month of delay is a month where biased hiring algorithms continue screening candidates unchallenged, where biometric systems operate without mandatory transparency requirements.

Consumer advocates and digital rights groups watched the vote with alarm. They noted the inconsistency: Parliament can mobilize quickly to ban nudify apps—which already violated harassment and privacy laws—yet compliance enforcement for foundational AI systems gets pushed back repeatedly. The nudify ban is symbolically potent but practically redundant. The compliance delays are symbolically weak but practically consequential.

The watermarking requirement tells the same story. Rules requiring AI-generated content to carry detectable markers were also delayed. This provision matters. Without mandatory watermarking, distinguishing authentic political speech from AI-generated fabrications becomes exponentially harder. The 2026 and 2027 election cycles across EU member states will unfold without this safeguard.

Nobody seriously argues that watermarking is technically infeasible. Providers already implement voluntary standards like C2PA. The delay reflects political calculation: enforcement mechanisms remain undefined, and Parliament lacks appetite for confrontations that might spook AI investment.
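The underlying mechanics help explain why the delay matters. Provenance schemes in the spirit of C2PA attach a cryptographically verifiable marker at generation time; content that was never marked simply verifies as nothing, which is why voluntary adoption leaves detection gaps. The sketch below is a deliberately simplified toy illustration, not the actual C2PA standard—every name, key, and field in it is hypothetical.

```python
import hashlib
import hmac

SIGNING_KEY = b"provider-secret-key"  # hypothetical provider signing key

def attach_provenance(content: bytes) -> dict:
    """Wrap generated content with a signed tag (toy stand-in for a C2PA-style manifest)."""
    digest = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return {"content": content,
            "provenance": {"generator": "example-model", "sig": digest}}

def verify_provenance(package: dict) -> bool:
    """Verification only succeeds if a marker was attached at generation time."""
    prov = package.get("provenance")
    if prov is None:
        # Unmarked content: indistinguishable from authentic media by this check alone.
        return False
    expected = hmac.new(SIGNING_KEY, package["content"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, prov["sig"])

marked = attach_provenance(b"AI-generated image bytes")
print(verify_provenance(marked))                    # marked at generation: True
print(verify_provenance({"content": b"deepfake"}))  # never marked: False
```

The asymmetry in the last two lines is the policy point: detection is trivial for providers who opt in and impossible for content from providers who do not, which is what a mandatory requirement was meant to close.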

This is the Brussels bargain, exposed. The EU wants to lead global AI governance, projecting authority through ambitious legislation. Simultaneously, it fears the costs of that ambition—costs measured in industry complaints, investment flows, and competitiveness rankings. So it writes strong laws and then quietly extends the deadlines, maintaining the appearance of leadership while avoiding the friction of actual enforcement.

The test will come in 2027. By then, general-purpose AI systems will have grown more capable and more embedded in critical infrastructure. Whether Brussels enforces its own rules—or finds another reason to wait—will determine whether the AI Act becomes a model for the world or a cautionary tale about regulatory hubris.
