Europe Hits Pause on Its Toughest AI Rules – and the Backlash Has Already Begun

EU officials have agreed to water down parts of the AI Act, delaying the rules covering a number of high-risk applications until December 2027, instead of the originally set deadline of August 2026.
This agreement comes after many companies argued the EU was bogging itself down in unnecessary regulation, leaving the EU behind competitors in the US and Asia.
The deal was reached after nine hours of talks, which is fairly standard for negotiations in Brussels. It still needs to be ratified by EU leaders and the European Parliament, so don’t treat anything as final just yet. But the bottom line is pretty clear: Europe still wants to regulate AI, just a little less strictly.
The final deal means that high-risk, stand-alone AI systems would have to comply by December 2, 2027, but high-risk systems embedded in high-risk products, such as cars or medical devices, would have until August 2, 2028 to get it right.
The Council said this is to help “simplify” the AI Act, including by preventing overlaps with other sectoral legislation. In other words, if a machine, medical product or device is already covered by existing product rules, companies shouldn’t have to produce duplicate paperwork just to comply with the AI Act.
That said, the deal is no golden ticket for big AI firms: The agreement would introduce a ban on non-consensual, sexually explicit AI images and videos, including so called “nudifier” apps and child sexual abuse material.
The ban is scheduled to come into force on December 2, 2026, the same date that watermarking requirements for AI-generated content take effect, giving industry players a clearer timetable.
The European Parliament said the AI Act package of simplifications “strikes a careful balance between the simplifications of the rules, maintaining the risk-based approach of the AI Act and adding safeguards against so called ‘nudifier apps’.”
It’s a crucial point: few people would seriously argue for delay in tackling the sexual deepfakes problem, especially after women, young people, and politicians have found themselves targets of synthetic images that cause real and lasting harm.

The primary contention is about timing. Civil society and digital rights activists contend that delaying the stricter rules for high-risk AI leaves individuals exposed across a range of areas, from employment and education to biometrics, critical infrastructure and policing.
Conversely, the business community contends that an unclear landscape of overlapping obligations will stall Europe’s AI industry before it has truly got off the ground. Both arguments could be right, which is what makes this a minefield.
The original law went into effect in August 2024, when the European Commission heralded it as the first full AI regulatory framework in the world. The law is risk-based: certain uses of AI are banned, high-risk uses have strict requirements and low-risk uses have lighter obligations. That remains the same under the new agreement, which just delays the timing and scope of some of the tighter obligations.
It all feels a bit like political whiplash. Europe has for years positioned itself as the responsible adult in the AI conversation: the one that prioritises rights and safety over hype.
Now, under intense pressure from industry and big tech, it is stepping back. Pragmatism? Yes. A surrender? You can be sure many will argue that. My guess is that the truth lies somewhere in the messy grey between.

Siemens and ASML had lobbied over how the AI Act treats industrial applications, with Reuters reporting that the Act’s rules will not apply where industry-specific regulations already exist.
For manufacturers who were worried about a compliance headache, particularly in some of the heartlands of Europe’s industrial power, that is a welcome development. It also poses a simple question: when does simplification become a loophole?
The European Commission hailed the deal, noting that the revised AI Act is intended to promote innovation while shielding citizens from the harmful consequences of AI. “Innovation and protection,” “speed and safety,” “less paperwork and more human rights”: everyone wants all of it; whether you can have all of it at once is another question.
For startups, the postponement offers some relief. Building AI in the European Union has become a regulatory minefield, and smaller companies rarely have the in-house compliance teams a Google can afford.
If the AI Act takes longer to apply, European developers get more room to compete rather than spending their seed funding on law firms the moment they start building.
But the compromise doesn’t look so nice for the public. High-risk AI systems are labeled “high-risk” for a reason—they can affect who gets hired, how governments provide services, how the police use their tools, and even how critical infrastructure works. Delaying enforcement might reduce industry worries, but it also delays the day when citizens get maximum protection. It’s an uneasy dilemma that Brussels won’t be able to paper over.
Europe wants to be the region that lays down the laws of the AI age. But it also wants to be the place where AI companies build real-world products. Those two goals can coexist, but only with enough friction between them to generate real heat. This week’s agreement is designed to release some of that pressure before it boils over.
The final compromise now moves into the next phase of the formal process. If approved, it will set the course for the first years of the AI Act’s implementation, and it signals to countries beyond the EU that even the world’s most ambitious AI regulator is adjusting its plans to the pace, costs and political realities of the AI race.
Now, the real question is: Does Europe still want strong AI rules? It clearly does. But can it keep them enforceable without making them so weak that the safety shield starts to leak?