
May 11, 2026

The European Union has agreed to significantly weaken its landmark artificial intelligence regulations, pushing back key implementation dates and reducing requirements for companies in what critics describe as a capitulation to Big Tech lobbying efforts.
EU countries and European Parliament lawmakers reached a provisional agreement after nine hours of negotiations on Thursday, marking a substantial retreat from the bloc’s original ambitious stance on AI governance. The deal still requires formal approval from EU governments and the European Parliament in the coming months.
The changes represent a major shift in Europe’s approach to AI regulation, coming at a time when global competition in artificial intelligence is intensifying. The original AI Act was designed to be the world’s first comprehensive AI regulation, setting a template that other regions might follow. Now, the watered-down version raises questions about whether Europe can maintain its position as a global leader in tech regulation while keeping its companies competitive.
The most significant change involves delaying rules for high-risk AI systems until December 2027. These systems include those using biometrics or operating in critical infrastructure and law enforcement. The original deadline was August 2024, meaning companies now have more than three additional years to comply with the most stringent requirements.
“Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs,” said Marilena Raouna, Cyprus’s deputy minister for European affairs. Cyprus currently holds the rotating EU Council presidency.
The revisions are part of a broader European Commission effort to simplify digital regulations after businesses complained that overlapping rules and red tape were hampering their ability to compete with U.S. and Asian rivals. This concern has become particularly acute as American companies like OpenAI and Google dominate the generative AI space, while Chinese firms advance rapidly in AI applications.
Key changes in the revised agreement include:
- Exclusion of machinery from AI Act coverage, as it’s already subject to sectoral rules
- Delayed implementation timeline for high-risk AI systems
- Streamlined compliance processes for European developers
- Reduced administrative burden on companies
The machinery exemption came following lobbying from major European industrial companies including Germany’s Siemens and Dutch semiconductor equipment maker ASML. These companies argued that existing industry-specific regulations already cover their AI applications adequately.
However, lawmakers did agree to strengthen protections in one area: banning AI practices that create unauthorized sexually explicit images. This prohibition will take effect from December 2, responding to concerns about deepfake technology being used to create non-consensual intimate images. The move addresses problems with AI systems like Elon Musk’s xAI chatbot Grok, which has generated such content.
“By the end of this year everyone, but especially women and girls will be safe from horrific nudifier apps being widely available on the EU market,” said Dutch lawmaker Kim van Sparrentak. The ban represents one of the few areas where the revised Act actually strengthens protections.
Mandatory watermarking of AI-generated content will also begin in December, helping users identify when they’re viewing or reading AI-created material. This requirement addresses growing concerns about AI-generated misinformation and the need for transparency in AI outputs.
Michael McNamara, the lawmaker who led negotiations for Parliament, emphasized the balance the new rules attempt to strike. “I’m also happy that it will streamline the processes involved for European developers and deployers to get their products to the market while protecting consumers,” he said.
The response to the changes has been mixed. The European Consumer Organisation criticized the weakened protections, while the Computer & Communications Industry Association, a tech lobbying group, argued that lawmakers should have gone even further in reducing restrictions.
Even in their watered-down form, the EU's AI rules remain the strictest in the world. No other major jurisdiction has implemented comprehensive AI legislation, though several countries are developing their own frameworks. The EU's approach will likely influence how other regions regulate AI, making these changes significant beyond Europe's borders.
The timing of these revisions is particularly notable as AI technology continues to advance rapidly. The delay in high-risk system regulations means that some of the most potentially harmful AI applications will operate with minimal oversight for several more years. This could impact areas like criminal justice, where AI systems are increasingly used for risk assessment and decision-making.
The changes also reflect the ongoing tension between innovation and regulation in the tech sector. European policymakers have long struggled to protect citizens and maintain fair markets while keeping their companies globally competitive. These AI Act revisions suggest that competitive pressure is increasingly winning out over regulatory ambition.
