
Google’s security team has blocked what could have been a major cyberattack orchestrated by hackers using artificial intelligence. The company’s Threat Intelligence Group reported on Monday that it stopped an effort to use AI models for what it called a “mass vulnerability exploitation operation.”
The hackers had used AI to discover and exploit a zero-day vulnerability, a previously unknown software flaw that, in this case, gave attackers a way to bypass two-factor authentication. Google says it has “high confidence” that this marks the first recorded case of hackers successfully using AI models to find and exploit such a vulnerability in the wild.
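Google has not described the flaw’s mechanics. As a purely hypothetical illustration of how a two-factor-authentication bypass can arise, the Python sketch below shows a “fail-open” logic bug, a common class of authentication flaw; every name and detail in it is invented for explanation and does not describe the vulnerability Google found.

```python
# Purely hypothetical illustration of a "fail-open" logic bug, a common
# class of flaw behind two-factor-authentication bypasses. Invented for
# explanation; it does not describe the vulnerability Google found.

def verify_otp(submitted_code: str, expected_code: str | None) -> bool:
    """Check the one-time passcode a user submits during login."""
    if expected_code is None:
        # BUG: if the OTP service never issued a code (e.g. it errored
        # out), the check "fails open" and the second factor is skipped.
        return True
    return submitted_code == expected_code

# An attacker who can force the code lookup to fail bypasses 2FA:
print(verify_otp("000000", None))      # True  -> bypass succeeds
print(verify_otp("000000", "483920"))  # False -> normal rejection
```

In a flaw of this shape, an attacker who can force the code lookup to fail never has to present a valid second factor at all.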
This incident highlights a growing concern in cybersecurity: criminals are now using the same AI tools that were designed to help businesses and researchers. The hackers had planned to use their AI-discovered exploit for a large-scale attack, but Google’s early detection may have stopped it before it began. The company did not reveal which hacker group was involved, but confirmed that its own Gemini AI model was not used in the attack.
The threat comes as AI models become more powerful at finding software flaws. Groups linked to China and North Korea have shown “significant interest in capitalizing on AI for vulnerability discovery,” according to Google’s report. Hackers are already using available tools like OpenClaw to:
- Find security vulnerabilities in software (a simplified sketch of this kind of discovery loop follows the list)
- Launch targeted cyberattacks
- Develop new types of malware
- Plan large-scale exploitation campaigns
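In broad strokes, AI-assisted vulnerability discovery pairs a language model with an automated test harness: the model proposes inputs likely to break the target, the harness runs them, and the results are fed back to the model as context. The Python sketch below shows the general shape of such a loop under those assumptions; query_model() and the ./target binary are invented placeholders, not the interface of OpenClaw or any real tool.

```python
# A simplified, hypothetical sketch of an AI-assisted vulnerability
# discovery loop: a model proposes candidate inputs, a harness runs them
# against a target, and crashes are fed back as context. query_model()
# and ./target are invented placeholders, not any real tool's interface.
import random
import subprocess

def run_target(test_input: bytes) -> bool:
    """Run the target program on one input; True means it crashed."""
    proc = subprocess.run(["./target"], input=test_input,
                          capture_output=True)
    return proc.returncode < 0  # negative = killed by a signal (e.g. SIGSEGV)

def query_model(history: list[str]) -> bytes:
    """Stand-in for a call to a code model; real tools would prompt the
    model with source code, past crashes, and coverage data."""
    return random.randbytes(64)

def discovery_loop(rounds: int = 100) -> list[bytes]:
    crashes: list[bytes] = []
    history: list[str] = []
    for _ in range(rounds):
        candidate = query_model(history)  # model proposes an input
        crashed = run_target(candidate)   # crash = possible vulnerability
        if crashed:
            crashes.append(candidate)
        history.append(f"{'crash' if crashed else 'ok'}: {candidate!r}")
    return crashes

# Usage (against a local test binary): crashes = discovery_loop()
```

Defenders can run essentially the same loop to find and patch flaws before attackers do, which is the dynamic the next paragraph describes.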
The cybersecurity industry has been grappling with this double-edged sword of AI capabilities. While these tools can help security teams find and fix vulnerabilities faster, they also give bad actors powerful new weapons. This creates an arms race where both defenders and attackers are using increasingly sophisticated AI systems.
The concerns have reached the highest levels of government and industry. In April, AI company Anthropic delayed releasing its Mythos model specifically because of worries that criminals could use it to exploit decades-old software vulnerabilities. The decision sent ripples through the tech industry and prompted meetings at the White House with technology and business leaders.
Anthropic has since released the model to a limited group of trusted testers, including major companies like Apple, CrowdStrike, Microsoft, and Palo Alto Networks. Similarly, OpenAI announced last week that it’s rolling out GPT-5.5-Cyber, a specialized version of its latest model designed for cybersecurity teams, but only to vetted organizations.
The Google incident suggests these precautions may not be enough. Even with access to the most advanced AI models restricted, hackers are finding ways to turn available AI tools to malicious ends. This puts enormous pressure on cybersecurity firms and government agencies to stay ahead of AI-powered threats, even as they invest billions of dollars in digital defenses.
For businesses and organizations, this represents a new category of risk. Traditional cybersecurity measures were designed to stop human hackers working at human speed. AI-powered attacks can potentially find vulnerabilities and launch exploits much faster than human defenders can respond. This speed advantage could make future cyberattacks more damaging and harder to prevent.