
AI Won't Invent New Hacks—It'll Find Existing Ones Faster

Key Points

  • Ptacek predicts AI agents will transform exploit dev within months
  • Frontier models encode bug class knowledge before seeing target code
  • Agents never tire—search runs until a vulnerability is found
  • Defenders and attackers gain same tools; asymmetry favors neither
  • Vulnerability scarcity economics collapse once search automates
References (1)
  1. Security Expert: AI Agents Will Find Most Zero-Days Within Months — Simon Willison's Weblog

The security industry has spent decades preparing for sophisticated attacks—buffer overflows refined over years, novel exploitation primitives developed in secret. But Thomas Ptacek thinks that's the wrong threat model. The real danger isn't that AI will dream up attacks we've never seen; it's that the world is already full of vulnerable code no one has bothered to audit, and AI only needs to search it better.

Ptacek, a veteran security researcher who built his reputation dissecting cryptographic failures at Matasano Security, laid out this argument in a widely-shared post titled "Vulnerability Research Is Cooked." His prediction is blunt: within months, coding agents will drastically alter both the practice and economics of exploit development. The mechanism isn't AI creativity—it's pattern matching at scale. Frontier models already encode what he calls "supernatural amounts of correlation" across source codebases, plus the complete library of documented bug classes: stale pointers, integer mishandling, type confusion, allocator grooming. Vulnerabilities become implicit search problems, and search is precisely what LLMs solve best.
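To make the "vulnerabilities as search problems" framing concrete, here is a deliberately toy sketch (not Ptacek's method, and far cruder than what a frontier model does): treating one documented bug class, the stale pointer, as a textual pattern to search for. The function name and the scanning heuristic are illustrative assumptions.

```python
import re

def flag_use_after_free(c_source: str) -> list[str]:
    """Toy scanner: report pointers that are free()d and then
    referenced again later in the same source fragment. A real
    agent correlates vastly richer context; this only matches a
    textbook stale-pointer shape."""
    findings = []
    for name in re.findall(r"free\((\w+)\)", c_source):
        # Anything after the free() that still mentions the pointer
        # is a candidate stale-pointer use.
        tail = c_source.split(f"free({name})", 1)[1]
        if re.search(rf"\b{name}\b", tail):
            findings.append(name)
    return findings

snippet = """
    char *buf = malloc(32);
    free(buf);
    printf("%s", buf);   /* stale pointer dereference */
"""
print(flag_use_after_free(snippet))  # ['buf']
```

The point of the sketch is the framing, not the technique: once a bug class is encoded as something searchable, finding instances of it is a matter of coverage, which is exactly the dimension on which automation wins.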

The security community has relied on a fundamental scarcity: skilled vulnerability researchers who can find high-impact bugs are rare, expensive, and human. That scarcity justifies the economics. A single critical vulnerability in a widely-deployed system can sell for seven figures. But Ptacek argues frontier AI models collapse that scarcity—not through superior intelligence, but through sheer persistence and encoded knowledge. An agent never gets bored. It won't stop after eight hours. Point it at a Linux kernel subsystem and tell it "find me zero days," and it will search until it finds something.
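The persistence argument can be sketched as a trivial loop: walk every chunk of a codebase and stop only on a finding. Everything here is a hypothetical stand-in—`review` would be a model call in practice, and the stub reviewer below just string-matches one risky C idiom—but the shape shows why tirelessness, not intelligence, is the operative property.

```python
def audit_until_found(chunks, review):
    """Apply review() to each chunk in turn; return (index, finding)
    for the first hit, or None if the corpus is exhausted. Unlike a
    human auditor, the loop never stops from boredom or fatigue."""
    for index, chunk in enumerate(chunks):
        finding = review(chunk)
        if finding is not None:
            return index, finding
    return None  # searched everything without a hit

# Stub reviewer (illustrative only): flag any unbounded strcpy call.
def stub_review(chunk):
    return "unbounded strcpy" if "strcpy(" in chunk else None

corpus = ["len = strlen(s);", "strcpy(dst, src);", "return 0;"]
print(audit_until_found(corpus, stub_review))  # (1, 'unbounded strcpy')
```

Swap the stub for a frontier model and the corpus for a kernel subsystem, and the economics Ptacek describes follow: the marginal cost of another eight hours of searching is near zero.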

Defenders and attackers face this reality asymmetrically. Security teams can deploy AI agents to audit their own codebases—find the bugs before adversarial researchers do. The first-mover advantage is significant: if your team finds the vulnerability, you can patch it. If an attacker finds it first, you have a zero-day. But that asymmetry erodes quickly. If AI agents work for defenders, they work for threat actors too. Nation-state hacking operations and criminal ransomware groups have access to the same frontier models. The question isn't whether AI will find the vulnerabilities in your infrastructure. It's who finds them first.

Some researchers push back. They argue current AI systems mostly rediscover known bug classes rather than identifying genuinely novel vulnerability patterns. Others question whether pattern matching on source code suffices without deep knowledge of runtime behavior and memory layout. These are fair objections. But they miss Ptacek's core claim. He's not arguing AI invents novel exploitation techniques. He's arguing it eliminates the bottleneck of human attention. There is, quite literally, more vulnerable code than the human security community can audit. AI doesn't need to be smarter than researchers. It just needs to be thorough enough to search what humans haven't gotten to yet.

The implications reach beyond individual organizations. If vulnerability discovery becomes automated and cheap, the entire security economy shifts. Bug bounty programs that pay premium rates for critical vulnerabilities face deflation. Security consulting firms built on manual audit expertise must adapt or become irrelevant. And the gap between organizations with mature security programs and those without widens rather than narrows: both gain the same tool, but under-resourced defenders start from behind. Attackers gain a tool that scales; defenders gain the same tool while still carrying their backlog.

Ptacek's prediction may overstate the timeline or underestimate the expertise required for complex exploits. But the trajectory is clear. The question isn't whether AI transforms vulnerability research. It's whether that transformation happens faster than defensive infrastructure can adapt. "Within the next few months," he wrote. The security industry has months to prepare.
