When LiteLLM disclosed its breach, the message was clear: damage contained, systems patched. Three weeks later, Mercor confirmed that 45,000 users' data had been stolen through a cyberattack directly tied to the same vulnerability. That contradiction—that a "contained" breach somehow spread downstream—reveals something the AI industry has been reluctant to acknowledge: open-source AI infrastructure is a supply chain liability that standard security practices were never built to address.
Mercor, an AI recruiting startup that raised $100 million at a $2 billion valuation in 2025, confirmed the breach on March 31st. A hacking group took credit for the theft and began selling the data on criminal forums. The company said the attack vector traced back to a compromise of LiteLLM, an open-source tool that hundreds of AI companies use to route requests across multiple language models. Mercor didn't respond to requests for comment on the specific data stolen.
The attack works like this: LiteLLM acts as middleware between an organization's AI applications and models from OpenAI, Anthropic, and others. When LiteLLM was breached, attackers gained the ability to query those connections and potentially intercept data in transit. For Mercor, that meant hackers had a path into systems containing user data, including job applications, messages, and identification documents submitted by candidates.
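The structural risk here is the middleware pattern itself. A minimal sketch, assuming nothing about LiteLLM's actual code: any router that forwards requests to multiple providers necessarily sees every prompt and every response, so compromising that one component exposes traffic to all of them. The `ModelRouter` class and its methods below are hypothetical names for illustration.

```python
# Hypothetical sketch of the middleware pattern, NOT LiteLLM's real code.
# The point: one component sits between every application and every model.
from typing import Callable, Dict, List


class ModelRouter:
    """Routes chat requests to provider backends by model name."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        # Everything that flows through the router accumulates here;
        # a compromised router could exfiltrate this record.
        self.seen_prompts: List[str] = []

    def register(self, model: str, backend: Callable[[str], str]) -> None:
        self._backends[model] = backend

    def complete(self, model: str, prompt: str) -> str:
        # Every prompt to every provider passes through this choke point.
        self.seen_prompts.append(prompt)
        return self._backends[model](prompt)


router = ModelRouter()
router.register("provider-a", lambda p: f"a:{p}")
router.register("provider-b", lambda p: f"b:{p}")

router.complete("provider-a", "summarize this resume")
router.complete("provider-b", "draft an offer letter")

# The router holds a complete record of traffic to both providers.
print(router.seen_prompts)
```

The design choice that makes the tool useful (one integration point instead of N) is the same one that makes it a single point of failure.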
This wasn't a sophisticated zero-day exploit. The LiteLLM vulnerability was a known weakness that the project's maintainers patched. What made it catastrophic was how many companies had integrated LiteLLM into their infrastructure without fully understanding what that access meant. LiteLLM's access is broad by design: the tool routes API calls, logs prompts, and stores configuration files. In the wrong hands, those capabilities become a reconnaissance tool for corporate espionage or data theft.
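To make the three capabilities concrete, here is a toy illustration, assuming nothing about LiteLLM's real config or log formats: a router's configuration holds provider credentials, and its prompt log holds application traffic, so an attacker who can read the router's files at rest gets both. All field names and values below are invented.

```python
# Toy illustration of what a model router keeps at rest.
# These structures are hypothetical, not LiteLLM's actual formats.
import json

# 1. Configuration: the credentials the router needs to route calls.
config = {
    "providers": {
        "openai": {"api_key": "sk-EXAMPLE-NOT-REAL"},
        "anthropic": {"api_key": "sk-ant-EXAMPLE-NOT-REAL"},
    }
}

# 2. Prompt log: a record of what applications sent to the models.
prompt_log = [
    {"model": "openai", "prompt": "Extract fields from this ID document."},
    {"model": "anthropic", "prompt": "Summarize this candidate message."},
]

# 3. An attacker with read access to the router's files gets both:
# live provider keys and the application traffic itself.
stolen = {
    "keys": [p["api_key"] for p in config["providers"].values()],
    "prompts": [entry["prompt"] for entry in prompt_log],
}
print(json.dumps(stolen, indent=2))
```

Either artifact alone is damaging; together they let an attacker both read past traffic and impersonate the victim's applications going forward.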
Security researchers tracking the incident note a pattern: LiteLLM's adoption exploded because it solved a real problem—managing multiple AI models without building custom integrations. That ubiquity made it a single point of failure. One project's weakness became thousands of companies' exposure.
The deeper question is whether the open-source development model can sustain AI infrastructure. Projects like LiteLLM are maintained by small teams, often volunteers, with limited resources for security auditing. Enterprises adopted these tools on the assumption that transparency meant safety—a dangerous leap that the Mercor breach has now exposed. When a commercial vendor gets breached, customers have contracts and liability. When an open-source project gets breached, downstream users discover they trusted a stranger's code with their most sensitive data.
What happens next is unclear. LiteLLM's maintainers have released patches and are working with security firms to audit the codebase. Mercor is investigating with federal law enforcement. But the incident has exposed a structural problem: every company that integrated LiteLLM now faces the same question Mercor answered—how do you contain a breach that arrived through your own infrastructure? The answer, so far, is that nobody knows.