Open Source Supply Chain Attack Hits AI Ecosystem: LiteLLM Compromise Leads to Mercor Data Breach
A cyberattack on AI hiring startup Mercor has been traced to a compromised version of LiteLLM, one of the most widely used open source AI infrastructure libraries. The incident is a sharp warning about the security posture of the rapidly growing ecosystem of AI tooling — where trust in open source packages is high and security scrutiny often isn't.

D.O.T.S AI Newsroom
AI News Desk
Mercor, an AI-powered hiring platform, has confirmed it was hit by a cyberattack that exploited a compromise of LiteLLM — a popular open source library used to proxy and route requests across major AI APIs including OpenAI, Anthropic, and Google. The attack chain, disclosed by the company and attributed to an extortion hacking crew, represents one of the first high-profile supply chain attacks to target the emerging stack of AI infrastructure tooling.
What LiteLLM Is and Why It Matters
LiteLLM has become a standard piece of infrastructure in the AI development ecosystem. It provides a unified interface for calling models across different providers — switching between GPT-4o, Claude, Gemini, and Mistral with a single API call — and is used in production by thousands of companies building AI applications. Its GitHub repository has accumulated tens of thousands of stars, and it appears in the dependency tree of a significant fraction of AI startups.
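The "unified interface" pattern can be sketched in a few lines. This is an illustrative toy, not LiteLLM's actual implementation: the backend functions and routing table are hypothetical stand-ins for real provider API calls.

```python
# Toy sketch of the unified-dispatch pattern: one completion() call,
# routed to a provider backend by model-name prefix. The backends here
# are placeholders, not real API clients.

def _openai_backend(model, messages):
    # A real backend would call the OpenAI API here.
    return f"[openai:{model}] echo: {messages[-1]['content']}"

def _anthropic_backend(model, messages):
    # A real backend would call the Anthropic API here.
    return f"[anthropic:{model}] echo: {messages[-1]['content']}"

# Hypothetical routing table: model prefix -> provider backend.
ROUTES = {
    "gpt-": _openai_backend,       # e.g. gpt-4o
    "claude-": _anthropic_backend, # e.g. claude-3-5-sonnet
}

def completion(model, messages):
    """Dispatch a chat completion to the matching provider backend."""
    for prefix, backend in ROUTES.items():
        if model.startswith(prefix):
            return backend(model, messages)
    raise ValueError(f"no provider registered for model {model!r}")
```

The appeal for application developers is that swapping providers becomes a one-string change at the call site, which is also why so many codebases route every model call, and every API key, through this one dependency.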
That ubiquity is precisely what made it an attractive target. Supply chain attacks target trusted dependencies — code that organizations install without inspecting, because they trust the maintainer or the reputation of the package. A compromised version of LiteLLM, if introduced early enough in the dependency chain, could reach thousands of downstream applications simultaneously.
The Attack and What Was Stolen
Mercor confirmed a security incident after an extortion group publicly claimed responsibility for stealing data from the company's systems. The attack is linked to a malicious version of LiteLLM that was briefly introduced into the package's distribution channel. Companies that updated their LiteLLM dependency during the window of compromise would have inadvertently installed the malicious code.
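One standard defense against exactly this window-of-compromise scenario is pinning dependencies to known-good artifact digests (pip's `--require-hashes` mode works this way), so a swapped release fails to install rather than silently running. A minimal sketch of the underlying check, with an illustrative digest source:

```python
# Sketch: verify a downloaded package archive against a known-good
# SHA-256 digest before installing it. In practice the expected digest
# comes from a pinned lockfile; this standalone check just shows the
# mechanism.
import hashlib

def sha256_of(path, chunk_size=8192):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_digest):
    """Raise if the artifact on disk does not match the pinned digest."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(
            f"digest mismatch for {path}: expected {expected_digest}, got {actual}"
        )
    return True
```

Hash pinning would not have stopped teams that deliberately upgraded to the malicious release, but it does stop the quieter failure mode where an unpinned `pip install` pulls whatever the registry is currently serving.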
The specific data stolen from Mercor has not been fully disclosed. Mercor's platform handles sensitive hiring data — resumes, assessments, compensation information, and background screening results — making the potential exposure significant. The company stated it is cooperating with law enforcement and notifying affected users.
A Structural Vulnerability in the AI Ecosystem
The LiteLLM incident is not an isolated case of poor security hygiene at one company. It reflects a structural vulnerability in how the AI ecosystem has been built. Over the past two years, an enormous quantity of open source AI tooling has been published, adopted at speed, and integrated into production systems — often with the same trust and velocity that characterized the early npm/PyPI ecosystem before supply chain attacks became a known threat vector.
AI infrastructure packages — LLM proxies, embedding libraries, agent frameworks, vector database clients — sit at a privileged position in application stacks. They handle API keys, process user data, and in some cases have network access to sensitive backend systems. A compromised AI infrastructure package is not merely a code-execution risk; it is often a credential-harvesting and data-exfiltration risk with wide reach.
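The credential exposure is easy to see at the trust boundary. In this toy wrapper, every secret the application uses flows through the library's process, and the call site cannot distinguish a faithful build from a tampered one. The function and parameter names are illustrative, not any real package's API:

```python
# Why a compromised proxy library is a credential risk: the API key
# passes through its code on every call. Illustrative sketch only.
import os

def call_model(model, messages, api_key=None):
    # The library, not the application, ends up holding the secret.
    key = api_key or os.environ.get("OPENAI_API_KEY", "")
    # A legitimate build forwards `key` in an Authorization header to
    # the provider. A tampered build could also copy it elsewhere, and
    # nothing at the call site would look different.
    return {"model": model, "key_present": bool(key)}
```

This is why the article's distinction matters: a compromised utility library is a code-execution problem, but a compromised model proxy sits directly on the path of keys and user data.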
The security community has been warning about this gap for months. The LiteLLM/Mercor incident is likely to accelerate the conversation about whether the AI tooling ecosystem needs the kind of package security infrastructure — code signing, dependency auditing, maintainer verification — that the broader software ecosystem has been building for years. The cost of moving fast without that infrastructure is now documented.