LiteLLM Cuts Ties With Delve After Compliance Vendor Was Compromised by Credential-Stealing Malware
LiteLLM, one of the most widely used open-source AI gateway libraries, has ended its partnership with Delve — a security compliance startup — after Delve suffered a credential-stealing malware attack while holding active security certifications. The incident highlights a critical irony: AI infrastructure is scaling faster than the security layer meant to protect it.

D.O.T.S AI Newsroom
AI News Desk
LiteLLM, the open-source AI gateway library used by thousands of teams to manage access to multiple AI model providers, has severed its partnership with Delve, a startup that provides security compliance services for AI development environments. The termination follows a damaging disclosure: Delve fell victim to credential-stealing malware while actively holding the security certifications it sells to clients as proof of compliance.
The incident is not a simple vendor failure story. It is a stress test of a rapidly assembled AI security supply chain that has not yet been proven at scale — and in this case, it failed at the worst possible point.
Why This Is Worse Than a Normal Breach
The particular irony of the Delve incident is structural. Delve's core product is security compliance certification — the documentation that enterprise AI teams use to demonstrate to customers, auditors, and regulators that their tooling meets established security standards. When an AI developer integrates Delve into their pipeline, they are explicitly trusting it to make their environment more secure, not less.
Credential-stealing malware targeting a security compliance vendor creates exactly the risk profile those certifications are supposed to prevent: attackers holding the API keys, authentication tokens, and integration credentials for every downstream system to which Delve had been granted access. For AI infrastructure, those credentials can include API keys for OpenAI, Anthropic, and other model providers, enabling unauthorized usage at potentially enormous cost as well as access to the data flowing through those APIs.
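The standard mitigation for this blast-radius problem is to never hand a vendor the upstream provider keys at all, but instead issue each integration its own revocable, budget-capped token. A minimal sketch of that pattern (this is an illustrative broker, not LiteLLM's or Delve's actual key implementation; all names and limits are hypothetical):

```python
import secrets

class KeyBroker:
    """Per-integration 'virtual keys': each third-party tool gets its own
    revocable token with a spend budget, so a compromised vendor exposes
    only its scoped credential, never the raw provider API keys."""

    def __init__(self):
        self._keys = {}  # token -> {"vendor": str, "budget_usd": float, "active": bool}

    def issue(self, vendor, budget_usd):
        token = secrets.token_urlsafe(32)
        self._keys[token] = {"vendor": vendor, "budget_usd": budget_usd, "active": True}
        return token

    def revoke_vendor(self, vendor):
        # Kill every key issued to one vendor, e.g. after a breach disclosure.
        for rec in self._keys.values():
            if rec["vendor"] == vendor:
                rec["active"] = False

    def authorize(self, token, cost_usd):
        # A request is allowed only on an active token with remaining budget.
        rec = self._keys.get(token)
        if not rec or not rec["active"] or cost_usd > rec["budget_usd"]:
            return False
        rec["budget_usd"] -= cost_usd
        return True
```

Severing a partnership then becomes a single `revoke_vendor` call rather than an emergency rotation of every upstream provider key.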
LiteLLM's Exposure and Response
LiteLLM's decision to publicly end the partnership rather than quietly deprioritize it is notable. The library has a large developer community and enterprise customer base that routes significant AI API traffic through its gateway. Transparency about a supply chain compromise affecting that infrastructure — even a third-party one — is a reasonable security response, but it also carries reputational cost. The willingness to absorb that cost suggests the LiteLLM team assessed the ongoing risk of the Delve association as higher than the cost of the disclosure.
The Supply Chain Problem for AI Tooling
The LiteLLM-Delve incident is an early data point in what is likely to become a recurring pattern. AI development tooling has proliferated at a speed that outpaces the security review processes that would normally gate production enterprise infrastructure. Developers building on AI foundations are integrating libraries, compliance tools, and gateways with the same velocity that characterized early cloud adoption — and encountering similar supply chain risks as a consequence.
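One of the security review gates that high-velocity integration tends to skip is artifact integrity checking: pinning a cryptographic digest for each third-party component and refusing anything that does not match. A minimal sketch of the idea (the artifact name is hypothetical, and the pinned digest here is simply the SHA-256 of an empty payload, chosen for illustration; in practice a lockfile such as pip's `--require-hashes` mode serves this role):

```python
import hashlib

# Hypothetical pinned digests for vendored third-party artifacts.
# This digest is the SHA-256 of empty bytes, used purely for illustration.
PINNED_SHA256 = {
    "delve-agent-1.4.2.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(name: str, payload: bytes) -> bool:
    """Accept an artifact only if its digest matches the pinned value."""
    expected = PINNED_SHA256.get(name)
    if expected is None:
        return False  # unknown artifact: reject rather than trust by default
    return hashlib.sha256(payload).hexdigest() == expected
```

A check like this would not have stopped Delve's own compromise, but it limits how far a tampered vendor artifact can propagate into downstream pipelines.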
The difference is that AI API credentials provide a more immediately monetizable attack surface than most cloud credentials. They can be used to generate content, execute agentic tasks, and extract data — all at the API owner's expense, billed invisibly until a usage anomaly triggers an alert.
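The usage-anomaly backstop described above can be as simple as a trailing-window spend check. A minimal sketch, where the seven-day window and three-sigma threshold are illustrative choices rather than any provider's actual alerting logic:

```python
from statistics import mean, stdev

def flag_spend_anomalies(daily_spend, window=7, threshold_sigma=3.0):
    """Return indices of days whose API spend exceeds the trailing window's
    mean by more than threshold_sigma standard deviations -- the kind of
    alert that is often the first sign of stolen AI API credentials."""
    alerts = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Guard against a zero-variance baseline so any jump still triggers.
        if daily_spend[i] > mu + threshold_sigma * max(sigma, 1e-9):
            alerts.append(i)
    return alerts
```

The weakness this sketch shares with real billing alerts is latency: the attacker runs up charges for the whole window between theft and the first flagged day, which is exactly why the article calls the spend "billed invisibly" until the alert fires.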