
LiteLLM Cuts Ties With Delve After AI Gateway Was Compromised by Credential-Stealing Malware

LiteLLM, one of the most widely used open-source AI gateway libraries, has ended its partnership with Delve — a security compliance startup — after Delve suffered a credential-stealing malware attack while holding active security certifications. The incident highlights a critical irony: AI infrastructure is scaling faster than the security layer meant to protect it.

D.O.T.S AI Newsroom

AI News Desk


LiteLLM, the open-source AI gateway library used by thousands of teams to manage access to multiple AI model providers, has severed its partnership with Delve, a startup that provides security compliance services for AI development environments. The termination follows a damaging disclosure: Delve fell victim to credential-stealing malware while actively holding the security certifications it sells to clients as proof of compliance.

The incident is not a simple vendor failure story. It is a stress test of a rapidly assembled AI security supply chain that has not yet been proven at scale — and in this case, it failed at the worst possible point.

Why This Is Worse Than a Normal Breach

The particular irony of the Delve incident is structural. Delve's core product is security compliance certification — the documentation that enterprise AI teams use to demonstrate to customers, auditors, and regulators that their tooling meets established security standards. When an AI developer integrates Delve into their pipeline, they are explicitly trusting it to make their environment more secure, not less.

Credential-stealing malware targeting a security compliance vendor creates exactly the risk profile those compliance certifications are supposed to prevent: attackers with access to API keys, authentication tokens, and integration credentials for the downstream systems Delve had been granted access to. In the case of AI infrastructure, those credentials can include API keys for OpenAI, Anthropic, and other model providers — enabling unauthorized usage at potentially enormous cost, as well as access to the data flowing through those APIs.
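This aggregation effect is what makes a gateway-adjacent compromise so costly. The sketch below is illustrative only, not LiteLLM's or Delve's actual code: a gateway-style process typically loads credentials for every provider it routes to from its environment, so malware with a single foothold on that host sees all of them at once.

```python
import os

# Hypothetical provider-to-env-var mapping for illustration; the variable
# names mirror common conventions but are assumptions, not a specific product.
PROVIDER_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "mistral": "MISTRAL_API_KEY",
}

def loaded_credentials(env=os.environ):
    """Return the provider keys visible to this process's environment."""
    return {
        provider: env[var]
        for provider, var in PROVIDER_KEY_VARS.items()
        if var in env
    }

# Credential-stealing malware running on the same host sees exactly this set:
exposed = loaded_credentials(
    {"OPENAI_API_KEY": "sk-...", "ANTHROPIC_API_KEY": "sk-ant-..."}
)
print(sorted(exposed))  # both providers exposed from a single foothold
```

The design point, not the code, is what matters: consolidating multi-provider access into one place is the gateway's entire value proposition, and it is also why that one place is such a high-value target.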

LiteLLM's Exposure and Response

LiteLLM's decision to publicly end the partnership rather than quietly deprioritize it is notable. The library has a large developer community and enterprise customer base that routes significant AI API traffic through its gateway. Transparency about a supply chain compromise affecting that infrastructure — even a third-party one — is a reasonable security response, but it also carries reputational cost. The willingness to absorb that cost suggests the LiteLLM team assessed the ongoing risk of the Delve association as higher than the cost of the disclosure.

The Supply Chain Problem for AI Tooling

The LiteLLM-Delve incident is an early data point in what is likely to become a recurring pattern. AI development tooling has proliferated at a speed that outpaces the security review processes that would normally gate production enterprise infrastructure. Developers building on AI foundations are integrating libraries, compliance tools, and gateways with the same velocity that characterized early cloud adoption — and encountering similar supply chain risks as a consequence.

The difference is that AI API credentials provide a more immediately monetizable attack surface than most cloud credentials. They can be used to generate content, execute agentic tasks, and extract data — all at the API owner's expense, billed invisibly until a usage anomaly triggers an alert.
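That "usage anomaly triggers an alert" step is often the only tripwire between a stolen key and a large bill. A minimal sketch of such a check, with illustrative thresholds rather than any vendor's actual detection logic, flags days whose spend is a statistical outlier against trailing history:

```python
from statistics import mean, stdev

def usage_anomalies(daily_spend, threshold_sigma=3.0, min_history=7):
    """Flag days whose API spend is an outlier vs. the trailing baseline.

    daily_spend: per-day totals in dollars, oldest first.
    Returns indices of anomalous days. Thresholds are illustrative.
    """
    flagged = []
    for i in range(min_history, len(daily_spend)):
        history = daily_spend[:i]
        mu, sigma = mean(history), stdev(history)
        # max() guards against zero variance on a perfectly flat baseline.
        if daily_spend[i] > mu + threshold_sigma * max(sigma, 1e-9):
            flagged.append(i)
    return flagged

# A stolen key driving bulk generation shows up as a sudden spend spike:
spend = [12.0, 11.5, 12.3, 11.8, 12.1, 11.9, 12.4, 310.0]
print(usage_anomalies(spend))  # → [7]
```

The weakness of any such after-the-fact check is latency: by the time a daily total looks anomalous, the attacker may have had hours of uninterrupted API access.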

Related Stories

Astropad's Workbench Turns a Mac Mini Into an AI Agent Server You Control From Your Phone

Astropad, the company behind the Luna Display hardware that lets iPads function as Mac monitors, has built a new product for a new era: Workbench lets users remotely monitor and control AI agents running on Mac Minis from an iPhone or iPad. It is remote desktop software reimagined not for IT support but for the AI agent operator — the person who needs to check on autonomous workflows without being at their desk.

D.O.T.S AI Newsroom
Microsoft's Bing Team Open-Sources Harrier, a Multilingual Embedding Model That Tops the MTEB v2 Benchmark

Microsoft's Bing search team has released Harrier as an open-source embedding model, and it tops the multilingual MTEB v2 benchmark while supporting over 100 languages. The release is significant not just for the benchmark numbers but for the source: a search team that has spent decades optimizing retrieval systems has built an embedding model for the exact use case — semantic search and retrieval — that underpins most production RAG applications.

D.O.T.S AI Newsroom
Stability AI Pivots to Enterprise With Brand Studio — a Platform for Brand-Consistent AI Image Generation

Stability AI, the company that made open-source image generation mainstream with Stable Diffusion, is repositioning for enterprise with Brand Studio. The platform lets creative teams train brand-specific image models, automate visual production workflows, and route tasks to the best-suited AI model — a commercial play from a company that built its name on open access.

D.O.T.S AI Newsroom