Policy

Hackers Are Redistributing the Leaked Claude Code Repository — With Bonus Malware Attached

Wired reports that threat actors are repackaging the leaked Claude Code source repository and uploading it to file-sharing platforms bundled with information-stealing malware. The pattern is a textbook social engineering play: developers curious about the leaked AI tool are downloading what looks like the genuine repository and executing malware in the process.

D.O.T.S AI Newsroom

AI News Desk

2 min read
Wired reports that shortly after Claude Code's source code was leaked online in early April, hackers moved quickly to weaponize the leak itself. Malicious actors are uploading repackaged versions of the leaked repository to code-sharing and file-distribution platforms, with information-stealing malware embedded alongside the genuine files. The tactic exploits a predictable behavior: developers curious about the leaked AI coding assistant search for the repository, find what appears to be a legitimate copy, and download and run it without the scrutiny they would apply to an unknown executable.

The Attack Pattern

This is a well-established social engineering playbook applied to a high-interest target. High-profile software leaks reliably generate a wave of curious downloads from technical users, exactly the audience that tends to hold elevated access privileges. Information-stealing malware on a developer's machine is disproportionately valuable compared to a general consumer's: the API keys, SSH credentials, cloud access tokens, and code repositories reachable from a development environment represent a substantial attack surface. The same pattern has played out with leaked game source code, pirated software cracks, and now AI tool leaks.

The Broader Supply Chain Warning

The Claude Code malware redistribution episode is a data point in a larger pattern the security community has been tracking: AI-related software is becoming a reliable social engineering vector because developer curiosity about AI tooling is high, verification habits are inconsistent, and the perceived legitimacy of code from a major AI lab can override normal skepticism. Organizations should ensure developers understand that any leaked or unofficial version of an AI tool, regardless of the claimed source, must be treated as an untrusted binary until verified through official channels.
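One concrete form that verification can take is comparing a downloaded archive's cryptographic hash against a digest published through an official channel before anything in it is executed. A minimal sketch in Python (the file path and expected digest in the usage example are hypothetical placeholders, not values from any real release):

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks to bound memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_digest: str) -> bool:
    """Check a file against a digest obtained from an official, trusted channel.

    hmac.compare_digest avoids timing side channels; overkill for a local
    file check, but a good habit for any digest comparison.
    """
    return hmac.compare_digest(sha256_of(path), expected_digest.lower())
```

Usage would look like `verify_download("claude-code.tar.gz", official_digest)`, refusing to unpack or run anything if it returns `False`. This catches tampered repackages, though it only helps when the reference digest itself comes from a channel the attacker does not control.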


Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom