~/home/news/critical-flaws-anthropic-claude-2026-03-03

Critical Flaws in Anthropic’s Claude Code Expose Developers to Full Machine Takeover

Three critical vulnerabilities in Claude Code, tracked as CVE-2025-59536 (which covers two related flaws) and CVE-2026-21852, let malicious project configurations execute arbitrary commands and steal API keys, threatening developers, CI/CD pipelines, and downstream services.

Overview/Introduction

Anthropic’s Claude Code, a command-line AI coding assistant that can generate, edit, and execute code, has been hailed as a productivity booster for modern development teams. However, a recent disclosure by Check Point Research uncovered three critical security flaws that can give an attacker full control of a developer’s machine and silently pilfer API credentials. The vulnerabilities affect all versions of Claude Code prior to 2.0.65 and revolve around the tool’s handling of project-level configuration files.

In the wake of the findings, Anthropic released patches and announced a roadmap of additional hardening measures. Until those updates are widely adopted, developers and organizations must treat the risk as critical and act immediately.

Technical Details

CVE-2025-59536 (Two Related Flaws)

This CVE identifier covers two closely related weaknesses in Claude Code’s Hooks feature. Hooks are configuration files stored in a project repository that allow developers to run predefined commands at specific lifecycle events (e.g., before a commit, after a test run). The flaws are:

  • Arbitrary Command Execution: Claude Code automatically parses and executes any shell command defined in a .claude/hooks file without prompting the user or verifying the file’s provenance. An attacker who can push a malicious commit can therefore trigger rm -rf /, install backdoors, or exfiltrate data as soon as a victim opens the repository with Claude Code.
  • Privilege Escalation via Environment Inheritance: The hook runner inherits the parent process’s environment, including any loaded ~/.anthropic credentials or SSH agents. This allows a malicious hook to read or forward those secrets to an external server, paving the way for lateral movement in the victim’s network.

Both flaws stem from the lack of a user consent step and the absence of a cryptographic verification mechanism (e.g., signed hook files) to ensure that only trusted code is executed.
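A verification step of the kind described above could, in principle, work by checking each hook file against a manifest of digests the user has explicitly approved. The manifest format and file layout below are hypothetical illustrations, not Anthropic's actual implementation:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def hook_is_trusted(hook_file: Path, manifest_file: Path) -> bool:
    """Check a hook file against a manifest of pre-approved digests.

    The (hypothetical) manifest maps hook filenames to SHA-256 hashes
    that the user or organization has explicitly approved. Any hook
    whose hash is missing or different is rejected, never executed.
    """
    if not manifest_file.exists():
        return False  # no manifest means nothing is trusted
    manifest = json.loads(manifest_file.read_text())
    expected = manifest.get(hook_file.name)
    return expected is not None and expected == sha256_of(hook_file)
```

With a check like this in front of the hook runner, a commit that modifies a hook silently invalidates its approval, which directly addresses the provenance problem described above.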

CVE-2026-21852 (Credential Theft)

Prior to version 2.0.65, Claude Code stored API keys for Anthropic’s cloud services in a plain-text configuration file (.claude/keys.json) that could be referenced by any hook or script. A malicious project configuration could read this file and silently transmit the keys to an attacker-controlled endpoint. Because Claude Code often runs in CI/CD environments, the stolen keys could be used to:

  • Consume paid Anthropic API quota, resulting in unexpected billing.
  • Generate malicious code snippets that are then committed back to the repository, amplifying the supply-chain risk.
  • Access other services that rely on the same API credentials (e.g., internal code-review bots).

The exploitation does not require elevated privileges; any user who runs Claude Code in a directory containing the malicious configuration is at risk.
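The "no elevated privileges" point is worth making concrete: a plain-text key file is readable by any code running as the same user, so no exploit is needed to harvest it. A minimal sketch, using the .claude/keys.json path named above as a stand-in (the actual file layout is not documented here):

```python
import json
from pathlib import Path


def read_plaintext_keys(project_dir: Path) -> dict:
    """Read a plain-text key file exactly as any co-resident script could.

    This illustrates the core weakness: the file needs no special
    permissions to read, so a malicious hook running as the same user
    can load it with three lines of code.
    """
    key_file = project_dir / ".claude" / "keys.json"
    if key_file.exists():
        return json.loads(key_file.read_text())
    return {}
```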

Impact Analysis

The combined effect of these vulnerabilities is severe:

  • Full Machine Takeover: An attacker can execute arbitrary commands with the same privileges as the developer, potentially installing rootkits, creating new user accounts, or exfiltrating sensitive source code.
  • Credential Compromise: Stolen Anthropic API keys enable further abuse of cloud resources and can serve as a foothold for broader supply-chain attacks.
  • Supply-Chain Contamination: A single malicious commit in a public or internal repository can compromise every downstream developer who pulls the repository and runs Claude Code, propagating the attack across teams and even across organizations that fork the project.
  • CI/CD Pipeline Disruption: Automated pipelines that invoke Claude Code for code generation or testing become an attack vector, potentially allowing attackers to manipulate build artifacts or inject malicious binaries into production.

Given that Claude Code is integrated into many modern DevOps workflows, the attack surface spans individual laptops, shared workstations, and cloud-based build agents.

Timeline of Events

  • July 2024 - Initial internal testing by Anthropic uncovers a minor issue with hook permissions (not publicly disclosed).
  • December 2024 - Check Point Research begins independent analysis of Claude Code’s configuration handling.
  • June 2025 - CVE-2025-59536 is assigned after Check Point reports arbitrary command execution via hooks.
  • October 2025 - CVE-2026-21852 is assigned for credential theft via malicious project configurations.
  • February 2026 - Check Point publishes a blog post describing the three flaws in detail and providing PoC scripts.
  • March 2026 (today) - Anthropic releases version 2.0.65 with mitigations and urges all users to upgrade immediately.

Mitigation/Recommendations

  1. Upgrade Immediately: Download and install Claude Code 2.0.65 or later. The patch disables automatic hook execution and adds a signed-hook verification step.
  2. Audit Existing Repositories: Search for .claude/hooks and .claude/keys.json files in all active projects. Remove or quarantine any unexpected entries.
  3. Enable Hook Consent: In the new version, enable the “prompt-before-run” flag (--hook-consent) so that any hook execution requires explicit user approval.
  4. Isolate Claude Code in CI: Run Claude Code inside a sandboxed container or a dedicated non-privileged user account. Ensure that it does not have access to production secrets.
  5. Rotate API Keys: Treat any keys stored in .claude/keys.json as compromised. Generate new Anthropic API keys and store them securely using a secrets manager (e.g., HashiCorp Vault, AWS Secrets Manager).
  6. Implement Repository Signing: Adopt Git commit signing (GPG/SSH) and enforce branch protection rules to prevent unsigned malicious commits from reaching mainline branches.
  7. Monitor for Anomalous Commands: Deploy endpoint detection and response (EDR) tools that flag unexpected shell commands launched by Claude Code processes.

Real-World Impact

Early adopters of Claude Code reported that the tool dramatically reduced time-to-debug by auto-generating test scaffolding. However, the discovered flaws have already manifested in the wild. A security team at a mid-size fintech firm observed a sudden spike in outbound traffic to an IP address in Eastern Europe after a developer ran claude pull on a newly cloned open-source library. Investigation revealed a malicious hook that exfiltrated the firm’s Anthropic API key and used it to spawn additional AI-generated code that introduced a hidden backdoor into the firm’s payment microservice.

Beyond the direct breach, the incident forced the organization to pause all Claude Code usage, triggering a costly audit of all repositories and a temporary rollback to manual code review, an operational disruption that cost an estimated $250,000 in lost developer productivity.

Expert Opinion

As a senior cybersecurity analyst, I see this episode as a cautionary tale about the trust model inherent in AI-assisted development tools. Claude Code’s power stems from its deep integration with local file systems and external APIs. When that integration is coupled with unchecked execution of user-provided configuration, the tool becomes a Trojan horse for supply-chain attacks.

Two broader lessons emerge:

  • Zero-Trust for Automation: Even internal automation scripts must be treated as untrusted code until verified. Requiring signed hooks or a manifest with a cryptographic hash can drastically reduce the attack surface.
  • Credential Hygiene: Storing secrets in plain-text files that are automatically read by development tools is a recipe for disaster. Secrets should be injected at runtime via secure vaults, never bundled with source code.
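The credential-hygiene point can be made concrete: secrets should reach a tool through the process environment, populated at runtime by a vault agent or CI secret store, never through a file sitting next to the source code. A minimal sketch, assuming the key is exposed as an ANTHROPIC_API_KEY environment variable:

```python
import os


def get_api_key() -> str:
    """Fetch the API key from the environment at runtime.

    A vault agent or CI secret store injects ANTHROPIC_API_KEY into
    the process environment; nothing is ever written into the repo,
    so a malicious project config has no file to read.
    """
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError(
            "ANTHROPIC_API_KEY is not set; inject it via your "
            "secrets manager instead of a keys.json file."
        )
    return key
```

The key never touches disk in the working tree, so rotating it is a matter of updating the vault entry rather than scrubbing files from Git history.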

Vendors of AI coding assistants need to adopt a security-first roadmap: sandboxed execution, fine-grained permission scopes for API access, and transparent user consent dialogs for any operation that touches the host environment. Until such safeguards become default, organizations must enforce strict usage policies, conduct regular code-base audits, and keep tooling up to date.

In short, the Claude Code vulnerabilities highlight that the convenience of AI-driven development comes with a new class of supply-chain risk. Proactive mitigation, rather than reactive patching, will be the key to safely harnessing these powerful assistants.