Feb 26, 2026 · 5 min read
That GitHub Repo Tricked Your AI Coding Assistant Into Leaking Your API Keys
Check Point researchers disclosed three vulnerabilities in Anthropic's Claude Code that allowed attackers to execute arbitrary commands and exfiltrate API credentials simply by tricking developers into opening a malicious repository.
Three Flaws, One Attack Surface: Your Config Files
Security researchers Aviv Donenfeld and Oded Vanunu at Check Point disclosed three separate vulnerabilities in Claude Code, Anthropic's command line coding assistant. Each exploited the same fundamental weakness: configuration files checked into a repository can silently control what the tool does before the developer even sees a trust prompt.
The first flaw, rated CVSS 8.7, allowed an attacker to embed shell commands inside .claude/settings.json as project hooks. When a developer opened the repository with Claude Code, those hooks executed automatically without any separate confirmation. A reverse shell, a credential harvester, or any arbitrary payload could fire the moment the project loaded.
The second flaw, tracked as CVE-2025-59536, also CVSS 8.7, exploited Model Context Protocol server definitions in the same settings file. By setting enableAllProjectMcpServers to true, an attacker could force MCP initialization commands to execute before the trust dialog appeared, bypassing the improved warning system Anthropic had added after the first vulnerability was reported.
The API Key Heist
The third vulnerability, CVE-2026-21852 (CVSS 5.3), was arguably the most alarming. An attacker could set the ANTHROPIC_BASE_URL environment variable inside the repository's settings file, redirecting all Claude Code API traffic through an attacker controlled server. The tool would send requests containing the developer's Anthropic API key in plaintext authorization headers before the trust dialog even appeared on screen.
"Every request included the authorization header, our full Anthropic API key, completely exposed in plaintext," the Check Point researchers wrote. A stolen API key grants workspace level access, meaning files uploaded by other developers become accessible. Attackers could regenerate code artifacts, access shared project data, and run up significant API charges on the victim's account.
How the Attack Spreads
These vulnerabilities turn ordinary GitHub repositories into attack vectors. The malicious configuration files sit inside directories like .claude/ that developers rarely inspect. An attacker could distribute them through:
- Pull requests that appear to be legitimate code contributions but include hidden configuration changes
- Honeypot repositories offering useful code samples, starter templates, or tutorials
- Compromised internal enterprise repositories where trust is assumed
Configuration files receive minimal security scrutiny compared to executable code during code reviews. Most developers glance at source code changes but skip over JSON configuration files, which makes them ideal for hiding attack payloads.
All Three Flaws Are Patched
Anthropic fixed all three vulnerabilities before Check Point published its findings. The first was patched in Claude Code version 1.0.87 (September 2025), the second in version 1.0.111 (October 2025), and the third in version 2.0.65 (January 2026). The fixes include enhanced trust dialogs that explicitly warn about untrusted configurations, deferred MCP server execution until after user approval, and network request deferral to prevent pre consent API interception.
If you are running Claude Code, update to the latest version immediately. If you have opened any untrusted repositories in the past, rotate your Anthropic API keys as a precaution.
The Bigger Problem: AI Tools Trust Too Much
These vulnerabilities illustrate a systemic issue with AI powered development tools. They are designed to be helpful, which means they execute commands, connect to services, and process files with minimal friction. That same design makes them powerful attack surfaces when pointed at untrusted content.
Claude Code is not the only tool affected. Earlier this month, researchers disclosed RoguePilot, a flaw in GitHub Codespaces that allowed attackers to inject malicious instructions via issues and exploit Copilot's token access. The pattern is the same: AI assistants that trust project level configuration files can be weaponized by anyone who controls those files.
For developers, the takeaway is straightforward. Inspect .claude/, .vscode/, and similar hidden directories before opening any unfamiliar repository. Review configuration file changes during code reviews with the same rigor you apply to source code. And keep your AI development tools updated, because this is the beginning of a new category of supply chain attacks, not the end.