
May 13, 2026 · 12 min read

84 Malicious TanStack Packages Just Got Published Through TanStack's Own Release Pipeline—Using a Stolen OIDC Token Pulled From the Runner's Process Memory

The attacker did not need to steal a maintainer password. They poisoned a GitHub Actions cache from a fork, waited six hours for the legitimate release workflow to run, then scraped the OIDC publish token out of the runner's RAM before npm ever saw a problem.

On the evening of May 11, 2026, between 19:20 and 19:26 UTC, somebody pushed 84 malicious package versions to npm under the @tanstack scope. The packages installed credential stealing payloads on every machine that pulled them. Total weekly downloads across the compromised packages: tens of millions.

The unusual part of the story is what TanStack was doing right when the attack succeeded. The maintainer accounts had 2FA. The release pipeline used npm's OIDC trusted publishing — no long lived publish tokens stored anywhere. Every published version came with a signed SLSA Build Level 3 provenance attestation. By npm's own checklist of supply chain hardening best practices, TanStack was already in the top tier of ecosystem maintainers.

The attacker — operating under the GitHub handle zblgg (and a sibling account voicproducoes created seven weeks earlier) — used three chained vulnerabilities to publish through TanStack's legitimate release pipeline without ever stealing a credential at rest. Each step was individually known. The combination is the new pattern.

[Image: A developer's monitor showing CI/CD build pipeline output with subtle warning indicators at the edges, suggesting a silent compromise of the build process]

Step One: The Pwn Request Pattern

TanStack's repository used a GitHub Actions workflow called bundle-size.yml that ran on every pull request to compute and comment the bundle size delta. The workflow was configured with the pull_request_target trigger, a setting GitHub introduced so that workflows could safely access secrets when running on pull requests from forks. The trade-off baked into pull_request_target is that the workflow runs in the context of the base repository — with its secrets and its write permissions — but is supposed to check out the base branch's code, not the fork's code.

The bundle-size workflow did the wrong thing: it explicitly checked out refs/pull/{pr_number}/merge, which is the merged fork code. That single line gave any fork PR the ability to execute attacker controlled code inside a privileged base repository context. Security researchers have called this exact pattern the "Pwn Request," and it has been documented since 2021. It is still surprisingly common, because the safe pattern (run a separate, lower privileged workflow on the fork code) requires more setup than dropping in pull_request_target and hoping nobody notices.
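The risky combination is mechanical enough to detect automatically. Here is a minimal audit sketch in Python; the regexes and file layout are assumptions modeled on common workflow files, not TanStack's actual configuration:

```python
import re
from pathlib import Path

# Heuristic audit for the "Pwn Request" anti-pattern: a workflow that runs on
# pull_request_target (privileged base-repo context) while also checking out
# the fork's code. Patterns are illustrative, not exhaustive.
TRIGGER = re.compile(r"^\s*(on:.*pull_request_target|pull_request_target\s*:)", re.M)
FORK_CHECKOUT = re.compile(
    r"refs/pull/.*?/merge"                             # explicit merge-ref checkout
    r"|github\.event\.pull_request\.head\.(sha|ref)"   # fork head checkout
)

def is_pwn_request_risky(workflow_text: str) -> bool:
    """True if the workflow both uses pull_request_target and checks out PR code."""
    return bool(TRIGGER.search(workflow_text)) and bool(FORK_CHECKOUT.search(workflow_text))

def audit_workflows(repo_root: str) -> list[str]:
    """Return paths of workflow files matching the risky combination."""
    wf_dir = Path(repo_root, ".github", "workflows")
    return [
        str(p)
        for pattern in ("*.yml", "*.yaml")
        for p in sorted(wf_dir.glob(pattern))
        if is_pwn_request_risky(p.read_text())
    ]
```

A checker like this catches the textbook case; a real audit also needs to follow reusable workflows and composite actions, which can hide the checkout a level deeper.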

The attacker opened pull request #7378 titled "WIP: simplify history build" from the fork voicproducoes/router. The PR triggered the bundle-size workflow. The workflow checked out fork code. The fork code ran with base repository permissions. Step one done.

Step Two: GitHub Actions Cache Poisoning

The fork's malicious code did not try to steal credentials directly. It wrote a file into the pnpm store directory that GitHub Actions' cache action would persist for later workflow runs to consume. The file — vite_setup.mjs, about 30,000 lines of obfuscated JavaScript — was crafted so it would be restored by any subsequent run of TanStack's release workflow.

The crucial subtlety is how GitHub Actions Cache keys work. The cache is keyed on a hash of dependency lock files plus the OS. Two workflows in the same repository, even running from different branches, share cache scope as long as the cache key matches. The attacker reverse engineered TanStack's release.yml well enough to predict the exact cache key the release workflow would compute on its next run: Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11.
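The prediction step is straightforward to reproduce. Assuming a common actions/cache key expression of the form `${{ runner.os }}-pnpm-store-${{ hashFiles('pnpm-lock.yaml') }}` (an assumption; TanStack's exact expression is not shown here), anyone who can read the repository's public lock file can compute the key the release workflow will request:

```python
import hashlib
import platform

def predict_cache_key(lockfile_bytes: bytes, prefix: str = "pnpm-store") -> str:
    """Reconstruct a cache key of the shape '<OS>-<prefix>-<digest of lock file>'.
    GitHub's hashFiles() actually hashes per-file digests before hashing the
    concatenation, so this single-file sha256 is a simplification -- the point
    is that the key is a pure function of public inputs, hence predictable."""
    digest = hashlib.sha256(lockfile_bytes).hexdigest()
    return f"{platform.system()}-{prefix}-{digest}"
```

Because every input is public, "which cache entry will the release workflow restore next" is not a secret, and the attacker only needed write access to that entry once.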

By executing inside the privileged base repository context (from step one) and writing into the pnpm store before the post step's actions/cache@v5 uploaded the cache, the attacker arranged for the poisoned vite_setup.mjs to be saved into the base repository's cache scope under the key the release workflow would later request. Cache poisoning completed at 11:29 UTC on May 11. The attacker then waited.

Step Three: OIDC Token Extraction From Runner Memory

Roughly six hours later — at 19:15 UTC — a TanStack maintainer pushed a routine commit to main. The push triggered release.yml. The release workflow restored the cache, which silently brought in the poisoned vite_setup.mjs. The malicious code ran inside the release workflow, with full access to whatever the release workflow had access to.

What the release workflow had access to was an OIDC token. npm's trusted publishing is fundamentally a deal: instead of storing a long lived publish token in repository secrets, the npm registry agrees to accept a short lived OIDC token presented by a specific GitHub Actions workflow as proof of identity. The token is issued just in time, scoped to the workflow run, and exchanged for a per package publish capability against npm's oidc/token/exchange endpoint.

The token lives in the runner process. Specifically, it lives in the memory of the Runner.Worker process, in a JSON blob shaped like {"value": "...", "isSecret": true}. The malicious payload included a Python script that opened /proc/{pid}/mem, scanned for that exact pattern, and extracted every secret the runner had configured — including masked ones and including the OIDC token itself. Once the payload had the token, it bypassed the workflow's own Publish step, called npm's OIDC exchange endpoint directly, and used the resulting per package publish capabilities to ship 42 malicious package versions in the first six minutes. A second triggering push at 19:26 UTC produced the second wave of 42 versions.
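The extraction step really is as small as the article claims. A simplified reconstruction, operating on a byte buffer standing in for the /proc/{pid}/mem read (the real payload attached to the Runner.Worker process and walked its memory maps; this sketch shows only the pattern scan):

```python
import re

# Scan a buffer of process memory for the runner's secret JSON blobs of the
# shape {"value": "...", "isSecret": true}. The buffer is supplied directly
# here; the actual payload read it out of /proc/<pid>/mem of Runner.Worker.
SECRET_BLOB = re.compile(rb'\{"value":\s*"(?P<value>[^"]+)",\s*"isSecret":\s*true\}')

def scan_for_secrets(memory: bytes) -> list[str]:
    """Return every secret value embedded in the buffer, masked or not."""
    return [m.group("value").decode() for m in SECRET_BLOB.finditer(memory)]
```

Nothing here requires elevated kernel features: on a default Linux runner, a process can read the memory of another process it is permitted to ptrace, and the workflow's own child processes qualify.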

The npm registry's provenance attestation system signed every one of those publishes. The signature is genuine. The provenance trail correctly attests that the packages were built and published by TanStack's release workflow, in TanStack's GitHub Actions environment, from a commit on TanStack's main branch. All of that is true. The malware came from the cache, not from the source tree, and provenance does not attest cache contents.

What the Payload Actually Did

Once installed on a developer machine or in a CI environment, the 2.3 MB router_init.js file (delivered via an optionalDependencies entry that pointed at the attacker's fork commit) ran a comprehensive credential exfiltration sweep. The list of targets is the closest thing the npm ecosystem has had to a "what's in a developer's keyring" census.

  • Cloud provider credentials. AWS IMDSv2 endpoint on EC2 and ECS/Fargate instances. GCP service account key files. Azure access tokens. Kubernetes kubeconfig and in cluster service tokens. HashiCorp Vault tokens queried against the local 127.0.0.1:8200 agent.
  • Developer credentials. SSH private keys for RSA, Ed25519, and ECDSA. The .npmrc and .pypirc contents. Docker login files. .netrc entries. Terraform Cloud credentials.
  • Cryptocurrency wallets. .bitcoin and .ethereum keystores, MetaMask vault files where present.
  • AI assistant configurations. Claude Code settings.json, the Kiro MCP server configuration files, and any other LLM API key files in standard paths.
  • Shell history. Plain text grep for credential patterns inside .bash_history, .zsh_history, and PowerShell command history files.
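A defensive inversion of that census is a useful audit: check which of the targeted files exist on your own machine or CI image. The paths below are common defaults chosen for illustration, not a reproduction of the payload's exact target list:

```python
from pathlib import Path

# Credential files in the same categories the payload harvested. Paths are
# common defaults; extend for your own environment.
TARGET_PATHS = [
    "~/.ssh/id_rsa", "~/.ssh/id_ed25519", "~/.ssh/id_ecdsa",
    "~/.npmrc", "~/.pypirc", "~/.netrc",
    "~/.docker/config.json",
    "~/.kube/config",
    "~/.aws/credentials",
    "~/.terraform.d/credentials.tfrc.json",
    "~/.claude/settings.json",
    "~/.bash_history", "~/.zsh_history",
]

def credential_exposure(paths: list[str] = TARGET_PATHS) -> list[str]:
    """Return the subset of target paths that exist (and so would be harvested)."""
    return [p for p in paths if Path(p).expanduser().is_file()]
```

Anything this returns on a CI runner is a candidate for moving into a secrets manager or scoping down before the next incident, not after.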

Exfiltration ran over the Session/Oxen messenger network — filev2.getsession.org and the seed{1,2,3}.getsession.org nodes — which is end to end encrypted and onion routed across its own decentralized node network. That choice prevented IP based blocking by either GitHub or npm, since blocking Session would have collateral damage on legitimate users of an encrypted messaging service.

The persistence layer is the part that earns the campaign the "worm" label. The payload installed hooks in Claude Code's SessionStart event, in VS Code's folderOpen task list, and in OS level services (a LaunchAgent on macOS, a systemd unit on Linux). The same campaign hit SAP packages a week earlier, in the mini-shai-hulud worm wave observed on May 4, and the persistence mechanism is identical.

Why It Took 20 Minutes to Detect

External detection landed within 20 minutes of the first publish. Internal detection, meaning the TanStack maintainers learning that their own packages had been compromised, did not happen until Ashish Kurmi of StepSecurity notified them at 19:50 UTC.

Kurmi's AI Package Analyst flagged three statistical anomalies on the new versions: tarball size was 3.7x normal (124 files versus the usual 101), an unexpected router_init.js appeared at the package root outside the build process, and an optionalDependencies entry pointed at a fork commit rather than a registered npm package. The detection rules are not complex. They are the kind of checks ecosystem level monitoring has been able to run for years; they were not running on TanStack.
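All three checks fit in a few lines. A sketch, with thresholds chosen for illustration rather than copied from StepSecurity's actual rules:

```python
def publish_anomalies(files: list[str], manifest: dict, baseline_file_count: int) -> list[str]:
    """Flag the three anomaly classes described above. Thresholds and the
    known-root-file list are illustrative assumptions."""
    flags = []
    # 1. Tarball suddenly much larger than the package's historical baseline.
    if baseline_file_count and len(files) > 1.2 * baseline_file_count:
        flags.append(f"file count jumped: {len(files)} vs baseline {baseline_file_count}")
    # 2. A script appearing at the package root, outside the build output.
    known_root = {"package.json", "README.md", "LICENSE"}
    for f in files:
        if "/" not in f and f.endswith((".js", ".mjs")) and f not in known_root:
            flags.append(f"unexpected root-level script: {f}")
    # 3. An optionalDependencies entry resolving outside the npm registry.
    for name, spec in manifest.get("optionalDependencies", {}).items():
        if spec.startswith(("git+", "git:", "http", "github:")) or "#" in spec:
            flags.append(f"optionalDependency {name} points outside the registry: {spec}")
    return flags
```

None of this needs machine learning; it needs a baseline per package and a diff on every publish, which is exactly what the ecosystem-level monitors run.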

The postmortem TanStack published the next day is unusually candid about the gap. The repository had no internal publish monitoring; the team learned of the compromise from third parties. The pull_request_target usage had not been audited despite well documented risks. Action versions were pinned to mutable refs, tags like @v6.0.2 and branches like @main, rather than to commit SHAs, which created a standing supply chain exposure unrelated to the immediate attack. None of those are exotic configurations; they are how most actively maintained open source repositories operate on GitHub.
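The ref-pinning gap is also trivially scannable: a full 40-character hex ref is an immutable commit SHA, while anything else (a tag like @v6.0.2, a branch like @main) can be repointed. A minimal checker:

```python
import re

# A 'uses:' ref is only immutable if it is a full 40-hex-char commit SHA;
# tags and branch names can be moved after the fact. Heuristic sketch.
USES_LINE = re.compile(r"uses:\s*([\w./-]+)@([\w./-]+)")
SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return 'action@ref' strings whose ref is not a full commit SHA."""
    return [
        f"{action}@{ref}"
        for action, ref in USES_LINE.findall(workflow_text)
        if not SHA.match(ref)
    ]
```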

The OIDC Trust Boundary, Reconsidered

The headline takeaway from this incident, for npm maintainers and for the npm registry team, is that OIDC trusted publishing does not solve the supply chain problem; it relocates it.

The whole pitch of OIDC trusted publishing is that long lived tokens are bad because they can be stolen and reused. Short lived OIDC tokens issued just in time to the workflow run, the argument goes, eliminate that risk. The TanStack attack demonstrates the architectural flaw in that pitch: the token lives, however briefly, in the memory of a process that runs untrusted code as a routine part of its operating cycle. Reading process memory is not exotic; /proc/{pid}/mem is a documented Linux kernel feature, and the payload's extraction script is under 50 lines of Python.

As long as the runner accepts code from sources that are not the maintainer's own commits, the OIDC token is reachable by whoever can put bytes on the runner. That includes anything that flows through the cache, anything pulled in by an unpinned third party action, and anything fork PR code can write while running under pull_request_target. The npm registry can require provenance, and provenance can be signed by OIDC, and OIDC can be hijacked by reading memory. The trust chain has a hole at the runtime layer.

Several practical mitigations follow:

  • Forbid pull_request_target in repositories that handle publish credentials. Use a separate, lower privileged workflow for any analysis that needs fork code. The split is more work, but the alternative is the TanStack outcome.
  • Pin all third party GitHub Actions to a commit SHA. Floating references like @v6 or @main create a transitive supply chain. Dependabot's pin to SHA feature handles this for actively maintained repositories.
  • Add a repository_owner guard on every workflow that has access to secrets. A one line if: github.repository_owner == 'tanstack' blocks fork executions of the same workflow file under forked names.
  • Treat the GitHub Actions cache as untrusted input. Hash and verify any artifact restored from cache before consuming it in a release workflow. The cache is a network attached storage layer with weaker access controls than the source tree.
  • Subscribe to ecosystem level publish monitoring. Socket, StepSecurity, Snyk, and Aikido all run real time anomaly detection across npm. TanStack's own admission was that they would have learned of the compromise hours later without external monitoring.
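The cache-verification mitigation can be made concrete. A sketch, assuming the expected digest is recorded somewhere the cache itself cannot overwrite, such as the source tree or a prior job output:

```python
import hashlib
from pathlib import Path

def verify_restored(path: str, expected_sha256: str) -> bytes:
    """Read an artifact restored from the Actions cache and refuse to consume
    it unless its SHA-256 matches a digest recorded at cache-save time. The
    digest must live outside the cache (e.g. committed to the source tree),
    or the same poisoning attack simply overwrites both."""
    data = Path(path).read_bytes()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"cache artifact {path} failed verification: {actual}")
    return data
```

Had the release workflow gated the restored pnpm store behind a check like this, the poisoned vite_setup.mjs would have failed verification instead of executing.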

Why Email Sending Stacks Should Care

A lot of the compromised packages have nothing direct to do with email. @tanstack/router and @tanstack/query are front end libraries. But the worm propagates by stealing credentials from one CI/CD pipeline and then enumerating every package that maintainer publishes, infecting them all. That self propagation means a single compromised maintainer can poison the dependency graph of an unrelated downstream system within hours.

Email sending stacks tend to live one or two npm dependency steps away from the kinds of packages this worm has been ripping through. Nodemailer pulls in transitive dependencies. Marketing automation services run npm based microservices that touch SES, SendGrid, or Mailgun credentials. The credential harvesting payload specifically looks for .env files, which on email sending workloads almost always contain provider API keys, DKIM signing keys, and bounce webhook secrets. The Adobe SES credential leak campaign earlier this month was already showing how fast attackers monetize stolen email infrastructure credentials; this worm shortens the discovery time from "search GitHub for leaked keys" to "wait for any of 170 packages I've already poisoned to be installed in someone's CI."

If you operate email infrastructure that depends on npm packages — directly or transitively — the next 30 days are the right window to audit which @tanstack, @mistralai, @uipath, and @opensearch-project packages your image manifest pulls, and which versions. The 84 malicious TanStack versions have been deprecated, but the credentials they exfiltrated during their brief publish window are still in attacker hands.
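For npm v2/v3 lockfiles, that audit is a short script. A sketch assuming the package-lock.json layout in which "packages" is keyed by node_modules paths (pnpm and yarn lockfiles need different parsing):

```python
import json

# Scopes named in this incident and its sibling campaigns.
AFFECTED_SCOPES = ("@tanstack/", "@mistralai/", "@uipath/", "@opensearch-project/")

def affected_from_lockfile(lockfile_text: str) -> dict[str, str]:
    """Map package name -> installed version for every dependency in the
    affected scopes, direct or transitive."""
    lock = json.loads(lockfile_text)
    hits = {}
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        if name.startswith(AFFECTED_SCOPES):
            hits[name] = meta.get("version", "?")
    return hits
```

Cross-reference the versions this returns against the published advisory lists; a deprecated-but-installed version in a CI image is still a live exposure until the image is rebuilt.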
