May 12, 2026 · 11 min read
TeamPCP Just Backdoored Checkmarx's Jenkins Plugin—the Same Group That Hit Trivy and Used Stolen Credentials From One Build Server to Hop to the Next
On Saturday, May 9, a rogue release of the Checkmarx Jenkins AST plugin landed on the official Marketplace as version 2026.5.09. It carries CVE-2026-33634 at CVSS 9.4, and it arrived six weeks after the same threat group used credentials stolen from Trivy to break into the same vendor's GitHub. The taunt left on the defaced repo: "Checkmarx-Fully-Hacked-by-TeamPCP."
Checkmarx confirmed on May 11 that a malicious build of its Jenkins Application Security Testing plugin had been uploaded to repo.jenkins-ci.org two days earlier. The compromise was assigned CVE-2026-33634 with a CVSS score of 9.4 — critical — and the rogue version, 2026.5.09, sat live on the Jenkins Marketplace for an exposure window that Checkmarx has not fully disclosed. Any Jenkins instance that auto-pulled an update during that window installed a backdoored plugin straight into the build pipeline.
The attribution is the part that should make every engineering organization stop and check its dependencies. This is TeamPCP, the same group behind the mini Shai-Hulud npm worm that hit SAP in early May, the Trivy v0.69.4 binary poisoning in March, the KICS Docker image compromise weeks later, the LiteLLM PyPI package incident, and a now-sprawling list of secondary intrusions in projects that pulled credentials from one of the compromised hosts. Arctic Wolf's running tally puts the downstream blast radius at "at least 1,000 enterprise SaaS environments."
For developers, the practical lesson is uncomfortable. The plugin came from the right vendor, on the right channel, with the right signing. The release pipeline was the wrong release pipeline.
What the Plugin Actually Does in a Pipeline
The Checkmarx Jenkins AST plugin's normal job is to integrate static application security testing into CI builds. A developer pushes code, Jenkins kicks off a pipeline, and the plugin calls back to Checkmarx One to scan the diff. To do that, the plugin needs broad access to the build host. That access is exactly what made it valuable to TeamPCP as a backdoor location.
From the surrounding TeamPCP campaign instrumentation that Arctic Wolf documented for the LiteLLM and Trivy intrusions, the credentials targeted by the group's payloads include:
- Environment variables, including database connection strings and API tokens passed into the build process
- Cloud tokens — AWS IAM, GCP service account JSON, Azure managed identity tokens
- SSH keys mounted into the runner for deploy or repo access
- Repository secrets stored in Jenkins credential store
- Kubernetes service account tokens accessible from the runner
- LLM provider credentials — OpenAI, Anthropic, and others — increasingly common as orgs run AI evaluations in CI
Anything that ends up in the runner's process memory or filesystem during a build is in scope. A plugin already running as a privileged actor in the pipeline is the perfect place to hook those values and beacon them out. Checkmarx has not yet published full IOCs for the malicious 2026.5.09 build, but the company is asking affected operators to "assume that their credentials are compromised" and treat the incident as a full credential exposure event.
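The harvesting surface is easy to underestimate until you enumerate it. As a defensive exercise, the sweep below is a minimal sketch of the same enumeration an in-pipeline payload performs: it flags environment variable names on a runner that look credential-shaped, so you can see what a given build actually exposes. The name patterns are illustrative, not a complete list, and a real payload would also read files such as ~/.aws/credentials, mounted SSH keys, and kubeconfigs.

```python
import re

# Illustrative name patterns only; extend for your own environment.
SENSITIVE_NAME = re.compile(
    r"(TOKEN|SECRET|PASSWORD|PASSWD|CREDENTIAL|API_?KEY"
    r"|AWS_ACCESS|AWS_SESSION|AZURE_|GOOGLE_APPLICATION"
    r"|OPENAI|ANTHROPIC|GITHUB_|SSH_)",
    re.IGNORECASE,
)

def audit_env(env):
    """Return env var names that look like credentials a CI payload would grab."""
    return sorted(name for name in env if SENSITIVE_NAME.search(name))

if __name__ == "__main__":
    import os
    for name in audit_env(os.environ):
        print(name)  # names only; never print the values into a build log
```

Run it inside a representative pipeline step: every name it prints is a secret you must assume was exfiltrated if a backdoored plugin ran on that agent.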
The Credential Chain TeamPCP Built
What makes this campaign different from the typical supply chain attack is the chained credential reuse. TeamPCP did not break into each project independently. The group built a lateral-movement chain in which credentials harvested from one compromised host gave it write access to the next.
The chain, reconstructed from Arctic Wolf and SOCRadar reporting:
- March 20. TeamPCP exploits a misconfigured GitHub Actions workflow in the Trivy repository, harvests CI/CD secrets and signing credentials, and ships a malicious v0.69.4 binary. Trivy's mirror network distributes it widely.
- March 23. Using credentials harvested from Trivy contributors who also worked on Checkmarx open source, TeamPCP modifies GitHub Actions workflows on two Checkmarx repositories, including KICS. The malicious workflow exfiltrates further secrets during CI runs.
- March 24. A poisoned PyPI release of LiteLLM (versions 1.82.7 and 1.82.8) ships with a .pth file that executes on Python interpreter startup. LiteLLM had been pulling roughly 97 million monthly downloads. Credentials harvested at LiteLLM customers flowed back into the chain.
- Early May. The mini Shai-Hulud npm worm ships, propagating across npm package maintainers and hiding persistence inside Claude Code's settings file.
- May 9. The Jenkins Marketplace gets the rogue Checkmarx AST 2026.5.09 build. Same group, same Checkmarx vendor account, six weeks after the original repo compromise — meaning the credentials harvested in March remained valid long enough to publish a fresh malicious release.
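The .pth vector in the LiteLLM releases leans on documented CPython behavior: the site module executes any line in a site-packages .pth file that begins with "import" at interpreter startup. This harmless sketch reproduces the mechanism with site.addsitedir() rather than a real startup; a poisoned wheel simply drops a .pth like this into site-packages so its payload runs before any application code.

```python
import os
import site
import tempfile

# Any line in a .pth file starting with "import " is exec()'d by the site
# module. Normally this happens at interpreter startup; addsitedir() triggers
# the same processing on demand.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    # A malicious package would hide a real payload here instead of a flag.
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')

site.addsitedir(d)                # processes every .pth file in the directory
print(os.environ.get("PTH_RAN"))  # prints "1": arbitrary code already ran
```

This is why a .pth backdoor needs no call site in the victim's code: importing nothing from the poisoned package is no protection once it is installed.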
That last beat is the procedural problem. Checkmarx had been working through its prior incident publicly since late March. The May 9 release suggests either a separate undetected credential at the vendor, or the original incident response did not rotate the right secrets. Either way, the same plugin namespace was used to ship a second wave of malware within the same campaign.
How to Tell if You Are Exposed
If you operate Jenkins and have any pipeline that uses the Checkmarx AST plugin, the triage steps are mechanical.
1. Identify the installed plugin version. On your Jenkins controller, navigate to Manage Jenkins → Plugins → Installed and check the version of checkmarx-ast-scanner. The malicious release is 2026.5.09; any install of that version is exposed. The safe versions are 2.0.13-829.vc72453fa_1c16 (published December 17, 2025) and the post-incident patched build 2.0.13-848.v76e89de8a_053. Older releases are unaffected.
2. Treat every secret available to the runner as compromised. If the malicious version was installed at any point, anything the Jenkins runner could read needs to be rotated. That includes credentials in Jenkins credential store, environment variables passed to the pipeline, mounted SSH keys, cloud provider tokens, container registry credentials, repository tokens, and any tokens that an executed step in the pipeline could have read from the filesystem or environment.
3. Hunt for lateral movement. The TeamPCP pattern from the Trivy incident was credential reuse outside the originally compromised host. Audit recent activity for any of the rotated credentials. Look for unfamiliar IAM API calls in CloudTrail, unfamiliar git pushes from harvested PATs, unfamiliar container pulls, and outbound traffic from the runner to unfamiliar hosts during the exposure window.
4. Downgrade or upgrade. Either pin to 2.0.13-829.vc72453fa_1c16 or move to the patched 2.0.13-848.v76e89de8a_053. Until your investigation is complete, consider disabling the plugin entirely. Without it, the security scan simply does not run, and an unscanned build is far safer than one running a backdoored scanner while you triage.
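Step 1 can be scripted across a fleet via the Jenkins remote access API. A minimal sketch, assuming the plugin's short name is checkmarx-ast-scanner as shown on the Installed plugins page; authenticate with an API token, not a password:

```python
import base64
import json
from urllib.request import Request, urlopen

MALICIOUS = {"2026.5.09"}
SAFE = {"2.0.13-829.vc72453fa_1c16", "2.0.13-848.v76e89de8a_053"}

def classify(payload, short_name="checkmarx-ast-scanner"):
    """Turn a pluginManager API payload into a triage verdict."""
    for p in payload.get("plugins", []):
        if p.get("shortName") == short_name:
            v = p.get("version", "")
            if v in MALICIOUS:
                return "MALICIOUS: rotate everything this controller can read"
            return "known-good" if v in SAFE else "review version " + v
    return "not installed"

def fetch_plugins(base_url, user, api_token):
    """GET <controller>/pluginManager/api/json?depth=1 with basic auth."""
    req = Request(base_url.rstrip("/") + "/pluginManager/api/json?depth=1")
    cred = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    req.add_header("Authorization", "Basic " + cred)
    with urlopen(req) as resp:
        return json.load(resp)

# Usage, against a hypothetical controller:
# print(classify(fetch_plugins("https://jenkins.example.com", "me", "token")))
```

Loop fetch_plugins over every controller you operate; shadow or test instances that rarely get attention are exactly where a stale malicious version survives.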
What This Means for SBOM and Supply Chain Hygiene
The TeamPCP campaign is a clean illustration of why software bill of materials and pinned versions are not the whole answer. Every project in this chain had legitimate version numbers, legitimate signatures, legitimate release notes. The pipeline that produced the artifacts was the part that was wrong.
Three operational changes follow from the campaign for engineering teams.
- Quarantine new plugin versions before auto-adoption. Most CI plugin update mechanisms default to pulling the latest release. For privileged plugins (anything that runs inside the build runner, anything with access to secrets), treat new versions like new vendors and delay adoption by a quarantine window. A 72-hour delay would have outlasted the entire Checkmarx exposure: the rogue build was published May 9 and confirmed malicious May 11.
- Short-lived credentials in CI. Long-lived IAM access keys and PATs in the Jenkins credential store are the easy targets. OIDC federation with short-lived role assumption, or HashiCorp Vault leases of an hour or less, dramatically reduce the value of any single credential exfiltrated from a runner.
- Per-pipeline isolation. Running every pipeline on the same long-lived Jenkins agent means a malicious plugin sees the entire credential surface. Ephemeral agents per pipeline, with credentials scoped only to that pipeline, contain the blast radius.
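The quarantine rule in the first bullet is mechanical enough to automate. A minimal sketch, assuming you can obtain a release timestamp from your update-site metadata (field names vary by update center, so verify the one you consume):

```python
from datetime import datetime, timedelta, timezone

QUARANTINE = timedelta(hours=72)  # tune per plugin privilege level

def ok_to_adopt(release_time, now=None):
    """True only once a release has survived the quarantine window."""
    now = now or datetime.now(timezone.utc)
    return now - release_time >= QUARANTINE

# The rogue 2026.5.09 build: published May 9, vendor confirmation May 11.
published = datetime(2026, 5, 9, 12, 0, tzinfo=timezone.utc)
print(ok_to_adopt(published, now=datetime(2026, 5, 11, 12, 0, tzinfo=timezone.utc)))  # False
print(ok_to_adopt(published, now=datetime(2026, 5, 13, 12, 0, tzinfo=timezone.utc)))  # True
```

Gate your plugin update job on this check and the malicious build never reaches a controller: by the time it clears quarantine, the advisory is already out.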
None of this is novel advice in 2026. What the TeamPCP chain proves is that the advice is no longer optional. The threat group has now demonstrated three times — Trivy, Checkmarx March, Checkmarx May — that organizations that auto pull from official channels without quarantine will install backdoored code as soon as one upstream vendor has a credential incident.
Why the Email Layer Matters in This Story
There is a quieter angle to the TeamPCP campaign that mostly stays out of the press. Several of the lateral movements between victims started with phishing emails sent from previously compromised maintainer accounts. The Trivy intrusion in March was preceded by spear-phishing of contributors. The downstream npm compromises that fed the mini Shai-Hulud worm propagated via maintainer-to-maintainer messages that originated from real, hijacked GitHub-linked inboxes.
For developers, this means the chain does not end at the package registry. It runs back through your inbox. A message from a known co-maintainer asking you to review a malicious PR, a GitHub security alert that turns out to be a phishing redirect, a "your token is about to expire" notice that lands at exactly the moment one of your packages is being targeted — these are the access methods that feed campaigns like TeamPCP's.
Defense at the email layer is the cheapest part of the response. Filtering pixel-based tracking out of mail you receive removes a class of behavioral signal that attackers buy from broker pipelines to time their messages. Verifying signatures on automated notifications from registries and security tools — and being suspicious of any urgent action prompt that includes a clickable link — is the kind of habit that prevented several near misses in the TeamPCP timeline. Treat your maintainer inbox like a production system, because as far as TeamPCP is concerned, it is.
What Is Likely Coming Next
Three things follow from the May 9 release that are worth watching over the next few weeks.
- More Checkmarx waves. If the second wave came from credentials TeamPCP retained from March, there is no structural reason a third wave could not come. The Checkmarx incident response needs to be evaluated by independent reviewers, not just declared complete.
- Adjacent Jenkins Marketplace plugins. Jenkins plugins that integrate with security vendors are an especially attractive target — they often hold credentials, they often run in privileged context, and they are often auto-updated. Expect TeamPCP or imitators to focus on similar plugin namespaces next.
- Regulatory follow up. The same trend that produced the $12.75 million California GM CCPA settlement over data minimization will eventually run into supply chain incidents. Companies that lose customer data because a backdoored CI plugin exfiltrated production credentials will face statutory exposure on top of the reputational hit.
For now, the action is simple. Check your Checkmarx Jenkins plugin version. If it is in the malicious line, rotate every secret that runner could see, and start a hunt. The plugin came from the right place. The release pipeline did not.
Sources
- Official CheckMarx Jenkins package compromised with infostealer — BleepingComputer
- TeamPCP Compromises Checkmarx Jenkins AST Plugin Weeks After KICS Supply Chain Attack — The Hacker News
- Checkmarx Jenkins AST Plugin Compromised in Supply Chain Attack — SecurityWeek
- TeamPCP Supply Chain Attack Campaign Targets Trivy, Checkmarx (KICS), and LiteLLM — Arctic Wolf
- Update: Ongoing Checkmarx Supply Chain Security Incident — Checkmarx