May 11, 2026 · 7 min read
RansomHouse Says It Stole the Source Code Behind Trellix—the Cybersecurity Firm 50,000 Companies and Governments Rely On
The successor to McAfee Enterprise and FireEye has confirmed unauthorized access to "a portion" of its source code. Three days later, a ransomware gang stamped its name on the breach.
On May 4, Trellix posted a public statement saying it had "recently identified" that an unauthorized party gained access to "a portion of our source code repository." The post was short, defensive, and conspicuously light on detail. It named no products, gave no timeline, and offered no impact assessment. It simply said the company had retained forensic experts, notified law enforcement, and found "no evidence that our source code release or distribution process was affected, or that our source code has been exploited."
Three days later, on May 7, the ransomware operation RansomHouse listed Trellix on its dark web extortion site. The group's writeup claims it took source code; Trellix, asked to confirm or deny the connection, has so far only said it is "looking into" the claim.
Whether or not RansomHouse turns out to be the right name, the underlying fact is settled. A company whose software inspects the network traffic, email, and endpoints of an estimated 50,000 enterprise and government customers has had part of its source code copied off premises by people who should not have it.
Why Trellix Specifically Matters
Trellix is the merged product of McAfee's enterprise security business and the threat intelligence firm FireEye, combined under Symphony Technology Group ownership in January 2022. The portfolio includes endpoint detection and response, extended detection and response, the Helix security operations platform, network security, data loss prevention, and the email security gateway that grew out of FireEye's original anti-phishing appliance.
Those products sit at the most privileged points of the environments they protect. An EDR agent runs at kernel level on every monitored endpoint. An email security gateway sees every inbound and outbound message before users do. A SIEM aggregates the authentication logs, network flows, and DNS traffic of an entire enterprise. If someone has the source code for these products, they have a head start on finding bugs, building bypass tools, and understanding precisely which behaviors the product will and will not catch.
Trellix's customer list is also unusual. The FireEye lineage means the company has been embedded in U.S. federal incident response work for more than a decade, including the SolarWinds investigation. Its enterprise customers include defense contractors, banks, and critical infrastructure operators. The risk profile of a partial source code leak at this specific company is therefore unusually high.
What "A Portion" Actually Means
Trellix's public statement uses one of the most under-specified phrases in breach disclosures: "a portion of our source code repository." It does not say which products, which subsystems, which release branches, or how much data left the network. It does not say whether the access was read-only, whether commit history was modified, or whether internal credentials and signing keys lived in the same repository.
There are three things that would change the severity of the incident dramatically, and Trellix has not confirmed or denied any of them:
- Were code signing private keys exposed? If so, the attacker can issue legitimately signed Trellix updates. This is the SolarWinds scenario.
- Were build pipeline credentials exposed? If so, the attacker could push code into a future release.
- Were internal vulnerability tracking issues in scope? Most security vendors keep their unpatched bugs in the same repository system that holds the code. Read access to such a repository often hands the attacker a ready-made list of zero-day candidates.
The company's statement that there is "no evidence that our source code release or distribution process was affected" addresses the first two questions only by implication. It is what every customer wants to hear and it is also exactly what every customer would expect to hear in the first 96 hours.
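The "credentials in the same repository" worry is concrete enough to check in your own codebase. A minimal secret-scanning sketch, with illustrative regexes only (real scanners use far larger rule sets plus entropy checks; every pattern and path here is an assumption, not Trellix's tooling):

```python
import re
from pathlib import Path

# Illustrative patterns only: a PEM private key header, the AWS access
# key ID shape, and a hardcoded password assignment.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]{8,}"),
]

def scan_tree(root: Path) -> list[tuple[str, int]]:
    """Return (relative path, line number) pairs where a candidate secret appears."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((str(path.relative_to(root)), lineno))
    return hits
```

Anything this kind of scan finds in a repository is something a repository-level intruder also found.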
RansomHouse's Track Record
RansomHouse first appeared in late 2021 as a data extortion brand rather than a true ransomware operator. It does not always deploy encryption malware. It steals data, demands payment, and publishes the data on a leak site if the demand is refused. Its known victims include AMD, ShopRite, the African Bank in South Africa, Mission Community Hospital, and Saskatchewan Liquor and Gaming.
The group's pattern is to wait weeks or months between the initial intrusion and the public listing. It tries to negotiate privately first, and only publishes once a deadline passes without payment. That pattern fits the gap between Trellix's May 4 disclosure and RansomHouse's May 7 leak site post. If accurate, it would suggest Trellix declined to pay during the private negotiation window.
RansomHouse has not yet released a sample of the stolen Trellix code to verify its claim. That step is usually how the group proves the breach to outsiders. Until it does, the door is open to the possibility that the listing is opportunistic, attached to a breach that some other actor performed.
The Familiar Pattern of Security Vendors Getting Hacked
The Trellix incident sits inside a depressingly long sequence. In the last 18 months alone:
- Checkmarx had 96GB of source code dumped by LAPSUS$ after a months long backdoor campaign.
- Cisco confirmed a federal agency breach via its own firewall product that persisted across patching.
- A British cybersecurity firm got hacked because it had no MFA enabled, then threatened the journalists who reported it.
- Bitwarden's CLI was used as a delivery vehicle in a supply chain attack.
There is an uncomfortable inversion at the center of this trend. The companies whose entire pitch is that they will catch sophisticated attackers are themselves being caught by attackers who, in many of these cases, are using fairly conventional techniques. The defenders are not failing because the threats are exotic. They are failing for the same operational reasons their customers fail: stale repos, gaps in monitoring, vulnerable third-party tools, and humans who make mistakes.
What This Means for Trellix Customers and for the Email Path Specifically
If you are a Trellix customer, the playbook for the next 90 days is well established and should already be running. Audit the integrity of every Trellix update you have installed since the start of 2026. Tighten the network segmentation around management consoles so that even a fully compromised agent has limited movement. Increase the verbosity of logging on the products themselves so that any anomalous behavior is more likely to surface in your own SIEM rather than Trellix's.
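The update-integrity audit reduces, in practice, to comparing what is installed against digests obtained out of band. A minimal sketch, assuming the vendor publishes SHA-256 hashes on a separate channel such as a support portal (the filenames and manifest here are hypothetical placeholders, not real Trellix artifacts):

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of vendor-published SHA-256 digests. Critically,
# these must come from an out-of-band channel, not from the same update
# server that delivered the packages being audited.
PUBLISHED_HASHES: dict[str, str] = {
    "trellix-agent-5.8.3.pkg": "0" * 64,  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks so large packages stay out of memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(update_dir: Path) -> list[str]:
    """Return the names of expected update files that are missing or mismatched."""
    mismatches = []
    for name, expected in PUBLISHED_HASHES.items():
        candidate = update_dir / name
        if not candidate.exists() or sha256_of(candidate) != expected:
            mismatches.append(name)
    return mismatches
```

A non-empty result from `audit()` is an incident-response trigger, not a re-download prompt.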
For email administrators specifically, the Trellix Email Security gateway is the relevant product. If you rely on it as a primary anti-phishing filter, treat the source code exposure as a hypothetical bypass risk and add a second layer of validation upstream or downstream. The conservative move is to assume that an attacker who has read the rule set can craft a phishing email that the rule set will allow.
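One cheap second layer that does not depend on the gateway's rule set is re-checking the Authentication-Results header your own MTA stamps (RFC 8601) and quarantining anything without a DMARC pass. A sketch, assuming a hypothetical authserv-id of `mx.example.com`:

```python
import email

def dmarc_failed(raw_message: bytes, trusted_authserv_id: str) -> bool:
    """Second-opinion check independent of the gateway verdict.

    Parses the Authentication-Results headers and returns True unless a
    header stamped by our own MTA (identified by trusted_authserv_id,
    e.g. the hypothetical "mx.example.com") records dmarc=pass.
    """
    msg = email.message_from_bytes(raw_message)
    for header in msg.get_all("Authentication-Results", []) or []:
        # Only trust results stamped by our own server; attackers can
        # forge Authentication-Results headers added by upstream hops.
        if not header.strip().startswith(trusted_authserv_id):
            continue
        if "dmarc=pass" in header.lower():
            return False
    # No trusted pass result found: treat as a failure worth quarantining.
    return True
```

This is deliberately blunt: it will flag legitimate mail from domains without DMARC, which is the point of a belt-and-suspenders layer behind a gateway you no longer fully trust.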
For everyone else, the broader lesson is the one this incident shares with the DigiCert support chat breach a month earlier: the trust anchors of the security industry are not invulnerable. The signatures on your antivirus, the certificates on your code, the rules in your email gateway, and the source code that produces all of them are all subject to the same operational risks as everything else. Treat them accordingly.
What We Are Still Waiting For
Trellix has committed to share more information "as appropriate." The specific questions outsiders still need answered are concrete:
- Which product source trees were in the affected repository?
- How did the attacker first get in?
- For how many days was access maintained?
- Were signing keys, build pipeline credentials, or vulnerability trackers exposed alongside the code?
- Has the company contacted its largest government customers directly, and what mitigations were recommended?
The Mozilla and Apple CA programs require detailed public incident reports when certificate authorities are compromised. There is no equivalent disclosure regime for security vendors. Until that changes, customers are dependent on what individual vendors choose to share, and the historical record suggests that is significantly less than what customers actually need to defend themselves.