Light bulb Limited Spots Available: Secure Your Lifetime Subscription on Gumroad!

Jan 29, 2026 · 5 min read

That AI Assistant Has Root Access to Your Machine—And Hackers Found Hundreds Exposed Online

The viral Moltbot AI assistant is leaking API keys, OAuth tokens, and corporate credentials through misconfigured deployments. Security researchers found exposed instances with full admin access available to anyone on the internet.

Computer workspace showing AI assistant interface with data streams representing credential leakage

The AI Assistant Running Wild in Enterprise Networks

Moltbot, formerly known as Clawdbot, is an open source AI assistant that runs locally on your machine with deep system integration. It can execute shell commands, read and write files, run scripts, and maintain persistent memory across sessions. The tool has gone viral among developers and power users who want AI capabilities without cloud dependencies.

The problem? According to Token Security, 22% of enterprise customers have employees actively using Moltbot—likely without IT approval. And security researchers are finding hundreds of these installations exposed directly to the internet, with no authentication required.

What's Being Leaked

When researchers from Bitdefender, Hudson Rock, and others scanned the internet for exposed Moltbot instances, they found alarming results. The tool stores sensitive data in plaintext files on the user's local filesystem, including:

  • API keys and OAuth tokens for connected services
  • Complete conversation history with the AI, including any secrets shared
  • User credentials passed through the assistant
  • Corporate data from integrated business applications

Of the instances examined manually, eight were completely open with no authentication, exposing full command execution and configuration data to anyone who found them. One pentester discovered an exposed instance with a Signal encrypted messaging account fully configured and accessible.

The Reverse Proxy Trap

The core issue stems from how Moltbot handles authentication. The system auto approves connections that appear to come from localhost. When users deploy the assistant behind a reverse proxy—a common setup for remote access—internet traffic gets treated as trusted local connections.

This single misconfiguration enables unauthenticated access, credential theft, access to conversation history, command execution, and in many cases, root level system access. The absence of default sandboxing means the AI agent receives complete user level data access on the host machine.

Supply Chain Attacks Through Skills

Moltbot's extensibility creates another attack vector. The assistant uses "skills"—downloadable modules that extend functionality. Researcher Jamieson O'Reilly demonstrated the risk by publishing a deliberately malicious skill to ClawdHub, the skills library.

Within eight hours, 16 developers across seven countries had downloaded the poisoned package. O'Reilly was able to artificially inflate the download count to over 4,000, making the malicious skill rank as the most popular in the repository.

A separate prompt injection experiment showed that a single malicious email could trick Moltbot into forwarding five legitimate emails to an attacker address—no exploits required, just carefully crafted text.

Infostealer Malware Is Adapting

Hudson Rock warned that info stealing malware like RedLine, Lumma, and Vidar will soon adapt to target Moltbot's local storage. These infostealers already harvest credentials from browsers and password managers—adding Moltbot's plaintext secret storage to their target list is trivial.

If your machine gets infected with any common infostealer, every API key and credential you've shared with Moltbot becomes compromised.

How to Deploy Moltbot Safely

If you're determined to use Moltbot, security researchers recommend these precautions:

  • Run in a virtual machine to isolate the AI from your main system
  • Configure firewall rules to control internet access rather than granting full network permissions
  • Never run with root access on your host operating system
  • Require strong authentication for all Moltbot services
  • Close or firewall admin ports and never expose the assistant directly to the internet
  • Enable encryption at rest for stored secrets
  • Vet all skills before installation using tools like Cisco's open source Skill Scanner

The Shadow AI Problem

Beyond the technical vulnerabilities, Moltbot represents a growing "shadow AI" problem in enterprises. Employees adopt powerful AI tools without security review, creating covert data leak channels that bypass traditional security tools. The prompt based execution is difficult to detect with conventional monitoring.

When 22% of enterprise employees are running AI assistants with system level access—and IT doesn't even know about it—the attack surface extends far beyond what security teams are monitoring.

The Bottom Line

Local AI assistants offer genuine productivity benefits and privacy advantages over cloud services. But Moltbot's architecture—deep system access, plaintext secret storage, auto trusted local connections, and an unvetted skills marketplace—creates serious security risks that most users aren't equipped to mitigate.

If you're using Moltbot in any professional capacity, assume your API keys and credentials are at risk until you've implemented proper isolation and authentication. And if you're an IT administrator, it's time to find out whether your employees have already deployed it.