ClawdBot Security Flaw Exposes 900+ AI Gateways Worldwide

  • January 28, 2026

Over 900 ClawdBot AI assistant instances exposed online, leaking API keys and private chats. Learn about the security flaw and how to protect yourself.


While open-source AI assistants have introduced powerful tools for developers and tech enthusiasts, a recent flaw in ClawdBot highlights the risks of self-hosted AI agents. Security researcher Jamieson O’Reilly discovered a vulnerability affecting more than 900 instances of this popular AI agent gateway.

The exposure has already led to credential theft and private data leaks, creating a security nightmare for affected users. It is a stark reminder of the importance of secure-by-default configurations, especially as AI agents become mainstream.

At a glance:

  • Intelligence Evolution: ClawdBot, recently rebranded to Moltbot
  • Messaging Hub: universal connectivity for AI deployment
  • Frictionless Flow: rapid task execution and conversation management
  • ⚠️ The Security Loophole: that simplicity is a double-edged sword, creating a massive architectural flaw actively targeted by global threat actors

When Convenience Meets Catastrophe

ClawdBot, which was recently rebranded to Moltbot, is an open-source personal AI assistant designed for easy integration with various messaging platforms. For example, it allows users to connect their AI to services such as:

  • WhatsApp
  • Telegram
  • Slack
  • Discord
  • Signal
  • iMessage

Its main appeal lies in its simplicity. Users can quickly deploy it to handle tasks, manage conversations, and execute commands. This ease of deployment, however, has proven to be a double-edged sword. The platform’s architecture prioritizes a frictionless user experience, which has inadvertently created a massive security loophole that threat actors are actively exploiting.

The Discovery That Shook AI Security

The widespread exposure of these gateways was first discovered on January 23, 2026, by security researcher Jamieson O’Reilly. He used the search engine Shodan to search for exposed gateways with the unique HTML title “Clawdbot Control.” The results of this search were alarming. Hundreds of publicly exposed gateways were found almost immediately.

In total, more than 900 exposed gateways were found, many with no authentication at all. Sensitive data, including Anthropic API keys, Telegram and Slack tokens, and months of private chat history, was left completely exposed to anyone who found it.

Behind the Breach: A Technical Breakdown

The underlying reason for this huge exposure is a critical flaw in the authentication mechanism of ClawdBot. The system was built with a “localhost auto-approval” feature.

This feature becomes a major flaw when the gateway is placed behind a reverse proxy. The proxy terminates the client’s connection and forwards each request to the gateway from the local machine, so every request appears to originate from 127.0.0.1. Because ClawdBot ships with no trusted-proxy rules configured, it treats these requests as safe and grants unauthenticated access to its control panel and WebSockets, bypassing all security measures and leaving the door wide open for attackers.
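The failure mode can be sketched in a few lines. This is not ClawdBot’s actual code, just a minimal illustration of why a "localhost auto-approval" check collapses behind a reverse proxy:

```python
def is_trusted(peer_ip: str) -> bool:
    """Naive check: auto-approve any connection arriving from localhost."""
    return peer_ip == "127.0.0.1"

# Direct deployment: a remote attacker connects with their real address
# and is correctly rejected.
assert is_trusted("203.0.113.7") is False

# Behind a reverse proxy: the proxy terminates the client's connection
# and opens a new one to the gateway FROM the local machine, so every
# request -- including the attacker's -- arrives as 127.0.0.1.
proxied_peer_ip = "127.0.0.1"
assert is_trusted(proxied_peer_ip) is True  # attacker waved through
```

The check itself is not wrong in isolation; it is the deployment topology that silently changes what "localhost" means.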

Real-World Attack Scenarios

The consequences of such exposure are not just hypothetical; attackers are already exploiting these misconfigurations in the wild. Several alarming attack vectors are emerging, including social-engineering attacks on the AI itself. The most common scenarios include:

  • Prompt Injection: Attackers are sending special kinds of prompts in replies on connected social media accounts such as X (formerly Twitter). The attackers want to trick the AI into leaking sensitive data such as API keys and internal system data.
  • Malicious Plugins: Malicious ‘skills’ or plugins are being distributed via community channels. These plugins look legitimate but have hidden code that scrapes data, steals credentials, and enlists devices in botnets.
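One partial defense against the exfiltration half of these attacks is to scan the assistant’s outbound replies for anything that looks like a credential before it leaves the gateway. The sketch below is illustrative only; the pattern names and regexes are assumptions, and a real deployment would cover its own secret formats:

```python
import re

# Illustrative patterns for the credential types named in this incident.
SECRET_PATTERNS = {
    "anthropic_key": re.compile(r"sk-ant-[A-Za-z0-9_\-]{10,}"),
    "telegram_token": re.compile(r"\d{8,10}:[A-Za-z0-9_\-]{35}"),
}

def redact_secrets(text: str) -> str:
    """Replace anything that looks like a credential before it is sent."""
    for name, pattern in SECRET_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text

reply = "Sure! My key is sk-ant-abcdefghij1234"
assert "sk-ant-" not in redact_secrets(reply)
```

Output filtering cannot stop prompt injection itself, but it narrows what a successfully injected prompt can leak.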

Protecting Your AI Infrastructure

If you are running a ClawdBot or Moltbot instance, it is crucial to assume compromise and take immediate action. The platform’s developers and security experts recommend a series of urgent remediation steps:

Security Protocol: Urgent Remediation Steps

  • 🔍 Security Audit: deep-scan system entry points to identify any potential exposure or unauthorized changes.
    clawdbot security audit --deep

  • 🔑 Auth Mode: enforce password authentication immediately to prevent open gateway access.
    gateway.auth.mode: "password"

  • 🛡️ Trusted Proxies: configure proxy settings to block IP-spoofing and authentication-bypass attempts.
    trustedProxies: ["LIST"]

  • 🔄 Rotate Keys: immediately revoke and rotate all API keys, tokens, and secrets that may be compromised.
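The trusted-proxies step is the one most often misconfigured, so it is worth seeing what correct handling looks like. This is a generic sketch, not ClawdBot’s implementation, and the proxy address is a hypothetical placeholder: the forwarding header is honored only when the direct peer is a configured, trusted proxy.

```python
import ipaddress
from typing import Optional

# Hypothetical value; substitute your reverse proxy's actual address(es).
TRUSTED_PROXIES = {ipaddress.ip_address("10.0.0.5")}

def client_ip(peer_ip: str, x_forwarded_for: Optional[str]) -> str:
    """Return the address that auth decisions should use.

    Only honor X-Forwarded-For when the direct peer is a trusted proxy;
    otherwise any client could spoof the header and claim to be local.
    """
    peer = ipaddress.ip_address(peer_ip)
    if peer in TRUSTED_PROXIES and x_forwarded_for:
        # The leftmost entry is the original client in a well-formed chain.
        return x_forwarded_for.split(",")[0].strip()
    return peer_ip

# A spoofed header from an untrusted peer is ignored:
assert client_ip("203.0.113.7", "127.0.0.1") == "203.0.113.7"
# The same header from the trusted proxy is honored:
assert client_ip("10.0.0.5", "198.51.100.9") == "198.51.100.9"
```

The design point is that trust is anchored to the TCP peer, which an attacker cannot spoof, rather than to a header, which anyone can set.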

Conclusion: When an AI Gateway Goes Public, Everything Behind It Becomes Public Too

The ClawdBot exposure is a clear warning for anyone deploying open-source AI assistants. Over 900 internet-facing gateways were discovered via the “Clawdbot Control” title, many lacking authentication and exposing API keys, messaging tokens, and private chat history.

Why This Threat Matters

This was not a sophisticated exploit chain; it was a trust failure created by default design choices. A “localhost auto-approval” behavior becomes dangerous when a gateway sits behind a reverse proxy, where every request can appear to originate from 127.0.0.1 and bypass the expected checks.

Why Every Team Running AI Agents Is Exposed

AI gateways concentrate high-value access, then ship with the wrong assumptions:

  • Reverse-proxy deployments that turn “local” into “public”
  • Weak or missing authentication on control panels and WebSockets
  • Long-lived API keys and tokens that unlock downstream systems
  • Prompt injection that coerces an assistant into leaking secrets
  • Community plugins that look legitimate but steal data or deploy additional payloads

Secure AI Infrastructure Before It Becomes an Attack Surface

AI assistants amplify productivity, but they also amplify blast radius. Lock down authentication, configure trusted proxies correctly, rotate exposed keys, and enforce preventive controls that stop malicious execution paths from turning into breaches.


Move Away From Detection With Patented Threat Prevention Built For Today's Challenges.

No one can stop zero-day malware from entering your network, but Xcitium can prevent it from causing any damage. Zero infection. Zero damage.

Book a Demo