OpenClaw is impressive. It's also a security nightmare. Here's the difference.

OpenClaw's vision is compelling, but recent CVEs, exposed instances, and supply chain incidents show how risky a self-hosted AI agent can be when trust and defaults are weak.

March 2026 · 4 min read

OpenClaw blew up fast. 68,000 GitHub stars in weeks, glowing tweets, people calling it an "iPhone moment." The appeal is real - a self-hosted AI agent that runs on your own machine, connects to your messaging apps, and works in the background while you sleep. That's genuinely exciting.

But the last few months have been rough for OpenClaw users. And if you're evaluating AI assistants for anything that touches real work, you should know what's been happening.


The security track record

Since launch, OpenClaw has accumulated three high-severity CVEs. Its skill store, where users install extensions, has been found riddled with malicious packages - skills crafted to harvest API keys, credit card numbers, and personal data.

In February, researchers found over 135,000 OpenClaw instances exposed directly to the internet. Why? Because out of the box, OpenClaw binds to 0.0.0.0 - meaning it listens on every network interface, including your public IP. That's not a user mistake. That's the default.
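The difference between those two bind addresses is easy to see in code. Here's a minimal, generic Python sketch (an illustration of socket semantics, not OpenClaw's actual code):

```python
import socket

def make_listener(host: str, port: int = 0) -> socket.socket:
    """Bind a TCP listener; port 0 lets the OS pick a free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen(1)
    return s

# 0.0.0.0 means "every interface" - loopback, LAN, and any public IP.
exposed = make_listener("0.0.0.0")

# 127.0.0.1 is loopback only - reachable solely from the local machine.
local = make_listener("127.0.0.1")

print(exposed.getsockname()[0])  # 0.0.0.0
print(local.getsockname()[0])    # 127.0.0.1
```

A service bound to 0.0.0.0 on a machine with a routable address is reachable by anyone who can reach that address - which is exactly how those 135,000 instances ended up indexed by internet-wide scanners.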

That same month, a supply chain attack compromised the Cline CLI npm package and silently installed OpenClaw on around 4,000 developers' machines without their knowledge or consent.

SecurityScorecard's threat intelligence team put it bluntly:

"Convenience-driven deployment, default settings, and weak access controls have turned powerful AI agents into high-value targets."

Jeremy Turner, SecurityScorecard VP of Threat Intelligence, was even more direct:

"Think of it like hiring a worker with a criminal history of identity theft who knows how to code well and might take instructions from anyone."


Why this happens

OpenClaw is open-source and self-hosted, which means you're responsible for securing it. That's fine if you know what you're doing. Most people don't, and the defaults don't protect them.

It was also vibe-coded fast, by the creator's own admission, to ship something exciting quickly. The community is building on top of it at pace, which means the skill store grows faster than anyone can audit it.

None of this is a knock on what OpenClaw is trying to do. The vision is right. But an AI assistant that has access to your inbox, your files, your calendar, and your messaging apps has to earn that trust. Security can't be an afterthought.


How Bogi approaches this differently

Bogi is not self-hosted. You don't manage infrastructure, configure network bindings, or audit a third-party skill store. We handle that.

Every action Bogi takes is logged and visible to you. Anything significant - sending an email, booking a meeting, updating a document - requires your approval before it happens. Bogi doesn't take actions silently or accept instructions from arbitrary sources.

We're also CASA 2 certified by Google, which means our security practices have been independently verified. Not something you can get from a two-month-old open-source project.

The goal is the same as OpenClaw: an AI that works toward your goals, in the background, without you babysitting it. The difference is whether you have to become a sysadmin to use it safely.


The right question to ask

An AI assistant that can access your email, files, and tools is powerful. That power cuts both ways. The question isn't just "what can it do?" - it's "who else might be able to make it do things, and what happens to my data?"

OpenClaw is worth watching. But right now, it's a project for people who want to tinker, not a tool you'd hand to your team.

Bogi is built for people who want the capability without the exposure. Give it a goal, approve the plan, and let it get to work.