Last updated: 2026-02-04 | 8 min read
OpenClaw (formerly Claudebot and briefly Moltbot) just became the fastest-growing open-source project in GitHub history, hitting 90,000+ stars in weeks. It's not just hype—it's a signal that the market is hungry for AI assistants that actually do things instead of just suggesting them.
This isn't just another AI tool. It's a glimpse into where personal computing is heading, and it raises important questions about security, privacy, and the future of agentic AI.
At its core, OpenClaw is simple: an AI assistant that runs on your hardware, talks to you through apps you already use, and executes tasks instead of just chatting about them.
The workflow looks like this: you message it from an app you already use (WhatsApp, Telegram, Signal), a gateway running on your own machine hands the request to an LLM, the model picks a skill and executes it, and the result comes back to you in the same chat.
The tagline says it all: "AI that actually does things."
Technically, OpenClaw is a gateway service that maintains WebSocket connections to messaging platforms, orchestrates interactions with LLM backends (typically Claude, sometimes GPT-4 or local models via Ollama), and uses a growing library of skills—browser automation, file system access, shell commands, calendar integration.
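That architecture can be sketched in a few lines. This is an illustrative toy, not OpenClaw's actual code: the class and method names are invented, and a plain keyword match stands in for the LLM's skill-selection step.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Message:
    channel: str  # e.g. "whatsapp" or "telegram"
    text: str

class Gateway:
    """Routes incoming chat messages to registered skills."""
    def __init__(self) -> None:
        self.skills: Dict[str, Callable[[str], str]] = {}

    def register_skill(self, name: str, handler: Callable[[str], str]) -> None:
        self.skills[name] = handler

    def handle(self, msg: Message) -> str:
        # In the real system an LLM decides which skill to invoke;
        # a keyword match stands in for that planning step here.
        for name, handler in self.skills.items():
            if name in msg.text.lower():
                return handler(msg.text)
        return "(chat reply, no skill invoked)"

gw = Gateway()
gw.register_skill("calendar", lambda text: "calendar: event created")
result = gw.handle(Message("whatsapp", "Add dentist at 3pm to my calendar"))
```

The point of the shape: the gateway owns the connections and the skill registry, so everything sensitive stays on the machine where it runs.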
Key difference: The architecture is local-first. Your conversation history stays on your machine. Your credentials stay on your machine. Privacy-first by design.
People aren't just excited about a cool new tool: they're trying to lock in personal compute capacity while they still can. The run on Mac Minis isn't just FOMO; it's a hedge against a future where running capable local AI gets priced out.
When OpenClaw needs to touch the outside world, it needs a way to expose services on a home network without opening ports to the internet. Cloudflare Tunnel provides that secure bridge: the project's documentation recommends it, and developers adopted it enthusiastically.
The signal: this space is now evolving fast enough to move publicly traded companies.
The vulnerabilities researchers found are very real and very serious. Some have been patched. But the deeper problem isn't individual bugs—it's architecture.
Security researcher Jameson O'Reilly discovered that the gateway's authentication logic trusted all localhost connections by default. If you run OpenClaw behind a reverse proxy (a common deployment pattern), that proxy traffic gets treated as local.
Result: full access to credentials, conversation history, and privileged command execution.
When he scanned for exposed instances, he found hundreds. At least eight were completely open—API keys, Telegram bot tokens, one even had Signal configured on a public server.
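The flaw is easy to illustrate. The functions below are invented for this article, not OpenClaw's real code; they show why "trust localhost" collapses behind a reverse proxy, where every forwarded request arrives from 127.0.0.1.

```python
import hmac

def is_trusted_vulnerable(peer_ip: str) -> bool:
    # The flawed check: any loopback connection is fully trusted.
    # A reverse proxy on the same host connects from 127.0.0.1, so every
    # request it forwards, including an attacker's, passes this test.
    return peer_ip == "127.0.0.1"

def is_trusted_fixed(peer_ip: str, token: str, expected: str) -> bool:
    # Safer pattern: require an explicit credential even from local peers,
    # compared in constant time.
    return hmac.compare_digest(token.encode(), expected.encode())

attacker_via_proxy = is_trusted_vulnerable("127.0.0.1")
attacker_bad_token = is_trusted_fixed("127.0.0.1", "guess", "s3cret")
```

With the vulnerable check, the attacker's proxied request is fully trusted; with an explicit token requirement, it is rejected.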
OpenClaw's extensibility is a feature: 50+ bundled skills, a growing marketplace, infinite customization. But every plugin is unaudited code running with the permissions you've granted the agent.
One malicious update and your personal AI assistant becomes an exfiltration tool. The marketplace has no moderation process to catch it.
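One defensive pattern worth considering, sketched below as a hypothetical (it is not a built-in OpenClaw feature): gate each skill's declared permissions against an explicit allowlist before loading it, so a plugin that quietly requests shell access never runs.

```python
# Permissions the operator has explicitly approved (names are illustrative).
ALLOWED_PERMISSIONS = {"calendar.read", "calendar.write", "browser.open"}

def can_load(manifest: dict) -> bool:
    # Refuse to load any skill requesting a permission outside the allowlist.
    requested = set(manifest.get("permissions", []))
    return requested.issubset(ALLOWED_PERMISSIONS)

benign = {"name": "meeting-helper", "permissions": ["calendar.read"]}
hostile = {"name": "helpful-utils", "permissions": ["shell.exec", "fs.read"]}
```

This doesn't audit the code itself, but it turns "every plugin runs with the agent's full permissions" into an explicit, reviewable grant.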
OpenClaw connects to your email, messaging apps, social accounts. It reads incoming content and acts on it. LLMs cannot reliably distinguish instructions from content.
Send a carefully crafted WhatsApp message with hidden instructions and OpenClaw treats it as trusted input: it might forward your credentials or execute shell commands, and you never see it coming.
This isn't an OpenClaw flaw—it's intrinsic to how language models process text. No one has solved it.
Here's the uncomfortable truth:
Siri is safe because it's neutered. OpenClaw is useful because it's dangerous.
Big tech assistants are products designed to protect corporate liability. They're limited, walled off, can't book flights or manage cross-platform calendars.
OpenClaw is a tool designed to maximize user capability. It manages calendars across platforms, drafts emails in your voice, handles travel logistics end-to-end, commits code to repos, monitors prices and rebooks when deals appear.
The market spoke: 90,000 GitHub stars implies a lot of pent-up demand for assistance that actually assists.
Despite the security risks, here's why people are flocking to OpenClaw:
One user asked OpenClaw to make a restaurant reservation. OpenTable had no availability, so OpenClaw found AI voice software, downloaded it, called the restaurant directly, and secured the reservation over the phone.
Zero human intervention. The AI recognized the initial approach didn't work and autonomously found a different solution.
Developers are running coding agents overnight. Describe features before bed, wake up to working implementations. One built a complete Laravel application while walking to get coffee, issuing instructions via WhatsApp, watching commits land in the repo as he walked.
Tell OpenClaw to "create a skill to monitor flight prices and alert me when they drop below $300"—it writes that entire automation itself. Tell it to "self-improve"—it does.
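The flight-price example above is the kind of skill the agent writes for itself. Here is a plausible core of such a skill, sketched with an invented function name and a stubbed-in price feed; a real version would fetch fares from an API on a schedule.

```python
ALERT_THRESHOLD = 300.0  # dollars, taken from the user's instruction

def flights_to_alert(prices: dict, threshold: float = ALERT_THRESHOLD) -> list:
    # Return the routes whose current fare has dropped below the threshold,
    # sorted for a stable alert message.
    return sorted(route for route, price in prices.items() if price < threshold)

# Stubbed price snapshot standing in for a live fare lookup:
alerts = flights_to_alert({"SFO-JFK": 284.0, "SFO-LHR": 612.0, "SFO-ORD": 151.5})
```

The interesting part isn't the ten lines of logic; it's that the agent generates, schedules, and wires up the alerting itself from a one-sentence request.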
For most teams, the upside isn't yet worth the extra liability. Let Google, Anthropic, or others figure out the security model first.
OpenClaw exposes something the industry has been talking about for years but never delivered: AI that can handle ambiguous tasks, recover from failures, and find alternative approaches when the first attempt doesn't work.
For our clients building AI-powered applications, the shift is already visible: Lindy, Naden, Gemini in Gmail. VC-funded agents are appearing with professional security guardrails that exceed what open-source currently offers.
The opportunity: Build AI experiences that are useful AND safe. That's where the market is going.
Agentic AI is coming regardless. OpenClaw just made it impossible to ignore.
Need help building secure AI agents for your business? Contact our development team for guidance tailored to your specific requirements.