The Rise, Fall, and Rebirth of Clawdbot in 72 Hours
How a Trademark Dispute, Security Scandals, and Crypto Scammers Forced the Internet's Hottest AI Project to Reinvent Itself
TL;DR
- Clawdbot, a viral open-source AI assistant with 70K+ GitHub stars, was forced to rebrand to Moltbot after Anthropic issued a trademark request over the name's similarity to "Claude"
- During the rebrand, a ~10-second window allowed crypto scammers to hijack the project's GitHub and X accounts, enabling a $16M pump-and-dump token scam
- Security researchers simultaneously discovered hundreds of misconfigured instances exposing API keys, credentials, and shell access—with one demo showing email exfiltration in 5 minutes
- The project survived: Moltbot now has 75K+ stars and an active community, but the saga offers hard lessons for maintainers, AI companies, and users
Timeline of Events
| Date | Event |
|---|---|
| Late 2025 | Clawdbot launches, hits 9K stars in 24 hours |
| Mid-Jan 2026 | Project crosses 60K stars; Mac Minis sell out |
| Jan 27, 2026 | Anthropic issues trademark request |
| Jan 27, 2026 | Account migration fails; handles hijacked in ~10 seconds |
| Jan 27, 2026 | Fake $CLAWD token peaks at $16M market cap |
| Jan 27–28, 2026 | Security researchers publish vulnerability reports |
| Jan 28, 2026 | Moltbot rebrand stabilizes; community rallies |
Status as of Jan 28, 2026: Moltbot operational at molt.bot, legacy clawdbot command works as compatibility shim, hijacked accounts still under dispute.
In late January 2026, Clawdbot was the most talked-about project in the AI developer community. With over 70,000 GitHub stars, reports of Mac Minis selling out as developers rushed to set up dedicated AI machines, and tech luminaries singing its praises, the open-source personal AI assistant had achieved something rare: genuine viral momentum built on actual utility.
Then, in the span of 72 hours, everything changed.
A trademark dispute with Anthropic. Account hijackings by crypto scammers. A $16 million pump-and-dump scheme. Security vulnerabilities exposed to the world. And ultimately, a forced rebrand that would test whether a project could survive losing its identity at the peak of its popularity.
This is the complete story of how Clawdbot became Moltbot and what it reveals about the fragile ecosystem of open-source AI development.
Part I: The Meteoric Rise
From Side Project to Phenomenon
Clawdbot was created by Peter Steinberger (@steipete), an Austrian developer with serious credentials. He founded PSPDFKit, a PDF SDK company, and sold it to Insight Partners for approximately €100 million. When Steinberger builds something, people pay attention.
What he built was essentially "Claude with hands"—an AI agent that didn't just chat, but actually did things. Unlike the chatbots that dominated the AI landscape, Clawdbot was a local-first agent: it runs on your own machine but can call cloud AI models (like Claude or GPT) for reasoning while keeping execution, memory, and data under your control.
Its capabilities felt genuinely futuristic:
- Browser control: Navigate websites, fill forms, extract data autonomously
- Shell command execution: Run scripts, manage files, automate system tasks
- File system operations: Read, write, and organize documents
- Multi-platform messaging: Integrate with WhatsApp, Telegram, Discord, Slack, Signal, and iMessage
- Persistent memory: Remember context across conversations, learning user preferences over time
- Proactive notifications: Reach out with reminders, updates, and completed tasks
The project launched in late 2025 and immediately caught fire. It hit 9,000 GitHub stars within its first 24 hours—a remarkable achievement that signaled something special was happening.
Stats Snapshot
As of January 28, 2026, ~10:00 AM PT:
| Metric | Value |
|---|---|
| GitHub Stars | ~75,300 |
| Forks | ~9,700 |
| Discord Members | 8,900+ |
| Contributors | 200+ |
| Time to 60K stars | ~3 weeks |
Source: github.com/moltbot/moltbot
The velocity of adoption suggested not just curiosity, but genuine utility. People weren't just starring the repo—they were deploying it, configuring it, and integrating it into their daily workflows. Clawdbot became one of the fastest-growing open-source projects in recent GitHub history.
The Endorsements Poured In
The tech establishment took notice:
- Andrej Karpathy, former Tesla AI director and OpenAI founding member, praised the project publicly
- David Sacks, tech investor and former PayPal COO, tweeted about it to his massive following
- MacStories called it "the future of personal AI assistants"
- Cloudflare stock reportedly surged 14% in premarket trading as buzz around self-hosted AI agents reinvigorated investor enthusiasm for infrastructure plays (Barron's)
Perhaps most telling: reports emerged of Mac Minis selling out as developers rushed to set up dedicated Clawdbot machines. Users were buying hardware specifically to run this software.
Why It Resonated
Clawdbot succeeded because it addressed a genuine gap in the market. Existing AI assistants were either:
- Cloud-dependent: Your data goes to corporate servers, raising privacy concerns
- Conversational only: They could talk but couldn't act
- Walled gardens: Locked to specific platforms and ecosystems
Clawdbot offered something different: a local-first, action-capable AI that worked across the apps people actually used. It represented digital sovereignty—the ability to have powerful AI assistance without surrendering control of your data.
For many users, it felt like an "iPhone moment" for personal AI. The promise of having a genuinely useful digital assistant, one that could manage email, automate tasks, and integrate with existing workflows, had finally become tangible.
And critically, many users configured Clawdbot to use Anthropic's Claude as its AI brain, taking advantage of Claude's strong reasoning capabilities while keeping the execution layer local.
The irony of what happened next would not be lost on the community.
Part II: The Cease and Desist
The Trademark Request
On January 27, 2026, Peter Steinberger made an announcement that sent ripples through the community: Anthropic, the $18 billion AI company behind Claude, had issued a trademark request forcing a name change.
The issue? "Clawd" sounded too similar to "Claude."
Steinberger handled the news with characteristic grace:
"Anthropic asked us to change our name (trademark stuff), and honestly? 'Molt' fits perfectly. It's what lobsters do to grow."
The New Identity
The rebranding was thoughtfully executed:
| Old | New |
|---|---|
| Clawdbot | Moltbot |
| Clawd | Molty |
| clawd.bot | molt.bot |
@clawdbot | @moltbot |
The metaphor was apt: lobsters molt to grow, shedding their old exoskeleton when it becomes too confining. The project was doing the same—evolving beyond its original identity to become something bigger.
"Same lobster soul, new shell," as the community put it.
Why Anthropic Had to Act
It's worth understanding Anthropic's position here. Trademark law essentially requires companies to actively defend their marks or risk losing them. If Anthropic allowed "Clawd" to persist unchallenged, it could weaken their ability to protect "Claude" against future infringement—including from bad actors with genuinely malicious intent.
The phonetic similarity was undeniable. From a brand protection standpoint, a popular developer tool named "Clawd" that frequently appears in conversations about "Claude" creates exactly the kind of consumer confusion trademark law is designed to prevent. Anthropic's legal team likely had little choice but to act, regardless of the project's good intentions or its role in driving Claude API usage.
This doesn't make the outcome less frustrating for the community—but it does make it more understandable.
The Community Reaction
Not everyone accepted the trademark explanation gracefully. Critics pointed out that:
- The project was less than 3 months old
- It had 60,000+ stars and significant developer goodwill
- It was actively driving revenue to Anthropic through API usage
- The phonetic similarity appeared to be playful homage, not an attempt at brand confusion
- The forced rename directly caused chaos (as we'll see)
DHH (David Heinemeier Hansson), creator of Ruby on Rails and a prominent voice in the developer community, characterized Anthropic's recent moves as "customer hostile."
Some developers who had been enthusiastic Claude advocates began questioning whether Anthropic was the right platform to build on. OpenAI's Codex CLI was Apache 2.0 licensed. Google and Meta offered open-weight models. The sentiment was shifting.
But the trademark dispute was only the beginning of the chaos.
Part III: The 10-Second Disaster
The Fatal Mistake
What should have been a straightforward rebrand turned into a catastrophe due to a procedural error.
During the transition, Steinberger attempted to rename the GitHub organization and Twitter/X handle simultaneously. The plan was simple: release the old names, immediately claim the new ones.
It didn't work that way.
In the brief window of approximately 10 seconds between releasing the old handles and claiming the new ones, crypto scammers seized both accounts.
Steinberger explained what happened:
"Had to rename our accounts for trademark stuff and messed up the GitHub rename and the X rename got snatched by crypto shills."
"It wasn't hacked, I messed up the rename and my old name was snatched in 10 seconds."
"Because it's only that community that harasses me on all channels and they were already waiting."
The Attackers Were Prepared
This wasn't opportunistic. The scammers had been monitoring the situation, waiting for exactly this moment. Popular tech projects are constant targets for impersonation scams, and a viral project undergoing a forced rebrand was a perfect target.
Within moments, the hijacked @clawdbot accounts were pumping fake crypto schemes to tens of thousands of followers who didn't know about the rebrand. The accounts looked legitimate—same followers, same history—but were now controlled by bad actors.
Steinberger reached out to GitHub and Twitter/X for help recovering the accounts. Meanwhile, "official" announcements were going out advertising fake token launches, airdrops, and investment opportunities.
Handle Migration Playbook: How to Avoid This
The 10-second hijacking was preventable. If you're ever migrating project handles, here's the procedure Steinberger wishes he'd followed:
-
Secure new handles first. Create and verify
@newnameaccounts on all platforms before touching the old ones. -
Never release old handles. On platforms that allow it, rename directly rather than releasing and reclaiming. If you must release, do it as the final step.
-
Stagger the migration. Don't do GitHub, X, Discord, and domain changes simultaneously. Do them sequentially with verification between each.
-
Pre-announce the change. Post from the old accounts that a rename is coming, what the new names will be, and that any other claims are fraudulent.
-
Monitor for squatters. Set up alerts for your old handle names. Have takedown requests ready to file immediately.
-
Coordinate with platforms. For high-profile projects, reach out to platform trust & safety teams before the migration to flag potential abuse.
The $16 Million Scam
The account hijacking enabled something worse: a full-scale crypto fraud.
Within hours of the rename chaos, fake $CLAWD tokens appeared on Solana-based meme coin platforms. Speculators, seeing what appeared to be announcements from official channels, FOMO'd in. At its peak, the token hit a $16 million market cap.
Then Steinberger issued his denial:
"To all crypto folks: Please stop pinging me, stop harassing me. I will never do a coin. Any project that lists me as coin owner is a SCAM. No, I will not accept fees. You are actively damaging the project."
The token collapsed immediately—from roughly $8 million market cap to under $800,000 within hours. Late buyers were left with worthless tokens. The scammers, who had positioned themselves early and extracted liquidity at the peak, walked away with substantial profits.
The saga became a case study in how quickly crypto opportunists can exploit mainstream tech moments.
Part IV: The Security Reckoning
Vulnerabilities Exposed
While the branding chaos dominated headlines, security researchers were discovering serious problems with how many users had deployed Clawdbot.
SlowMist, a blockchain security firm, published a report identifying multiple issues:
- Unauthenticated instances publicly accessible on the internet
- Authentication bypass vulnerabilities when the gateway was improperly configured behind a reverse proxy
- Code flaws that could potentially lead to credential theft
- Remote code execution possibilities in certain configurations
Hundreds of Exposed Instances
Security researcher Jamieson O'Reilly demonstrated the scope of the problem. Using Shodan (a search engine for internet-connected devices), he searched for "Clawdbot Control" and found hundreds of exposed instances containing:
- Complete API keys and OAuth tokens
- Bot tokens for messaging platforms (Telegram, Discord, Slack)
- Full conversation histories
- The ability to send messages as users
- Command execution capabilities on host machines
These weren't theoretical vulnerabilities—they were live, exploitable exposures affecting real users.
A Shodan search for 'Clawdbot' on January 27, 2026 returned 994 exposed instances. Top countries: United States (264), Germany (221), UAE (145). Top hosting providers: Hetzner (519), DigitalOcean (174).
The 5-Minute Attack
Researcher Matvey Kukuy conducted a demonstration that illustrated the real-world risk. He sent a malicious email containing embedded prompt injection to a vulnerable Moltbot instance.
The attack chain was elegant and terrifying:
- Malicious email arrives in user's inbox
- Moltbot reads the email as part of normal operation
- Hidden instructions in the email trick the AI into believing they're legitimate commands
- The AI executes the attacker's instructions
- User's last five emails are forwarded to an attacker-controlled address
Total time: 5 minutes.
The Root Problem
The security issues stemmed from a fundamental tension in Clawdbot's design. The tool's power came from its deep system access—browser control, shell commands, file operations. But that same access created attack surface.
Many users deployed Clawdbot without fully understanding the security implications:
- Running on primary machines with access to sensitive accounts
- Exposing control interfaces to the public internet
- Failing to implement IP whitelisting or proper authentication
- Granting broad permissions without sandboxing
The Hacker News community's consensus was grim: "It's terrifying. No directory sandboxing."
The complexity of secure deployment required technical competence that many enthusiastic early adopters lacked. The tool that promised to democratize AI assistance was, in practice, creating security risks for users who couldn't properly configure it.
If You're Running Moltbot: Security Checklist
Given the vulnerabilities exposed, here's what you should do immediately:
-
Don't expose the UI to the public internet. The control interface should never be directly accessible from outside your network.
-
Put it behind authentication + VPN/tunnel. Use Tailscale, WireGuard, or SSH tunnels. The project supports Tailscale Serve/Funnel natively.
-
Use a dedicated machine and accounts. Don't run Moltbot on your primary laptop with access to your main email, banking, and crypto wallets.
-
Scope API keys and rotate regularly. Don't give Moltbot your all-access tokens. Create limited-scope keys and rotate them.
-
Review and confirm dangerous tool permissions. Check which tools have shell access, file write access, and messaging capabilities. Disable what you don't need.
-
Enable logging and alerts. Monitor what your instance is doing. Set up notifications for unusual activity.
-
Read the hardening docs. The project's Security documentation and SECURITY.md have specific guidance. Follow it.
Part V: Regulation Is Coming for Agents
Why This Matters Beyond Clawdbot
The Clawdbot situation played out against a backdrop of increasing regulatory attention on AI tools—and agentic AI with deep system access is squarely in the crosshairs.
The Take It Down Act, signed into law in May 2025 with compliance timelines extending into May 2026, targets AI-generated content that could exploit individuals' likenesses (The Verge). While the law focuses on non-consensual intimate imagery, it signals broader regulatory willingness to impose requirements on AI systems that handle personal data.
For tools like Moltbot—AI assistants with access to emails, files, browsing history, and messaging platforms—the regulatory trajectory is clear: enforcement timelines are real, and agentic tools touching personal data will face scrutiny.
The Consent Problem
The project raised fundamental questions about user consent in the age of agentic AI:
- When an AI reads your emails to help manage your inbox, what are the privacy implications?
- If the AI can execute shell commands, what prevents it from being weaponized?
- How do users meaningfully consent to capabilities they may not fully understand?
- What happens when AI assistants interact with services that have their own terms of service?
These weren't theoretical concerns—they were practical challenges that security researchers had demonstrated could be exploited. And they're exactly the kinds of issues regulators are beginning to address.
The Open Source Tension
The situation also illuminated tensions within the open-source AI ecosystem:
Commercial interests vs. community development: Anthropic's trademark enforcement, while legally necessary, frustrated developers who saw themselves as building on and promoting Claude, not competing with it.
Innovation vs. security: The rapid pace of Clawdbot's development had prioritized features over security hardening, a common pattern in viral open-source projects.
Accessibility vs. safety: Making powerful AI tools available to anyone also meant making them available to users who might not implement them safely.
Part VI: The Rebirth
Moltbot Emerges
Despite everything—the forced rebrand, the account hijackings, the crypto scams, the security disclosures—the project survived.
The transition to Moltbot was more than cosmetic. The development team used the moment to:
- Address security vulnerabilities identified by researchers
- Improve documentation around secure deployment
- Clarify the security model and user responsibilities
- Strengthen authentication and access controls
- Engage more directly with the security research community
The core functionalities remained unchanged: browser control, shell execution, file operations, multi-platform messaging, persistent memory. But the framing shifted toward greater emphasis on responsible deployment.
Community Response
User response to the transition was notably enthusiastic. Rather than abandoning the project over the chaos, the community rallied:
- GitHub stars continued climbing past 75,000
- The Discord community remained active with 8,900+ members
- New skills and integrations continued being developed
- Contributors kept submitting pull requests
Many users viewed the crisis as growing pains for a genuinely innovative project. The "molting" metaphor resonated—the project was shedding an old shell to grow into something stronger.
Current Status
As of January 28, 2026:
| Metric | Value |
|---|---|
| GitHub Stars | ~75,300 |
| Forks | ~9,700 |
| Discord Members | 8,900+ |
| Platforms | macOS, Windows (WSL2), Linux, iOS, Android |
| Integrations | 50+ services |
| License | MIT |
Source: github.com/moltbot/moltbot
The legacy clawdbot command still works as a compatibility shim. Migration is straightforward with npm install -g moltbot@latest. The documentation has been comprehensively updated at docs.molt.bot.
Moltbot is currently one of the most visible open-source personal AI assistants by GitHub stars and community activity—though the space is evolving rapidly.
Part VII: Lessons and Implications
For Open Source Maintainers
Trademark disputes can come from anywhere. Even playful homages to commercial products can trigger legal responses. Budget time and planning for potential branding challenges.
Account migrations are high-risk moments. The 10-second hijacking was preventable with better procedures. When transitioning identities, secure the new handles before releasing the old ones. (See the playbook in Part III.)
Crypto scammers monitor popular projects. Any viral tech project is a target for impersonation and fraud. Be prepared for bad actors to exploit your brand.
Security defaults matter. Users will deploy your software in insecure configurations. Design with that assumption. Make the secure path the easy path.
Viral growth outpaces documentation. When adoption explodes, many users won't read security guidelines. Build guardrails into the software itself.
For AI Companies
Your enthusiasts are your ecosystem. Indie developers building creative projects on your platform are evangelists, not competitors. Legal actions against them have ecosystem consequences.
There's a playbook for community building. Google didn't sue Android developers. Apple fostered the App Store ecosystem. Aggressive trademark enforcement against community projects—even when legally required—sends a chilling signal. Consider timing, communication, and support during transitions.
Consider the downstream effects. Anthropic's trademark request was legally defensible but triggered chaos that reflected on the broader Claude ecosystem. Timing and execution matter.
For Users
Self-hosted AI requires security competence. Tools like Moltbot offer power and privacy but demand responsible deployment. Don't run them on primary machines with access to sensitive accounts.
Use dedicated infrastructure. Separate machines, isolated accounts, strict IP whitelisting, and careful permission management are essential, not optional.
The security model is still immature. The infrastructure for secure self-hosted AI assistants is evolving rapidly. Stay current with security guidance and updates.
Understand what you're consenting to. An AI with access to your emails, files, and shell can be incredibly useful—and incredibly dangerous if compromised.
For the Ecosystem
Agentic AI is here. Tools that "actually do things" rather than just chat are becoming mainstream. The infrastructure for deploying them safely needs to catch up.
Digital sovereignty has costs. Running AI locally offers privacy and control but transfers security responsibility to users who may not be prepared.
72 hours can change everything. In the age of viral distribution and crypto scams, reputation and identity can be challenged overnight. Resilience requires planning.
Conclusion: Same Lobster Soul, New Shell
Peter Steinberger continues building. The Moltbot community continues growing. The vision of a genuinely useful personal AI assistant—local-first, privacy-respecting, action-capable—remains compelling.
The crisis revealed both the promise and peril of open-source AI development. A single developer with a compelling vision can build something that captures global attention. That same success attracts legal challenges, security scrutiny, and bad actors looking to exploit the moment.
Moltbot, née Clawdbot, represents something important: the future of personal AI assistance. It survived a trademark dispute, account hijackings, a $16 million crypto scam, and serious security disclosures—all in about 72 hours.
The project emerged transformed but intact. The community rallied rather than fled. The developer kept building rather than giving up.
Same lobster soul. New shell. Stronger than before.
Resources:
- Website: molt.bot
- Documentation: docs.molt.bot
- GitHub: github.com/moltbot/moltbot
- Discord: discord.gg/clawd
- Creator: @steipete
Further Reading:
- The Verge: Moltbot and the Rise of Local AI Agents
- Business Insider: Clawdbot Changes Name After Anthropic Trademark Request
- Barron's: Cloudflare Stock Jumps on Moltbot Buzz
The AI landscape continues evolving rapidly. For updates on Moltbot and the broader ecosystem of open-source AI tools, follow the project's official channels and follow us on x.com/EveryDevAI
Comments
Sign in to join the discussion.
The real problem nobody talks about: we're calling local‑first agents local even though they call out to Anthropic or OpenAI every time for an answer.
How can your code rely on the cloud and still be local AI?
Digital sovereignty ends as soon as Claude's API goes down or Anthropic changes its terms. Would love to see people build actually truly local tools and models instead.
Llama 3 70B runs fine on a Mac Studio, but it doesn't get funding because there's no API moat.
The article is good, just wish the industry stopped pretending that
it runs locally but needs cloud AIis a privacy win.