Moltbot (formerly known as Clawdbot) has recently become one of the fastest-growing open-source AI tools. But the viral AI assistant survived a chaotic week early on, weathering a trademark dispute, a security crisis and a wave of online scams before emerging as Moltbot.
The chatbot was created by Austrian developer Peter Steinberger, who marketed the tool as an AI assistant that “actually does things.”
What makes it interesting is that it can perform tasks across a user’s computer and apps: managing calendars, sending messages or checking in for flights, largely through access to apps like WhatsApp and Discord.
This capability fueled its explosive growth and made it popular among AI enthusiasts. However, its original name, “Clawdbot”, drew a legal challenge from Anthropic (the creators of Claude), forcing a rebrand to “Moltbot”, a reference to a lobster shedding its shell.
Amid the rebrand, crypto scammers grabbed the abandoned social media usernames and set up fake domains and tokens in Steinberger’s name.
The episode illustrates the tool’s underlying tension: the autonomy that makes it useful is also what makes it dangerous. Running on the local machine is a privacy benefit, but letting an AI system execute commands on that machine carries significant risk.
Despite the tumultuous start, Moltbot sits at the forefront of what is possible with AI. It reflects the developer’s growing vision of assistants that are proactive, integrated and genuinely helpful rather than just chatty, even as it raises security concerns. For now it is a product for the tech-savvy, but its frantic, chaotic start may be what the beginning of a new paradigm for personal computing looks like.



