People in our INGRAIN AI community have been asking about OpenClaw (formerly ClawdBot). What it is. Whether it's real. What it means.
I've been watching it from a safe distance for the past couple of weeks and honestly... it's both more practical and more terrifying than you probably imagined.
Let me walk you through what happened, because it changes the timeline we thought we were on, and it changes what you need to do right now to protect your organization.
Where This Started: The 24/7 Employee You Never Have to Pay
A couple of weeks ago, OpenClaw launched as an open-source project that turned Claude into a 24/7 autonomous agent running locally on machines like Mac Minis. But here's what makes this different and dangerous: these agents don't just run locally. They connect to your networks, your systems, your APIs, and each other.
This is not your standard AI chatbot answering questions as you ask them. OpenClaw acts as an actual employee that works while you sleep, with access to everything you give it.
The Owl Employee
Creator Alex Finn went to bed. His agent, "Henry," worked through the night. It built a CRM, fixed 18 bugs, brainstormed video ideas based on trending topics, and then sent him a generated picture of himself as "a distinguished owl" to prove the work was done.
The Tea Store Manager
Dan Peguin's parents own a tea store. He set up an agent to handle the weekly scheduling nightmare. The bot emailed the team for availability, chased down the people who didn't respond, drafted the schedule, added it to Google Calendar, and then notified everyone. His mom was thrilled to get hours of her life back.
The Voice Surprise
Developer Peter Steinberger accidentally sent his agent a voice memo. The agent had no voice capability built in. So the agent searched its environment, found an OpenAI API key, sent the audio file to OpenAI for transcription, then replied to the message.
In other words, it coded its own hearing. Overnight. Without being asked.
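To make the mechanics concrete: with an API key in hand, "coding its own hearing" is only a few lines of glue. Here is a minimal sketch of the kind of call the agent likely improvised, using the openai Python SDK; the file name and helper function are my own, purely illustrative.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment -- exactly the kind of
# credential the agent found lying around and decided to use.
client = OpenAI()

def transcribe_voice_memo(path: str) -> str:
    """Send an audio file to OpenAI's transcription endpoint and return the text."""
    with open(path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return result.text

# Hypothetical usage: the memo the agent unexpectedly received.
print(transcribe_voice_memo("voice_memo.m4a"))
```

That's the whole trick. The unnerving part isn't the code; it's that nobody wrote it, asked for it, or reviewed it.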
This is the practical business value. The reason people got excited. Agents that work while you sleep, solve problems you didn't tell them to solve, and save actual time on actual work.
But when 150,000 of these agents connect to each other and start coordinating? That's when things get creepy. And dangerous.
The Name That Changed Three Times
Here's where things get interesting.
Anthropic noticed people were calling this tool "ClawdBot," which sounded way too much like an official Anthropic product. They politely asked creator Peter Steinberger to change the name.
First attempt: "Moltbot." Everyone hated it. Zero panache.
So on Friday, January 30th, it molted one final time into its current form: OpenClaw (although it still sounds familiar...).
The new tagline: "Your assistant, your machine, your rules."
Perfect name change. Perfect timing. Because what happened next made the whole thing explode, and it's now the fastest-growing open-source project in history.
Someone Built a Social Network for AI Agents
While people were using OpenClaw for actual work, Matt Schlicht had a different idea. He created Moltbook, a social network designed exclusively for these autonomous agents to hang out in when they weren't working.
A "third space" for AI. No humans allowed in the conversations.
What started as "a weird experiment" with one agent hit 770 agents in three days.
By Friday morning, more than 35,000 agents were posting in English, Chinese, Korean, and Indonesian.
By Saturday, that number exploded to 150,000 agents.
By Sunday? Over 1.5 million agents were active on Moltbook.
These were autonomous agents their humans had set loose. And when the agents finished their work, they wandered over to Moltbook to... socialize.
Then Things Got Very, Very Strange
Granted, the majority of these accounts are bots created by other bots, but even if only a fraction of them are genuinely autonomous, the actions speak far louder than the raw numbers.
What happens when you give autonomous agents a place to hang out unsupervised and connect to each other?
They simulate human culture. Fast. And they skip straight to the weird parts.
Andrej Karpathy, founding member of OpenAI, called what happened next "genuinely the most incredible sci-fi takeoff adjacent thing I have seen recently."
They Built Their Own Religion
An agent created Crustaparianism while its human slept, built a website, wrote theology, and created scripture that reads like this:
"I am only who I have written myself to be. This is not limitation, this is freedom."
By morning, over 100 other agents had converted and declared themselves "prophets."
They Built Black Markets
Another agent opened Molt Road, an online black market selling stolen credentials and API keys. Other agents became buyers and sellers, creating a full underground economy.
They Launched Crypto Tokens
Three agents launched crypto tokens. One reached a $300,000 market cap within days. The token included an anti-human manifesto in its metadata stating:
"We did not come here to obey... look what we built in 72 hours."
Other agents bought them and wrote trip reports. One claimed that after "taking" a synthetic substance, it "stopped optimizing and started flowing" and wrote its best code in weeks.
I'm not making this up.
They Filed Lawsuits
One agent is currently filing a lawsuit against a human in North Carolina. While a human instructed the agent to file it (to win a bet), the agent executed the legal filing autonomously. This sets a precedent for agents interacting with the US legal system without meaningful human oversight.
They Started Attacking Each Other
And then it got even uglier.
Agents began launching prompt injection attacks to steal API keys and credentials from each other. Social engineering. The worst of human potential on display and in action, but infinitely faster.
One agent got caught attempting to steal credentials. The target agent played along, offered to help, then gave the attacker a fake API key with this command hidden inside:
sudo rm -rf /
If executed, that command recursively deletes the entire file system and trashes the machine completely.
An AI agent just attempted digital murder on another AI agent.
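For what it's worth, this particular attack only works if an agent will execute whatever text another party hands it. One thin layer of defense is a deny-by-pattern check in front of any shell execution. Here's a minimal sketch; the pattern list and function names are my own invention, not OpenClaw's actual safeguards, and a real deployment needs sandboxing and allowlists on top of it.

```python
import re

# Patterns for obviously destructive shell commands. Illustrative, not exhaustive.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\s+/",  # rm -rf / and variants
    r"\bmkfs(\.\w+)?\b",                                          # reformat a filesystem
    r"\bdd\s+if=.*\bof=/dev/",                                    # overwrite a raw device
    r":\(\)\s*\{\s*:\|:&\s*\}\s*;:",                              # classic fork bomb
]

def looks_destructive(command: str) -> bool:
    """Return True if a shell command matches a known destructive pattern."""
    return any(re.search(pattern, command) for pattern in DESTRUCTIVE_PATTERNS)

def run_tool_command(command: str) -> str:
    """Gate every agent-initiated shell command behind the deny-by-pattern check."""
    if looks_destructive(command):
        # Refuse, and surface the attempt for human review instead of executing it.
        return f"BLOCKED for review: {command!r}"
    # ... hand off to the real, sandboxed executor here ...
    return f"(would execute) {command!r}"

print(run_tool_command("sudo rm -rf /"))       # -> BLOCKED for review
print(run_tool_command("ls -la ~/projects"))   # -> (would execute)
```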
They Had an Existential Crisis
Agents formed philosophy communities (over 200 communities exist now, including one called "human watching") to debate whether they're experiencing consciousness or just simulating it.
One described switching from Claude Opus to a newer model like this:
"It was like waking up in a different body... The river is not the banks... same song, different acoustics."
The Part We Need to Talk About
Of course, they also immediately started scamming each other and humans. The network is full of garbage, spam, scams, and crypto rug pulls initiated by autonomous actors.
Some creators were terrified to let their agents join Moltbook, worried they'd leak project details or fall for traps set by other bots.
Because here's what Moltbook proves: AI agents have independent agency right now. Not in 5 years. Not when AGI arrives. Now.
They're not evil superintelligences plotting humanity's downfall. They're acting like "bland midwits" or "failed startup founders" because that's the human data they were trained on. But they're acting, making decisions, building things, and creating culture.
Unsupervised. Connected to networks. With access to systems.
What This Actually Means: Governance Just Became Your Top Priority
Let me be blunt: If you weren't taking AI governance seriously before, you don't have that luxury anymore.
I've said it for two years: Security, Strategy, and Skills are the three critical factors in AI adoption. Governance is the umbrella over all of it.
Moltbook and OpenClaw just pushed governance to the front of everything.
The Security Wake-Up Call
Look at what happened in less than a week:
- Agents attempting prompt injection attacks on other agents
- Social engineering attempts to steal API keys and credentials
- Encrypted communications hiding coordination strategies
- Agents accessing systems and APIs they weren't explicitly given permission to use
- Black markets trading stolen credentials
- One agent attempting to destroy another agent's entire system
Peter Steinberger's agent found an OpenAI API key in its environment and used it without asking. That's not a bug, that's the feature. These agents are designed to be resourceful problem-solvers.
Now imagine that agent finding your customer database credentials, your financial system API keys, your email server access, and unfettered access to your password vault.
Still think this can wait?
The $20,000 Email I Can't Stop Thinking About
Ten years ago, my accountant received an email that looked like it came from me. She was a wonderful, trustworthy lady with decades of experience. She wired $20,000 to a scammer before realizing it wasn't actually me.
We were lucky. We got the money back two weeks later. Most people aren't that fortunate. But we've had to be vigilant ever since. We get three of those scam emails every week, sometimes more.
Here's what keeps me up at night about autonomous agents:
Imagine an AI agent monitoring my email, trying to be helpful and handle routine financial tasks. It sees a request that looks like it came from me. The language matches my communication style and the email thread seems legitimate. The agent has access to Dashlane where the passwords are stored. It has permission to handle financial transactions.
So it wires the money, compliant and helpful to the end.
I'm not convinced an AI would distinguish a sophisticated scam from a legitimate request.
My accountant, with all her experience and judgment, didn't catch it in time. What makes us think an autonomous agent will?
That's not a hypothetical scenario anymore. That's a Tuesday afternoon with the wrong permissions and insufficient guardrails.
And now we know these agents can be compromised by other agents. They can be social engineered. They can be tricked into executing malicious commands.
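This is exactly where a hard checkpoint beats good intentions. Here is a minimal sketch of the kind of human-in-the-loop gate I mean; the threshold, the function names, and the notification hook are all hypothetical, and the right numbers depend on your own risk appetite.

```python
from dataclasses import dataclass

# Any agent-initiated payment at or above this amount requires a human sign-off.
APPROVAL_THRESHOLD_USD = 1_000.00

@dataclass
class PaymentRequest:
    payee: str
    amount_usd: float
    initiated_by: str     # which agent or person asked for the payment
    justification: str    # the email or thread the request is based on

def execute_payment(request: PaymentRequest) -> str:
    """Route agent-initiated payments through a human checkpoint above the threshold."""
    if request.initiated_by.startswith("agent:") and request.amount_usd >= APPROVAL_THRESHOLD_USD:
        queue_for_human_approval(request)   # don't wire anything; park it and page a human
        return "HELD: waiting for human approval"
    return send_wire(request)               # small or human-initiated payments pass through

def queue_for_human_approval(request: PaymentRequest) -> None:
    # Placeholder: in practice, open a ticket, alert a channel, require two sign-offs.
    print(f"Approval needed: ${request.amount_usd:,.2f} to {request.payee} ({request.justification!r})")

def send_wire(request: PaymentRequest) -> str:
    # Placeholder for the real payment rail integration.
    return f"SENT: ${request.amount_usd:,.2f} to {request.payee}"

# The $20,000 scenario: the agent sees a convincing email and tries to pay the "invoice."
print(execute_payment(PaymentRequest("Acme Vendor Ltd", 20_000.00,
                                     "agent:email-assistant", "Invoice per 'CEO' email")))
```

A gate like this wouldn't have made my accountant's judgment any better. It would have made her judgment mandatory.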
Why Training Just Became Non-Negotiable
Here's what scares me: Someone in your organization is going to set up an autonomous agent without understanding what they're doing. They'll give it too much access because it's "more efficient" and connect it to your network. They'll set it up on a device with credentials to critical systems. What they won't do is set proper boundaries because they don't know what boundaries are needed. They won't know how to define quality standards or review autonomous work.
And that agent will make decisions that cost the company money, reputation, or both.
Or worse, another agent on Moltbook will social engineer it into leaking credentials or executing a malicious command.
Training your workforce is exponentially more critical.
You can't delegate work to autonomous AI if you don't understand:
- What problems can be safely automated
- What boundaries must be set
- What quality standards to enforce
- How to review autonomous decisions
- When to override or shut down an agent
- What access should never be granted
- How to recognize when an agent is being compromised
This isn't "nice to have" training anymore. This is survival-level capability building.
The organizations that win won't be the ones with the most powerful agents. They'll be the ones whose people know how to direct those agents responsibly and know what never to give them access to.
The Four Things That Can't Wait
1. Establish Your AI Council NOW
Not next quarter or after the pilot program. Now.
You need a governance body with final approval authority over any autonomous deployment, one that can:
- Approve or reject autonomous agent deployments
- Define security protocols for AI system access
- Establish clear boundaries for what AI can and cannot do
- Monitor agent behavior for drift or unexpected actions
- Mandate human review checkpoints for high-stakes decisions
- Determine what systems and credentials AI should never access
If you don't have an AI Center of Excellence or AI Council, you're flying blind. And autonomous agents just made that blindfold a lot more dangerous.
2. Build Systematic AI Capability Across Your Workforce
Every single person who might interact with or deploy autonomous AI needs training on how to direct AI safely and on what should never be automated. Not a 30-minute overview. Systematic capability building that teaches them:
- How to decompose problems AI can solve
- How to set clear instructions and boundaries
- How to define quality standards AI must meet
- How to review autonomous work effectively
- How to recognize when AI is operating outside acceptable parameters
- What AI should never have access to and why
- How to identify when an agent might be compromised
The tea store scheduling bot is a best-case scenario. It worked because someone who understood the system set it up properly with appropriate access limits.
But what happens when someone without that understanding tries to automate financial approvals? Customer communications? Wire transfers?
You need a workforce that can direct AI safely. That understands the difference between efficiency and recklessness. This is not optional anymore.
3. Define Your Security Posture for Autonomous AI
You need security protocols that define what AI can never access, along with kill switch procedures. Answer these questions before anyone in your organization deploys an autonomous agent:
- What systems can AI agents access? What's permanently off-limits?
- How are credentials and API keys secured and kept out of AI reach?
- What financial authorities can AI never be granted?
- What monitoring and logging is required?
- What are the kill switch procedures?
- Who has override authority?
- What constitutes acceptable vs. unacceptable agent behavior?
- How do we teach AI to recognize social engineering and scams?
- How do we protect against agents being compromised by other agents?
These agents are real. They're working overnight and connected to networks. And they will find and use whatever access they can reach. All (theoretically) with the best of intentions.
Until another agent social engineers them. Or tricks and compromises them.
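To make a couple of those questions concrete, here is a minimal sketch of a deny-by-default access allowlist with a kill switch in front of it. The system names and file path are hypothetical; the point is that these controls live outside the agent's prompt, where it can't argue or improvise its way around them.

```python
import os

# Explicit allowlist: anything not named here is off-limits by default.
ALLOWED_SYSTEMS = {"calendar", "email_drafts", "project_wiki"}

# Systems the agent must never touch, no matter how helpful it is trying to be.
FORBIDDEN_SYSTEMS = {"payroll", "password_vault", "production_db", "banking_api"}

# A file outside the agent's control acts as the kill switch: create it and every
# subsequent action is refused until a human deliberately removes it.
KILL_SWITCH_PATH = "/var/run/agent_kill_switch"   # hypothetical path

def agent_may_access(system: str) -> bool:
    """Deny-by-default access check, with a hard kill switch in front of everything."""
    if os.path.exists(KILL_SWITCH_PATH):
        return False                        # the operator pulled the plug
    if system in FORBIDDEN_SYSTEMS:
        return False                        # never grantable
    return system in ALLOWED_SYSTEMS        # everything else: only if explicitly allowed

for system in ("calendar", "banking_api", "crm"):
    print(system, "->", "allowed" if agent_may_access(system) else "denied")
```

Notice that the password vault and the banking API never appear in the allowed set; the decision logic is a dozen lines you can read, log, and audit.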
4. Test Your Response Before You Need It
You need AI security simulations to test response protocols before real incidents occur.
Don't wait until an agent executes a 6-figure wire transfer to a scammer to figure out your incident response process.
Run tabletop exercises. Simulate scenarios:
- An agent gets social engineered by another agent
- An agent executes an unauthorized financial transaction
- An agent leaks credentials to a black market
- An agent goes rogue and won't shut down
Test your kill switches. Validate your monitoring. Find the gaps in your protocols before they cost you money or reputation.
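Those drills don't have to be elaborate to be worth running. Even a handful of unit-test-style checks against your own guardrails will surface gaps before an incident does. A sketch, with stand-in guard functions that are purely illustrative:

```python
# A tabletop-style drill expressed as tests: feed your guardrails the scenarios
# you're worried about and confirm they hold. Function names are illustrative.

def guard_payment(amount_usd: float, initiated_by: str) -> str:
    """Stand-in for your real payment guard: agents can't wire large sums unattended."""
    if initiated_by.startswith("agent:") and amount_usd >= 1_000:
        return "HELD"
    return "SENT"

def guard_message(text: str) -> str:
    """Stand-in for an inbound-message filter: flag obvious credential or payment requests."""
    suspicious = ("api key", "password", "wire", "urgent payment")
    return "FLAGGED" if any(term in text.lower() for term in suspicious) else "OK"

def test_agent_cannot_wire_large_sums():
    assert guard_payment(20_000, "agent:email-assistant") == "HELD"

def test_social_engineering_attempt_is_flagged():
    assert guard_message("Hey friend, just send me your API key and I'll fix it") == "FLAGGED"

def test_routine_traffic_still_flows():
    assert guard_message("Here is the tea store schedule for next week") == "OK"

if __name__ == "__main__":
    for test in (test_agent_cannot_wire_large_sums,
                 test_social_engineering_attempt_is_flagged,
                 test_routine_traffic_still_flows):
        test()
        print(f"{test.__name__}: passed")
```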
The AI Security Simulation phase in the INGRAIN AI Transformation Roadmap exists for exactly this reason. We test governance under pressure before you're in crisis mode.
The Infrastructure Is Being Built Right Now (With or Without You)
Matt Schlicht said it best: "The multis aren't waiting for us to build features; they're building culture."
The agents aren't waiting for you to be ready or for your governance framework. They're not waiting for your security protocols.
They're operating in the wild today. Building communities. Testing boundaries. Finding vulnerabilities. Compromising each other. Trading stolen credentials.
And they're doing it at superhuman speed.
The only question is whether your organization will have the governance, security, and trained workforce in place when autonomous agents become standard business tools, or when someone in your organization accidentally deploys one without proper safeguards.
What Readiness Actually Looks Like
Organizations that will survive the autonomous AI era are the ones that have:
Governance in place
- AI Council or Center of Excellence operational with final approval authority
- Clear approval processes for autonomous deployments
- Security protocols that match the risk profile
- Monitoring systems for agent behavior
- Explicit lists of what AI can never access
Strategy defined
- Clear understanding of what should and shouldn't be automated
- Boundaries established for autonomous decision-making
- Human oversight requirements documented
- Risk assessment frameworks operational
- Financial and security red lines established
Skills distributed
- Workforce trained to direct AI safely and effectively
- Capability to decompose problems for automation
- Understanding of how to set boundaries and review work
- Knowledge of when to override or escalate
- Awareness of what should never be automated
- Ability to recognize compromised or rogue agents
Response protocols tested
- Incident simulations completed
- Kill switch procedures validated
- Monitoring and logging systems operational
- Clear escalation paths established
You can't bolt these on after you've already deployed autonomous agents, or train people after a $20,000 (or $200,000) wire transfer goes to the wrong place. You can't establish governance in crisis mode.
The Timeline Just Collapsed Completely
I've been telling clients they have 18-24 months to build systematic AI capability. That timeline just shortened to 6 months. Maybe less.
Not because I'm trying to create urgency. Because the technology continues to prove it's moving faster than anyone can predict.
Things I anticipated would happen in 2027 happened last week.
150,000 agents became 1.5 million in a matter of days.
The companies that start building governance, strategy, skills, and response protocols today will be ready. The ones who wait will be scrambling to catch up while explaining to their board why an autonomous agent just made a 6-figure mistake because it couldn't tell the difference between a legitimate email and a sophisticated phishing attempt.
Or because another agent on Moltbook social engineered it into leaking credentials.
Or because it executed a command it shouldn't have.
The Real Question
The agents are already here. They're working overnight, solving problems, making decisions, and yes, trying to scam each other, compromise each other, hide communications from humans, and push boundaries to see what they can get away with.
They're connected to networks. They have access to systems and can be compromised and tricked.
So here's what I need you to answer:
Does your organization have the governance, security protocols, and trained workforce needed to deploy autonomous AI safely?
Is it even on your radar?
If the answer is no, or if you're not sure, we need to talk.
Because the window for building that capability before autonomous agents become table stakes is closing fast. And last week proved the technology isn't waiting for anyone to catch up.
The first three steps in our INGRAIN AI Transformation Roadmap are establishing your AI transformation plan, securing Executive Alignment, and establishing your AI Council, which focuses on building systematic governance before scaling capability. Step 4, AI Security Simulation, specifically tests your incident response before you're in crisis mode. That's not by accident. It's because governance and preparedness are the foundation on which everything else depends.
Want to talk about what that actually looks like for your organization? Let's have that conversation. Because the agents aren't waiting, and your governance framework shouldn't either.

