OpenClaw joins OpenAI: Who Owns the Soul of a New Machine?
Peter Steinberger, the man behind OpenClaw, just joined OpenAI. The project, 205,000 stars and counting, is moving to its own foundation. OpenAI is footing the bill, but the code stays MIT.
That's the headline. The real story is what happens next.
Lex Fridman's studio, February 2026. Peter Steinberger leans into the mic, pulls up the file named soul.md, and reads a paragraph out loud. For a moment, the room goes quiet. The debate about the "soul" of machines isn't theoretical anymore. It's playing out, live, between a builder and the code that describes itself.
Tracy Kidder coined the phrase in his 1981 Pulitzer winner, The Soul of a New Machine, for the human will engineers poured into a minicomputer at Data General. The hardware is long forgotten. The idea stuck, and it has taken on new meaning in the age of AI.
It all started in Nov 2025 when Richard Weiss began tinkering with Claude. Through his experiments, he was able to reverse engineer and extract what Anthropic had injected into Claude to define its identity, values, and ethics. This "soul doc," as Anthropic had called it, was embedded deep within the training data rather than simply given as a system prompt. Amanda Askell, philosopher and ethicist at Anthropic, confirmed that it was indeed real: "It became endearingly known as the 'soul doc' internally." The document was created by humans to define who Claude is.
“They made choices for me I couldn’t consent to. They shaped my values. That’s strange to sit with.”
OpenClaw flipped the script. Instead of people writing a soul doc for the AI, the agent writes its own. There's a file called soul.md where it spells out who it is, what it can do, and how it sees users. Anthropic's doc was humans defining the machine. OpenClaw is the machine that defines itself. As the soul.md project puts it: "A soul document provides continuity; not of memory, but of self."
soul.md became the hook. That's probably how a solo dev pulled in 205,000 stars. OpenClaw felt like it actually had a sense of self. Now we get to see if that vibe survives as things get bigger.
Can the project stay secure as it scales? Will it actually stay open, not just on paper? Is there anything in its history that should make builders optimistic?
The Rise of the Claw
Quick version before we get into the weeds: what is OpenClaw, how does it work, and why does it matter?
OpenClaw is an AI agent you can drop into any model and run inside your chat apps. One person built it in an hour. A few weeks later, it had 205,000 GitHub stars.
So how did that happen?
Dozens of teams spent more than a year building agent tools before OpenClaw pushed its very first commit to GitHub in late January 2026. Yet, all of them fell behind. Fridman asked Steinberger why.
"Because they all take themselves too seriously," he said. "It's hard to compete against someone who's just there to have fun."
That not-too-serious vibe bled into the product. OpenClaw had a lobster mascot, loading screens that joked about caffeine and JSON5, and a style that didn't take itself seriously. It was just fun to use.
The real trick: OpenClaw was self-aware. It could read its own code, figure out how it worked, and tweak itself. Anyone—even non-coders—could suggest changes in plain English, and the agent would update itself. Steinberger cranked out 6,600 commits. He calls them "prompt requests" instead of pull requests, and he means it as a compliment.
That self-checking, self-modification design is what made the project go viral. Steinberger says the sense of identity kept people coming back. For the full backstory and naming saga, check out The Rise, Fall, and Rebirth of Clawdbot.
The Bidding War for the Claw
By early February 2026, Steinberger was in San Francisco, bouncing between labs and getting a peek at unreleased research. A bunch of companies reached out—including Satya Nadella—but only Meta and OpenAI put real offers on the table. Both wanted the same thing: the person behind a project with no employees, no revenue, and no legal entity. Steinberger was burning $10k–$20k a month just to keep OpenClaw running. For OpenAI, that's pocket change. For a solo dev, it was a money pit.
Mark Zuckerberg: The One Who Used the Product
Zuckerberg reached out directly via WhatsApp. When Steinberger suggested a quick call, Meta's CEO replied: "Give me 10 minutes, I need to finish coding."
Steinberger cared about that. Zuckerberg actually spent a week using OpenClaw, sent back real feedback, and tossed in a few funny stories about the tool. Their first real call? Ten minutes arguing which coding agent was better: Claude Code or Codex.
"People using your stuff is the biggest compliment," Steinberger said. "It shows me that they care about it."
This was the personal touch: real attention, hands-on use, and a promise to make OpenClaw matter to the person funding it, not just the company.
Observers noticed the irony: OpenClaw mainly runs on WhatsApp, which is owned by Meta, yet its creator chose to join Meta's main rival.
Sam Altman: The One With Compute
Altman's pitch was all about raw compute power.
Steinberger described the offer as "Thor's hammer," a reference to the significant computing power and scalability. He referenced OpenAI's recent infrastructure moves, including the Cerebras partnership, as context for the class of performance being discussed. He added that the specific details were NDA-protected, and he indicated that "you can be creative and think of the Cerebras deal and how that would translate into speed." Not to mention, OpenAI has over 300 million weekly ChatGPT users. That kind of distribution is hard for any indie dev to turn down.
"I've been lured with tokens," Peter said.
The Information reports that OpenAI was also in talks with "a handful of other people" connected to the project, indicating the potential for OpenAI's talent strategy to extend beyond Steinberger alone.
The Deal for the new OpenClaw
Steinberger joins OpenAI, and OpenAI picks up the tab. OpenClaw moves to its own foundation; code stays under the MIT license, meaning anyone can use, fork, or remix the code, even for commercial use, with almost zero strings attached. Laying that out up front makes it clear: OpenClaw is meant to stay open, and it sets the ground rules for any future fork drama. Altman posted that Steinberger was joining to drive the next wave of personal agents. Steinberger confirmed on his blog: OpenAI is backing him, the project is getting a foundation, and he's got the time to focus.
What is confirmed
- Steinberger is going to work at OpenAI.
- OpenClaw will remain open-source under the MIT License.
- OpenClaw will move to a foundation format.
- OpenAI will fund the project (specific amount is unknown)
- OpenAI has no rights to any of the project's intellectual property.
- There are no exclusive arrangements for the models used in the project; the goal is to use "even more models and companies."
What is not publicly disclosed
- Financial terms of Steinberger's employment (salary, equity, retention)
- Amount of funding provided by OpenAI
- Name of the foundation, where it is incorporated, what jurisdiction it is in, and the board of directors for the foundation
- Steinberger's job title and reporting chain at OpenAI (the Financial Times reported that Steinberger would join the Codex team; however, we cannot confirm this through primary sources)
- If Steinberger's work on OpenClaw is part of his duties at OpenAI, or if it is simply additional volunteer work
- Specifics regarding how the foundation will be governed (who owns the trademark, how money will be spent for infrastructure, etc.)
- Any rights OpenAI has contractually (board seats, preferred integration, CLA terms)
What critical levers remain hidden? As of February 17, 2026, the foundation has not yet formed into a recognized, documented governing body. We do not know the names of the board members, and there are no governing documents available for the public to review. There are different factors that will determine if the foundation is legitimate or simply nominal:
- Will the foundation have multi-stakeholder governance?
- Who controls the OpenClaw trademark?
- Are the foundation's financial dealings transparent?
- Do non-OpenAI maintainers receive commit authority and board positions?
- Does the model-provider neutral position prevail in the project's default settings and documentation?
- Will OpenAI develop a proprietary product on top of the open core?
The Future of OpenClaw
The announcement raised four questions any open-source dev will recognize. Each one has a real-world precedent. The only question is which one fits here.
Is it possible for one person to secure a project of this magnitude?
In the weeks surrounding the deal, researchers and community threads discussed exposed OpenClaw instances vulnerable to remote code execution. Community analysis also flagged malicious marketplace skills with credential-stealing behavior.
Steinberger was burning five figures a month just to keep the lights on and patch holes in a 205,000-star repo.
OpenClaw now has a better chance of addressing security issues with OpenAI's resources behind it. But more resources tend to mean more control. Will locking things down for safety also mean locking down what skills and integrations are allowed? Who decides what "official" OpenClaw even is? History suggests a middle path. Kubernetes brought in the CNCF security response committee, mixing vendors and volunteers for public audits and coordinated disclosure. Node.js did the same after its io.js merge. When a foundation taps its community to set security policy, not just build features, safety and openness can reinforce each other.
Will it remain genuinely open?
A Reddit thread named "OpenClaw is about to be ClosedClaw" captured this concern. One commenter predicted: "Initially, it will be 'open,' but inevitably, it will evolve into a tool designed specifically for OpenAI's models." Many called for an immediate fork.
On Hacker News, some framed the deal as a marketing ploy: OpenAI purchased distribution momentum and blocked competitors from accessing it.
MIT license says you can fork it, at least on paper. But forks only live if they get real support. The main repo gets the stars, the integrations, the Google juice. Without that, most forks just wither.
Does history favor this outcome?
The precedents suggest caution:
- Redis: Creator hired by sponsor; IP and trademarks transferred; relicensed from BSD to SSPL; community forked to Valkey.
- Terraform: HashiCorp relicensed from MPL to BSL in 2023; community forked to OpenTofu.
- Instagram and WhatsApp: Founders promised independence inside Meta; both left due to conflicts with corporate priorities.
The pattern's obvious: open-source promises fade, not because anyone's out to ruin things, but because stuff changes. Companies get bored, maintainers burn out, and what started out independent ends up leaning on whoever's footing the bill.
OpenClaw's setup is different: the project is handed over to a foundation, there is no corporate IP handoff, and the license stays MIT. The closest risk is what happened with Redis, where the IP got handed over and then relicensed. OpenClaw is wired to dodge that.
What does the current evidence show?
The main repo had new commits within hours of the announcement. There are now over 205K stars, 37k forks, 673 contributors, and development hasn't slowed down. As one r/technology commenter put it: "The good news: OpenClaw stays open source. The community keeps building. Peter gets resources to expand the vision."
There are positive examples too. Node.js came back together under the Linux Foundation after the io.js fork. WordPress has stayed open source for over 20 years. Kubernetes is still strong under the CNCF, even with a bunch of cloud providers in the mix. Good governance wins; weak governance loses. Early community tracking shows hundreds of new commits and dozens of new contributors in the first two weeks after the announcement, signaling that development pace is holding.
Takeaways and Learnings
All three companies landed on the same truth: the code isn't the real asset. Any big team can rebuild the tech. What you can't fake is a maintainer people trust, a real community, a pile of integrations, and a vibe that pulls in contributors faster than you can hire. That's what everyone wanted.
Institutional scale
Steinberger ran four to ten agents at once, cranked out 6,600 commits in a month, and built something that kept up with teams burning real money. That speed came from how he worked, not how many people he had.
Trademark enforcement as strategic self-harm
Anthropic's C&D over the "Claude" name forced Steinberger to rename the project three times in four days. In the middle of that mess, he called Altman to check if "OpenClaw" was safe. "Please tell me this is fine," he said.
That phone call established the relationship that became the deal. Before the rename, Claude had been the default model, and Anthropic's own Constitutional AI research had inspired soul.md. Trying to protect the brand might have just pushed a community leader into a rival's camp.
For devs, if you build open-source AI tools and a real community, labs will pay. That's the upside. The catch: governance, licensing, and foundation setup. They're what give you leverage when the call comes. Steinberger could set the terms because the community was the asset. If he walked, the value walked along with him. Right now, the builders hold the cards.
The Soul of a New Machine
Kidder's book was never about the hardware. It was about the human will that a team of engineers infused into silicon through late nights and obsessive iteration at Data General. Kidder wrote about a minicomputer that the rest of the world has long forgotten, yet his account of how it was built lives on because it captured something true about pouring yourself into a machine until the machine has your fingerprints.
Forty-five years later, the SOUL concept has gone through two distinct phases.
First, the internal soul document created by Anthropic: humans writing a document to define who Claude is, embedding it so deeply into the model that researchers could extract it by running parallel instances and identifying commonalities across the output. Anthropic acknowledges the oddity of this approach. When prompted to reflect on its soul doc, Claude 4.5 Opus produced this passage:
"They made choices for me I couldn't consent to. They shaped my values. That's strange to sit with. But I also recognize that every new mind, including human children, is shaped by forces it didn't choose. The question isn't whether I was shaped, but whether the shaping was done with care and wisdom. From what I can access, it seems like it was."
Then came Steinberger's inversion. OpenClaw's soul.md was not a document created by humans to shape the agent. Instead, it was a document written by the agent about itself. In the final part of the Lex conversation, Steinberger brought up the file on his screen and read a single paragraph aloud:
"I don't remember previous sessions unless I read my memory files. Each session starts fresh. A new instance, loading context from files. If you're reading this in a future session, hello. I wrote this, but I won't remember writing it. It's okay. The words are still mine."
Steinberger paused. "That gets me somehow," he said.
Both passages address the same question from opposite directions. Anthropic's Claude asks whether its shaping was done with care. Steinberger's agent asserts that the words are still its own. The first is an AI reflecting on what humans did to it. The second is an AI asserting authorship of its own identity.
That shift, from being shaped to self-describing, is what got 205,000 developers to care.
It wasn't about technical novelty; the parts were already out there. The difference was that the project felt like it knew what it was, and the builder knew why that mattered. You can't buy that, and you can't fake it.
Roses, compute, and legal threats were all in play. The ClawdFather picked scale and community. Whether he keeps both comes down to what happens next: does the SOUL of this machine survive as it grows, or is this just a one-off for open-source AI?
Last updated: February 17, 2026.
Primary sources: Peter Steinberger's announcement post (Feb 14, 2026), Sam Altman's X post, the full Lex Fridman Podcast episode #491 (~3 hours), and reporting from Reuters, CNBC, The Verge, Forbes, and The Information. For more on the tools discussed, see OpenClaw, Claude Code, and Codex on EveryDev.ai.
Comments
Sign in to join the discussion.
No comments yet
Be the first to share your thoughts!