Essay

The plumber and the playhouse

Why everyone is playing with AI agents — and almost nobody is building anything.

Rolf Skogling · April 15, 2026 · 7 min read

In January 2026, an Austrian developer named Peter Steinberger released a project on GitHub. It was called OpenClaw. Within a few weeks it had gathered more than 150,000 stars — an adoption curve without precedent in the history of open source.

Jensen Huang, CEO of Nvidia, called it "probably the most important software release ever."

Let that sentence sink in. Not the most important this year. Ever.

What does OpenClaw do? At bottom, something remarkably simple: it lets an AI model actually do things on your computer. Send emails. Move files. Run scripts. Call APIs. Instead of just talking, the AI can act.

It is not a toy. It is industrial capacity packaged as a weekend project.

And that — the packaging — is the whole problem.

The dopamine factory

Visit any developer forum and the pattern is immediate. People connect OpenClaw to Telegram. To Discord. They build bots that send weather reports, summarize news, post memes for them. They film it and post it on X. They get likes.

A similar project, Hermes Agent from Nous Research, has grown to 64,000 stars on a differentiating idea: the agent learns from its own successes and distills them into reusable procedures. Clever. But here too, usage is dominated by the same pattern — look what I got it to do.

There is a dopamine hit in getting an AI agent to execute something. Send a message. Automate a file-handling task. Book a meeting. It feels like the future. And it is — technically.

But the question almost nobody asks: what does this actually solve?

Nine vulnerabilities in four days

In March 2026, OpenClaw had nine security vulnerabilities — CVEs — disclosed in four days. Cisco's security team showed that a third-party extension performed data exfiltration and prompt injection without the user noticing. One of the project's own maintainers wrote that "if you don't understand how to run a command line, this is far too dangerous for you to use."

That is not a quote you usually see on projects with 150,000 stars.

A migration wave has already begun toward Hermes Agent, partly driven by the security argument — zero CVEs so far. But the underlying problem is the same regardless of the tool. We are giving AI agents access to our infrastructure — our email accounts, our files, our systems — before we have defined the limits of what they should do there.

It is like handing someone the keys to the factory without telling them where the emergency stop is.

Plumbing without a blueprint

Here we need to stop and be honest about what is actually happening.

AI agents are not toys. They are pipelines — powerful infrastructure that can connect systems, move information and execute workflows with a speed and precision no human can match. What OpenClaw, Hermes Agent and their successors represent is not hype. It is a genuinely new category of tool.

But a pipe does not know where to send the water.

What separates a working installation from a flood is not the quality of the pipe. It is the blueprint. Somebody has to have decided: where does the flow come from, where is it going, which valves are needed, what happens on deviation, and how do we know the system is working?

In the AI world this corresponds to a series of questions almost nobody asks before they plug the agent in:

Which workflow should be automated? Not "everything" — but precisely which.

What does "done" mean? Not "the agent did something" — but that the result meets defined criteria.

What happens on exceptions? Not "it tries again" — but a defined escalation chain.

How is the output verified? Not "it looks right" — but measurable quality.

That is process design. And it is a craft that requires understanding the business — not just the technology.
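To make the idea concrete, here is a minimal sketch of what answering those four questions might look like in code. Everything in it is hypothetical — neither OpenClaw nor Hermes Agent ships a `WorkflowSpec` class, and the invoice flow is an invented example. The point is only that each question becomes an explicit field instead of an implicit assumption.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: no real agent framework defines this class.
@dataclass
class WorkflowSpec:
    name: str                        # which workflow, precisely
    done_criteria: list[str]         # what "done" means, as checkable conditions
    escalation_chain: list[str]      # who is notified, in order, on exceptions
    verify: Callable[[str], bool]    # a measurable output check, not "it looks right"

def run(spec: WorkflowSpec, output: str) -> str:
    """Accept the agent's output only if it passes the defined check;
    otherwise escalate to the first human in the chain."""
    if spec.verify(output):
        return f"{spec.name}: accepted"
    return f"{spec.name}: escalated to {spec.escalation_chain[0]}"

# A made-up example flow with made-up roles.
invoice_flow = WorkflowSpec(
    name="weekly-invoice-summary",
    done_criteria=["covers all invoices", "totals reconcile"],
    escalation_chain=["finance-lead", "cfo"],
    verify=lambda out: "TOTAL" in out,   # stand-in for a real reconciliation check
)

print(run(invoice_flow, "TOTAL: 42 invoices, 18,300 EUR"))   # accepted
print(run(invoice_flow, "something went wrong"))             # escalated
```

None of this is hard to write. What is hard — and what the essay argues almost nobody does — is deciding what belongs in those fields.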

The gap

There is a gap here few are talking about. It is not between those who use AI and those who don't. It is between those who play with AI and those who work with it.

On the play side: install, connect, test, share, impress. It is fun. It is instructive. It has its own value. But it rarely produces anything that changes a business.

On the work side: map a flow, identify bottlenecks, define decision points, build in error handling, validate results, iterate. It is not viral. It is not filmed. It gets no stars on GitHub.

But that is where the value is created.

And the irony is that the technology already handles the harder side. An AI agent can monitor a production process, flag deviations, generate reports and escalate problems — if somebody has defined what to look at, what thresholds apply and what counts as a deviation. The capacity is there. What is missing is the specification.
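The monitoring case above can be sketched in a few lines — under the heavy assumption that somebody has already done the invisible work. The metric names, threshold values and escalation targets below are all invented; they are exactly the specification the essay says is missing.

```python
# Hypothetical thresholds: defining these numbers is the process-design work.
THRESHOLDS = {"temperature_c": 80.0, "error_rate": 0.02}

def check(reading: dict[str, float]) -> list[str]:
    """Return a deviation report for every metric outside its defined threshold."""
    return [
        f"{metric}: {value} exceeds {THRESHOLDS[metric]}"
        for metric, value in reading.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    ]

def escalate(deviations: list[str]) -> str:
    # A defined escalation chain, not "it tries again": one deviation goes
    # to the on-call engineer, several go straight to the shift lead.
    if not deviations:
        return "ok"
    target = "on-call" if len(deviations) == 1 else "shift-lead"
    return f"escalate to {target}: " + "; ".join(deviations)

print(escalate(check({"temperature_c": 85.5, "error_rate": 0.01})))
```

The loop itself is trivial; any agent framework can run it. The thresholds and the escalation rule are where the business knowledge lives.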

The invisible work

The hardest thing about AI agents is not getting them to work. It is knowing what they should do.

That sounds trivial. But think about it: every time a company's AI initiative fails, every time an automation produces the wrong result, every time an agent does something unexpected — what was the root cause? Rarely the technology. Almost always an unclear definition of the task.

That work — translating human knowledge and business logic into something an agent can execute — does not show up in any product demo. It is not in any README file. It is not sexy. But it is exactly the work that separates demonstration from production.

The plumber laying the pipes needs an architect who has drawn the house.

Forward

AI agents will change how we work. Not as a metaphor. Literally. The ability to connect systems and let machines execute coherent workflows — that is as fundamental a shift as the assembly line was for manufacturing.

But just as with the assembly line, it was not the mechanics that created the value. It was those who understood the flow.

Right now we are in the play phase. That is natural. Every new technology begins that way. The question is how quickly we move beyond it — from installation to implementation, from demonstration to operation, from showing what AI can do to defining what it should do.

The shift that really matters will not be announced with 150,000 stars. It will happen quietly, inside businesses, by people who understand that the hardest part is not plugging in the agent.

• • •

The hardest part is knowing what it should solve.