I think most people are building AI agents wrong, or more accurately, I think we’re still thinking about them in the wrong way. We treat them like tools, like upgraded versions of software we already understand. You add features, wire up integrations, build out workflows, and slowly assemble something that feels more powerful over time. That’s the current model, and for a while it made sense.
Then I watched a demo from David Ondrej, and something didn’t line up. It wasn’t that what he showed was dramatically more capable than everything else; it was that it was much simpler. There was almost nothing there. No heavy framework, no complex orchestration, no stack of predefined tools. The setup was minimal, capable of reading and writing files, running commands, and looping. And yet, it worked.
That’s the part that stuck with me. Not because it was impressive in the usual sense, but because it exposed something uncomfortable. We’ve been layering complexity onto these systems as if capability requires it. More tools, more structure, more design. But what if most of that isn’t necessary? What if we’ve been overbuilding the very thing that’s supposed to remove friction?
When you strip everything down, what remains is the loop. That’s the real engine. You give the agent a goal, it attempts a solution, executes it, observes what happens, corrects itself, and tries again. Then again. Not as a one-off interaction, but as a continuous process. The loop doesn’t assist. It persists.
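That cycle can be sketched in a few lines. This is a toy illustration, not any real agent framework: `propose_fix` is a hypothetical stand-in for a model call, and the convergence logic is deliberately trivial. The point is the shape of the loop, not the contents.

```python
def propose_fix(goal, last_attempt):
    # Hypothetical stand-in for a model call. A real agent would ask an
    # LLM for the next attempt; this stub just nudges toward the goal.
    return (last_attempt or 0) + 1

def run_agent(goal, max_iterations=10):
    """The loop: attempt -> execute -> observe -> correct -> repeat."""
    attempt = None
    for _ in range(max_iterations):
        attempt = propose_fix(goal, attempt)  # attempt a solution
        if attempt == goal:                   # observe: did it work?
            return attempt                    # converged
        # otherwise, feed the failed attempt back in and try again
    return None  # ran out of iterations without converging

print(run_agent(5))
```

Nothing here is clever. The persistence is the mechanism: the agent keeps feeding its own failures back into the next attempt until something works or it runs out of budget.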
Once you see that clearly, something shifts. The value isn’t in the features. It’s not in how many tools the agent has access to or how polished the interface looks. The value is in its ability to stay inside that loop long enough to converge on something that works. The loop is the product. Everything else is just scaffolding.
Up until now, I’ve been thinking about systems the same way most people are. You design the pipeline, define the steps, connect everything together, and then try to optimize it. That’s how I’ve been approaching my own work, too. My archive, my visual language, the idea of building an automated pipeline that could generate and publish new pieces. The assumption was that I needed to construct that system myself.
But that assumption is starting to fall apart.
Because if something like this loop is stable, if it can reliably move from attempt to refinement without constant intervention, then I’m not really building a system anymore. I’m defining an outcome and letting the system emerge around it. That’s a completely different role. It’s less about construction and more about direction.
That realization is subtle, but it changes everything. If the system can generate its own components, write its own logic, and adapt based on feedback, then the structure itself becomes fluid. Workflows aren’t fixed. Tools aren’t permanent. Even the pipeline becomes something that can be rebuilt at any time, not manually, but through iteration. What used to be architecture becomes something closer to a living process.
And that creates a kind of pressure.
Because if the system is capable of building itself, then the bottleneck shifts. It’s no longer about whether you can code or whether you know how to wire things together. It becomes about whether you know what you’re trying to build in the first place. Clarity starts to matter more than technical skill. Direction matters more than execution. Taste becomes a limiting factor.
That’s the part I don’t think people are fully ready for. We’ve spent years optimizing for the ability to build, learning tools, frameworks, languages, systems. But if those systems start building themselves, then that entire layer collapses inward. What’s left is intent. What you choose to make, how you guide it, and how you recognize when it’s right or wrong.
This isn’t a productivity upgrade. It’s a role reversal.
And I can already feel it in my own work. The question isn’t “how do I build this pipeline” anymore. It’s “what do I want this system to become, and how do I guide it there?” That’s a different kind of problem. Less mechanical, more conceptual, and in a strange way, more exposed.
Because there’s nothing left to hide behind.
The system builds itself now. The only question is whether you know what to ask for.
AI Transparency
This report and its hero image were produced with AI systems and AI agents under human direction. We use source-linked review and editorial checks before publication. See Journey for architecture and methods.
