The Loop Is the Job: Where Humans Fit in an Agent-Assisted Workflow
Source: martinfowler
There’s a pattern I keep noticing in how people talk about AI coding agents. Either they’re pitching full automation — just describe the feature and walk away — or they’re treating agents as glorified autocomplete that needs babysitting at every line. Both framings miss the point.
Kief Morris makes this case well in a piece over at Martin Fowler’s site. His argument: the goal of software development is turning ideas into outcomes, and the human’s job is to build and manage the loop that does that — not to either hand it off entirely or hover over every output.
That resonates with me. When I’m working on my Discord bot, the thing that takes up real cognitive energy isn’t the code itself — it’s knowing what to build, verifying it does the right thing, and catching the subtle stuff that a language model won’t flag because it doesn’t know the context. An agent can write the handler. It can’t tell you whether the feature even belongs in the bot, or whether the way it interacts with existing commands is going to confuse users.
The Loop as the Product
What Morris is describing is essentially a control loop: you have goals, you have feedback mechanisms, and you have something that acts. Agents are the actors now. But the loop architecture — what counts as done, how you verify, when you intervene — that’s the craft.
This isn’t just philosophical. It’s practical. If you leave agents to run unsupervised, you accumulate drift: code that technically works but diverges from the actual intent over multiple generations of edits. If you micromanage, you lose the whole point of delegation.
The middle path is harder to describe but easy to recognize when it’s working: you’re setting up the conditions for good outputs, reviewing at the right granularity, and correcting the loop rather than the individual outputs.
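The loop described above can be sketched in code. Everything here is hypothetical illustration — `TaskSpec`, `run_loop`, and the callback signatures are made-up names, not any real agent framework’s API — but it makes the structure concrete: the agent acts, the spec defines done, and feedback corrects the next pass rather than patching the current output.

```python
# A minimal sketch of the human-managed control loop, assuming a generic
# text-in/text-out agent. All names here are illustrative, not a real API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TaskSpec:
    goal: str                      # what outcome we want
    context: list[str]             # what already exists, which constraints matter
    done: Callable[[str], bool]    # the feedback mechanism: what counts as done

def run_loop(spec: TaskSpec,
             agent: Callable[[TaskSpec, str], str],
             max_iterations: int = 5) -> Optional[str]:
    """Drive the agent until output satisfies the spec, or escalate to a human."""
    feedback = ""
    for _ in range(max_iterations):
        output = agent(spec, feedback)   # the agent is the actor
        if spec.done(output):            # the loop decides what "done" means
            return output
        # Correct the loop, not the output: feedback feeds the next iteration.
        feedback = f"Previous attempt did not satisfy the goal: {spec.goal}"
    return None                          # intervention point for the human
```

The design choice worth noticing: the human’s craft lives in `done` and in how `context` is assembled, not inside the agent call itself.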
What This Changes About How I Work
Honestly, it changes my mental model of what I’m responsible for. When I’m reviewing agent-produced code, I shouldn’t be asking “is this code correct?” at the line level — I should be asking whether the approach is sound, whether the behavior matches the intent, and whether my feedback will improve the next iteration or just patch the current one.
It also means thinking more carefully about how you describe work to agents upfront. Not just prompts, but context: what already exists, what constraints matter, what “done” looks like. That’s not new — good specs have always mattered — but agents make the cost of a vague spec more immediate.
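One hypothetical way to make that upfront description explicit, using the Discord bot as the running example. The field names and the feature itself are invented for illustration — the point is that each section answers a question the agent would otherwise have to guess at:

```python
# A hypothetical work brief for an agent task. None of these keys are a
# standard; they just make context, constraints, and "done" explicit.
work_brief = {
    "goal": "Add a /remind command that schedules a one-off reminder DM",
    "context": [
        "Commands live in bot/commands/, one module per command",
        "Scheduling infrastructure already exists; reuse it",
    ],
    "constraints": [
        "No new dependencies",
        "Must not conflict with existing command names",
    ],
    "done": [
        "The command DMs the user after the requested delay",
        "Existing command tests still pass",
    ],
}

# By contrast, a vague spec forces the agent to guess all of the above,
# and the cost of those guesses shows up immediately in the output.
vague_spec = "add reminders to the bot"
```

Nothing here is agent-specific — it’s the same discipline a good ticket or design note has always demanded, just with a shorter feedback delay when it’s skipped.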
The piece is worth reading in full if you’re doing serious work with coding agents. The framing of “managing the loop” is one of the cleaner ways I’ve seen this articulated, and it avoids both the hype of full autonomy and the dismissiveness of treating agents as toys.