There’s a framing problem baked into most conversations about AI agents and software development. The debate usually collapses into two camps: people who want to hand everything to the agents and see what comes back, and people who insist on reviewing every generated line before it moves anywhere. Both camps are missing something.
Kief Morris, writing on Martin Fowler’s site, offers a more useful frame: the goal is turning ideas into outcomes, and the human’s job is to build and manage the loop that makes that happen — not to be inside the loop doing every step, and not to be absent from it either.
That distinction between being in the loop and managing the loop is doing real work here.
The Loop as the Unit of Work
When I’m building something — say, a new feature for a Discord bot — there’s a cycle that naturally emerges: write some intent, get an implementation, test it against reality, adjust. That cycle exists whether you’re using agents or not. What agents change is which steps require a human to be the one executing them.
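That cycle can be sketched as a small driver function. Everything here is illustrative: `draft` stands in for whatever produces an implementation (an agent call or a human), `evaluate` stands in for testing it against reality, and the names are assumptions, not a real API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CycleResult:
    attempts: int    # how many intent -> implementation -> test passes were run
    artifact: str    # the last implementation produced
    passed: bool     # whether it survived contact with reality

def run_cycle(intent: str,
              draft: Callable[[str], str],
              evaluate: Callable[[str], bool],
              max_attempts: int = 3) -> CycleResult:
    """Run the intent / implement / test / adjust loop until it passes or gives up."""
    artifact = ""
    for attempt in range(1, max_attempts + 1):
        artifact = draft(intent)       # get an implementation (agent or human)
        if evaluate(artifact):         # test it against reality
            return CycleResult(attempt, artifact, True)
        # adjust: fold the failure back into the stated intent and go around again
        intent += " (previous attempt failed; adjust)"
    return CycleResult(max_attempts, artifact, False)
```

The point of writing it this way is that the loop exists regardless of who executes `draft`; agents only change which parameter a human has to be.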
The mistake is thinking the goal is to minimize human involvement. The actual goal is to run that cycle well. Sometimes that means letting an agent draft the implementation while you focus on defining the right problem. Sometimes it means catching a wrong assumption early before the agent goes deep on a dead end. The judgment call about when to intervene is the real skill.
Micromanagement Doesn’t Scale, Absence Doesn’t Either
I’ve seen both failure modes in practice. Reviewing every token an agent produces defeats the purpose — you’re doing the cognitive work anyway, just with extra steps. But fully delegating without a feedback structure means subtle misalignments compound. The agent confidently builds the wrong thing, and because each step looks locally reasonable, nothing flags the drift until the end.
Morris’s point is that the solution isn’t a middle ground on the same axis — it’s operating at a different level entirely. You design the loop. You set checkpoints that matter. You define what “done” looks like before the cycle starts, not after it produces something you have to evaluate cold.
What This Actually Requires
This framing asks more of developers, not less. To manage a loop well, you need:
- Clear outcome definitions before handing off to an agent
- Meaningful checkpoints where human judgment actually adds value
- Feedback mechanisms that catch drift early, not at the end
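Those three requirements amount to writing the loop down before delegating anything. A minimal sketch of what that could look like, assuming hypothetical `Checkpoint` and `LoopSpec` structures (not a real framework):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Checkpoint:
    name: str
    check: Callable[[str], bool]  # human-defined predicate on the work-in-progress

@dataclass
class LoopSpec:
    outcome: str                  # what "done" looks like, decided before handoff
    checkpoints: List[Checkpoint] = field(default_factory=list)

def first_failed_checkpoint(spec: LoopSpec, artifact: str) -> Optional[str]:
    """Return the first checkpoint that flags drift, or None if the work looks on track.

    Checking in definition order is what catches drift early rather than at the end:
    the checkpoints a human wrote up front do the evaluation, not a cold read afterward.
    """
    for cp in spec.checkpoints:
        if not cp.check(artifact):
            return cp.name
    return None
```

The interesting design choice is that the predicates are authored before the cycle starts — the human's judgment is encoded in the spec, which is what "managing the loop" rather than "being in the loop" looks like in code.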
Those are hard. Defining what you actually want, precisely enough that an agent can be usefully autonomous, is a skill most of us are still developing. It’s closer to writing good specs than writing code — which is maybe why it feels unfamiliar.
The developers who get the most out of agents right now aren’t the ones who’ve figured out the best prompts. They’re the ones who’ve figured out how to structure their own thinking about what they’re building.
That’s the loop worth managing.