
The Loop Is the Job: Where Humans Actually Belong in Agentic Development

Source: martinfowler

There’s a framing problem baked into most conversations about AI agents and software development. People tend to ask: how much should I let the agent do? As if the job is to find the right slider position between “full autonomy” and “human does everything.”

Kief Morris pushes back on this in a recent piece on Martin Fowler’s site, and I think he’s onto something important. The argument is that the question itself is wrong. The right frame isn’t how much you delegate — it’s whether you’re actually managing the loop that turns ideas into outcomes.

What Loop Are We Talking About?

Software development has always been a feedback loop. You have an idea, you build something, you observe what happens, you adjust. The loop has gotten tighter over decades — unit tests, CI/CD, feature flags, observability. Agents are just the latest thing that can fit inside that loop, or potentially run it.

Morris’s point is that the loop itself is what humans should own. Not the individual tasks inside it, and not just the final output review. The loop design — what gets measured, when it terminates, what counts as done — that’s where human judgment is irreplaceable.

This resonates with how I think about building bot automation. When I write a Discord bot that does something on a schedule or reacts to events, the tricky part is never the individual handlers. It's deciding: what does the bot do when it gets confused? When does it ask for help? When does it just fail loudly? Those are loop decisions, not task decisions.
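Those loop decisions can be sketched as a policy wrapped around a handler rather than logic inside one. This is a minimal, hypothetical illustration (all names are mine, not from any real bot framework): the handler does the task; the wrapper decides when to escalate and when to fail loudly.

```python
# Hypothetical sketch: loop-level policy wrapped around a task handler.
# The interesting choices live here — retry, escalate, or fail loudly —
# not inside the handler itself. All names are illustrative.

def run_handler(handler, event, confidence_threshold=0.8, max_retries=2):
    """Run a handler under a loop policy: escalate when confused, fail loudly."""
    for attempt in range(max_retries + 1):
        result = handler(event)
        if result["confidence"] >= confidence_threshold:
            return result  # confident enough: proceed
        if attempt == max_retries:
            # Fail loudly — never swallow the confusion silently.
            raise RuntimeError(
                f"gave up on event {event['id']} after {attempt + 1} tries"
            )
        # Confused: a loop decision, not a task decision — ask for help first.
        event = ask_for_clarification(event)

def ask_for_clarification(event):
    # Placeholder: a real bot might ping a channel or open a ticket here.
    return {**event, "clarified": True}
```

The point of the sketch is that the threshold, the retry budget, and the escalation path are all owned by the loop designer; swapping in a different handler doesn't change them.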

The Two Failure Modes

Morris identifies two ways this goes wrong:

Leaving agents to it — you prompt the agent, walk away, and hope the output is good. The loop runs without you. You show up at the end to accept or reject, but you’ve abdicated control over the process itself.

Micromanaging the output — you’re in every step, reviewing every file the agent touches, second-guessing each decision. The agent becomes a fancy autocomplete. You’ve eliminated the leverage.

Both failure modes share a root cause: treating the agent as a black box you interact with at the boundary, rather than as a component running inside a loop you designed.

The Practical Implication

If you buy this framing, it changes what skills matter. You spend less time writing perfect prompts and more time asking: what’s the feedback mechanism here? How will I know the agent is going in the wrong direction before it’s too late? What are the checkpoints?
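Those questions can be made concrete as a loop the human designs around the agent. This is a sketch under assumptions: `agent_step` and the `checks` dict stand in for whatever agent runtime and verification tooling you actually use; the shape of the loop, not the names, is the point.

```python
# Minimal sketch of a human-designed working loop around an agent.
# The human owns the checkpoints (budget, checks, termination), not
# each individual step. `agent_step` and `checks` are stand-ins.

def run_loop(agent_step, state, checks, max_steps=10):
    """Drive the agent, surfacing problems at designed checkpoints."""
    for step in range(max_steps):
        state = agent_step(state)
        for name, check in checks.items():
            if not check(state):
                # Course-correct early instead of reviewing at the end.
                return {"status": "needs_human", "failed_check": name,
                        "step": step, "state": state}
        if state.get("done"):
            return {"status": "done", "step": step, "state": state}
    return {"status": "budget_exhausted", "state": state}
```

A `checks` dict might hold entries like `"tests_pass"` or `"diff_stays_small"` — the feedback mechanisms that tell you the agent is drifting before it's too late.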

This is closer to how good engineering managers think about their teams — not “I’ll review every PR in detail” and not “I’ll trust everyone to figure it out,” but “I’ve set up a system where problems surface early and we course-correct fast.”

Morris calls this “building and managing the working loop,” and it’s a more durable framing than most of what’s been written about AI-assisted development. The tools will keep changing. The loop won’t.
