
Managing the Loop: Where Humans Actually Belong in Agentic Development

Source: martinfowler.com

There’s a pattern I keep noticing in conversations about AI-assisted development: people are framing it as a binary. Either you let the agent run loose and hope for the best, or you hover over every output and micromanage it into submission. Neither of those is actually working well for anyone.

Kief Morris has a piece on Martin Fowler’s site — Humans and Agents in Software Engineering Loops — that cuts through this pretty cleanly. His argument is that the right framing isn’t “how much do I trust the agent” but rather “what is the loop we’re trying to run, and where do humans add the most value in it?”

The core idea: focus on turning ideas into outcomes. Humans should be building and managing the working loop, not executing inside it.

This resonates with me a lot. When I’m building something like an idle processor or a scheduled task runner for a Discord bot, I don’t sit there executing every step myself. I design the loop — what triggers it, what it checks, what it does when something’s off — and then I let it run. My job shifts to monitoring, tuning, and improving the loop over time. The loop does the work.
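That trigger/check/act shape can be sketched in a few lines. This is a hypothetical illustration, not code from the article or from any real bot — the `trigger`, `check`, and `act` callables stand in for whatever the loop designer plugs in:

```python
def run_managed_loop(trigger, check, act, passes=3):
    """A generic managed loop: the human designs what triggers a pass,
    what it checks, and what it does when something's off -- then steps
    back and lets the loop run. All names here are illustrative."""
    outcomes = []
    for _ in range(passes):
        if not trigger():            # what starts a pass (timer, event, idle state)
            continue
        status = check()             # what the loop inspects
        if status != "ok":           # what it does when something's off
            outcomes.append(act(status))
    return outcomes
```

The point of the shape is that the human's effort goes into choosing good `trigger`/`check`/`act` functions, not into running any individual pass.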

Agentic coding tools are pushing software development toward the same model. The agent is inside the loop: reading code, writing changes, running tests, iterating. The human’s job is to define what “done” looks like, watch for the loop drifting off course, and intervene at the right level of abstraction.

What I find interesting is how poorly most people — myself included — have internalized this. We either:

  • Over-delegate: hand a vague task to an agent, get back something plausible-looking but subtly wrong, and only notice after it’s shipped
  • Under-delegate: accept every suggestion one line at a time, at which point you might as well be writing it yourself

The productive middle ground is designing the loop well. That means:

  • Being precise about the goal upfront, not just the immediate task
  • Building in checkpoints where human judgment is actually necessary
  • Treating agent output as draft material that needs a review pass at the right granularity
  • Knowing when to stop the loop and reconsider the approach entirely

This is a skill that’s genuinely new. We’re used to either writing code ourselves or reviewing code that another human wrote. Reviewing agent-generated code at the loop-management level — evaluating whether the loop is heading in the right direction, not just whether the last output is correct — is a different mental mode.

I think this framing also explains why “just use the AI for everything” goes sideways for complex projects. It’s not that the agent can’t produce good code. It’s that nobody is managing the loop. The goal drifts, small wrong assumptions compound, and by the time you notice, you’re deep in something that technically works but doesn’t do what you actually needed.

The human’s job isn’t to be inside the loop anymore. It’s to be the one who decides what loop to run.
