
Stop Fixing AI Code by Teaching It Your Codebase First

Source: martinfowler

There’s a specific kind of frustration that comes with AI coding assistants. You prompt, you get code, and it looks right — until you actually read it. Wrong import paths, a utility function that already exists somewhere, naming conventions that clash with everything else in the repo. You spend more time correcting than you saved generating.

Rahul Garg calls this the frustration loop, and he’s written up a set of patterns to break it. His article over at Martin Fowler’s site describes five patterns for improving LLM interactions during coding, starting with what he calls knowledge priming — deliberately feeding the model context about your codebase before asking it to do anything.

What Priming Actually Means

The core idea is straightforward: LLMs hallucinate details they don’t have. If your project has a withRetry() utility, a specific error-handling convention, or a folder structure the model has never seen, it will invent something plausible. Priming is giving it that context upfront.

In practice this might look like:

  • Pasting in your project’s README or architecture notes at the start of a session
  • Showing the model a representative existing file before asking it to write a new one
  • Explicitly stating your conventions: “We use named exports, no default exports. Error handling goes through our AppError class.”

It’s the difference between asking a contractor to build a room in your house versus handing them the blueprints first.
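In practice, that "blueprint" can be as simple as a reusable preamble you paste at the start of each session. Here is a minimal sketch of one way to assemble it; the function name, the `PrimingContext` shape, and the sample file contents are all hypothetical, not anything from Garg's article:

```typescript
// Assembles codebase context into a priming preamble, to be pasted
// into an AI session before making any actual request.
// All names and conventions below are illustrative assumptions.

interface PrimingContext {
  architectureNotes: string; // e.g. an excerpt from the README
  conventions: string[];     // explicit rules the model should follow
  sampleFile: { path: string; source: string }; // representative code
}

function buildPrimingPreamble(ctx: PrimingContext): string {
  return [
    "Project context (read before writing any code):",
    "",
    ctx.architectureNotes,
    "",
    "Conventions:",
    ...ctx.conventions.map((c) => `- ${c}`),
    "",
    `Representative file (${ctx.sampleFile.path}):`,
    "---",
    ctx.sampleFile.source,
    "---",
  ].join("\n");
}

const preamble = buildPrimingPreamble({
  architectureNotes: "Express API; all async calls go through withRetry().",
  conventions: [
    "Named exports only, no default exports.",
    "Error handling goes through the AppError class.",
  ],
  sampleFile: {
    path: "src/users/getUser.ts",
    source: "export const getUser = withRetry(async (id: string) => { /* ... */ });",
  },
});
```

The point is not the helper itself but the habit: the same context gets stated once, consistently, instead of being re-explained (or omitted) prompt by prompt.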

Why This Resonates With Me

Building Discord bots, I hit this constantly. The bot has a specific command registration pattern, a particular way of handling slash command interactions, and shared state that gets threaded through everything. When I ask an AI assistant to “add a remind command” without that context, it reaches for the generic discord.js docs pattern, which doesn’t match how anything else in the bot works.

Once I started dropping in a sample existing command and a quick description of the event architecture before asking, the generated code went from “needs heavy editing” to “needs light review.” It’s not magic; it’s just closing the gap between what the model knows and what your codebase actually is.
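To make that concrete, here is the kind of sample command I mean when I say "dropping in a sample existing command." Everything below is a hypothetical in-house pattern, not the discord.js API; the value is that the model sees a registry-based shape to imitate rather than inventing one:

```typescript
// Hypothetical priming context: a representative command module from
// a bot with its own registration pattern. Shown to the model before
// asking for a new command (e.g. "add a remind command in this style").

interface Interaction {
  options: Map<string, string>;          // parsed slash-command options
  reply: (message: string) => void;      // sends a response to the user
}

interface Command {
  name: string;
  description: string;
  execute: (interaction: Interaction) => void;
}

// Shared registry that every command file adds itself to at load time.
const commandRegistry = new Map<string, Command>();

function registerCommand(cmd: Command): void {
  commandRegistry.set(cmd.name, cmd);
}

// A representative existing command, following the house pattern.
registerCommand({
  name: "ping",
  description: "Health check",
  execute: (interaction) => interaction.reply("pong"),
});
```

With one file like this in context, "add a remind command" tends to come back as another `registerCommand` call in the same shape, instead of a free-floating handler copied from the library docs.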

The Cost of Priming

This approach does have overhead. You’re spending tokens and time on setup. For quick throwaway scripts it probably isn’t worth it. But for any codebase you’re actively maintaining, that upfront investment pays back quickly — especially on anything that touches shared patterns or project conventions.

The more interesting implication is that this puts pressure on teams to actually document their conventions. If you can’t articulate your coding patterns clearly enough to prime an LLM with them, they probably aren’t as consistent as you think. Knowledge priming forces the kind of explicit thinking that good codebases benefit from regardless of AI tooling.

Garg’s article is worth reading in full — he has four more patterns beyond this one, and the framing of the frustration loop is useful vocabulary for conversations with teammates about why AI-assisted development sometimes feels slower than expected.
