If you’ve spent any time using AI coding assistants on a real project, you’ve probably hit the wall Rahul Garg describes in his piece on knowledge priming over at Martin Fowler’s site. You get code back fast. It looks reasonable. Then you spend the next hour untangling it because it doesn’t fit your patterns, ignores your abstractions, or reinvents something you already have three files over.
Garg calls this a “frustration loop,” and that’s exactly the right framing. The model isn’t broken — it just doesn’t know your codebase. It can’t.
The Context Problem
LLMs are trained on broad patterns, not your specific project. When you drop a prompt like “write a service that handles user authentication,” the model fills in the blanks with whatever it’s seen most — generic patterns, common libraries, whatever makes statistical sense. It has no idea that your project uses a custom middleware chain, a specific error-handling convention, or that you made a deliberate decision to avoid a particular dependency two sprints ago.
Knowledge priming is the practice of front-loading that missing context before you ask for code. Concretely, this means giving the model:
- Architectural context: what the system looks like, how layers interact
- Coding conventions: naming patterns, error handling style, preferred abstractions
- Existing patterns: real examples from your codebase that illustrate how things are done
- Anti-patterns: what the team has explicitly decided not to do and why
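One lightweight way to make those four categories repeatable is a tiny helper that assembles them into a preamble you paste at the start of a session. A minimal sketch in Python; every section name and example string below is a hypothetical placeholder, not content from Garg's article or any real project:

```python
# Sketch: assemble a knowledge-priming preamble from the four categories above.
# All example content here is a made-up placeholder -- fill in your own project's.

PRIMING_SECTIONS = {
    "Architectural context": (
        "Three layers: HTTP handlers -> services -> repositories. "
        "Handlers never touch the database directly."
    ),
    "Coding conventions": (
        "snake_case modules, Result-style error returns, no bare excepts."
    ),
    "Existing patterns": (
        "See services/billing.py for the canonical service shape."
    ),
    "Anti-patterns": (
        "We deliberately avoid ORMs; raw SQL goes through the repository layer only."
    ),
}

def build_priming_preamble(sections: dict[str, str]) -> str:
    """Render the sections as a single block to paste before any code request."""
    parts = ["Project context (read before generating code):"]
    for title, body in sections.items():
        parts.append(f"\n## {title}\n{body}")
    return "\n".join(parts)

preamble = build_priming_preamble(PRIMING_SECTIONS)
print(preamble)
```

The point isn't the helper itself; it's that the preamble lives in one place, so the whole team primes the model the same way.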
This isn’t magic. It’s the same thing you’d do when onboarding a new developer — except the LLM needs it every session.
What This Looks Like in Practice
I’ve been doing something like this informally for a while. When working on my Discord bot project, if I want help adding a new command handler, I’ll paste in an existing handler as an example before asking for the new one. The difference in output quality is significant. Without the example, I get something technically correct but stylistically foreign. With it, I get something I can drop in almost directly.
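Concretely, that primed prompt looks something like the sketch below: one existing handler pasted verbatim as the pattern, then the request. The handler is a hypothetical discord.py-style example (with a made-up `make_embed` helper), not code from my actual bot:

```python
# Sketch of a primed prompt: paste a real handler as the pattern, then ask.
# The handler text is a hypothetical discord.py-style example; `make_embed`
# is an invented project helper used only for illustration.

EXAMPLE_HANDLER = '''\
@bot.command(name="ping")
async def ping(ctx):
    """Reply with latency, using our standard embed helper."""
    await ctx.send(embed=make_embed(title="Pong", body=f"{bot.latency*1000:.0f} ms"))
'''

prompt = (
    "Here is an existing command handler from my Discord bot:\n\n"
    + EXAMPLE_HANDLER
    + "\nFollowing the same structure (decorator, async signature, "
    "make_embed helper), write a new `uptime` command."
)
print(prompt)
```

With the example in front of it, the model copies the decorator style, the docstring convention, and the embed helper instead of inventing its own.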
Garg is formalizing this instinct into a repeatable pattern, which is useful. If you’re working on a team, this is the kind of knowledge that belongs in a shared priming document — something you include at the start of any AI-assisted session. Think of it like a project-specific system prompt.
For teams using tools like Cursor or Copilot, this maps to their “rules” or context file features. Claude Code has CLAUDE.md for exactly this purpose.
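A priming file for those tools doesn't need to be long. A hypothetical CLAUDE.md sketch, with entirely made-up project details, might look like:

```markdown
# Project context

## Architecture
FastAPI app, three layers: routes -> services -> repositories.
Routes never import the database module directly.

## Conventions
- snake_case everywhere; no single-letter names outside comprehensions
- Errors: raise domain exceptions from `app/errors.py`, never bare Exception

## Anti-patterns
- No ORMs (deliberate decision, see ADR-012): use raw SQL in repositories
```

Because the tool loads this automatically, the onboarding cost is paid once instead of every session.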
The Broader Lesson
The frustration loop Garg describes is a symptom of treating AI assistants like search engines — ask a question, expect a complete answer. They work better as pair programmers who need onboarding. The more context you invest upfront, the less correction you do afterward.
This is the first of five patterns Garg outlines, and it’s probably the highest-leverage one. Getting this right doesn’t just reduce fix-up time; it changes the feel of the interaction from adversarial to collaborative.
If your AI-generated code keeps missing the mark, the question worth asking isn’t “is this model good enough?” — it’s “does this model know enough about what I’m building?”