One of the most common failure modes when working with an AI coding assistant isn’t a hallucination or a wrong answer. It’s coherence degradation: the assistant starts contradicting decisions it helped you make two hours ago, or forgets the constraints you established at the start of the project. The longer the session, the worse this gets. Start a new session, and you’re back to zero.
Rahul Garg’s article on Martin Fowler’s site names this problem and proposes a concrete solution called Context Anchoring. The core idea is to externalize the decision context (the architectural choices, constraints, and their rationale) into a living document. That document travels with the AI across sessions, preventing the gradual erosion of shared understanding.
The idea is sound, and worth examining not just at the level of what it is, but why it works, where it breaks down, and how it fits into practices that already exist.
The Mechanics of Forgetting
To understand why context anchoring matters, you have to understand what’s happening under the hood when an LLM processes a long conversation.
Transformer-based models process all tokens in the context window with self-attention. In theory, every token can attend to every other token. In practice, the attention mechanism tends to weight recent tokens more heavily (recency bias), and information buried in the middle of long contexts receives the least weight of all, the “lost in the middle” problem. Researchers have consistently demonstrated that LLMs underperform on tasks where critical information is placed in the middle of long contexts, performing best when key information appears at the very beginning or end. The information isn’t gone; it’s present in the context but exerts less influence on the model’s output as subsequent tokens accumulate around it.
This has a direct consequence for long working sessions. The constraints you established upfront, the architectural decision you made in the first twenty messages, the things you explicitly ruled out, all of these progressively lose weight relative to whatever you’ve been discussing in the last ten minutes. The model isn’t being careless; this is just how attention works across long sequences.
Between sessions, there’s no ambiguity: everything is gone. The new session starts with whatever you put in the initial prompt, nothing more.
What a Context Anchor Actually Is
A context anchor is a document, usually a plain text file, that captures the goals and constraints of the current work, key architectural or design decisions with their rationale, things explicitly ruled out and why, and the current state of progress. Structure matters less than the habit; the document needs to be included at the start of every session and updated whenever a significant decision is made.
Here’s a minimal example for a backend service project:
# Project Context
## Goals
Building a webhook ingestion service for a Discord bot.
Priority is low latency and reliability over throughput.
## Architecture Decisions
- Using SQLite, not Postgres. Rationale: single-server deployment, no need for replication.
- Webhook verification handled at the HTTP layer before any processing.
- No retry queue for now; failed events are logged and dropped.
## Out of Scope
- Multi-guild support (deferred to v2)
- Backfill of historical events
## Current State
Implemented: webhook registration, HMAC verification, event logging.
In progress: scheduled delivery for deferred events.
This format isn’t novel. It’s close to what good engineers have always written as project specs, technical memos, or onboarding documents. The difference is the explicit framing as AI context: written to be consumed by the model at the start of every session, not filed away in a wiki.
The Relationship to ADRs
Architecture Decision Records have been a staple of engineering documentation for years, popularized in part by Michael Nygard’s original proposal and widely used through templates like Joel Parker Henderson’s ADR format. An ADR captures a specific decision: the context, the choice made, the consequences, and the alternatives considered.
Context anchoring serves a similar purpose but with different emphasis. ADRs are primarily for human readers, particularly future contributors who need to understand why things are the way they are. They tend toward completeness and include substantial background context. A context anchor is leaner, optimized for token efficiency and immediate relevance to the current work.
There’s a good argument for maintaining both, not as redundant efforts but as complementary ones. The ADR is the archival record. The context anchor is the working summary, continuously updated to reflect where things stand right now. The ADR explains why a decision was made; the anchor communicates that the decision exists and must be respected.
Where It Lives: CLAUDE.md and Friends
If you’re using Claude Code, you’ve probably already encountered this pattern under a different name. The CLAUDE.md file is a project-level instruction file that gets loaded into every conversation. It’s a context anchor with tooling support built in, intended precisely for this purpose: giving the model project knowledge that persists across sessions.
Cursor has .cursorrules. GitHub Copilot reads repository-level custom instructions from .github/copilot-instructions.md. These tools have converged on the same underlying insight: persistent, project-level context is load-bearing infrastructure for AI-assisted development, not a nice-to-have.
The challenge with all of them is maintenance. A CLAUDE.md that describes the project as it was six months ago is not just useless; it actively misleads. The AI will confidently make decisions based on stale constraints, and those decisions will be wrong in exactly the ways that are hardest to notice, because they’ll look coherent.
Garg’s framing of this as a living document is the important part. The document needs to evolve alongside the project. Someone has to own it, updating it should be part of the standard workflow rather than a separate chore, and it should be version-controlled alongside the code. The context anchor belongs in the repository, visible in pull requests, reviewable, and subject to the same change management as everything else. It’s code-adjacent, not documentation-adjacent.
The Problem of Context Anchor Drift
The failure mode that comes up most often in practice: the context anchor falls behind the code. You make a series of implementation decisions, your AI assistant helps with each one, but nobody updates the document. Over time, the anchor describes a project that no longer exists.
One mitigation is to make the AI part of the update cycle. At the end of a session, as a routine step, ask the model to suggest what should be added to or changed in the context document based on the decisions made during the session. The model can draft the update; you review and commit it. This keeps the friction low enough that the habit sticks without requiring a separate documentation effort.
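That session-end step is mostly a matter of assembling a good prompt. Here is a minimal sketch of what that prompt-building might look like; the CONTEXT.md filename, the prompt wording, and the `build_update_prompt` helper are illustrative assumptions, not something the article prescribes:

```python
# Sketch: build a session-end prompt asking the assistant to propose
# updates to the context anchor. The draft it returns is reviewed and
# committed by a human, never applied automatically.

UPDATE_PROMPT = """\
Below is the current project context document, followed by the decisions
made this session. Propose a minimal set of additions or changes to the
document: add new decisions, revise superseded ones. Do not restate
anything that is already accurate.

--- CONTEXT.md ---
{anchor}

--- Session decisions ---
{decisions}
"""

def build_update_prompt(anchor_text: str, session_decisions: list[str]) -> str:
    """Assemble the review prompt from the anchor and a decision list."""
    decisions = "\n".join(f"- {d}" for d in session_decisions)
    return UPDATE_PROMPT.format(anchor=anchor_text, decisions=decisions)
```

The point of templating this rather than ad-libbing it each time is consistency: the same instructions produce comparable drafts session after session, which makes the review step fast.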
Another approach is to build context anchor review into the code review process. When a PR makes a significant architectural change, the corresponding context anchor update should be part of the same PR. Code that changes the architecture but leaves the anchor unchanged is incomplete work, and a reviewer can flag it as such.
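A check like that can be automated in CI. The sketch below flags a changeset that touches architecture-significant paths without touching the anchor; the path patterns, the CONTEXT.md filename, and the `anchor_update_missing` helper are all illustrative assumptions you would adapt to your repository:

```python
# Sketch of a CI-style check: a changeset that modifies
# architecture-significant files but not the context anchor is flagged
# as incomplete, for a reviewer to confirm or waive.

from fnmatch import fnmatch

ANCHOR_FILE = "CONTEXT.md"
# Paths whose changes usually imply an architectural decision.
SIGNIFICANT_PATTERNS = ["src/*", "migrations/*", "Dockerfile", "*.proto"]

def anchor_update_missing(changed_files: list[str]) -> bool:
    """True when significant files changed but the anchor did not."""
    touched_significant = any(
        fnmatch(f, pat) for f in changed_files for pat in SIGNIFICANT_PATTERNS
    )
    anchor_updated = ANCHOR_FILE in changed_files
    return touched_significant and not anchor_updated
```

In practice you would feed this the output of `git diff --name-only` against the base branch and emit a warning, not a hard failure, since not every change under a significant path is an architectural decision.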
Anchoring at Different Scales
Context anchoring doesn’t have to be project-scoped. The same pattern applies at different granularities.
Session-level anchors capture the specific goal for a single working session: “Today I’m implementing the scheduled delivery feature. The decision to use a simple polling loop rather than a proper job queue stands. Focus on correctness first, not performance.”
Feature-level anchors describe the scope and constraints for a specific piece of functionality being built across multiple sessions, including the edge cases already considered and the ones deliberately deferred.
Project-level anchors describe the overall architecture and long-standing decisions that shouldn’t change without explicit discussion.
Nesting these gives the AI a progressively specific picture of what’s relevant without flooding the context with history that doesn’t bear on the current work.
The State Management Frame
When a human engineer joins a project, they don’t start from scratch. They read the README, the ADRs, maybe an onboarding document. They build a mental model of the project’s constraints before making contributions. The decisions that came before theirs are not invisible; they’re part of the environment they’re working in.
An AI assistant, absent explicit context, starts every session as if it just arrived and hasn’t read anything. It has no ambient knowledge of what was decided, why, or what was deliberately left out. Context anchoring is the onboarding document for the AI, structured to be consumed efficiently and updated to stay relevant.
This is, at bottom, a state management problem. The AI has no persistent state. You supply it externally, deliberately, and maintain it as the project evolves. The techniques for doing this (living documents, version-controlled context files, session-level goal statements) are not technically complicated. The discipline is in applying them consistently, particularly under the time pressure that makes AI tools attractive in the first place.
Working with AI effectively over longer projects is less about finding the right prompts and more about treating the decision context itself as a managed artifact. The conversation is ephemeral; the document is not. That distinction is where the real leverage is.