The Shared Anchor: What Context Anchoring Requires at Team Scale
Source: martinfowler
The context anchoring pattern, as described by Rahul Garg in his article on Martin Fowler’s site, frames the living document as a personal productivity practice. You maintain a structured file that externalizes decisions from your AI sessions, re-inject it at session boundaries, update it as the project evolves, and stop relying on transformer attention to preserve early context through a long conversation.
This is the right response to a real problem. Attention over long sequences is not uniform. The Lost in the Middle paper from Liu et al. demonstrated empirically that LLM recall degrades for content placed in the middle of a long context, producing a U-shaped performance curve: information at the beginning and end of the context is retrieved far more reliably than information in the middle. An architectural constraint described at turn three of a sixty-turn session sits in the worst possible position.
The living document is the correct structural response. But the pattern is described, and primarily practiced, at the individual level. When every developer on a team runs AI-assisted workflows against the same codebase, the anchor becomes shared infrastructure, and shared infrastructure requires different discipline than a personal file on your machine.
Content Stratification
Before the team question, there is a prior one: what belongs in a context anchor versus other persistent artifacts?
The answer falls along a spectrum from ephemeral to permanent:

- **Session-specific state**: the current task, what is in scope today, what is blocked. Lives only in the anchor and gets discarded when the task completes.
- **Sprint-level decisions**: architectural choices made this week that might not survive the next sprint review. Belong in the anchor but warrant periodic review.
- **Project invariants**: coding standards, dependency constraints, never-break rules. Belong in CLAUDE.md, .cursorrules, or equivalent tool-level injection files that get committed to the repository.
- **Formal architectural decisions**: significant choices that future engineers will need to understand. Warrant a proper Architecture Decision Record.
The failure mode of an undifferentiated anchor is that it tries to carry all of these at once. Session-specific cruft accumulates alongside lasting architectural decisions. The document grows large. The model attends less reliably to the sections that matter. The anchoring stops working.
A practical heuristic: if a constraint has survived three sessions unchanged, it probably belongs in CLAUDE.md. If it has survived a quarter unchanged, it belongs in an ADR. The context anchor is for decisions that are real but transient.
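As a concrete sketch, a constraint promoted out of the anchor might land in CLAUDE.md looking something like this. The entries are hypothetical, echoing the example decisions used later in this article:

```markdown
# CLAUDE.md (hypothetical excerpt)

## Invariants
- Database access is raw SQL via pgx. No ORM, anywhere.
- PostgreSQL 15 only; no SQLite in production code.
- Auth tokens are JWTs with a 7-day expiry; refresh tokens live in httpOnly cookies.
```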
Splitting Personal from Shared
Consider four developers working on a backend service, each using AI-assisted sessions. Each has distinct working context: different features in progress, different local decisions made during the day. But shared ground exists: database schema decisions, error handling patterns, API versioning choices.
A personal anchor file conflates personal and shared context in one place. When another developer starts a session, they have no visibility into the shared decisions their colleague's session just produced. If two sessions are approaching the same library decision from different angles, that conflict surfaces at merge time rather than at the moment the decisions are made.
The natural split is structural:
```
docs/
  ai-context.md        # committed, PR-reviewed, team-visible
  .session-context.md  # gitignored, personal session state
```
The shared layer functions as a lightweight ADR, committed to the repository and readable by every AI session that runs against it. The personal layer holds in-progress task state that does not need to survive past the current session.
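Wiring up the split is a one-time setup. A minimal sketch, assuming the file names used above; the throwaway repository exists only to keep the snippet self-contained:

```shell
set -eu
# Self-contained demo repo; in practice you would run this in your project.
tmp=$(mktemp -d) && cd "$tmp"
git init -q

mkdir -p docs
: > docs/ai-context.md                     # shared layer: committed
: > docs/.session-context.md               # personal layer: per-developer
echo "docs/.session-context.md" >> .gitignore

git add .
git check-ignore docs/.session-context.md  # prints the path: it is ignored
git status --porcelain                     # only the shared files are staged
```

Because the personal file is gitignored before anything is staged, `git add .` never picks it up, so there is no window where session state can leak into a commit.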
This split creates its own discipline. The shared decisions file is now a team artifact. Changes require review. Conflicts need resolution. Ownership needs to be clear. All the coordination challenges of any shared document apply, and ignoring them produces the same failure modes: stale entries, contradictory content, nobody confident about which version is authoritative.
Merge Conflicts as Signal
Merge conflicts in a context anchor are worth examining before resolving. Two developers updating the shared decisions file with contradictory entries signals that the codebase or the team process is generating ambiguity fast enough that two people arrived at different answers to the same question.
A conflict in a decisions file is worth a brief synchronous conversation. The conflict represents a real fork in the project’s direction, not just a text editing collision. Resolving it mechanically gives you a correct document state but loses the conversation about why the conflict happened.
Attribution helps make this auditable. Each entry in the shared file can include a date and initiator:
```markdown
## Active Decisions

- **Auth**: JWT with 7-day expiry; refresh tokens in httpOnly cookies
  _Added 2026-03-10, aligned with @rahul and @priya_
- **Database**: PostgreSQL 15, raw SQL via pgx, no ORM
  _Added 2026-02-28, full reasoning in [ADR-004](docs/adr/004-database.md)_
```
The date and attribution make the history of the conflict auditable. You can trace each decision to the session or discussion where it originated, which is useful when a decision needs to be superseded and you want to understand what you are replacing.
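Git makes that trace cheap. A sketch, using the hypothetical docs/ai-context.md path from above; the throwaway repository is only there to give the audit commands some history to show:

```shell
set -eu
# Build a tiny throwaway repo so the audit commands have history to inspect.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name "Example Dev"

mkdir -p docs
printf '## Active Decisions\n- **Auth**: JWT with 7-day expiry\n' > docs/ai-context.md
git add docs/ai-context.md
git commit -qm "anchor: record auth decision"

# The actual audit commands a team would run against their real repo:
git log --follow --date=short --format='%ad %an %s' -- docs/ai-context.md
git blame -s docs/ai-context.md
```

`git log --follow` gives the when and who of every change to the decisions file; `git blame` attributes each surviving line to the commit that last touched it.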
Version Control Strategy
Whether to commit the anchor or gitignore it depends on which content it holds.
The shared decisions layer should be committed. It is team knowledge. Run it through the normal PR process; changes to shared decisions are architectural changes in miniature, and the review discipline that applies to architecture applies here.
Personal session state should be gitignored. Task context, in-progress work, session checklists: these are useful during a session and irrelevant to anyone who was not present. Committing them accumulates noise without adding value.
The CLAUDE.md file occupies a different position. It is committed, stable, and serves as the highest-attention injection point for invariants. Changes to it deserve a PR and code review, since a change affects every AI session that runs against the repository. A stale or wrong constraint in CLAUDE.md propagates to every session until someone catches it and files a fix; treating the file as casual documentation understates the stakes.
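On GitHub or GitLab, the review requirement can be made automatic with a CODEOWNERS entry. A sketch, where `@backend-leads` is a hypothetical team handle and the paths are the ones used in this article:

```
# .github/CODEOWNERS
/CLAUDE.md           @backend-leads
/docs/ai-context.md  @backend-leads
```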
CI as Enforcement
An extension of the pattern that moves it from documentation to verification: if the shared decisions file encodes specific invariants, some of those invariants can be tested in CI.
If the file says “no ORM; raw SQL via pgx,” a CI step can scan for ORM imports and fail if it finds them. If it says “PostgreSQL only; no SQLite in production code,” a linter rule can enforce it. The context anchor becomes not just documentation for the AI session but a specification the pipeline can validate.
```yaml
# .github/workflows/context-checks.yml
- name: Verify no ORM usage
  run: |
    if grep -r "gorm\|ent\." --include="*.go" internal/; then
      echo "ORM usage found; violates shared decision on raw SQL"
      exit 1
    fi
```
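The PostgreSQL-only rule can be enforced the same way. A sketch of a companion step, where the grep patterns assume two common Go SQLite drivers and would need adjusting to your project's actual dependencies:

```yaml
- name: Verify no SQLite in production code
  run: |
    if grep -rn "mattn/go-sqlite3\|modernc.org/sqlite" --include="*.go" internal/; then
      echo "SQLite driver found; violates shared decision on PostgreSQL-only"
      exit 1
    fi
```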
Not every decision in a context anchor can be verified this way, but the ones that can are worth the CI step. It creates a feedback loop that the living document alone cannot provide: the team discovers violations automatically rather than waiting for an AI session to drift and for a developer to notice.
Temporal Maintenance
An anchor maintained across a long project will accumulate entries that no longer reflect reality. The longer a project runs, the more the document can drift from the actual codebase state.
ADR practitioners know this failure mode. The canonical response in that tradition is to supersede rather than modify: when a decision changes, write a new record that explicitly supersedes the old one rather than editing the existing one in place. The history is preserved; the current state is unambiguous.
For context anchors, a lighter-weight approach is appropriate. A periodic review pass, treating the shared decisions file as something that needs quarterly gardening, helps. Entries now encoded in CLAUDE.md or a proper ADR can be removed from the anchor. Superseded decisions can be moved to an Archived section rather than deleted outright.
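In the anchor itself, superseding can be as light as moving the entry and noting what replaced it. A hypothetical sketch; the MySQL entry is invented for illustration:

```markdown
## Archived

- **Database**: MySQL 8 via an ORM
  _Superseded 2026-02-28 by the PostgreSQL/pgx decision; see [ADR-004](docs/adr/004-database.md)_
```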
The goal is keeping the active section genuinely accurate. An anchor with twenty stale entries and five live ones is only marginally better than no anchor, because the model will attend to all twenty-five with comparable weight. The document becomes noise before it becomes wrong, and noisy anchors misdirect sessions in ways that are harder to diagnose than simple gaps.
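The quarterly pass can be nudged by tooling. A minimal sketch that flags entries whose `Added YYYY-MM-DD` line is more than roughly ninety days old, assuming the date convention from the example file above; the generated file stands in for a real anchor:

```shell
set -eu
# Stand-in anchor file with one old entry and one fresh one.
tmp=$(mktemp -d)
anchor="$tmp/ai-context.md"
printf -- '- **Auth**: JWT\n  _Added 2024-01-10_\n- **DB**: pgx\n  _Added %s_\n' \
  "$(date +%Y-%m-%d)" > "$anchor"

# GNU date; on macOS/BSD use: date -v-90d +%Y-%m-%d
cutoff=$(date -d '90 days ago' +%Y-%m-%d)

# ISO dates compare correctly as strings, so awk can do the filtering.
awk -v cutoff="$cutoff" '
  match($0, /Added [0-9-]+/) {
    d = substr($0, RSTART + 6, RLENGTH - 6)
    if (d < cutoff) print "stale candidate: added", d
  }' "$anchor"
```

The script only flags candidates; deciding whether an old entry is stale or simply stable is still a human judgment, which is the point of the gardening pass.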
Team-Scale Implications
When context anchoring moves from solo to team practice, several things shift structurally.
The anchor is no longer a personal productivity tool; it is shared infrastructure, with the same weight as CI configuration or build scripts. Changes to it affect every developer’s AI sessions, not just the person who made the change.
The update discipline is no longer entirely personal. Someone needs to own the shared decisions layer and review changes to it. Without clear ownership, it will drift unmaintained, accumulating entries that nobody is confident enough to remove.
The failure modes become more consequential. A personal anchor that goes stale affects one developer’s sessions. A shared anchor that goes stale affects everyone’s, and wrong constraints propagate to every session until someone notices and corrects them.
None of this undercuts the practice at team scale. It clarifies what kind of practice it is: a piece of shared technical documentation that needs version control, ownership, and regular maintenance, the same as any other living artifact the team depends on to make good decisions. The pattern Garg describes scales well; it just requires a different level of intentionality once the audience is a team of people rather than a single developer and their AI session.