From Static Instructions to Live System State: MCP as a Context Layer
Source: martinfowler
Back in February 2026, the Martin Fowler blog published a survey of context engineering for coding agents, noting that the options for configuring an agent’s context had “exploded” and that Claude Code was leading with innovations in this space. Of the mechanisms it surveys, the Model Context Protocol is the one whose implications extend furthest, because it changes the boundary of what counts as context in the first place.
The model has a filesystem. It has a conversation history. With MCP, it gains access to the live state of any external system that exposes a server, and that changes the context engineering problem considerably.
What MCP Actually Is
MCP is a standardized JSON-RPC 2.0 interface over stdio or HTTP transport. It defines three primitive connection types between clients (coding agents) and servers (external systems).
Tools are callable functions with typed JSON Schema parameters. The agent receives a description of each tool when it connects to the server and can invoke them during task execution. A GitHub MCP server might expose tools for listing open issues, creating pull requests, or fetching code review comments. The agent calls these using the same mechanism it uses for any built-in tool.
Resources are readable data streams, analogous to file contents but sourced from external systems. A documentation server might expose API specifications as resources. An internal wiki server might expose architecture decision records.
Prompts are reusable template definitions the server offers to the client. This primitive is the least widely used currently, but it enables servers to define common workflows or decision templates the agent can invoke by name.
From the model’s side of the interface, there is no meaningful difference between “read file via built-in tool” and “fetch GitHub issue via MCP server.” Both arrive through the same tool-call mechanism. Both produce results that land in the context window as structured content. The agent reasons about them the same way.
What This Changes for Context Engineering
That uniformity has a specific implication. Before MCP, if relevant context lived in an external system, the developer had to retrieve it manually and paste it into the conversation. A database schema, a Jira ticket, the output of a CI run: all of these required a copy-paste step, introduced copying errors, and degraded as the source data changed between session starts.
MCP enables on-demand retrieval of live external data using the same mechanism the agent uses for everything else. An agent working on a bug can call the GitHub MCP server to retrieve the issue that reported it, then the pull request that introduced the regression, then the CI failure output that confirmed it, all at the moment of need, with current data, without developer intervention.
The contrast with static CLAUDE.md content is instructive. A CLAUDE.md entry documenting your database schema is accurate as of the day it was written. An MCP server connected to your database returns the actual current schema on each query. The CLAUDE.md version degrades silently as the schema evolves; the MCP version is always fresh. For slow-moving information like project conventions and architectural constraints, static context in CLAUDE.md is appropriate; the overhead of an MCP server is not justified. For fast-moving information like schema state, issue status, or CI results, MCP is the right layer.
# Appropriate for CLAUDE.md (slow-moving, team convention):
Do not add nullable columns to existing tables without a migration plan
reviewed by the database team.
# Should NOT be in CLAUDE.md (fast-moving, fetch live via MCP):
Current schema: table `users` has columns id, email, created_at, role
This also affects how you think about context staleness as a failure mode. Missing context produces gaps the agent may correctly identify and try to fill. Stale context produces confident misinformation: the agent uses the schema entry from three months ago, writes code that references a column that was renamed, and produces something syntactically plausible that fails at runtime. The MCP approach eliminates this class of failure for the data it covers.
The Ecosystem That Developed Through 2025
The MCP server ecosystem grew considerably through 2025. Available integrations now span GitHub, Linear, Jira, Postgres, SQLite, filesystem access, browser automation, Sentry, CI systems including GitHub Actions and CircleCI, and dozens of domain-specific implementations. Anthropic publishes official reference servers for common use cases; the open-source ecosystem covers the rest.
The standardization effect compounds across tools. An MCP server built for Claude Code works with any compliant client that has adopted the protocol. Investment in exposing your internal systems via MCP translates to richer context for every agent in your workflow, not just the one you were using when you built it. This portability is the structural advantage of standardization: the work of building an integration accrues to the protocol, not to a single client.
WebMCP, in early preview in Chrome, extends the pattern to the browser. Web applications can register structured tools via navigator.mcp.registerTool(), making browser-native functionality accessible to agents through the same interface. The direction is toward MCP as a general-purpose layer between agents and external systems, reaching well beyond the IDE.
Designing MCP Servers Well
The quality of context that MCP servers provide varies with their design, and the design decisions are not obvious.
Response concision matters significantly. A tool that returns five thousand tokens of raw JSON when the agent needed three specific fields burns context budget and adds irrelevant noise. Designing responses to be minimal and structured, returning exactly what is useful, is context engineering at the tool layer. The cost of verbose MCP responses is the same as the cost of verbose CLAUDE.md content: wasted token budget and degraded signal density.
Schema precision matters for tool calling reliability. A tool description that clearly specifies required parameters, what each field means, and what the response format will be enables the model to call the tool correctly on the first attempt. Poorly described tools produce incorrect calls, error messages, and retry loops that consume context budget without progress. Writing a good tool description is closer to writing a careful API contract than to writing a helpful comment.
Scope matters for security. The MCP specification supports configuring server permissions, but the burden of appropriate scoping falls on the administrator who connects the server. An MCP integration with write access to production systems, connected to an agent running autonomous loops, needs explicit authorization controls. Prompt injection via malicious content in retrieved tool responses is a documented attack vector: an agent reading a GitHub issue that contains injected instructions can be manipulated if the agent does not maintain clear boundaries between instructions and retrieved content. MCP’s architecture does not eliminate this risk; it requires both the server author and the deploying developer to account for it deliberately.
What the Discipline Actually Looks Like
The Fowler article frames context engineering as a discipline developers need to engage with seriously. MCP is a significant part of what makes that discipline extend beyond maintaining CLAUDE.md files.
The decisions involved in MCP integration are architectural. Which external systems should the agent have access to? What scope should each integration expose, read-only or read-write? What should the response contracts look like, and who is responsible for keeping them accurate? How do you test that the agent is using retrieved context correctly rather than ignoring it or being misled by it? These questions do not resolve through prompt design. They require engineering judgment about the information environment the agent operates in.
Most development teams using coding agents are not yet thinking systematically about this layer. The available MCP ecosystem has matured enough to support sophisticated integrations. The practices around designing, deploying, and maintaining those integrations are still developing. The teams getting the most out of context-aware agents tend to be the ones treating MCP server design as an engineering problem, with the same rigor they bring to API design, rather than as a configuration exercise that ends when the server connects successfully.
The shift that the Martin Fowler article describes, context engineering becoming a necessity, is visible in this specific area. Adding an MCP server is not the end of the work. It is the start of a new maintenance surface: schemas change, APIs evolve, scope decisions need revision, and the agent’s behavior with a given integration needs the same kind of ongoing evaluation you would give any other system dependency.