Why the Model Context Protocol Will Outlast the Current Generation of Coding Agents
Source: martinfowler
The Martin Fowler article from February 2026 treats the Model Context Protocol as one item in a taxonomy of context engineering options. CLAUDE.md files, dynamic retrieval, conversation compaction, MCP servers: the article presents them as coordinate mechanisms for filling a coding agent’s context window. But they are not coordinate in long-term significance. MCP represents a different class of bet, and understanding why requires looking at the protocol design rather than the servers built on top of it.
What MCP Actually Is
The Model Context Protocol is a JSON-RPC 2.0 specification released by Anthropic in November 2024. The transport layer runs over stdio for local server processes or HTTP with Server-Sent Events for remote deployments. The stdio case is what most developers encounter first: the MCP server runs as a subprocess, the agent communicates through stdin and stdout, and the entire server is independently deployable and trivially sandboxable.
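On the wire, the stdio transport is newline-delimited JSON-RPC. A minimal sketch of two messages a client might send, with method names and the protocol version string taken from the MCP specification; the tool name and most fields are illustrative and trimmed to essentials:

```python
import json

# JSON-RPC 2.0 request the client sends at session start.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# A later tool invocation: the agent calls a tool the server advertised.
# The "query" tool and its arguments are made up for illustration.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "query", "arguments": {"sql": "SELECT 1"}},
}

# In the stdio transport, each message is one line of JSON written to the
# subprocess's stdin; responses come back one per line on stdout.
for msg in (initialize, call):
    line = json.dumps(msg)
    assert "\n" not in line  # one message per line
    print(line)
```

The one-line-per-message framing is what makes the subprocess model so easy to sandbox: the client needs nothing from the server but a pipe.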
The protocol defines three primitive object types. Tools are callable functions with JSON Schema input specifications. The agent receives the schema at session initialization, decides when to invoke the tool based on task context, constructs arguments, and receives a structured result. Resources are readable data objects addressed by URIs rather than called as functions. A PostgreSQL MCP server can expose a table schema as postgres://mydb/public/users/schema, a stable address the agent fetches when it needs the ground truth rather than what the ORM model file claims. Prompts are parameterized message templates that return pre-assembled sequences; they let server authors encode complex prompt patterns server-side rather than scattering them across client configurations.
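The first two primitives can be made concrete by sketching the shapes a client sees, here as plain Python dicts. Field names follow the MCP specification; the "query" tool itself is a hypothetical example:

```python
import json

# One entry in a server's tools/list response: a name, a description,
# and a JSON Schema describing the arguments the agent must construct.
tool = {
    "name": "query",
    "description": "Run a read-only SQL query against the database.",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

# Resources are fetched, not called: the agent issues a resources/read
# request against a stable URI like the schema address above.
read_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {"uri": "postgres://mydb/public/users/schema"},
}

print(json.dumps(tool["inputSchema"], indent=2))
print(json.dumps(read_request))
```

The split matters for context budgeting: a tool result arrives only when the agent decides to pay for it, while a resource URI costs almost nothing to advertise until it is actually read.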
The configuration in .mcp.json at the project root looks like this:
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
This configuration is version-controlled alongside the code, which makes it a project artifact rather than a personal setting. Every developer working on the project gets the same context sources.
The Standardization Effect
Before November 2024, every coding assistant built its own integrations. GitHub Copilot had GitHub context built in as a first-party feature. Cursor built its own documentation fetcher. If a team used an internal wiki, they were on their own for every tool they adopted. Integration work was duplicated across the ecosystem, and developers were locked into whatever their chosen assistant happened to support.
MCP breaks that coupling in the same way the Language Server Protocol broke the coupling between language tooling and editor implementations. Before LSP, building TypeScript IntelliSense for VS Code was separate work from building it for Neovim or Sublime Text. After LSP, the TypeScript language server was written once and consumed by every editor that implemented the protocol. The investment in correctness and completeness could be concentrated in the server implementation rather than diluted across N editor integrations.
MCP creates the same dynamic for context sourcing. A GitHub MCP server written once works with Claude Code, Cursor (which added MCP support in version 0.43), Continue.dev, and any future client that implements the protocol. An organization that builds a server for their internal documentation or proprietary database gets that investment amortized across every tool their developers use. A team migrating from Cursor to Claude Code keeps their MCP server configurations. The context sourcing layer becomes portable in a way that CLAUDE.md and .cursorrules files are not, because those file formats are tool-specific.
The official MCP GitHub organization has shipped reference server implementations for Slack, Google Drive, Brave Search, Sentry, and a range of databases. Community implementations extend this significantly. The pattern of internal MCP servers for organization-specific context is emerging in teams using these tools seriously: custom knowledge base access, proprietary API integration, deployment system queries. When the protocol is the standard, the server ecosystem compounds.
The Separation of Concerns MCP Enables
For context engineering specifically, MCP creates a clean boundary that static configuration files cannot provide. CLAUDE.md handles what the agent should always know: architectural decisions, team conventions, build system details, things that are stable and not retrievable from external systems. Dynamic tool calls handle what the agent might need: file contents, function definitions, test results. MCP handles what the agent cannot derive: external system state that exists authoritatively somewhere else.
The database schema is not derivable from ORM model files. ORM definitions lag behind migrations; they reflect what the models have been written to expect, not necessarily what the database currently contains. A coding agent writing a migration that adds a foreign key constraint needs the actual schema. An MCP PostgreSQL server with a read-only connection delivers it fresh at the moment the agent needs it rather than from a snapshot that may be days or weeks old.
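The pattern can be sketched with Python's stdlib sqlite3 standing in for Postgres (an assumption for the sake of a self-contained example; the real servers use their own drivers): the agent-facing connection is opened read-only and answers schema questions from the live database, not from model files.

```python
import os
import sqlite3
import tempfile

# Set up a database the way migrations would: out of band.
path = os.path.join(tempfile.mkdtemp(), "app.db")
admin = sqlite3.connect(path)
admin.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
admin.commit()
admin.close()

# The agent-facing connection is read-only (URI mode=ro). It can
# describe the schema as the database holds it right now...
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
schema = ro.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'users'"
).fetchone()[0]
print(schema)

# ...but any write fails, bounding the blast radius of a bad tool call.
try:
    ro.execute("DROP TABLE users")
except sqlite3.OperationalError as e:
    print("write refused:", e)
```

The same two properties carry over to a Postgres MCP server given a read-only role: answers are always current, and the worst a confused or manipulated agent can do is read.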
This matters more as agents take on longer tasks. A session that runs for an hour and involves multiple file edits, external API calls, and database operations cannot hold a static snapshot of the world from session initialization. External state changes during that time. The resource-addressed model in MCP, where data is fetched at a URI rather than preloaded into the system prompt, is the right architectural response to that constraint. The agent fetches the schema when it needs it, not when the session starts.
The distinction also maps cleanly to the three context lifetimes that become visible in long agentic sessions. Persistent context (conventions that survive compaction) belongs in CLAUDE.md. Ephemeral context (specific file contents fetched for a task) comes from tool calls. Transient context (external system state at a specific moment) comes from MCP. Each layer operates at a different timescale and has a different cost profile in tokens.
The Security Implications of Standardized Context Access
Standardized context access is also standardized attack surface. When an agent fetches a GitHub issue body through an MCP server, that content arrives in the context window without a trust marker distinguishing it from the agent’s own instructions. A malicious instruction embedded in issue comments competes for model attention with legitimate project conventions. The “lost in the middle” research from Liu et al. suggests that attention is not uniformly distributed across context positions, but this provides uncertain protection against well-positioned injected content.
The architectural response has two parts. First, MCP servers should run with minimum permissions: read-only database connections, scoped GitHub tokens without write access, no credentials for production write operations. The blast radius of a successful prompt injection attempt is bounded by what the MCP servers can actually do. If the GitHub token only has read permissions on a specific repository, an injection attempt that tries to exfiltrate data to a third-party service cannot use the GitHub MCP server as the exfiltration channel.
Second, the hooks system that Claude Code provides complements the MCP configuration at the enforcement layer. A PreToolUse hook can validate tool calls against a policy before they execute, blocking operations that fall outside the expected scope regardless of what the model concluded from injected content in fetched data. CLAUDE.md instructions are advisory; hooks are enforced. When an agent is connected to external systems through MCP, combining minimum-permission server configuration with hook enforcement creates a defense that does not depend on the model reliably interpreting security instructions under adversarial input.
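A sketch of the policy check such a hook might run. It assumes the event shape (tool_name and tool_input, delivered as JSON on stdin) and the exit-code-2 blocking convention described in Claude Code's hooks documentation; the blocked patterns themselves are invented for illustration:

```python
import json

# Illustrative deny-list: patterns that should never appear in an
# agent-initiated tool call, whatever the model was told by fetched content.
BLOCKED_SUBSTRINGS = ("prod.internal", "DROP TABLE", "DELETE FROM")

def check_tool_call(event):
    """Return a reason to block the call, or None to allow it."""
    tool = event.get("tool_name", "")
    text = json.dumps(event.get("tool_input", {}))
    for needle in BLOCKED_SUBSTRINGS:
        if needle in text:
            return f"{tool} call matches blocked pattern {needle!r}"
    return None

# A real PreToolUse hook would read the event with json.load(sys.stdin)
# and sys.exit(2) on a block (stderr is fed back to the model). Here we
# just run the policy against a sample event.
event = {
    "tool_name": "Bash",
    "tool_input": {"command": "psql -h prod.internal -c 'DELETE FROM users'"},
}
print(check_tool_call(event))
```

The point is not this particular deny-list but where it runs: the check executes outside the model, so no amount of injected text in a fetched issue body can talk it out of the policy.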
What This Means for Teams
The practical implication of treating MCP configuration as infrastructure rather than personal tooling preference is that it belongs in version control under team review, the same way CI configuration or package manifests do. The servers your coding agent can reach are as much a part of your development environment as the linters and formatters you run.
This also means the security posture of your MCP configuration deserves the same attention as other infrastructure. A developer who configures an MCP server with a write-capable production database token in a shared, version-controlled config file has created a shared risk for everyone on the team using that project configuration. The connection strings and tokens in MCP configuration are credentials, and they should be managed as such.
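One way to keep the token itself out of version control, assuming a client that expands environment variable references in MCP configuration (Claude Code documents ${VAR} expansion for this purpose), is to commit only the reference:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PERSONAL_ACCESS_TOKEN}" }
    }
  }
}
```

The committed file then names the credential without containing it, and each developer supplies their own scoped token through their shell environment.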
The Fowler article captures the current state accurately: context engineering has become a necessity for working seriously with coding agents on real codebases. What becomes clearer six weeks on is that MCP is the mechanism most likely to define the long-term shape of that discipline. Static configuration files are tool-specific and will continue to diverge across assistants. Dynamic retrieval strategies vary by implementation and use case. MCP server implementations, once written, work across the ecosystem and compound in value as protocol adoption grows. The investment that belongs at the center of a context engineering strategy is the one that remains useful regardless of which coding agent is dominant a year from now.