
WebMCP Brings the Model Context Protocol to the Browser

Source: chrome-devblog

If you have spent any time wiring up AI agents to real websites, you know the pain: brittle CSS selectors, flaky vision-based automation, and APIs that were never designed to be called by a non-human. Chrome’s new WebMCP early preview is trying to fix that at the platform level.

What WebMCP Actually Is

WebMCP is a browser-native implementation of the Model Context Protocol, the same structured tool-and-resource standard that has been gaining traction in desktop AI tooling over the past year. The idea is simple: instead of an AI agent reverse-engineering your site’s UI to click a button, your site declares what actions are available and what they expect, and the agent calls them directly.

Think of it as a manifest.json for AI agents. You define tools, the browser surfaces them, and any MCP-compatible agent can discover and invoke them without needing to know anything about your DOM.

Why This Matters More Than It Sounds

Right now there are roughly two ways an AI agent interacts with a webpage:

  1. Browser automation (Playwright, Puppeteer, etc.) - works, but is fragile and slow
  2. Direct API calls - works well, but requires the site to have a public API in the first place

Most sites do not have a public API. Most sites do have UI. So agents end up using browser automation, which breaks every time someone changes a button’s class name.

WebMCP sits in a third lane: the site author defines structured actions in JavaScript, and the browser acts as the transport layer between those actions and whatever agent is running. It is closer to an API than DOM scraping, but requires no separate backend infrastructure.

// Rough idea of what a WebMCP tool registration might look like
navigator.mcp.registerTool({
  name: 'search_products',
  description: 'Search the product catalog',
  inputSchema: {
    type: 'object',
    properties: { query: { type: 'string' } },
    required: ['query']
  },
  handler: async ({ query }) => {
    return await fetchProducts(query);
  }
});
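The agent side of the protocol is not shown in the preview docs, so here is a hedged sketch of the discover-and-invoke flow. The `mcp` object below is a tiny in-memory stand-in for the browser's registry, and `listTools` and `callTool` are assumed names, not the shipped API:

```javascript
// Hypothetical sketch: an in-memory registry standing in for the
// browser's (not-yet-final) WebMCP surface, so the discover-and-invoke
// flow can run end to end. None of these names are the shipped API.
const mcp = {
  _tools: new Map(),
  registerTool(tool) {
    this._tools.set(tool.name, tool);
  },
  // Agents discover tools by metadata only -- no DOM knowledge needed.
  listTools() {
    return [...this._tools.values()].map(
      ({ name, description, inputSchema }) => ({ name, description, inputSchema })
    );
  },
  // Agents invoke a tool by name with schema-shaped arguments.
  async callTool(name, args) {
    const tool = this._tools.get(name);
    if (!tool) throw new Error(`Unknown tool: ${name}`);
    return tool.handler(args);
  },
};

// The site registers a tool, as in the snippet above
// (the product fetch is stubbed out here).
mcp.registerTool({
  name: 'search_products',
  description: 'Search the product catalog',
  inputSchema: {
    type: 'object',
    properties: { query: { type: 'string' } },
    required: ['query'],
  },
  handler: async ({ query }) => [{ id: 1, title: `Result for "${query}"` }],
});

// An agent's side of the conversation: list what exists, then call by name.
mcp.callTool('search_products', { query: 'headphones' })
  .then((results) => console.log(results));
```

The point of the shape, whatever the final names turn out to be: the agent only ever sees tool names and JSON schemas, so a redesign of the page's markup cannot break it the way a renamed CSS class breaks a Playwright script.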

My Take

I have mixed feelings, and they are mostly positive.

On one hand, standardizing this at the browser level is the right call. The alternative is every AI company shipping their own agent-to-website protocol, which ends up worse for everyone. MCP already has decent momentum, so using it as the foundation makes sense.

On the other hand, adoption is the hard problem here. Getting site owners to instrument their pages with WebMCP tools is the same challenge that stalled earlier structured-data efforts. Schema.org has been around since 2011 and most sites still do not bother. The difference this time is that the incentive is clearer: if you want AI agents to use your site reliably rather than destructively, you implement the tools.

For me, building Discord bots that occasionally reach out to web services, the more interesting angle is what this means for agents running in headless contexts. If a browser can expose MCP tools to a local agent, you start to get a pretty clean architecture: the browser handles the authenticated session and the DOM complexity, and your agent just calls typed functions.

This is an early preview, so the API surface will shift. But the direction is right. Worth watching.
