
Chrome 146: When the Browser Finally Catches Up to Its Own Libraries

Source: chrome-devblog

Chrome 146 ships three features that share a quiet admission: the web platform spent years watching developers solve its gaps with libraries, and it is now absorbing those solutions.

The release landed on March 10, 2026, and the headlines are scroll-driven animation improvements, scoped custom element registries, and the Sanitizer API. Each of them has a backstory that predates the browser support by half a decade or more.

The Sanitizer API and Why DOMPurify Exists

DOMPurify is probably the most widely deployed security library in frontend development. If you accept user-generated HTML from anywhere, you almost certainly depend on it. The core problem it solves is straightforward: innerHTML = userContent is an XSS vector; stripping dangerous content from HTML requires parsing it, and parsing HTML correctly requires a real HTML parser, not a regex.

DOMPurify works by parsing the input through the browser’s own parser using a detached Document, walking the resulting DOM tree, and removing disallowed elements and attributes. This approach is sound but has a subtle weakness: the sanitized markup must eventually be serialized back to a string so it can be assigned to innerHTML, and serialization followed by re-parsing creates an opportunity for mutation XSS (mXSS). Parser differentials between the serializer and the consumer can reintroduce dangerous content that was removed during the first parse.

The Sanitizer API in Chrome 146 eliminates this round-trip. Instead of returning a sanitized string, it integrates directly into DOM insertion through setHTML():

const div = document.createElement('div');
div.setHTML('<b>Hello</b><script>alert(1)<\/script><img src=x onerror=alert(2)>');
// Result: <b>Hello</b><img src="x"> — the script element is removed and
// the onerror handler stripped; img itself is in the default allowlist

The sanitization happens inside the browser’s rendering engine, on the parsed DOM, before it ever reaches JavaScript. There is no serialize-parse cycle. The browser also knows the insertion context, so a fragment that would be safe at the top level but dangerous inside a <table> can be handled correctly.

Custom configuration is available through the Sanitizer class:

const sanitizer = new Sanitizer({
  allowElements: ['p', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li'],
  allowAttributes: { 'href': ['a'], 'class': ['*'] },
  dropElements: ['script', 'style'],
  allowComments: false,
});

element.setHTML(untrustedInput, { sanitizer });

The default behavior, with no configuration, is an allowlist of safe structural elements. Scripts, event handlers, javascript: URLs, iframes, and embeds are all blocked. This means div.setHTML(userInput) is a safe default in a way that div.innerHTML = userInput never was.

DOMPurify will remain important for cross-browser compatibility for years, and its authors at Cure53 were involved in designing the Sanitizer API spec. But the existence of a native option changes the calculus for new projects targeting modern browsers. An 18KB library dependency with its own update cadence is a meaningful cost, especially for applications where a compromised CDN or dependency chain is part of the threat model.
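During the transition, both paths can live behind one entry point. A minimal sketch, assuming DOMPurify is loaded on the page; the function name is illustrative:

```javascript
// Prefer the native Sanitizer API; fall back to DOMPurify where
// setHTML() is unavailable.
function safeSetHTML(el, untrustedHTML) {
  if (typeof el.setHTML === 'function') {
    // Native path: sanitization happens on the parsed DOM,
    // with no serialize-parse round trip.
    el.setHTML(untrustedHTML);
  } else {
    // Library path: DOMPurify returns a sanitized string for innerHTML.
    el.innerHTML = DOMPurify.sanitize(untrustedHTML);
  }
}
```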

The Trusted Types integration is worth noting here. The Sanitizer API works naturally with Trusted Types policies, which means it fits into a Content Security Policy approach to XSS defense rather than being a standalone mitigation. You can use setHTML() inside a Trusted Types policy to ensure that all DOM writes are gated through sanitization, turning it from a library convention into an enforced architectural constraint.
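One way this can look in practice, as a sketch, assuming setHTML() is treated as a safe sink under Trusted Types enforcement as currently specified; the selector and input names are illustrative:

// With the CSP header
//   Content-Security-Policy: require-trusted-types-for 'script'
// raw-string writes to injection sinks are rejected, while setHTML()
// stays usable because its output is sanitized by construction.
const el = document.querySelector('#comment');

el.innerHTML = untrustedInput;   // throws under the CSP above
el.setHTML(untrustedInput);      // allowed: sanitized before insertion

The CSP turns "always sanitize before inserting" from a code-review convention into something the browser enforces on every sink.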

Scoped Custom Element Registries

The global customElements registry is one of those API decisions that made sense for initial simplicity and became increasingly painful as the ecosystem grew. Every call to customElements.define('my-button', ...) is global and permanent. Define the same name twice and you get a hard error. This creates an impossible situation for component libraries: two libraries that both want a <my-button> element cannot coexist on the same page, and micro-frontend architectures where independently deployed applications compose into one document face constant naming conflicts.

Scoped Custom Element Registries solve this by allowing custom element definitions to be scoped to a ShadowRoot. You create a CustomElementRegistry instance, register your elements in it, and pass it when attaching a shadow root:

const registry = new CustomElementRegistry();
registry.define('my-button', MyButtonV2);

const host = document.createElement('div');
const shadow = host.attachShadow({ mode: 'open', registry });
shadow.innerHTML = '<my-button>Click</my-button>';
// Resolves to MyButtonV2, regardless of what window.customElements says

The lookup order is explicit: elements inside a shadow root check the shadow root’s scoped registry first, then fall back to the global registry. This means globally defined elements remain accessible inside scoped shadow roots, but local definitions take precedence. You can run two versions of a library on the same page with no conflict:

// In library v1 bundle
const registryV1 = new CustomElementRegistry();
registryV1.define('fancy-button', FancyButtonV1);

// In library v2 bundle
const registryV2 = new CustomElementRegistry();
registryV2.define('fancy-button', FancyButtonV2);

// Both work simultaneously in their own shadow roots
hostA.attachShadow({ mode: 'open', registry: registryV1 });
hostB.attachShadow({ mode: 'open', registry: registryV2 });
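The fallback direction works too. A sketch of the lookup order described above, where host, GlobalIcon, and LocalIcon are illustrative names:

// Global definition: visible everywhere, including scoped shadow roots.
customElements.define('app-icon', GlobalIcon);

// A scoped definition of the same name takes precedence inside its root.
const registry = new CustomElementRegistry();
registry.define('app-icon', LocalIcon);

const shadow = host.attachShadow({ mode: 'open', registry });
shadow.innerHTML = '<app-icon></app-icon>';  // upgrades to LocalIcon
// Names absent from the scoped registry fall back to the global one.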

There was a community polyfill that handled this through monkey-patching, intercepting customElements.define, innerHTML assignments, and element construction. The approach was brittle and imposed overhead on every DOM operation. Native support removes that overhead entirely and handles edge cases that polyfills could not, including proper handling of document.createElement() with scoped registries and correct behavior with nested shadow roots.

This feature matters most for teams building component systems at scale. If you maintain a design system consumed by multiple independent applications, or if you work on an application shell that hosts micro-frontends, the global registry has been a constant source of friction. The scoped registry API removes it cleanly.

Scroll-Driven Animations and the timeline-scope Extension

The scroll-driven animations system shipped in Chrome 115, giving developers animation-timeline, the scroll() and view() CSS functions, and the ScrollTimeline and ViewTimeline JavaScript APIs. Chrome 146 extends this with timeline-scope, which addresses a structural limitation in the original design.

The original API allowed scroll and view timelines to be consumed only by descendants of the element that defined them. If you had a scroller in one part of the DOM and an element in a sibling branch that you wanted to animate based on that scroll position, you needed JavaScript. timeline-scope on a common ancestor unlocks cross-subtree consumption:

.page-wrapper {
  timeline-scope: --header-reveal;
}

.sidebar {
  scroll-timeline-name: --header-reveal;
}

.main-content {
  animation-timeline: --header-reveal;
  animation: fade-in linear;
  animation-range: entry 0% entry 100%;
}

Without timeline-scope, orchestrating animations across subtrees required scroll event listeners running on the main thread, or IntersectionObserver callbacks that fire with a delay after the browser’s compositor has already moved on. The CSS-native approach runs off the main thread when the compositor handles it, which covers transform and opacity animations in most cases. The performance difference on animation-heavy pages is real.
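For cases where the timeline has to be built dynamically, the JavaScript API can express the same kind of effect. A minimal sketch, assuming the .sidebar scroller and .main-content fade-in from the CSS above:

const scroller = document.querySelector('.sidebar');
const timeline = new ScrollTimeline({ source: scroller, axis: 'block' });

document.querySelector('.main-content').animate(
  { opacity: [0, 1] },        // same fade-in as the CSS version
  { timeline, fill: 'both' }  // progress driven by scroll, not clock time
);

Like the CSS form, this animation is eligible for compositor-thread execution because it only touches opacity.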

The full scroll-driven animations API, now extended with timeline-scope, means the IntersectionObserver workaround that defined scroll-linked effects for the past five years is no longer the right default answer for new projects.

What This Release Signals

These three features are not incremental polish. They represent the web platform acknowledging that the ecosystem had found the right solutions independently, and committing to building those solutions into the browser itself. The Sanitizer API absorbed lessons from DOMPurify and improved on the underlying mechanism. Scoped registries absorbed years of experience from teams building at scale with Web Components. The scroll animation extensions came from real use cases that the initial design could not address.

The practical implication is that a meaningful portion of the JavaScript libraries and polyfills shipping in production web applications today exist to fill gaps that the platform has now closed. The timeline for sunsetting those dependencies is not immediate, because Safari and Firefox need to ship these features too. But the trajectory is clear.

For new projects targeting evergreen browsers with a modern baseline, Chrome 146 raises the floor on what you can do without reaching for a library.
