The V8 team recently published a deep dive into an optimization that delivered a 2.5x improvement in the JetStream2 async-fs benchmark. The bottleneck it eliminated is deceptively mundane: updating a number stored in a closure.
The Problem with Heap Numbers
To understand why this matters, you need to know how V8 represents values internally. Small integers (“Smis”) are stored directly in a tagged pointer — no heap allocation needed. But floating-point numbers, or integers too large for Smi range, get boxed into HeapNumber objects on the heap.
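A rough sketch of where that boundary sits. The exact limit is build-dependent; the 31-bit Smi payload here assumes a 64-bit build with pointer compression, and the variable names are illustrative, not V8's:

```javascript
// Illustrative Smi boundary under pointer compression:
// a 31-bit signed payload, so +/- 2^30.
const SMI_MAX = 2 ** 30 - 1;  //  1073741823
const SMI_MIN = -(2 ** 30);   // -1073741824

// These need no heap allocation -- the value lives in the tagged pointer:
const a = 42;            // Smi
const b = SMI_MAX;       // still a Smi

// These get boxed into HeapNumber objects:
const c = SMI_MAX + 1;   // an integer, but out of Smi range
const d = 0.5;           // any fractional value
```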
This is usually fine. The problem arises when you have a number variable captured in a closure that gets updated repeatedly. Every assignment allocates a new HeapNumber object. The old one becomes garbage. The GC has to collect it. Rinse, repeat, thousands of times per second.
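A minimal sketch of the pattern (hypothetical names, not code from the post). Once `total` takes a fractional value, every assignment below allocated a fresh HeapNumber before this optimization:

```javascript
// `total` is captured in a closure, so it lives in a context slot.
function makeAccumulator() {
  let total = 0.0;
  return function add(x) {
    total += x;   // pre-optimization: old box becomes garbage, new box allocated
    return total;
  };
}

const add = makeAccumulator();
for (let i = 0; i < 1000; i++) {
  add(0.25);      // thousands of short-lived HeapNumbers for the GC to sweep
}
```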
The benchmark’s custom Math.random makes this painfully visible:
Math.random = (function() {
  let seed = 49734321;  // initial seed used by the benchmark
  return function () {
    seed = ((seed + 0x7ed55d16) + (seed << 12)) & 0xffffffff;
    seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
    seed = ((seed + 0x165667b1) + (seed << 5)) & 0xffffffff;
    seed = ((seed + 0xd3a2646c) ^ (seed << 9)) & 0xffffffff;
    seed = ((seed + 0xfd7046c5) + (seed << 3)) & 0xffffffff;
    seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
    return (seed & 0xfffffff) / 0x10000000;
  };
})();
The bitwise operations coerce seed to 32-bit integers: & and ^ go through ToInt32 and yield signed values, while >>> goes through ToUint32 and yields unsigned ones. Either way, results routinely fall outside the 31-bit Smi range (beyond ±2^30), so seed lives on the heap as a HeapNumber. Six assignments per call, each one throwing away the old object and allocating a fresh one. The GC is doing a lot of invisible work here.
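A quick sanity check on those coercion rules. This snippet is illustrative (plain JavaScript semantics, not from the post), and the 2^30 limit assumes a pointer-compressed 64-bit build:

```javascript
// `&` and `^` coerce via ToInt32: signed 32-bit results.
const signedResult = 0xffffffff & 0xffffffff;  // -1, not 4294967295
// `>>>` coerces via ToUint32: unsigned, up to 2^32 - 1.
const unsignedResult = 0x80000000 >>> 0;       // 2147483648

// The benchmark's very first step already leaves Smi range
// (starting from seed = 0 for illustration):
const firstStep = (0 + 0x7ed55d16 + (0 << 12)) & 0xffffffff;
const outOfSmiRange = firstStep > 2 ** 30 - 1; // true: must be boxed
```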
Mutable Heap Numbers
The fix is conceptually simple: if V8 can prove that a context slot will only ever hold a number, it can allocate one mutable HeapNumber for that slot and update its value in place. No new allocation. No GC pressure. The pointer in the context slot stays the same — only the double stored inside it changes.
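A JavaScript-level analogy may help. The real mechanism lives inside V8's context slots; these plain objects just model the two allocation strategies:

```javascript
// Before: every write allocates a fresh "box", like immutable HeapNumbers.
let slot = { value: 0.5 };
function writeImmutable(x) {
  slot = { value: x };      // new allocation; the old box becomes garbage
}

// After: one mutable box per context slot, updated in place.
const mutableSlot = { value: 0.5 };
function writeMutable(x) {
  mutableSlot.value = x;    // same object, same pointer -- only the payload changes
}
```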
This is different from how V8 normally treats HeapNumber objects, which are immutable. Immutability enables sharing and inlining optimizations elsewhere. The mutable variant is a special case, tied to context slots where the engine can verify the type is stable.
The result: a 2.5x speedup on this benchmark, with a meaningful improvement to the overall JetStream2 score.
Why This Matters Beyond Benchmarks
The V8 team notes that while the optimization was identified through a benchmark, the pattern — a numeric variable in a closure, updated in a tight loop — is genuinely common in real code. Think:
- Accumulators in event handlers
- Frame counters in game loops
- Running totals in stream processors
- Any numeric state threaded through callbacks
These all share the same structure: a number captured in a closure, mutated repeatedly. Before this change, every mutation was silently allocating. Most developers would never think to look there for a performance problem.
The Takeaway
This is one of those optimizations that makes you appreciate how much invisible work the engine is doing — and how much a single well-placed change can unlock. You didn’t write bad code. The pattern is idiomatic JavaScript. V8 just got better at handling it.
It’s also a good reminder that when profiling JS performance, heap allocation pressure is often a more impactful culprit than algorithmic complexity. The GC pause you can’t see can matter more than the loop you’re staring at.