How V8 Stopped Allocating a New Object Every Time You Update a Float

Source: v8

If you’ve ever wondered how much work your JavaScript engine does just to update a single number, the V8 team’s latest deep-dive into their mutable heap numbers optimization is a sobering read.

The story starts with the JetStream2 benchmark suite and a specific test called async-fs — a JavaScript file system implementation built around asynchronous operations. The V8 team noticed a strange performance cliff and traced it back to something unexpected: the benchmark’s custom Math.random implementation.

let seed = 49734321; // arbitrary nonzero initial seed; left uninitialized, the first addition produces NaN, which masks to 0
Math.random = (function() {
  // Robert Jenkins' 32-bit integer hash, iterated as a PRNG.
  return function () {
    seed = ((seed + 0x7ed55d16) + (seed << 12))  & 0xffffffff;
    seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
    seed = ((seed + 0x165667b1) + (seed << 5))   & 0xffffffff;
    seed = ((seed + 0xd3a2646c) ^ (seed << 9))   & 0xffffffff;
    seed = ((seed + 0xfd7046c5) + (seed << 3))   & 0xffffffff;
    seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
    return (seed & 0xfffffff) / 0x10000000;
  };
})();

Nothing unusual on the surface: it's a bit-mixing integer-hash PRNG (Robert Jenkins' 32-bit hash, not an LCG). But the culprit is seed, and specifically where it lives: a ScriptContext.

The HeapNumber Problem

In V8, small integers (Smis) are encoded directly in the tagged pointer, so they need no allocation. Everything else, including all floating-point numbers and integers that overflow the Smi range, gets boxed on the heap as a HeapNumber object. Historically, these were immutable: updating a variable like seed meant V8 had to allocate an entirely new HeapNumber on every write.

Call that PRNG a million times and you’ve just generated a million garbage objects for the GC to clean up. For a tight loop updating a single outer-scope variable, this is brutal.
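The immutable-box behavior is easy to model in plain JavaScript. This is a toy sketch, not V8's actual representation: makeImmutableBox and the MINSTD multiplier are our stand-ins for the engine's boxing and the benchmark's mixer.

```javascript
// Toy model (not V8 internals): an immutable "HeapNumber" box.
// Every write replaces the whole box, so a hot loop produces one
// short-lived object per update for the GC to sweep.
function makeImmutableBox(value) {
  return Object.freeze({ value });
}

let seedBox = makeImmutableBox(49734321); // arbitrary nonzero seed
let allocations = 0;

for (let i = 0; i < 1000; i++) {
  // Each update allocates a brand-new box, because the old one
  // cannot change. This is the allocation churn described above.
  seedBox = makeImmutableBox((seedBox.value * 48271) % 0x7fffffff);
  allocations++;
}
// One allocation per write: 1000 boxes for 1000 updates.
```

The point of the model is the one-to-one ratio between writes and allocations, which is exactly what the GC ends up paying for.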

The fix — mutable heap numbers — allows V8 to recognize that certain HeapNumber slots (like those in a ScriptContext) can be updated in-place rather than replaced. The variable cell holds a stable pointer, and the float value inside gets mutated directly. No allocation. No GC pressure.
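A userland analogue of that stable cell is a one-element Float64Array: the array is allocated once, and every write overwrites its float payload in place. Again, this is a sketch of the idea, not V8's machinery.

```javascript
// Toy analogue (not V8's implementation): one stable cell whose
// 64-bit float payload is overwritten in place. The cell is allocated
// once; every subsequent write reuses the same storage.
const cell = new Float64Array(1);
cell[0] = 49734321; // arbitrary nonzero seed

function step() {
  // Same backing store, new payload: no per-write allocation.
  cell[0] = (cell[0] * 48271) % 0x7fffffff; // MINSTD step
  return cell[0];
}

for (let i = 0; i < 1000; i++) step();
```

Contrast this with the immutable-box model: a million calls to step still touch only the one cell allocated up front.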

The result: a 2.5x speedup on async-fs, and a noticeable improvement in JetStream2’s overall score.

Why This Matters Beyond Benchmarks

The V8 team is careful to note that this optimization was inspired by the benchmark but applies to real code. Any pattern where you’re accumulating into a module-level or closure-captured float — a running average, a physics simulation tick, a statistical counter — can benefit from this.
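For instance, a running average kept in a closure rewrites its captured float on every sample, which is exactly the write pattern described above (the helper name makeRunningMean is ours, not V8's):

```javascript
// Hypothetical example of the pattern: a closure-captured float
// (`mean`) that is overwritten on every call. Before this optimization,
// each write could box a fresh HeapNumber; with mutable heap numbers
// the underlying cell can be updated in place.
function makeRunningMean() {
  let mean = 0;  // closure-captured float, rewritten on every sample
  let count = 0;
  return function update(sample) {
    count += 1;
    mean += (sample - mean) / count; // incremental (Welford-style) mean
    return mean;
  };
}

const update = makeRunningMean();
update(2);
update(4);
update(6); // running mean of 2, 4, 6 is 4
```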

It’s also a good reminder of how JavaScript performance is rarely about the algorithm on paper. You can write perfectly correct, mathematically sound code and still hammer the allocator because of how values are represented internally. The engine has to make guesses about what you’ll do with a value, and sometimes those guesses are expensive.

For those of us building bots or tools that run long-lived processes with hot inner loops, it’s worth keeping an eye on V8’s blog. These aren’t academic wins — they’re the kind of changes that quietly make your production Node.js service a bit snappier without you touching a line of code.
