JavaScript engines have spent decades getting fast by being a little reckless — making educated guesses about types at runtime and generating tight machine code based on those guesses. If the guess is wrong, you bail out (“deoptimize”) and fall back to something slower. It works remarkably well for dynamic languages. The interesting news from the V8 team is that they’ve now brought this same philosophy to WebAssembly.
The V8 blog post on speculative WebAssembly optimizations details two features shipping in Chrome M137: speculative call_indirect inlining and deoptimization support for WebAssembly. Together, they let the compiler make bets about which function a call_indirect will target, inline that function’s body directly, and then gracefully recover if the bet turns out to be wrong.
Why This Matters for WasmGC
call_indirect is how WebAssembly implements dynamic dispatch — think virtual method calls in object-oriented languages. It’s essential for WasmGC, the extension that lets garbage-collected languages like Dart, Kotlin, and Java compile to Wasm. The problem is that call_indirect is inherently indirect: you look up a function in a table at runtime, which kills inlining opportunities and makes the optimizer’s job hard.
The fix the V8 team landed is conceptually similar to what V8 already does for JavaScript's polymorphic call sites: collect feedback on which target functions actually get called, and if one target dominates, emit a guarded inline:
if (actual_target == expected_target) {
// inlined fast path
} else {
// deoptimize or slow path
}
For monomorphic call sites — where the same function gets called almost every time — this is a big win. The inlined code can be further optimized in context, register allocation improves, and you avoid the overhead of the indirect call mechanism entirely.
The Numbers
On a set of Dart microbenchmarks, the combined effect is over 50% average speedup. On larger, more realistic applications the gains are 1–8%. That’s a wide range, and the microbenchmark number is almost certainly not representative of most real workloads. But 1–8% on production-sized apps is genuinely meaningful — that’s the kind of improvement that changes the calculus on using WasmGC languages for performance-sensitive code.
The more interesting note is the framing around future work. Deoptimization support is described as a “building block” — once you have the machinery to bail out of speculative code safely, you can layer on more aggressive speculations. The V8 team has been methodical about this: JavaScript got decades of JIT refinement, and now that infrastructure is being ported to Wasm piece by piece.
What This Means in Practice
If you’re shipping a WasmGC-compiled app — a Flutter web build, a Kotlin/Wasm module, or anything Dart-based — Chrome M137 should make it noticeably faster without any changes on your end. The engine does the work.
For the rest of us watching WebAssembly mature, this is a good signal. WasmGC is still relatively young, and closing the performance gap between hand-tuned C/C++ Wasm and Wasm compiled from GC languages is essential for the ecosystem to grow. Speculative optimizations are how JavaScript got fast, and there's no reason they can't do the same for Wasm.