WebAssembly's Second Act: From Compilation Target to Language Platform
Source: hackernews
WebAssembly shipped in 2017 with a clear and limited story: take compiled C, C++, or Rust, strip out the OS-level calls, and run it at near-native speed in the browser. The format was deliberately low-level. It had linear memory, numeric types, and a stack machine. That was most of it. The pitch was performance, not portability across languages.
That story is now visibly incomplete. Mozilla’s February 2026 post on the WebAssembly roadmap frames the goal explicitly: Wasm should be a first-class target for any language, not just systems languages that compile through LLVM. Getting there has required a coordinated series of proposals that touch the type system, the memory model, the module system, and the async boundary. None of them are cosmetic additions. Together they represent a different conception of what Wasm is.
The Linear Memory Problem
The original design worked well for C and Rust because those languages give the programmer full control over memory. You compile to Wasm, your allocator lives in linear memory, your GC (if you have one) lives in linear memory, your string encoding lives in linear memory. Everything is bytes at offsets. The Wasm VM knows nothing about your objects; it just gives you a flat address space.
This is precisely why shipping a managed language to Wasm was painful for years. If you wanted Kotlin or Java or C# in the browser via Wasm, you had two options: compile the entire runtime, including the garbage collector, into the Wasm binary; or compile to JavaScript as an intermediate step. The first approach produced multi-megabyte binaries even for trivial programs. JetBrains measured Kotlin/Wasm with an embedded GC at roughly 2MB for a simple Hello World; the Dart team saw similar numbers. The second approach surrendered the performance advantages Wasm was supposed to provide.
WasmGC Puts the GC in the VM
The WebAssembly Garbage Collection proposal shipped in Chrome 119 and Firefox 120 in November 2023, and it changes the equation completely. Instead of encoding your objects as raw bytes in linear memory and managing them yourself, you declare structured heap types directly in the Wasm module:
(type $Point (struct
  (field $x (mut f64))
  (field $y (mut f64))
))

(func $make_point (param $x f64) (param $y f64) (result (ref $Point))
  (struct.new $Point (local.get $x) (local.get $y))
)
The VM manages lifetimes. The garbage collector is the browser’s GC, not something you bundled. The ref.cast and ref.test instructions give you checked downcasting for polymorphism. Array types handle homogeneous sequences. The type system is expressive enough to represent the object models of Java, Kotlin, Dart, OCaml, and Haskell without encoding them as untyped bytes.
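A minimal sketch of what that downcasting looks like in the text format (the `$Shape`/`$Circle` hierarchy is invented for illustration): `ref.test` checks whether a reference has a given subtype, and `ref.cast` performs the checked cast before a field access.

```wat
;; Sketch: checked downcast over a small GC type hierarchy.
(type $Shape (sub (struct)))                          ;; open base type
(type $Circle (sub $Shape (struct (field $r f64))))   ;; subtype adds a field
(type $Floats (array (mut f64)))                      ;; GC-managed array type

(func $radius_or_zero (param $s (ref $Shape)) (result f64)
  (if (result f64) (ref.test (ref $Circle) (local.get $s))
    (then
      ;; cast is checked by the VM; traps only if the test above was skipped
      (struct.get $Circle $r (ref.cast (ref $Circle) (local.get $s))))
    (else (f64.const 0))))
```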
The size impact is immediate. JetBrains reported that Kotlin/Wasm output dropped from around 2MB to approximately 30KB for comparable programs once WasmGC removed the need to ship a custom allocator and collector. Dart/Flutter Web saw similar reductions. The download cost that made managed languages impractical on Wasm essentially disappears.
Exception Handling and Tail Calls Fill in the Rest
Two other proposals that reached Phase 4 and shipped across browsers matter here. The exception handling proposal introduces tag-typed exceptions with try, catch, throw, and rethrow instructions. Tags are typed signatures that distinguish exception kinds, so a Java IOException and a Kotlin IllegalStateException can coexist and be caught selectively without encoding exception identity in some ad-hoc scheme. C++ try/catch and Java-style exception propagation now compile down to instructions the Wasm VM understands directly.
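In the text format this looks roughly as follows (names invented; this uses the widely shipped try/catch form, and the final standardized encoding also offers `try_table` and `throw_ref`):

```wat
;; Sketch: a tag carrying an i32 error code, thrown and caught selectively.
(tag $io_error (param i32))

(func $read (param $fd i32)
  ;; ... on failure:
  (throw $io_error (i32.const 5)))

(func $caller
  (try
    (do (call $read (i32.const 0)))
    (catch $io_error
      (drop))))   ;; the i32 payload is on the stack inside the handler
```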
The tail call proposal, which shipped in Chrome 112 and Firefox 121, adds return_call and return_call_indirect. This is not a minor optimization for functional programmers. OCaml, Scheme, and Haskell rely on guaranteed tail call elimination as a correctness property, not just a performance property. Without it, deeply recursive functional code overflows the call stack. With return_call, the Wasm VM handles tail recursion the way those languages expect, which means a compiler no longer has to implement a trampoline or CPS transform just to handle standard recursion patterns.
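The instruction replaces the caller's frame instead of pushing a new one, so an accumulator-style loop runs in constant stack space. A minimal sketch:

```wat
;; Tail-recursive factorial: with return_call this never grows the stack,
;; no matter how large $n is.
(func $fact_acc (param $n i64) (param $acc i64) (result i64)
  (if (result i64) (i64.eqz (local.get $n))
    (then (local.get $acc))
    (else
      (return_call $fact_acc
        (i64.sub (local.get $n) (i64.const 1))
        (i64.mul (local.get $acc) (local.get $n))))))
```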
The Async Boundary: JS-PI
Even with WasmGC and exception handling, a class of languages hits a different wall: the JavaScript async boundary. The browser's APIs are almost entirely async. fetch, IndexedDB, AudioContext, and file system access through the Origin Private File System all return Promises. A language like Python assumes synchronous I/O. If you compile Python to Wasm and it calls into JS to do a network request, it cannot simply block and wait; blocking the main thread locks up the browser.
The traditional workaround was Asyncify, a post-processing pass developed for Emscripten that rewrites Wasm bytecode to make synchronous-looking code suspendable. It works but has a compile-time cost, a binary size cost, and produces code that is difficult to reason about.
The JS Promise Integration proposal (JS-PI) solves this properly using stack switching. The API surface is small:
// Wrap a Wasm export so it returns a Promise
const asyncFn = WebAssembly.promising(instance.exports.myFunction);

// Wrap a JS async import so Wasm can suspend on it
const suspendingFetch = new WebAssembly.Suspending(async (url) => {
  const resp = await fetch(url);
  return await resp.text();
});
When Wasm calls through a Suspending-wrapped import, the VM saves the current Wasm call stack and hands control back to the JS event loop. When the Promise resolves, the stack is restored and execution continues from where it left off. Pyodide, the Python distribution for Wasm, has been using this to enable synchronous-looking Python code to call fetch and other async APIs without a full CPS transform of the Python interpreter.
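Putting the two halves together, the wiring looks roughly like this sketch (the import name `get_text`, the export name `run`, and the module shape are hypothetical; `WebAssembly.Suspending` and `WebAssembly.promising` exist only where JS-PI is available):

```javascript
// Hedged sketch: wire a compiled module so its synchronous-looking import
// can suspend on a Promise, and its export is Promise-returning on the JS side.
function instantiateWithSuspension(module) {
  const imports = {
    env: {
      // The Wasm side sees an ordinary synchronous import; the VM saves the
      // Wasm stack while the Promise inside is pending.
      get_text: new WebAssembly.Suspending(async (url) => {
        const resp = await fetch(url); // url would arrive as an externref
        return resp.text();
      }),
    },
  };
  const instance = new WebAssembly.Instance(module, imports);
  // The export now returns a Promise when called from JS.
  return WebAssembly.promising(instance.exports.run);
}
```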
JS-PI shipped as an Origin Trial in Chrome 123 and has been progressing through the W3C WebAssembly Working Group standardization process.
The Component Model and Cross-Language Interop
All of the above proposals address how individual languages compile to Wasm. The WebAssembly Component Model addresses something different: how Wasm modules written in different languages talk to each other.
Core Wasm modules exchange integers, floats, and memory references. There is no standard representation for strings, records, or variant types at the module boundary. Two Wasm modules compiled from different languages cannot call each other without agreeing on an encoding convention, which in practice means writing manual glue code.
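What "manual glue" means in practice: since core Wasm can only hand back numbers, returning a string means returning a (pointer, length) pair into linear memory and decoding it on the other side by prior agreement. A sketch, with a plain byte array standing in for a module's linear memory (the offset and convention are made up for illustration):

```javascript
// Stand-in for instance.exports.memory.buffer.
const memory = new Uint8Array(1024);

// Pretend the module wrote a string at offset 64.
memory.set(new TextEncoder().encode("hello"), 64);

// The convention both sides must agree on: strings travel as two i32s.
function readString(mem, ptr, len) {
  return new TextDecoder().decode(mem.subarray(ptr, ptr + len));
}

console.log(readString(memory, 64, 5)); // → "hello"
```

Every pair of languages that wants to interoperate has to reinvent and agree on exactly this kind of convention, for every compound type; that is the problem the Component Model standardizes away.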
The Component Model introduces a higher-level module format and the WebAssembly Interface Types (WIT) IDL for describing interfaces:
interface image-processor {
  record dimensions {
    width: u32,
    height: u32,
  }

  resize: func(data: list<u8>, target: dimensions) -> result<list<u8>, string>;
}
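Components then declare which interfaces they export or import through a WIT world; a minimal sketch (the world name here is hypothetical):

```wit
world processor {
  export image-processor;
}
```

A component targeting this world provides the `resize` implementation; a consumer's world would instead list the interface under `import`.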
A Rust component and a Python component implementing or consuming the same WIT interface can interoperate without sharing a runtime, a memory model, or any language-specific convention. The WASI 0.2 release in January 2024 formalized this by rebuilding the WebAssembly System Interface on top of the Component Model, making WASI itself language-neutral in a way the earlier version was not.
Tooling has followed: wit-bindgen generates bindings for Rust, C, and Java; jco transpiles components to JavaScript for use in Node.js; cargo-component wraps the Rust toolchain so components are the default output format.
What Changes in Practice
The cumulative effect is that the set of languages that can target Wasm without embarrassing compromises has expanded substantially. You can compile Kotlin to Wasm and ship it without a bundled GC. You can compile Python and have it call fetch without blocking. You can write a Rust library and expose it to a Kotlin consumer through a WIT interface without writing C-style glue. You can use OCaml’s tail-recursive standard library without engineering around stack overflow.
None of this erases the cases where JavaScript remains the better choice. JS engines are mature, the JIT compilers are excellent, and JS-native code still outperforms Wasm on workloads that benefit from JS-specific optimizations. Wasm is not a replacement for JavaScript; it is an expansion of what the web platform can host.
What it does replace is the older story where Wasm was essentially a compilation format for systems languages only. The proposals Mozilla is pushing toward reflect a platform that takes seriously the full range of languages developers actually use. The machinery is mostly there. The remaining work is standardization, tooling maturity, and browser adoption of proposals still moving through the pipeline.