
Why JSPI Is the Right Fix for WebAssembly's Async Problem

Source: v8

WebAssembly’s execution model is synchronous by design. C and C++ programs, which make up the majority of code ported to Wasm, assume that when you call read() or recv(), execution halts until the data arrives. The operating system kernel handles the blocking. This is a reasonable assumption when you control the runtime, but the browser is not a kernel. The browser’s environment is built entirely around asynchronous callbacks and Promises. Blocking the main thread is not just frowned upon; it freezes the UI and, in some contexts, terminates the tab.

This mismatch is structural, and it has forced developers to choose between two bad options for years: either restructure C or C++ code to avoid any synchronous I/O (invasive and often impractical for large existing codebases), or use Asyncify, Emscripten’s compiler-level workaround that transforms the entire call graph at build time. JSPI, the JavaScript Promise Integration API, is a third option that fixes the problem where it actually lives: inside the JavaScript engine.

The Asyncify Approach and Its Costs

Asyncify has been part of Emscripten since 2019 and is the tool most Wasm developers have reached for when they need to call async browser APIs from synchronous C code. The mechanism is a compiler transformation: Emscripten analyzes which functions on the call stack might need to suspend, then rewrites each of them to save and restore their local state explicitly. The result is that any function on a potential async call path becomes a state machine that can be interrupted and continued.
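The effect of the transformation is easiest to see in a sketch. The names below are illustrative, not Emscripten's actual output (which rewrites Wasm locals, not JS objects): a function that "blocks" on a read becomes a resumable state machine whose locals and program counter live in an explicit saved record.

```javascript
// Conceptual before/after of an Asyncify-style rewrite (illustrative only).
//
// Original synchronous shape:
//   function getByte() { const b = read(); return b + 1; }
//
// Rewritten shape: locals move into a saved record, and the function can
// unwind at the read() call and be re-entered later with the result.
function makeGetByte(startRead) {
  const state = { pc: 0, b: 0 }; // explicit "saved stack frame"
  return function resume(input) {
    switch (state.pc) {
      case 0:
        state.pc = 1;    // save where to continue
        startRead();     // kick off the async work, then unwind
        return { suspended: true };
      case 1:
        state.b = input; // restore: the value read() produced
        return { suspended: false, value: state.b + 1 };
    }
  };
}
```

Every function on a potential async path carries this machinery on every call, suspended or not, which is where the size and speed overhead comes from.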

This works, but the costs are significant. Binary size increases by 20 to 100 percent in typical cases because every affected function needs additional code for save/restore logic. Every call on a transformed call path carries overhead even when no suspension actually happens, because the transformed functions always carry the machinery for state management. Deeply nested call stacks can overflow during transformation. Stack traces in the debugger become difficult to read because the transformation restructures what V8 sees as the call graph.

Compile times also increase meaningfully for large projects since Asyncify requires a whole-program analysis pass. For a project like SQLite compiled to Wasm, which has hundreds of functions that might touch I/O, this adds up.

Asyncify was the right engineering decision given the constraints of 2019. It solves a real problem without requiring any changes to the JavaScript engine or the WebAssembly specification. But it is a workaround built on top of existing primitives rather than a solution to the underlying mismatch.

What JSPI Actually Does

The JSPI proposal moves the suspension mechanism from the compiler into the engine. When a WebAssembly module calls a JavaScript import function that returns a Promise, V8 can now suspend the entire Wasm execution stack at that boundary, return control to the event loop, and resume the Wasm stack when the Promise resolves. From the Wasm module’s perspective, the call returned synchronously with the resolved value. From the browser’s perspective, the main thread was never blocked.

The API is minimal. Two sides need wrapping:

// Wrap an async JS function that a Wasm import will call
const importObject = {
  env: {
    fetch_data: new WebAssembly.Suspending(async (ptr, len) => {
      const url = readStringFromMemory(ptr, len);
      const resp = await fetch(url);
      const buf  = await resp.arrayBuffer();
      writeBufferToMemory(buf);
      return buf.byteLength;
    })
  }
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch('app.wasm'),
  importObject
);

// Wrap the Wasm export so it returns a Promise
const asyncMain = WebAssembly.promising(instance.exports.main);
const result    = await asyncMain();

The C code that gets compiled into app.wasm contains no async machinery at all:

extern int fetch_data(const char* url, int len);

int main() {
    int bytes = fetch_data("https://api.example.com/data", 28);
    process(bytes);
    return 0;
}

This is ordinary synchronous C. No callbacks, no continuation-passing, no restructuring. The JSPI wrapping happens entirely on the JavaScript side at instantiation time, and the engine handles the stack management at runtime.

Note that the API went through a revision during standardization. The original prototype was built around WebAssembly.Function and explicit suspender objects that had to be threaded through every call. The revised API described in a subsequent V8 post replaced that with two small wrappers: the WebAssembly.Suspending constructor on the import side and WebAssembly.promising() on the export side. If you are reading older tutorials, check which API they use.

Stack Switching Under the Hood

JSPI is built on top of a lower-level engine capability that V8 calls stack switching, also described in the broader WebAssembly stack switching proposal. When a JSPI-wrapped import is called and returns a Promise, V8 creates a continuation that captures the current Wasm stack frame, chains a resumption callback onto the Promise with .then(), and unwinds back to the event loop. When the Promise resolves, the resumption callback restores the captured stack and provides the resolved value as the synchronous return value of the original import call.

This is essentially the same mechanism that underlies coroutines and green threads in language runtimes like Go’s goroutines or Python’s asyncio event loop internals. The difference is that JSPI exposes this at the Wasm-to-JS boundary specifically, rather than as a general-purpose coroutine system. The general capability is being standardized separately in the stack switching proposal, which would expose first-class stack switching to Wasm modules themselves.
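The pattern is easy to demonstrate in user-space JavaScript. The sketch below drives a generator with Promises, which is a rough analogy for the suspend/resume the engine performs natively on a Wasm stack (and roughly what async/await desugars to); the runner and the fetchLen example are illustrative, not part of any API.

```javascript
// Generator-driven runner: `yield somePromise` suspends the generator,
// and .then() resumes it with the resolved value -- a user-space analogy
// for the engine-level stack switching that JSPI performs.
function run(genFn, ...args) {
  return new Promise((resolve, reject) => {
    const gen = genFn(...args);
    function step(value) {
      const { done, value: out } = gen.next(value);
      if (done) return resolve(out);
      Promise.resolve(out).then(step, reject);
    }
    step(undefined);
  });
}

// Code that reads as synchronous but suspends at each yield:
function* fetchLen(fakeFetch) {
  const body = yield fakeFetch("/data"); // looks blocking; isn't
  return body.length;
}
```

run(fetchLen, url => Promise.resolve("hello")) resolves to 5 without ever blocking the event loop.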

The cost of a suspension and resumption is measured in microseconds, comparable to a coroutine context switch. For applications where async calls are infrequent relative to compute work, the overhead is negligible. More importantly, non-suspending code paths carry zero overhead because there is no code transformation involved.

The SQLite + OPFS Case

The use case that drove the most interest in JSPI is probably SQLite compiled to WebAssembly, combined with the browser’s Origin Private File System (OPFS) for persistent storage. OPFS provides a private, sandboxed file system that browsers expose through an async API. SQLite expects synchronous file I/O.

With Asyncify, the entire SQLite call graph that touches file operations has to be transformed. SQLite has a well-tested, stable C codebase that has accumulated decades of careful engineering. Running it through Asyncify produces a binary that is substantially larger and that carries runtime overhead on essentially every code path, since file I/O is not isolated to a few leaf functions.

With JSPI, SQLite’s read() and write() imports can be wrapped with WebAssembly.promising() on the JavaScript side, backed by await opfsFileHandle.read(...) under the hood. The SQLite Wasm binary itself is unchanged. The file I/O calls look synchronous to SQLite and behave asynchronously from the browser’s perspective.
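A minimal sketch of that wiring, assuming a small async adapter (readAt) over an OPFS file handle and a hypothetical read import named vfs_read; real SQLite/OPFS plumbing has considerably more moving parts.

```javascript
// Sketch of a JSPI-backed file read for a Wasm VFS. The import name
// (vfs_read) and the async adapter (readAt) are assumptions for
// illustration, not SQLite's actual interface.

// Raw async implementation: copy `len` bytes at `offset` into Wasm memory.
async function readIntoMemory(readAt, memory, ptr, len, offset) {
  const bytes = await readAt(offset, len); // e.g. backed by OPFS underneath
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);
  return bytes.length;
}

// Build the import object, wrapping with Suspending when JSPI is available.
function makeVfsImports(readAt, memory) {
  const handler = (ptr, len, offset) =>
    readIntoMemory(readAt, memory, ptr, len, offset);
  const wrap =
    typeof WebAssembly.Suspending === "function"
      ? (f) => new WebAssembly.Suspending(f)
      : (f) => f; // no JSPI: an Asyncify build would be needed instead
  return { env: { vfs_read: wrap(handler) } };
}
```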

This is a meaningful practical difference. The official SQLite Wasm distribution has been one of the primary motivators for advancing JSPI through the standardization process.

Emscripten Integration

Emscripten has supported JSPI as an alternative to Asyncify since version 3.1.47. The compiler flag is -sJSPI:

emcc app.c -o app.js \
  -sJSPI \
  -sEXPORTED_FUNCTIONS=_main

Emscripten generates the WebAssembly.promising() wrapping in its output JavaScript automatically. The POSIX filesystem and network emulation layers in Emscripten can route through JSPI when it is enabled, which means fopen, fread, fwrite, recv, and send work without Asyncify for applications targeting browsers that support JSPI.

Deployment Reality

JSPI entered Chrome’s Origin Trial starting with Chrome 117, which means you can deploy it to production in Chrome today if you register for an Origin Trial token. For local development and testing, the flag --enable-features=WebAssemblyJSPI enables it without a token. Node.js v22 and later support JSPI with the --experimental-wasm-jspi flag.

Firefox and Safari have not shipped JSPI. The proposal is at Phase 3 in the WebAssembly CG process, which means the design is stable but full cross-browser standardization is still in progress. This limits where you can rely on JSPI without a fallback.

The practical deployment strategy today is to use JSPI when available and fall back to Asyncify otherwise. Emscripten does not yet do this automatically, but it is a reasonable pattern to build into a production application’s loading logic given that JSPI support is detectable at runtime.
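The runtime detection itself is a one-liner, and the build selection can hang off it; the .wasm file names below are placeholders, not a real convention.

```javascript
// Detect JSPI support at runtime and pick the matching build.
function jspiSupported() {
  return (
    typeof WebAssembly.Suspending === "function" &&
    typeof WebAssembly.promising === "function"
  );
}

function chooseBuild() {
  // Placeholder file names: one binary built with -sJSPI, one with -sASYNCIFY.
  return jspiSupported() ? "app.jspi.wasm" : "app.asyncify.wasm";
}
```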

What This Changes

Asyncify solved a real problem and deserves credit for enabling a class of applications that would otherwise be impossible in the browser. But it was always a compiler-level approximation of a runtime capability. Every binary it produced carried the shape of the problem it was working around.

JSPI is the right fix because it addresses the mismatch where it exists: at the boundary between Wasm’s synchronous execution model and JavaScript’s async environment. The engine handles stack suspension; the Wasm binary remains unchanged; the JavaScript wrapper is explicit and auditable. Code size stays constant, non-suspending paths carry no overhead, and the debugging experience reflects the actual code structure.

The standardization process is slower than developers would like, and the lack of Firefox and Safari support means JSPI cannot yet be treated as a baseline. But the direction is clear. Applications that need to port synchronous C or C++ code to the browser without restructuring it are going to benefit from JSPI substantially as it moves from origin trial to stable availability.
