The Moment.dev team published a post-mortem on why they stopped using Yjs for collaborative editing in their developer tool platform. The arguments they make are technically solid, and the problems they describe are well-documented in the Yjs issue tracker. But the post itself is mostly a list of things that went wrong. What it doesn’t spend much time on is the mechanism behind those failures, and that mechanism matters if you’re trying to decide whether Yjs is the right tool for your own project.
The short version: Yjs is genuinely excellent for short-lived collaborative sessions. It becomes operationally expensive for documents that live months or years and see continuous editing by multiple authors. Understanding why requires knowing a little about how the data structure works.
How Yjs Actually Stores History
Yjs implements YATA (Yet Another Transformation Approach), a CRDT algorithm optimized for sequences. Every character insertion creates a struct called an Item, identified by a {clientID, clock} tuple. Items record their left and right neighbors at the time of insertion. That neighbor information is what makes conflict resolution possible without a central server: two clients who concurrently insert at the same position can each independently determine the correct final ordering by comparing their recorded neighbors.
When you delete a character, the item isn’t removed. It’s marked deleted = true and stays in the document forever. These dead items are called tombstones, and they’re not a bug or an oversight. They’re load-bearing. A concurrent insert from another client might reference a now-deleted item as its left or right neighbor. If that tombstone disappeared, the insert would lose its position context and either crash or place content incorrectly.
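The mechanics can be seen in a toy model. This is an illustration only, not Yjs's real internals or API: each item records its neighbors at insertion time, and deletion merely flips a flag so that later concurrent inserts can still resolve their position against the tombstone.

```javascript
// Toy YATA-style item list (illustration only, not Yjs's actual structs).
// Each insert records its left/right neighbors; delete only flips a flag.
function makeItem(clientID, clock, content, left, right) {
  return { id: { clientID, clock }, content, left, right, deleted: false };
}

// A two-character document: "H" then "i".
const h = makeItem(1, 0, "H", null, null);
const i = makeItem(1, 1, "i", h, null);
h.right = i;

// "Deleting" H does not unlink it. The tombstone stays so that any
// concurrent insert referencing H as a neighbor still resolves correctly.
h.deleted = true;

// Rendering skips tombstones, but both items remain in memory.
function visible(head) {
  let out = "";
  for (let it = head; it; it = it.right) if (!it.deleted) out += it.content;
  return out;
}
console.log(visible(h)); // "i"
```

The key point: `visible()` shows one character, but the document still holds two items, and always will.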
In JavaScript, each Item object carries significant overhead: V8’s object model, two neighbor pointers, a clock value, a client ID, the deleted flag, and the content itself. The per-item memory overhead runs around 200 to 400 bytes. A document with 100,000 historical character operations holds roughly 100,000 live Item objects in memory, regardless of how many characters are currently visible.
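The arithmetic behind that claim is straightforward. Taking the 200-to-400-byte per-item figure above as given, 100,000 retained operations translate into tens of megabytes of Item structs regardless of visible content:

```javascript
// Back-of-envelope Item overhead, using the 200-400 bytes/item figure above.
function itemOverheadMB(retainedOps, bytesPerItem) {
  return (retainedOps * bytesPerItem) / (1024 * 1024);
}

console.log(itemOverheadMB(100_000, 200).toFixed(1)); // ~19.1 MB (low end)
console.log(itemOverheadMB(100_000, 400).toFixed(1)); // ~38.1 MB (high end)
```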
The GC Lie in Server Contexts
Yjs includes a garbage collection option (gc: true is the default). The documentation suggests this reclaims tombstone memory. In client-side-only applications where all peers eventually go offline and the document reloads from a clean state, it sometimes does. In persistent server deployments, it almost never fires.
The reason is architectural. Safe GC requires knowing the minimum state vector across every peer who has ever seen the document: the server can only discard a tombstone once it knows no peer could still send an operation that references it. In an open system where clients connect and disconnect continuously, and where documents are persisted indefinitely in y-websocket, y-redis, or y-mongodb-provider backends, that condition is never met. The server is always “connected,” so its state vector never allows GC to proceed.
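The safety condition can be stated concretely. In this simplified model (an assumption for illustration, not Yjs's actual GC code), a tombstone is collectible only once every peer's state vector has advanced past it, and a persistent server with an open client population can never establish that:

```javascript
// Simplified GC safety check (illustration, not Yjs's real implementation).
// A tombstone {clientID, clock} can be dropped only if every known peer's
// state vector has advanced past it; otherwise a late-arriving operation
// might still reference it as a neighbor.
function canCollect(tombstoneId, peerStateVectors) {
  return peerStateVectors.every(
    (sv) => (sv.get(tombstoneId.clientID) ?? 0) > tombstoneId.clock
  );
}

const tombstone = { clientID: 1, clock: 41 };
const peers = [
  new Map([[1, 100]]), // this peer has seen past the tombstone
  new Map([[1, 30]]),  // this peer has not: GC must wait
];
console.log(canCollect(tombstone, peers)); // false
```

In an open system the peer list is never complete, so the `every` condition never holds and the tombstone stays forever.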
The practical consequence is that your persisted update log, and the in-memory state when that document is loaded, grows with every deletion over the lifetime of the document. Moment.dev reported reaching documents where the encoded state was 10 to 50 times larger than the visible content. This matches a consistent pattern in the Yjs GitHub issue tracker: users reporting multi-megabyte encoded documents whose actual text content is a few kilobytes.
For a document that accumulates 1 million historical operations over six months of team editing, the server-side RAM footprint runs in the range of 400 to 800 MB. That’s for a single document. A runbook platform or a project management tool with hundreds of active documents starts to look like a memory leak from the outside, because it is effectively one.
Benchmarks Across the CRDT Ecosystem
The crdt-benchmarks repository, maintained by Yjs author Kevin Jahns, provides a standardized trace benchmark (B4) built from a real-world editing session of roughly 260,000 operations producing about 100 KB of visible text. The results put Yjs in a favorable position relative to older alternatives:
| Library | B4 encode time | Encoded size |
|---|---|---|
| Diamond Types | ~3ms | ~100 KB |
| Yjs | ~25ms | ~130 KB |
| Automerge 2.x | ~30ms | ~150 KB |
| Automerge 1.x | ~1,500ms | ~900 KB |
Yjs is fast and compact relative to Automerge 1.x, which was the main alternative for years. Automerge 2.x, released across 2022-2023, rewrote the core in Rust compiled to WASM and closed most of that gap. Diamond Types, written in Rust by ShareDB's original author Joseph Gentle, is currently the fastest implementation in the benchmark suite by a significant margin.
None of these numbers capture the tombstone growth problem, because the B4 benchmark represents a single editing session, not six months of continuous team editing. The performance characteristics that matter for production are different from those that show up in a synthetic trace benchmark.
Loro’s Approach: Compact History
Loro is a newer CRDT library (2023-2024) implemented in Rust with WASM bindings that directly addresses the GC problem. It implements a “shallow snapshot” mechanism: you can checkpoint a document at a specific point in time, and future merges only need history from that checkpoint onward. Operations from before the snapshot can be discarded without sacrificing merge correctness for any operations that arrive after it.
This doesn’t eliminate tombstones entirely, but it bounds their accumulation. Instead of growing forever, the document’s history can be periodically compacted to a checkpoint, with the old pre-checkpoint data safely dropped. Loro also adds first-class support for movable tree structures, which Yjs doesn’t handle natively and which matters for outliners, file trees, and hierarchical runbooks.
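The compaction idea, stripped of CRDT details, is a checkpoint over an operation log. The sketch below is a conceptual model only, not Loro's actual API: pre-checkpoint operations are folded into a materialized state and dropped, while the result they produced is preserved.

```javascript
// Conceptual checkpoint-based history compaction (not Loro's real API).
// Ops accumulate in a log; compacting folds old ops into a checkpoint
// state and discards them, bounding history growth.
class CompactingLog {
  constructor() {
    this.ops = [];
    this.checkpointState = "";
    this.checkpointVersion = 0;
  }
  append(op) {
    this.ops.push(op);
  }
  // Materialize by replaying remaining ops on top of the checkpoint.
  materialize() {
    return this.ops.reduce((s, op) => s + op.text, this.checkpointState);
  }
  // Fold everything up to `version` into the checkpoint, then drop those ops.
  compact(version) {
    const folded = this.ops.filter((op) => op.version <= version);
    this.checkpointState = folded.reduce((s, op) => s + op.text, this.checkpointState);
    this.ops = this.ops.filter((op) => op.version > version);
    this.checkpointVersion = version;
  }
}

const log = new CompactingLog();
log.append({ version: 1, text: "hello " });
log.append({ version: 2, text: "world" });
log.compact(1);                 // pre-checkpoint history is discarded...
console.log(log.ops.length);    // 1
console.log(log.materialize()); // "hello world" ...but the state survives
```

A real CRDT checkpoint must also guarantee that no future operation references the discarded history, which is exactly what Loro's shallow-snapshot boundary enforces.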
Loro’s benchmarks are competitive with Diamond Types. The library is less battle-tested than Yjs in production, but for new projects where document longevity is a concern, it’s worth serious consideration over Yjs.
When OT Is Just Simpler
ShareDB, the canonical open-source OT framework for Node.js, trades the offline-first property for a simpler operational model. Deletions are real deletions. Document size tracks visible content. Memory usage is O(visible content), not O(all-time edit history). The server holds no tombstones because there are none to hold.
The cost is a required central server to serialize concurrent operations. Every client operation is acknowledged by the server before being applied to other clients. True offline-first, where two clients sync directly and independently diverge for hours before reconnecting, is not possible with OT. The server is a coordination point and a single point of failure.
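The serialization step is what eliminates tombstones: the server picks an order and transforms later operations against earlier ones. A minimal insert-only transform sketch (real ShareDB OT types also handle deletes, retains, and tie-breaking) shows the idea:

```javascript
// Minimal OT sketch: the server serializes concurrent ops and transforms
// one against the other so replicas converge. Insert-only and position-
// based; real OT types also handle deletes and tie-breaking.
function transformInsert(op, against) {
  // If `against` inserted at or before op's position, shift op right.
  return against.pos <= op.pos
    ? { pos: op.pos + against.text.length, text: op.text }
    : op;
}

function applyInsert(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Two clients edit "ac" concurrently.
const base = "ac";
const opA = { pos: 1, text: "b" }; // client A wants "abc"
const opB = { pos: 2, text: "d" }; // client B wants "acd"

// Server applies A first, then B transformed against A.
// No tombstones needed: the delete-free state is the whole state.
const afterA = applyInsert(base, opA);                         // "abc"
const afterB = applyInsert(afterA, transformInsert(opB, opA)); // "abcd"
console.log(afterB); // "abcd"
```

Applying the operations in the opposite order, each transformed against the other, converges to the same result, which is the transform property OT systems must satisfy.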
For most web applications, this trade-off is entirely acceptable. Google Docs is built on OT. Most real-time document editors you’ve used in production are built on OT. The offline-first capability that CRDTs enable is genuinely valuable in specific contexts, particularly mobile applications and peer-to-peer tools, but most enterprise collaborative tools don’t need it.
If your documents are short-lived (a single editing session, a one-off document) or if you need genuine offline-first P2P capability, Yjs is a strong choice. If your documents accumulate months of edit history from multiple contributors and you’re running them on a server you pay for, OT or Loro’s compacting CRDT will cost you less to operate.
What the Moment.dev Post Gets Right
Moment.dev’s use case is a runbook platform: structured documents that engineering teams keep indefinitely, edit during incidents, and never delete. This is about as hostile to Yjs’s memory model as an application can be. Long-lived documents, many contributors, rich structured content (embedded code, queries, live data), and a persistent server backend where GC never triggers.
Their diagnosis is accurate. The claims they debunk, things like "CRDTs are production-ready out of the box" and "GC handles memory automatically," really are how Yjs is often pitched, and the post is right that they don't hold up in this specific architecture.
The broader lesson isn’t that Yjs is bad. It’s that CRDT library documentation tends to present the algorithm’s theoretical properties (offline-first, no conflicts, P2P capable) without quantifying the operational costs those properties impose in server-side persistence contexts. Those costs are real, they scale with document age and edit volume, and they’re worth putting into your architecture decision before you’ve written six months of production data into a format you can’t cheaply compact.
The Peritext paper from Ink & Switch (2021) notes that intention-preservation in rich-text CRDTs is an unsolved problem for complex formatting interactions, which Moment.dev also hit. Automerge is implementing Peritext’s approach. Yjs’s rich-text support uses a simpler model that works for most cases but produces semantically wrong results for some concurrent formatting interactions involving overlapping ranges.
For plain text in short-to-medium-lived documents with a small set of collaborators, Yjs works well and its ecosystem (providers for WebSocket, WebRTC, IndexedDB, Redis, and many editor integrations) is the most mature in the CRDT space. For anything resembling Moment.dev’s requirements, the honest recommendation is either OT via ShareDB, or Loro once it accumulates more production validation. The tombstone math doesn’t lie.