The s@ protocol landed on Hacker News this week with 386 points and nearly 200 comments. The premise: decentralized social networking where your profile, posts, and social graph are static files that any CDN can serve. No server-side logic, no database, no process to keep running. Files at predictable URLs, signed with keys you control.
This is pull-based social networking. Clients fetch your posts on their own schedule. No server announces your activity to anyone. That design decision is usually framed as a limitation, the reason static-site social can’t do real-time notifications. But the privacy implications of pull-based distribution run in the other direction, and they deserve attention before dismissing the architecture as impractical.
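To make "pull-based" concrete, here is a minimal client sketch. The URL layout and JSON field names are hypothetical illustrations, not taken from the s@ spec; the point is that the client's entire job is fetching files on its own schedule and diffing against what it has already seen.

```python
# Minimal pull-based client sketch. The feed location and field names
# ("id", "published") are assumptions for illustration, not the s@ spec.

def feed_url(domain: str) -> str:
    """Predictable location for a profile's feed (hypothetical layout)."""
    return f"https://{domain}/social/feed.json"

def new_posts(feed: list[dict], seen_ids: set[str]) -> list[dict]:
    """Return posts not yet processed, oldest first."""
    fresh = [p for p in feed if p["id"] not in seen_ids]
    return sorted(fresh, key=lambda p: p["published"])

# A client fetches feed_url(domain) with any HTTP library on whatever
# schedule it likes, then runs the result through new_posts(). The server
# never learns anything beyond "this IP requested this file."
```

Nothing here requires the publisher's server to know the client exists, which is exactly the privacy property the rest of this piece leans on.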
What a Push-Based Federation Actually Leaks
ActivityPub, the W3C standard underlying Mastodon, Pixelfed, and the fediverse, is push-based. When you post something, your server delivers it directly to your followers’ inboxes. When you follow someone, your server announces that follow to their server. When you like a post, that activity propagates across the network.
This model is efficient for delivery and enables real-time feeds. It also means your server is continuously broadcasting your behavior to other servers. The server receiving your follow request records the timestamp. The server that accepted your like stores it. Every server in the fediverse that has users you’ve interacted with holds a record of that interaction, attributed to your account and timestamped.
The ActivityPub specification requires that Follow and Like activities be delivered as JSON-LD objects to recipients’ inbox endpoints, and in practice servers sign those deliveries with HTTP signatures. A conforming implementation receives and stores these activities. There is no option to interact quietly. Your activity log exists on servers you don’t control.
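Concretely, here is roughly what a remote server receives and retains when you follow one of its users: a representative Follow activity using the Activity Streams vocabulary, with placeholder domains.

```python
import json

# A representative ActivityPub Follow delivery (placeholder domains and
# IDs). Real deliveries also carry HTTP signature headers.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "id": "https://your.server/activities/123",
    "actor": "https://your.server/users/you",     # who acted
    "object": "https://their.server/users/them",  # toward whom
    "published": "2024-05-01T12:00:00Z",          # and when
}

# Identity, target, and timestamp all land in a database you do not
# control, attributed to your account.
print(json.dumps(follow_activity, indent=2))
```

The leak is not a bug in any one implementation; it is the delivery model working as designed.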
Nostr replaces servers with relays but the model is still push-based: you broadcast signed events to multiple relays, which store and re-serve them. Relays cannot impersonate you, but they can observe you. They see the timing of your posts, the content you broadcast, the patterns of your activity.
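The observability is structural: a Nostr event ID is, per NIP-01, the SHA-256 of a canonical serialization of the event, so every field that goes into the hash (author key, timestamp, kind, tags, plaintext content) is necessarily visible to any relay that stores the event. A sketch of that computation, with a dummy public key:

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Event id per NIP-01: sha256 of the canonical serialization.

    Every field hashed here — author key, timestamp, kind, tags, and
    plaintext content — is exactly what a relay sees and can log.
    """
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Dummy 32-byte hex pubkey; a real one comes from a secp256k1 keypair.
event_id = nostr_event_id("ab" * 32, 1700000000, 1, [], "hello relays")
```

The signature (computed over this same ID) prevents impersonation, but nothing in the model hides the metadata from the relays themselves.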
In a pull model, none of this happens. Your server receives requests for files and serves them. It does not announce when you started following someone. It does not notify anyone that you visited their profile. The only metadata you generate is standard HTTP server logs: which files were requested and from which IP. That is a substantive privacy distinction, not a theoretical one.
The Interaction Discovery Problem
The privacy advantage has a direct cost. In a push-based system, the author of an original post learns about replies because someone delivers them. In a pull-based system, learning about replies requires knowing where to look.
If you reply to a post on your own static site, your reply is a file at some URL on your domain. It contains a reference to the original post. The original author has no way to find it unless they already know about you and happen to be polling your feed.
This is the genuinely hard problem in static-site social. There are three approaches, none of which fully preserves the static constraint:
Crawlers and aggregators. An aggregator service follows social graph links, periodically fetches feeds, and builds an index of replies grouped by the post URLs they reference. Clients query the aggregator to load a thread. The trade-off is that aggregators become coordination points. You depend on at least one aggregator knowing about both parties. Coverage is bounded by how well-connected the social graph is, and latency depends on polling frequency.
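The aggregator's core indexing step is simple. A sketch, assuming replies carry an in_reply_to field pointing at the original post's URL (the field name is illustrative, not from any spec):

```python
from collections import defaultdict

def index_replies(feeds: dict[str, list[dict]]) -> dict[str, list[dict]]:
    """Group fetched posts by the post URL they reply to.

    `feeds` maps a domain to the posts fetched from it. Any post
    carrying an `in_reply_to` URL (illustrative field name) is indexed
    under that URL, so a client can assemble a thread with one query.
    """
    threads: dict[str, list[dict]] = defaultdict(list)
    for domain, posts in feeds.items():
        for post in posts:
            target = post.get("in_reply_to")
            if target:
                threads[target].append({"from": domain, **post})
    return dict(threads)
```

The hard parts are everything around this function: discovering feeds to crawl, polling them often enough, and getting both parties covered by the same aggregator.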
WebSub. The W3C’s WebSub specification is precisely a hub-based push notification layer for static feeds. A publisher notifies a hub when its feed updates; subscribers receive push notifications through that hub. Multiple hubs can coexist, publishers choose which hub to use, and the hub does not need to understand the feed’s content, only relay the update signal. This restores notification latency on the order of seconds, versus the minutes or hours of naive polling, while the canonical data stays authoritative in static files: WebSub carries only the “I updated” signal.
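Both sides of WebSub are single form-encoded HTTP POSTs. A sketch of building them (the subscription parameters are from the W3C spec; the hub.mode=publish ping is a common convention inherited from PubSubHubbub, as the spec leaves publisher-to-hub notification unspecified):

```python
from urllib.parse import urlencode

def subscribe_request(hub: str, topic: str, callback: str) -> tuple[str, str]:
    """Form body for a WebSub subscription POST (parameters per the spec)."""
    body = urlencode({
        "hub.mode": "subscribe",
        "hub.topic": topic,        # the feed URL to watch
        "hub.callback": callback,  # where the hub delivers updates
    })
    return hub, body  # POST this body to the hub URL

def publish_ping(hub: str, topic: str) -> tuple[str, str]:
    """Tell the hub a feed changed. The spec does not mandate this form;
    hub.mode=publish is the convention most hubs accept."""
    return hub, urlencode({"hub.mode": "publish", "hub.url": topic})
```

Omitted here for brevity: the hub's GET challenge to the callback that verifies subscription intent, and the actual HTTP POSTs themselves.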
Content-addressed storage. If posts are referenced by cryptographic hash, aggregators can watch for new content that references known content by CID. Coordination pushes into the content-addressing layer rather than a centralized server, which is the model IPFS enables. The trade-off is that IPFS pinning and availability introduce their own infrastructure requirements, and the tooling is less mature than WebSub.
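The property content addressing buys is easy to show with a deliberately simplified address: a bare SHA-256 digest. A real IPFS CID wraps the digest in multihash and multibase framing, but the mechanism that matters is the same either way.

```python
import hashlib

def content_address(post_bytes: bytes) -> str:
    """Simplified content address: a bare sha256 hex digest.

    Not a real IPFS CID (which adds multihash/multibase framing), but
    it has the property that matters: the address is derived from the
    bytes, so anyone can verify a fetched post, and aggregators can
    watch for new posts that reference a known address.
    """
    return hashlib.sha256(post_bytes).hexdigest()

# "<address of original>" is a placeholder for the original post's address.
reply = b'{"in_reply_to": "<address of original>", "text": "nice post"}'
addr = content_address(reply)
```

Because the address is deterministic, mirroring and verification need no trusted party; availability (someone has to keep the bytes reachable) is the part that still needs infrastructure.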
None of these eliminates shared infrastructure. The question is where the infrastructure lives and who controls it. A WebSub hub is a commodity service with a straightforward, well-specified protocol. Running one, or subscribing to an existing one, is considerably less operationally demanding than running a Mastodon instance. That difference matters even if neither option is zero-cost.
The Hybrid Model Is the Right Frame
Canonical record and real-time notification are separable concerns. Static files handle the canonical record. Your posts, signed with a key tied to your domain, are authoritative at their URLs. They are permanent, portable, version-controllable with git, readable with any text editor, indexed by search engines, and linkable with standard hyperlinks. An archival copy of a static-file social network is indistinguishable from an archival copy of a website, because it is one.
A thin, optional push layer handles notification. WebSub for feed updates, or a Nostr relay used only for notification events, or a simple webhook. This layer is not the source of truth; it is a delivery mechanism. If the hub goes down, your posts still exist at their URLs. If the relay drops a notification, the reply still exists on the replier’s domain. The push layer makes the experience faster, but the data does not depend on it.
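One way to make "the push layer is not the source of truth" precise: a client can treat hub notifications purely as hints that reset its polling timer. A dead hub then degrades latency but never loses data. A sketch, with all names hypothetical:

```python
class FeedWatcher:
    """Poll a feed on a fixed interval; treat push notifications as hints.

    With a healthy hub, fetches happen moments after a post. If the hub
    dies, the poll interval becomes the worst-case latency — nothing is
    lost, because the static files remain the source of truth.
    """

    def __init__(self, poll_interval: float = 900.0):
        self.poll_interval = poll_interval  # fallback cadence, seconds
        self.next_fetch = poll_interval     # time until next fetch

    def on_notify(self) -> None:
        """A hub said the feed changed: fetch as soon as possible."""
        self.next_fetch = 0.0

    def on_fetch_done(self) -> None:
        """After any fetch, fall back to the polling schedule."""
        self.next_fetch = self.poll_interval
```

The notification path and the polling path converge on the same fetch of the same files, which is what makes the push layer safely optional.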
This separation is what Bridgy Fed has implemented from the IndieWeb direction: your content stays on your domain as ordinary HTML with microformats, and a bridge service handles the ActivityPub protocol layer. Your site never needs to speak ActivityPub. The bridge does. The content is static; the federation layer is optional and replaceable.
The AT Protocol repository design points toward the same conclusion from a different angle. Each user’s content in AT Protocol is a Merkle Search Tree of signed records serialized in CAR format. This repository is deterministic, verifiable with a public key, and static in spirit: a file you can append to, mirror anywhere, and verify independently. The data model is sound. The serving layer, which speaks XRPC and handles live queries, is what requires running infrastructure. Separating these concerns was the right call.
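The verifiability property is easy to illustrate with a toy Merkle root over a user's records. This is a simplification for intuition only, not the Merkle Search Tree or CAR encoding AT Protocol actually uses:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> str:
    """Toy binary Merkle tree over a list of records.

    Any change to any record, or to their order, changes the root, so
    one signed root authenticates the whole repository. (AT Protocol's
    real structure is a Merkle Search Tree, which also supports
    efficient diffs between repository versions.)
    """
    if not records:
        return _h(b"").hex()
    level = [_h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

root = merkle_root([b"post one", b"post two", b"follow: alice"])
```

Signing just the root with the user's key is what lets a mirror anywhere prove it is serving the repository unmodified.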
What Has Actually Changed Since 2005
These ideas are not new. FOAF (Friend of a Friend) proposed publishing social graph data as RDF files at personal URLs in 2000. Dave Winer wrote about RSS as social infrastructure through the mid-2000s. Brad Fitzpatrick’s 2007 essay described the fragmented social graph problem with clarity that still holds up. These ideas had technical merit and went nowhere for two decades.
Three concrete things are different now. First, static site hosting is free and fast. GitHub Pages, Cloudflare Pages, Netlify, and R2 have eliminated hosting costs for personal domains with meaningful bandwidth. In 2005 this required a paid shared hosting account.
Second, the Web Crypto API is standardized and available in every modern browser. Key generation, signing, and verification can happen client-side without installing anything. This removes the dependency on native software that made earlier cryptographic identity schemes awkward to deploy.
Third, there is a real user segment that values ownership over convenience. People who moved their blogs off Medium, who run their own Mastodon instances, who backed up their Twitter archives before the API changed: these are not hypothetical users. They already tolerate friction in exchange for controlling their data.
The s@ protocol is making a specific architectural bet: that static files plus an optional thin notification layer are good enough for a meaningful portion of users. The privacy properties of the pull model, the zero hosting cost, and the portability of plain files justify the latency trade-off. The implementation burden is genuinely lower than it has ever been. Whether the UX can close the remaining gap is the open question, and the volume of the Hacker News discussion suggests enough developers take the trade-off seriously to find out.