Social Networking Built on Static Files: The Design Space Behind s@
Source: hackernews
The Show HN thread for s@ landed 386 points and nearly 200 comments, which is a reasonable signal that the project touched something real. The premise is straightforward: decentralized social networking where your profile, posts, and social graph live as static files that any CDN can serve. No server-side logic, no database, no process to keep running. That’s a bold constraint to impose on a social protocol, and the implications run deeper than the pitch suggests.
Why static hosting is a harder requirement than it sounds
When we say “static site” we mean a directory of files served over HTTP without server-side computation. GitHub Pages, Cloudflare Pages, Netlify, S3 with a bucket policy — all of these qualify. The hosting cost approaches zero. CDN distribution is automatic. There’s no infrastructure to maintain beyond updating files.
The constraint is that the server cannot execute code in response to a request. It returns the file that exists at a given path, or a 404. Nothing more.
Every major decentralized social protocol fails this test somewhere.
ActivityPub, which underlies Mastodon and the broader fediverse, requires a live server that accepts HTTP POST requests to an inbox endpoint. You can serve your outbox (what you’ve published) as a static JSON-LD file, but your inbox — where other people’s replies and mentions land — requires a running process that can receive, authenticate, and store incoming messages. Every Mastodon instance is a piece of infrastructure that someone operates and pays for.
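The static half of ActivityPub really is just a file. A minimal outbox, serializable once and served from any CDN, might look like this (URLs are hypothetical; the shape follows the ActivityStreams OrderedCollection vocabulary):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://alice.example/outbox.json",
  "type": "OrderedCollection",
  "totalItems": 1,
  "orderedItems": [
    {
      "type": "Create",
      "actor": "https://alice.example/actor.json",
      "object": {
        "type": "Note",
        "content": "Hello from a static file",
        "published": "2026-01-15T12:00:00Z"
      }
    }
  ]
}
```

Nothing in this file requires a server. It's the inbox direction — accepting a POST like this from a stranger's server — that forces a running process.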
AT Protocol, which powers Bluesky, made a more interesting structural choice. Each user’s content lives in a “repository” — a Merkle tree of signed records serialized in the CAR (Content Addressable aRchives) format. This repo is essentially a static artifact: a deterministic file you can verify with a public key, append to as you publish new content, and mirror to any host. The data model is genuinely static-friendly. But the Personal Data Server that hosts and exposes this repo still needs to speak the XRPC protocol, handle session tokens, and answer live queries. The data is static in spirit; the serving layer is not.
Nostr went furthest toward minimalism. Events are signed JSON objects that clients broadcast to relays. Relays are simple WebSocket servers. A relay has no concept of users or social graphs — it just stores and forwards events. The signing model means you don’t need to trust any particular relay, and you can broadcast to many simultaneously. But relays need to be running, and client UX depends on finding and maintaining connections to them. A relay is not a static file server.
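The Nostr event model is concrete enough to sketch. Per NIP-01, an event's id is the SHA-256 of a canonical JSON serialization of its fields; only the Schnorr signature (secp256k1, not in any standard library) is omitted from this stdlib-only sketch:

```python
import hashlib
import json

def nostr_event_id(pubkey_hex: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    # NIP-01: the id is the sha256 of this canonical JSON array,
    # serialized compactly (no whitespace between tokens), as UTF-8.
    payload = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

event_id = nostr_event_id("a" * 64, 1700000000, 1, [], "hello relay")
# The author would then sign `event_id` with their secp256k1 key; any
# client can recompute the id and verify the signature, no matter which
# relay the event came from.
```

Because the id is derived from the content, a relay cannot silently alter an event — which is exactly why trusting any particular relay is unnecessary.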
The three things RSS didn’t solve
The oldest social-over-static-files protocol is RSS. Publish a feed at a predictable URL, aggregators poll it, readers follow the URL. This is pull-based distribution with no server logic. It worked.
The reason RSS never became a full social protocol is that it solved distribution while leaving identity, content integrity, and interaction discovery untouched.
Identity: there’s no way to verify that the feed at example.com/feed.xml is cryptographically controlled by any particular keypair. Anyone who can write to that path can impersonate you. Ownership of the domain is the only identity signal, and domain ownership is a social convention backed by registrar infrastructure, not a cryptographic proof.
Content integrity: if someone mirrors or caches your RSS feed and modifies an item, there’s no standard way for a reader to detect the tampering. The content isn’t signed.
Interaction discovery: if someone replies to your post by publishing their reply on their own site, you have no way to find out without running a receiving endpoint. Webmention, the IndieWeb specification for notifying a URL that it has been mentioned, requires the target URL to advertise a live endpoint that can accept POST requests. The static constraint breaks the notification model.
A protocol that wants to build social on static files needs to solve all three of these. The solutions are well-understood individually; the challenge is composing them into something with a good UX.
Cryptographic identity binding is achievable with existing primitives. A keypair is generated client-side. The public key is published in a file at a well-known path (something like /.well-known/satproto.json) and potentially also registered in a DNS TXT record. This ties the key to a domain without requiring server logic. AT Protocol’s did:web method uses exactly this pattern: a DID document at /.well-known/did.json that lists the controlling keys.
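The did:web resolution rule is mechanical enough to show in full: the method-specific identifier maps to an HTTPS URL, with colons becoming path separators. A stdlib sketch of that mapping:

```python
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    # did:web:example.com          -> https://example.com/.well-known/did.json
    # did:web:example.com:user:me  -> https://example.com/user/me/did.json
    assert did.startswith("did:web:")
    parts = did[len("did:web:"):].split(":")
    host = unquote(parts[0])  # a port would arrive percent-encoded as %3A
    if len(parts) == 1:
        return f"https://{host}/.well-known/did.json"
    path = "/".join(unquote(p) for p in parts[1:])
    return f"https://{host}/{path}/did.json"
```

The resolved document is itself a static file listing the controlling public keys — which is precisely why this identity scheme survives the no-server constraint.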
Signed content follows naturally. Each post is a JSON object containing the content, a timestamp, the author’s key identifier, and a signature over all of that. Any reader who has resolved the author’s public key can verify the signature regardless of where the content was retrieved. This is what Nostr events do, and it works.
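The subtle requirement is canonicalization: signer and verifier must hash exactly the same bytes. A stdlib sketch of one possible scheme (sorted keys, compact JSON; the field names and the `did:web` author id are illustrative, and the actual Ed25519 signature — made client-side via the Web Crypto API or a library — is elided):

```python
import hashlib
import json

def signing_digest(post: dict) -> bytes:
    # One possible canonicalization: drop the signature field, then emit
    # sorted-key, compact, UTF-8 JSON. Whatever scheme a protocol picks,
    # signer and verifier must byte-match at this step.
    unsigned = {k: v for k, v in post.items() if k != "sig"}
    canonical = json.dumps(unsigned, sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).digest()

post = {
    "author": "did:web:alice.example",   # hypothetical identifier
    "created_at": 1700000000,
    "content": "posts are just files",
}
digest = signing_digest(post)
# An Ed25519 signature over `digest` would be stored in post["sig"];
# verification recomputes the digest and checks it against the author's
# published public key.
```

Sorting keys is what makes the digest independent of how a particular serializer happened to order the fields.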
Interaction discovery is the hard part, and it’s where static-only approaches make their most significant sacrifice.
The inbox problem
If you reply to a post by publishing a signed JSON file on your own static site, the original author has no mechanism to discover it without running server code or relying on a third party that does. You know the URL of the post you replied to. The author doesn’t know you exist.
There are a few ways to approach this:
Pure crawling: an aggregator service discovers the network by following social graph links, crawls everyone’s feeds periodically, and indexes replies by the post they reference. Any client can query the aggregator. This works, but aggregators become coordination points, and running one that covers a meaningful fraction of the network is non-trivial infrastructure.
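The core of such an aggregator is an inverted index from a post's URL to the replies that reference it. A toy sketch, assuming each crawled post carries a hypothetical `in_reply_to` field:

```python
from collections import defaultdict

def index_replies(feeds: dict) -> dict:
    # feeds maps a feed URL to the list of posts it serves. Replies name
    # their target via `in_reply_to` (an illustrative field name). The
    # aggregator inverts that edge so a client can ask: "what replies to
    # this post exist anywhere that has been crawled?"
    replies = defaultdict(list)
    for feed_url, posts in feeds.items():
        for post in posts:
            target = post.get("in_reply_to")
            if target:
                replies[target].append((feed_url, post["url"]))
    return dict(replies)
```

The index is trivial; the hard part the sketch hides is crawl scheduling and coverage — knowing which feeds exist and polling them often enough to matter.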
WebSub: the W3C WebSub specification lets publishers notify a hub when their feed updates, and subscribers receive push notifications through the hub. The hub is a shared piece of server infrastructure, but it’s a commodity service — multiple independent hubs can coexist and any publisher can choose among them. This preserves most of the static-hosting benefit while restoring reasonable latency.
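The publisher's side of this is a single HTTP ping. The W3C WebSub spec deliberately leaves the publisher-to-hub mechanism open, but most hubs accept the form-encoded ping inherited from PubSubHubbub; a stdlib sketch that builds (but does not send) that request, with hypothetical URLs:

```python
from urllib.parse import urlencode
from urllib.request import Request

def hub_publish_ping(hub_url: str, topic_url: str) -> Request:
    # Conventional PubSubHubbub-style ping: tell the hub that the topic
    # URL has new content, so it can fetch it and push to subscribers.
    body = urlencode({"hub.mode": "publish", "hub.url": topic_url})
    return Request(hub_url, data=body.encode("ascii"),
                   headers={"Content-Type": "application/x-www-form-urlencoded"},
                   method="POST")

req = hub_publish_ping("https://hub.example/", "https://alice.example/feed.json")
# urllib.request.urlopen(req) would fire it after the static file updates.
```

Note that the publisher only ever makes outbound requests — the static host itself still never executes anything.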
Content-addressed storage: if every piece of content is stored by its hash (as in IPFS), then referencing a post means referencing its CID. Aggregators can discover content by watching for new CIDs that reference known ones. This pushes the coordination into the content-addressing layer rather than a centralized server, but IPFS pinning and availability have their own infrastructure story.
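The essential property is that the name of a piece of content is derived from its bytes. Real IPFS CIDs wrap the digest in multihash/multibase metadata and chunk large files; this stdlib sketch keeps only the core idea:

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address *is* the hash: anyone holding the bytes can recompute
    # and verify it, so it doesn't matter which mirror served them.
    return "sha256:" + hashlib.sha256(data).hexdigest()

post = b'{"content":"hello"}'
addr = content_address(post)
# A reply embeds `addr`; an aggregator watching for new objects can link
# any object that mentions a known address back to the original post.
```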
None of these completely solve the problem without reintroducing some form of shared infrastructure. The question isn’t whether to have that infrastructure, but where it sits and who controls it. A static-file social protocol shifts the infrastructure from application servers (which have substantial operational complexity) to commodity services like CDNs and hubs (which have minimal operational complexity for the user).
Prior art worth understanding
Secure Scuttlebutt (SSB) has been doing something close to this since 2014. Each identity has an append-only log of signed messages, identified by a keypair. Logs can be stored as files and replicated peer-to-peer. There’s no required server; the protocol works entirely through gossip between clients that happen to be online simultaneously, or through “pubs” — relay servers that store and forward messages. The SSB data model proves that a signed-append-only-log approach can sustain a real community; the UX challenges (peer discovery, log pruning, the complexity of the gossip protocol) explain why it stayed small.
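The append-only log structure is simple to sketch. Each entry names its predecessor's hash, so the log can only grow at the tip and any in-place edit breaks the chain. Real SSB additionally signs every message with the author's key; this stdlib sketch shows the hash chain only:

```python
import hashlib
import json

def append(log: list, author: str, content: str) -> dict:
    # SSB-style entry: sequence number plus a pointer to the previous
    # message's hash. The entry's own key is the hash of its body.
    prev = log[-1]["key"] if log else None
    msg = {"author": author, "sequence": len(log) + 1,
           "previous": prev, "content": content}
    serialized = json.dumps(msg, sort_keys=True, separators=(",", ":"))
    msg["key"] = hashlib.sha256(serialized.encode()).hexdigest()
    log.append(msg)
    return msg

def verify(log: list) -> bool:
    # Recompute every hash and walk the previous-pointer chain.
    for i, msg in enumerate(log):
        body = {k: v for k, v in msg.items() if k != "key"}
        serialized = json.dumps(body, sort_keys=True, separators=(",", ":"))
        if hashlib.sha256(serialized.encode()).hexdigest() != msg["key"]:
            return False
        if msg["previous"] != (log[i - 1]["key"] if i else None):
            return False
    return True
```

A mirror that tampers with any past entry invalidates every hash from that point forward, which is what makes peer-to-peer replication of the log safe.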
Beaker Browser built a social layer on Hypercore (the protocol formerly known as Dat) with similar principles. Profiles were Hypercore feeds identified by public keys; follows were links between feeds; posts were appended records. The hosting model was seeding: any peer who had downloaded your feed could re-serve it, similar to BitTorrent. Beaker’s social features were genuinely interesting and had good UX within the browser. The project wound down, but the data model was sound.
AT Protocol’s repository design is worth studying closely even if the full protocol isn’t static-compatible. The AT Protocol repo spec defines a user’s content as a Merkle Search Tree of signed records, with each commit producing a new root CID. This structure means the entire content history of an account is a self-verifying static artifact. Bluesky’s infrastructure complexity comes from the serving and coordination layers on top of this, not from the data model itself. If you strip out XRPC and the relay network and just look at the repo format, it’s a blueprint for what a static-file social protocol’s data layer could look like.
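The self-verification property comes from the Merkle structure: one root hash commits to every record. Atproto's Merkle Search Tree adds key ordering and CBOR/CID encoding, but a plain binary Merkle tree over record bytes shows the same property with only the standard library:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list) -> bytes:
    # Hash the leaves, then repeatedly hash adjacent pairs until a single
    # root remains. Changing any record changes the root.
    level = [h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"post-1", b"post-2", b"post-3"])
# Each new commit produces a new root; signing the root gives a compact,
# verifiable commitment to the account's entire content history.
```

This is why the repo can be mirrored anywhere: the signed root, not the host, is the source of trust.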
Why this might matter more now
The hosting economics have shifted. Five years ago, a “free” static host usually meant bandwidth limits that made a moderately popular social profile impractical. Today, Cloudflare Pages and Netlify both offer generous free tiers with real CDN distribution. R2 and similar S3-compatible storage cost on the order of a cent per GB per month, and R2 charges nothing for egress. IPFS pinning services like Pinata and web3.storage have matured. The infrastructure that makes static social networking viable at meaningful scale exists and is cheap to use.
Browser-native cryptography has also improved substantially. The Web Crypto API is standardized and available in every modern browser. Key generation, signing, and verification can happen client-side without any native dependencies. This means a fully static social client — one that manages keypairs in the browser, signs posts locally, and writes files to a static host via their API — is buildable as a plain web app with no backend.
The combination of cheap static hosting and capable browser crypto removes two of the three barriers that made prior art in this space niche. The remaining barrier is the UX gap between polling and push: a network where you see replies minutes or hours after they’re posted feels different from one where they arrive instantly.
The gap that hasn’t closed
RSS lost to Twitter not because RSS was technically inferior but because Twitter made the social graph feel alive. Replies appeared in real time. Trending topics surfaced what was being discussed right now. The follower count was a live number that updated as you watched. None of this is possible in a pure polling model without WebSub or a similar push layer, and even with WebSub the latency is in the tens of seconds rather than under one second.
That gap is a product and UX problem as much as a protocol problem. Scuttlebutt’s community found it acceptable because the value proposition was different — local-first, works offline, high-trust social graphs — rather than trying to replicate the feel of Twitter. Beaker’s social features leaned into the hypertext/personal-web aesthetic rather than competing on real-time velocity.
What s@ is proposing is a bet that a meaningful slice of people want the hosting economics and censorship resistance of static files more than they want millisecond reply latency. That’s a reasonable bet. The design space is legitimate, the prior art is instructive, and the tooling available in 2026 is genuinely better than it’s ever been for this kind of project. Whether the protocol design and implementation can get the UX to “good enough” is the open question, and a 386-point Show HN suggests at least a few hundred people are curious to find out.