The read side of a static social protocol like s@ is almost embarrassingly simple. An HTTP GET to a predictable URL returns a JSON feed. Any language with an HTTP client handles it in twenty lines. The write side is where the design gets honest about its constraints.
Publishing a post to a static social profile means generating a signed file, uploading it to a static host, and atomically updating an index that points to it. For a developer with a GitHub-backed site and a CI pipeline, these steps are routine. For someone who wants to reply to a post from their phone, it is a multi-step process with latency measured in minutes. How an implementation resolves that asymmetry determines whether the protocol is a tool for developers or a viable social platform.
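The read path really is that small. A sketch, assuming a hypothetical `/.well-known/satproto/<user>.json` feed location and feeds whose `items` carry ISO 8601 `created` timestamps (neither of which is confirmed by the spec):

```javascript
// Fetch one user's feed from their static host. The URL layout here
// is an assumption for illustration, not something the protocol fixes.
async function fetchFeed(address) {
  const [user, domain] = address.split('@');
  const res = await fetch(`https://${domain}/.well-known/satproto/${user}.json`);
  if (!res.ok) throw new Error(`feed fetch failed: ${res.status}`);
  return res.json();
}

// Merge several fetched feeds into one reverse-chronological timeline.
function mergeTimelines(feeds) {
  return feeds
    .flatMap((feed) => feed.items ?? [])
    .sort((a, b) => b.created.localeCompare(a.created));
}
```

That is the entire read side: fetch the addresses you follow, merge, render.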
The Atomic Update Problem
Static file hosting does not have transactions. If you publish a post by uploading a new file and then updating your feed index, there is a window between those two operations where the index does not reflect the new post. For personal use, this rarely matters. For a client that might process simultaneous writes (a shared account, a client that queues posts written offline), it is a real race condition.
GitHub’s contents API handles this with optimistic concurrency. When you update a file, you supply the SHA of the blob you are updating. If that SHA does not match what is currently on disk, the API returns a 409 Conflict and you retry by fetching the current state and reapplying your change:
async function publishPost(post, feedPath, token) {
  // Fetch current index with its SHA
  const res = await fetch(
    `https://api.github.com/repos/alice/social/contents/${feedPath}`,
    { headers: { Authorization: `token ${token}` } }
  );
  const { sha, content } = await res.json();
  // The API returns base64 with embedded newlines, which atob rejects
  const feed = JSON.parse(atob(content.replace(/\n/g, '')));

  // Upload post file. Note btoa only handles Latin-1; a real client
  // should base64-encode the UTF-8 bytes instead.
  const postPath = `posts/${post.id}.json`;
  await fetch(
    `https://api.github.com/repos/alice/social/contents/${postPath}`,
    {
      method: 'PUT',
      headers: { Authorization: `token ${token}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        message: `post: ${post.id}`,
        content: btoa(JSON.stringify(post, null, 2))
      })
    }
  );

  // Update feed index with optimistic lock
  feed.items.unshift({ id: post.id, url: postPath, created: post.created });
  const updateRes = await fetch(
    `https://api.github.com/repos/alice/social/contents/${feedPath}`,
    {
      method: 'PUT',
      headers: { Authorization: `token ${token}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        message: 'update feed index',
        content: btoa(JSON.stringify(feed, null, 2)),
        sha // If this is stale, the API returns 409
      })
    }
  );
  if (updateRes.status === 409) throw new Error('concurrent write, retry');
}
This is database optimistic concurrency control implemented over HTTP against a file store. It is the same pattern Git itself uses for ref updates. The interesting thing is that static social protocols get this coordination primitive for free from the hosting provider’s API, without needing any server-side logic of their own.
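The retry itself can be factored into a small wrapper around `publishPost`. A sketch; the attempt count, backoff, and the error-message check for conflicts are illustrative choices, not anything the hosting API or protocol prescribes:

```javascript
// Retry an async operation on recoverable write conflicts, with
// linear backoff between attempts. Non-conflict errors are rethrown.
async function withRetry(fn, { attempts = 3, delayMs = 250 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Only 409-style conflicts are worth retrying
      if (!/concurrent write/.test(err.message)) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastError;
}

// Usage: withRetry(() => publishPost(post, 'feed.json', token));
```

Because `publishPost` re-fetches the index and its SHA on every call, each retry naturally reapplies the change on top of whatever the concurrent writer committed.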
GitHub Pages deploys typically take 30 to 60 seconds after a commit, which means there is a window where the post exists in the repository but is not yet publicly visible at the CDN URL. This is acceptable latency for a protocol that has already traded real-time delivery for hosting simplicity.
Signing With What the Browser Actually Has
A static social protocol without content signing is RSS with a social graph, which means it inherits RSS’s two oldest problems: no way to verify that the feed at a given URL is controlled by who you think, and no way to detect if a cached or mirrored copy has been tampered with.
The Web Crypto API is available in every modern browser and handles key generation, signing, and verification without any native dependencies. The friction is that its reliable baseline is ECDSA over the NIST curves (P-256 and P-384); Ed25519, which most modern signing schemes prefer, has only recently started landing in browsers and support is still uneven. Nostr uses Schnorr signatures over secp256k1, which Web Crypto does not support at all; AT Protocol signs with secp256k1 or P-256. For a protocol that wants to stay browser-native without WASM dependencies, P-256 ECDSA is the pragmatic choice:
// Generate a P-256 key pair; `true` makes the keys extractable for export
async function generateIdentityKey() {
  const keyPair = await crypto.subtle.generateKey(
    { name: 'ECDSA', namedCurve: 'P-256' },
    true,
    ['sign', 'verify']
  );
  const publicJwk = await crypto.subtle.exportKey('jwk', keyPair.publicKey);
  const privateJwk = await crypto.subtle.exportKey('jwk', keyPair.privateKey);
  return { publicJwk, privateJwk };
}

async function signPost(postContent, privateKeyJwk) {
  // Import as non-extractable: once imported, the key material
  // never needs to leave the Web Crypto layer again
  const key = await crypto.subtle.importKey(
    'jwk', privateKeyJwk,
    { name: 'ECDSA', namedCurve: 'P-256' },
    false, ['sign']
  );
  const data = new TextEncoder().encode(JSON.stringify(postContent));
  // For P-256 this is the raw 64-byte r||s signature
  const sig = await crypto.subtle.sign(
    { name: 'ECDSA', hash: 'SHA-256' },
    key, data
  );
  return btoa(String.fromCharCode(...new Uint8Array(sig)));
}
This works, but there is a correctness trap waiting: the byte sequence being signed is not canonical. JSON.stringify serializes object keys in property insertion order, so two clients that assemble the same logical post with fields in a different order, or that differ in whitespace or number formatting, will produce different byte sequences and thus fail signature verification, even for identical data.
The established solution is JSON Canonicalization Scheme (JCS, RFC 8785), which defines a canonical JSON representation with lexicographic key ordering and deterministic number formatting. Any protocol spec that skips canonicalization will have interoperability bugs in the first week of multi-client testing. It is the kind of detail that seems like an implementation concern but is actually a spec requirement.
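The key-ordering half of canonicalization fits in a few lines. A sketch only: full RFC 8785 additionally pins down exact number and string serialization rules, which this omits:

```javascript
// Serialize a value with lexicographically sorted object keys at
// every depth. Not full JCS; RFC 8785 also specifies number and
// string formatting that JSON.stringify mostly, but not always,
// matches.
function canonicalize(value) {
  if (value === null || typeof value !== 'object') {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return '[' + value.map(canonicalize).join(',') + ']';
  }
  const keys = Object.keys(value).sort();
  const entries = keys.map(
    (k) => JSON.stringify(k) + ':' + canonicalize(value[k])
  );
  return '{' + entries.join(',') + '}';
}
```

Signing `canonicalize(post)` instead of `JSON.stringify(post)` makes the signature independent of the order in which a client happened to build the object.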
Identity Resolution Without a Server
The user@domain addressing format that s@ uses is familiar from email and ActivityPub, but the resolution path matters for whether the protocol can stay static.
WebFinger is the standard resolution mechanism: a client wanting to look up alice@example.com makes a GET request to https://example.com/.well-known/webfinger?resource=acct:alice@example.com and receives a JSON resource descriptor pointing to the profile document. This is how Mastodon handles cross-instance user lookup.
For a single-user site, WebFinger can be served as a static file: the resource parameter will always be the same, so the response is always the same document. For multi-user hosting (multiple alice, bob, carol accounts under one domain), a static file cannot route by query parameter, which breaks the pure static model.
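For the single-user case, the static WebFinger document might look like this (the satproto path and link relation are illustrative, not from any spec); the same file is served for every query because only one resource value is ever valid:

```
{
  "subject": "acct:alice@example.com",
  "links": [
    {
      "rel": "self",
      "type": "application/json",
      "href": "https://example.com/.well-known/satproto/alice.json"
    }
  ]
}
```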
The workaround is convention over configuration: alice@example.com resolves to https://example.com/.well-known/satproto/alice.json directly, without a dynamic lookup step. Clients that know the convention can skip WebFinger entirely. The tradeoff is interoperability: clients that implement WebFinger-based discovery (like Mastodon) cannot look up these profiles without a WebFinger handler.
DNS TXT records offer a third path. A TXT record on alice.example.com or _satproto.example.com can carry the public key fingerprint and profile URL. DNS is managed infrastructure from the user's perspective (adding a TXT record requires no running server), survives host migrations gracefully, and provides a level of identity binding that is hard to spoof without compromising DNS for the domain. AT Protocol resolves handles the same way, through a TXT record at _atproto.<domain>, with an HTTPS well-known lookup as the fallback.
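Such a record might look like the following; this format is hypothetical, invented here for illustration, and a real protocol would need to specify one:

```
_satproto.example.com.  3600  IN  TXT  "v=sat1; key=<base64 P-256 public key>; profile=https://example.com/.well-known/satproto/alice.json"
```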
What the Client Architecture Looks Like
A minimal s@ client with no server component:
- Key management in IndexedDB, never leaves the device
- On startup, fetch the user’s following list from their static profile
- For each followed user, GET their feed JSON and merge into a local timeline
- Post composition signs content locally with the stored private key, applies JCS canonicalization, then calls the hosting provider’s API directly from the browser
- The hosting provider token (GitHub, Cloudflare, Netlify) is stored locally and the client authenticates via OAuth; no backend touches the token
This is architecturally similar to a 1999 desktop email client: all the logic runs locally, the server is a dumb file store, and federation happens through a shared addressing convention. The failure modes are also similar. Spam has no infrastructure-level defense because the protocol has no mechanism for signaling to a hosting provider that a sender should be blocked. Discovery beyond known addresses requires either social graph traversal or a third-party crawler index.
Those are real limitations, but they are the same limitations email had for its first twenty years and it still became infrastructure. The question is whether the tooling can absorb the complexity of the write path well enough that the user experience feels like composing a post, not like deploying a static site. Bluesky’s clients do exactly this for AT Protocol: users write to a Merkle search tree without knowing it because the client handles all the data model complexity transparently.
The Tooling Gap Is the Real Spec
A well-specified protocol with poor tooling stays a demo. The s@ protocol’s Hacker News thread attracted 182 comments, most of them from developers, which is the right early audience but also a signal about who the current tooling serves.
The path from developer curiosity to practical social infrastructure runs through a CMS layer that handles signing, canonicalization, atomic uploads, and feed index updates without exposing any of it to the user. The static files remain the authoritative record; the CMS is just a client that knows how to write them correctly. Getting that layer right, particularly the mobile write path where deployment latency is most noticeable, is a harder engineering problem than the protocol spec itself.