From WebFinger to Webhook: What s@ Looks Like as an Implementation Target
Source: hackernews
The s@ protocol landed on Hacker News with 386 points and nearly 200 comments. The premise is that your entire social presence can live as static files, no server process, no database, just files at predictable URLs that any CDN can serve.
The concept is tidy. What is less obvious is what it means to sit down and build against it. The architecture decisions that make static social elegant at the design level create specific engineering choices when you implement a client, a feed reader, or a bridge like a bot that surfaces s@ activity in a Discord server. Working through those choices is where the protocol’s strengths and gaps become concrete.
The Addressing Chain
The user@domain format is familiar from email. Resolution works in a specific sequence: given alice@example.com, split on @, take the domain, then query the WebFinger endpoint defined in RFC 7033:
GET https://example.com/.well-known/webfinger?resource=acct:alice@example.com
Accept: application/jrd+json
WebFinger returns a JSON Resource Descriptor with links to the user’s profile document. The profile document, served as a static file, contains the public key and pointers to the user’s content. Something like:
{
  "id": "alice@example.com",
  "type": "Person",
  "publicKey": {
    "id": "alice@example.com#main-key",
    "type": "Ed25519VerificationKey2020",
    "publicKeyMultibase": "z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK"
  },
  "outbox": "https://example.com/alice/outbox.json",
  "following": "https://example.com/alice/following.json",
  "followers": "https://example.com/alice/followers.json"
}
This is structurally close to an ActivityPub actor document, minus one field: the inbox URL. No inbox endpoint means no place to POST incoming activities. That omission is the load-bearing constraint of the protocol’s entire design. Every other tradeoff follows from it.
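The resolution chain is mechanical enough to sketch in a few lines. A minimal resolver might look like the following; the `rel` value of `"self"` for the profile link is an assumption, since the post does not pin down the exact JRD link relation:

```javascript
// Build the RFC 7033 WebFinger URL for a user@domain handle
function webfingerURL(handle) {
  const [user, domain] = handle.split("@");
  if (!user || !domain) throw new Error("expected user@domain");
  return (
    `https://${domain}/.well-known/webfinger?resource=` +
    encodeURIComponent(`acct:${handle}`)
  );
}

// Follow the chain: WebFinger JRD -> profile document
// Assumption: the profile is linked with rel "self" in the JRD
async function resolveProfile(handle) {
  const jrd = await fetch(webfingerURL(handle), {
    headers: { Accept: "application/jrd+json" },
  }).then((r) => r.json());
  const link = jrd.links.find((l) => l.rel === "self");
  return fetch(link.href, { headers: { Accept: "application/json" } }).then(
    (r) => r.json()
  );
}
```

Two HTTP GETs, both against static files, both cacheable by any CDN.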
Signed Posts
Each post is a JSON object the client signs before publishing:
{
  "id": "https://example.com/alice/posts/2026-03-12-0001.json",
  "type": "Note",
  "author": "alice@example.com",
  "content": "Testing the static social protocol.",
  "published": "2026-03-12T10:00:00Z",
  "inReplyTo": null,
  "signature": {
    "type": "Ed25519Signature2020",
    "created": "2026-03-12T10:00:01Z",
    "verificationMethod": "alice@example.com#main-key",
    "proofValue": "z3FXQjecWufY..."
  }
}
The signature covers a canonical serialization of the JSON object, minus the signature field itself. A verifier resolves alice@example.com, fetches the profile, extracts the public key, and checks the signature against the content. This is functionally identical to Nostr’s event signing model, but transported over HTTP rather than WebSocket relays.
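The verification side can be sketched as follows. Two assumptions are baked in: sorted-key JSON as the canonical form (the post does not specify one), and base64url-encoded signatures rather than the multibase z-prefixed values shown in the example, which would need a base58 decoder instead:

```javascript
// Canonical serialization: drop the signature field, sort keys.
// Assumption: sorted-key JSON is the canonical form. The array replacer
// both filters and orders keys, which is safe here because a post minus
// its signature is a flat object.
function canonicalize(post) {
  const { signature, ...rest } = post;
  return JSON.stringify(rest, Object.keys(rest).sort());
}

// Assumption: proofValue is base64url rather than multibase
function base64urlDecode(s) {
  const bin = atob(s.replace(/-/g, "+").replace(/_/g, "/"));
  return Uint8Array.from(bin, (c) => c.charCodeAt(0));
}

// publicKey is a CryptoKey imported from the profile document
async function verifyPost(post, publicKey) {
  const data = new TextEncoder().encode(canonicalize(post));
  const sig = base64urlDecode(post.signature.proofValue);
  return crypto.subtle.verify("Ed25519", publicKey, sig, data);
}
```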
The Web Crypto API makes browser-side signing practical without any native dependencies:
// Key generation
const keyPair = await crypto.subtle.generateKey(
  { name: "Ed25519" },
  true,
  ["sign", "verify"]
);

// Signing a post
const encoder = new TextEncoder();
const data = encoder.encode(JSON.stringify(postWithoutSignature));
const signature = await crypto.subtle.sign(
  "Ed25519",
  keyPair.privateKey,
  data
);

// Base64url-encode for transport (btoa emits standard base64,
// so swap the +/ alphabet and strip padding)
const proofValue = btoa(String.fromCharCode(...new Uint8Array(signature)))
  .replace(/\+/g, "-")
  .replace(/\//g, "_")
  .replace(/=+$/, "");
The private key never leaves the browser in this model. Publishing is a separate step that writes the signed JSON to a file and pushes it to a static host.
Publishing Without a Server
The most practical publish flow for a developer in 2026 runs through a version-controlled repository:
- Client generates a signed JSON post in the browser
- Client calls the GitHub API to create or update a file in the repository
- GitHub Actions, or Cloudflare Pages’ deploy hook, picks up the new file and deploys
The GitHub API call is a single authenticated PUT:
PUT /repos/{owner}/{repo}/contents/posts/2026-03-12-0001.json
Authorization: Bearer {github_token}
{
  "message": "Add post 2026-03-12-0001",
  "content": "{base64-encoded post JSON}"
}
Round-trip time from posting to live content is 30 to 90 seconds with a standard GitHub Actions workflow. Cloudflare Pages with a direct upload API cuts this to a few seconds. The latency is fine for a blog-style posting cadence; it would be noticeable in a rapid back-and-forth exchange.
For a Discord bot that bridges s@ content into a server, the direction reverses. Register a GitHub webhook that fires on push events, parse the commit to find new post files, fetch and verify the signed JSON, then relay the content to the appropriate channel. The webhook fires before the CDN deploy completes, so the bot can surface content nearly immediately after commit.
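The bridge's core is small enough to sketch. The `posts/` path prefix, the `main` branch, and the raw.githubusercontent.com fetch are assumptions about the repository layout; Discord's webhook execute endpoint accepts a JSON body with a `content` field:

```javascript
// Pull newly added post files out of a GitHub push webhook payload
function newPostPaths(pushEvent) {
  return pushEvent.commits
    .flatMap((c) => c.added)
    .filter((p) => p.startsWith("posts/") && p.endsWith(".json"));
}

// Fetch each new post and relay it to a Discord webhook URL
async function bridgePush(pushEvent, discordWebhookUrl) {
  const repo = pushEvent.repository.full_name;
  for (const path of newPostPaths(pushEvent)) {
    // Raw content is available before the CDN deploy completes
    const raw = `https://raw.githubusercontent.com/${repo}/main/${path}`;
    const post = await fetch(raw).then((r) => r.json());
    // Verify the signature against the author's published key before relaying
    await fetch(discordWebhookUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ content: `${post.author}: ${post.content}` }),
    });
  }
}
```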
Feed Aggregation
The outbox file lists post URLs:
{
  "type": "OrderedCollection",
  "totalItems": 42,
  "orderedItems": [
    "https://example.com/alice/posts/2026-03-12-0001.json",
    "https://example.com/alice/posts/2026-03-11-0003.json"
  ]
}
Reading a timeline means fetching each followed user’s outbox, collecting post URLs, and fetching posts you have not seen. At small scale this is a handful of HTTP requests. At larger scale you want to cache outbox files and re-fetch only when something has changed.
Conditional GET with ETags or Last-Modified headers makes polling efficient. A CDN returns a 304 without serving the full body when a file has not changed. A feed aggregator following thousands of outboxes can poll them frequently at very low bandwidth cost. RSS readers solved this problem in the early 2000s, and the same techniques carry over directly.
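The polling loop reduces to a conditional fetch per outbox. A sketch, where `cache` is any Map-like store of url → { etag, body } entries kept by the aggregator:

```javascript
// Build conditional-request headers from a cached ETag, if any
function conditionalHeaders(etag) {
  return etag ? { "If-None-Match": etag } : {};
}

// Poll one outbox; a 304 costs headers only, no body transfer
async function pollOutbox(url, cache) {
  const cached = cache.get(url);
  const res = await fetch(url, {
    headers: conditionalHeaders(cached && cached.etag),
  });
  if (res.status === 304) return cached.body; // unchanged
  const body = await res.json();
  cache.set(url, { etag: res.headers.get("ETag"), body });
  return body;
}
```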
Reply threading is where the polling model hits a genuine limit. Finding all replies to a given post requires knowing who has published a post with inReplyTo pointing to it. Without a central index, a client can only discover replies from accounts it already follows. The IndieWeb Webmention ecosystem built optional shared infrastructure around this problem: publishers ping a service when they mention another URL, and the target URL can query that service for incoming mentions. An s@ equivalent would be a simple aggregator that crawls known profiles, indexes inReplyTo fields, and answers queries like “who has replied to this post URL.” That is modest infrastructure, roughly comparable to running a webhook receiver.
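The aggregator's core index is a few lines once posts have been crawled and verified. A sketch, where `posts` is any iterable of verified post objects:

```javascript
// Map each post URL to the ids of known replies, keyed on inReplyTo
function buildReplyIndex(posts) {
  const index = new Map();
  for (const post of posts) {
    if (!post.inReplyTo) continue;
    if (!index.has(post.inReplyTo)) index.set(post.inReplyTo, []);
    index.get(post.inReplyTo).push(post.id);
  }
  return index;
}
```

Answering “who replied to this URL” is then a single Map lookup; the hard part is the crawl coverage, not the data structure.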
What Still Needs to Be Built
The protocol design is not the blocking problem. Content-addressed, cryptographically signed JSON over HTTP is well-understood, and the Web Crypto API makes client-side signing viable in any modern browser without native code.
The gaps are in the tooling layer:
- A reference implementation of the WebFinger plus profile resolution chain, with a defined JSON schema for profiles, posts, and social graph files
- A publish client that abstracts the sign-and-deploy workflow for users who do not want to manage a GitHub repository manually
- A shared WebSub hub that publishers can notify on update, allowing subscribers to receive push notifications rather than polling
- A crawling aggregator that builds a searchable index of profiles and inReplyTo relationships across the network
None of these are large projects. A WebSub hub is a few hundred lines. A publish client that signs posts and calls the GitHub Contents API is roughly the complexity of any OAuth-based posting tool. The aggregator is the most involved, but existing RSS aggregator architectures apply directly: poll with conditional GETs, normalize into a content store, expose a query interface.
The AT Protocol repository format, which defines user content as a Merkle Search Tree of signed records, is worth studying as prior art for the data layer. The complexity in AT Protocol comes from the serving and coordination layers on top of the data model, not from the data model itself. An s@ implementation that borrows only the data model ideas from AT Protocol and puts the rest on commodity static hosting would be a substantially smaller system.
The Hacker News discussion has the characteristic pattern of a protocol announcement that resonated: the top comments are skeptical about the inbox problem and comparison to RSS, while the replies-to-replies are working through whether specific pieces of the tooling layer could be built. That second tier of conversation is usually where interesting open source projects start. The implementation surface is well-defined enough that the gap between the Show HN post and a working client library is probably a few weeks of focused work, not months.