
What Rust's Unstable Specialization Reveals About Zig Comptime


Noel Welsh’s article “Parametricity, or Comptime is Bonkers” makes a precise observation: Zig’s comptime feature breaks parametric polymorphism. In parametric systems like Haskell or ML, a function’s type signature bounds its behavior so tightly that you can derive theorems about it without reading the implementation. Zig’s comptime abandons that guarantee. Welsh frames this as bonkers; from a type theory perspective, it is. But “bonkers” needs a reference point, and the most concrete one available is Rust’s decade-long attempt to add type-specific behavior to a parametric generics system. That story reveals, more clearly than any theoretical argument, what the parametricity constraint is actually protecting.

What Parametricity Protects in Practice

Philip Wadler’s 1989 paper “Theorems for Free!” showed that the types of polymorphic functions are sufficient to derive behavioral theorems without reading their implementations. A function typed reverse :: [a] -> [a] in Haskell must satisfy map f . reverse = reverse . map f for any function f. You can prove this from the signature. The function cannot inspect the elements, so it cannot treat elements differently based on their values, so any element transformation you apply commutes with the reversal.

GHC uses this directly for stream fusion. The compiler applies rewrite rules like:

{-# RULES "map/map" forall f g. map f . map g = map (f . g) #-}

This rule is sound because map is parametric. The function cannot inspect the list elements, so the structural traversal commutes with the element transformation. GHC rewrites fused pipelines into single-pass traversals, and the soundness comes from the type-theoretic property, not from proof-checking the specific functions. In a large Haskell codebase, you get these optimizations for free from the way generics work, not from anything the programmer writes.

Rust’s generics offer a similar guarantee. An unconstrained generic function fn identity<T>(x: T) -> T cannot branch on the type of T. To get type-specific behavior, you define a trait and add it to the function’s bounds. The constraint appears in the signature. The compiler enforces this bidirectionally: the function body can only use capabilities declared in the bounds, and callers know in advance what types are accepted. Adding a new bound is a breaking change; removing one widens the API.

Rust’s Specialization Problem

In 2015, Rust opened RFC 1210, proposing “specialization”: the ability to provide more specific trait implementations that override general ones for particular types. The motivating example is legitimate. If you have a general implementation of ToString for any type implementing Display, you might want a more efficient one specifically for String that avoids the formatting machinery entirely.

This is exactly the kind of type-specific behavior that parametricity forbids, and RFC 1210 has been stuck in nightly-only status for over a decade. The reason is that specialization interacts with lifetime reasoning in ways that create unsoundness.

The borrow checker’s analysis sometimes depends on parametricity. A trait implementation that is valid for all T allows certain lifetime conclusions that break when you can override it for specific types. Researchers demonstrated that naive specialization allows constructing a 'static reference to a non-'static value by exploiting the gap between what the general implementation promises and what the specialized one delivers. The following sketch illustrates the structure of the problem:

// A blanket implementation covering all T. The `...` stands in for a body
// that cannot actually be written soundly.
impl<T> Foo for T {
    fn make_static(x: &T) -> &'static T { ... } // not actually valid
}

// A specialized implementation that overrides the blanket one for String.
// The transmute here just makes the lifetime violation explicit; the
// demonstrated exploits need no `unsafe` at all, because trait selection
// during codegen ignores lifetimes.
impl Foo for String {
    fn make_static(x: &String) -> &'static String {
        // exploits the override to violate lifetime guarantees
        unsafe { std::mem::transmute(x) }
    }
}

The Rust team has explored several partial solutions, including “min specialization” that restricts which properties can be overridden, but no approach has been stabilized. The tracking issue for specialization has accumulated hundreds of comments over ten years, with multiple attempts to scope the feature down enough to be sound.

This is not a failure of engineering effort. It is a structural consequence of trying to graft type-specific behavior onto a system built on parametric guarantees. The borrow checker reasons about lifetimes assuming that generic code behaves uniformly across type instantiations. Specialization violates that assumption in ways that are difficult to contain without either restricting specialization to near-uselessness or reworking core borrow checker invariants.

What Zig Avoids by Never Making the Promise

Zig has no such problem, because it never made the parametric promise. The @typeInfo builtin is the intended mechanism for type-specific behavior:

fn process(comptime T: type, value: T) void {
    switch (@typeInfo(T)) {
        .Int => |info| {
            if (info.signedness == .unsigned) {
                // unsigned-specific path
            }
        },
        .Float => {
            // float-specific path
        },
        else => @compileError("unsupported type: " ++ @typeName(T)),
    }
}

No parametricity was promised, so none is violated. The compiler’s safety analysis does not rely on uniform generic behavior across type instantiations, so there is no property to break when you branch on types. The structural inspection is integral to the system from the start.

The structural serializer is the canonical example of where this shines:

fn serialize(comptime T: type, writer: anytype, value: T) !void {
    switch (@typeInfo(T)) {
        .Int, .Float => try writer.print("{}", .{value}),
        .Struct => |info| {
            inline for (info.fields) |field| {
                try serialize(field.type, writer, @field(value, field.name));
            }
        },
        .Enum => try writer.print("{s}", .{@tagName(value)}),
        else => @compileError("serialize: unsupported type " ++ @typeName(T)),
    }
}

The inline for over info.fields iterates at compile time over the field list. Each iteration produces specialized code for that field’s concrete type. The result is zero-overhead structural serialization with no runtime dispatch and no vtable. The Zig standard library’s std.fmt uses exactly this pattern to handle format strings at compile time.

The equivalent in Haskell requires either typeclass instances that must be explicitly written or derived, or a generics library like GHC.Generics, which encodes structure through a representation type and requires deriving or manual instances. Both approaches maintain parametricity by making the type-specific behavior explicit at the type level. Zig’s approach is more direct, and the cost is that the signature does not reveal that the function branches on type structure.

The Interface Documentation Tradeoff

Both Rust’s trait bounds and Haskell’s typeclasses surface interface requirements at the signature boundary. A caller reading fn process<T: Display + Clone>(x: T) knows before reading the body that the function requires Display and Clone. Adding a new bound is a breaking change that the compiler enforces.

Zig’s equivalent, when written defensively, uses @hasDecl and @compileError:

fn process(comptime T: type, value: T) void {
    comptime {
        if (!@hasDecl(T, "serialize")) {
            @compileError("type " ++ @typeName(T) ++ " must implement serialize");
        }
    }
    value.serialize();
}

This generates a clear error for types that do not implement the expected interface. The Zig standard library uses this pattern for its writer and reader interfaces. But notice where the requirement lives: in the function body, not the signature. A caller reading fn process(comptime T: type, value: T) void has no information about what T must provide. The interface contract is scattered through the implementation rather than surfaced at the signature boundary.

This is not fatal for small codebases with few contributors. For larger codebases, the discipline that prevents undocumented interface requirements must come from code review and convention rather than from the compiler. The requirements can become stale, incomplete, or invisible across module boundaries in ways that a type-checked constraint cannot.

The Spectrum From id :: a -> a to anytype

Zig’s most extreme form is anytype, where the function accepts anything that structurally satisfies its usage, discovered at the call site:

fn printAll(writer: anytype, values: anytype) !void {
    inline for (values) |v| {
        try writer.print("{any}\n", .{v});
    }
}

The signature communicates nothing about accepted types. Compatibility is determined by whether compilation succeeds when the function is instantiated with specific arguments.

This is the far end of a spectrum. Haskell’s id :: a -> a tells you everything: the only total implementation is the identity, so the signature fully specifies the behavior. Zig’s anytype tells you nothing from the signature; you learn what a function accepts by reading its body or by attempting instantiation. Both ends of this spectrum represent coherent design choices, not mistakes, and most practical code sits somewhere in between.

The Design Consequence

Welsh’s framing of comptime as bonkers is accurate within the parametric polymorphism tradition. Comptime violates the property that makes generic type signatures meaningful as behavioral specifications. Stream fusion-style optimizations are not sound in Zig because they require parametricity that does not exist. Code review requires reading function bodies to understand what types a function actually handles.

But Rust’s RFC 1210 shows what happens when you start from parametric guarantees and try to add type-specific behavior afterward. A decade of soundness bugs, RFC revisions, and a feature that remains nightly-only because the core assumption of the surrounding system is violated in ways that propagate further than anyone expected.

Zig’s design is consistent in the other direction: build from type inspection as the primitive. The @typeInfo introspection, the comptime stage, the anytype duck typing, the compile-time recursive struct walking in std.fmt all fit together as a partial evaluation system where the type argument is an inspectable first-class value, not a parametric abstraction. The free theorems from parametricity are real and practically valuable. They are also simply not the contract Zig offers. Rust spent ten years demonstrating that you cannot have both at once.
