
Comptime Breaks the Promise That Type Signatures Make

Source: lobsters

Parametricity is a property of polymorphic functions: a function polymorphic over a type variable must treat that variable uniformly. It cannot inspect what the type is, cannot branch on it, cannot manufacture values of that type from nothing. This sounds like a limitation, but it is actually a source of power. When a function is forced to treat its type blindly, its type signature becomes a surprisingly tight behavioral specification.

Philip Wadler’s 1989 paper “Theorems for Free!” formalized this intuition, building on Reynolds’s 1983 relational interpretation of polymorphic types. The core result: you can derive theorems about a function’s behavior from its type alone, without reading the implementation, purely because parametricity constrains what an implementation can do.

The canonical example is id :: a -> a in Haskell. The only total implementation is the identity function. Given a type variable a, the function cannot construct a new a because it knows nothing about what a is, so it must return exactly the value it was given:

id :: a -> a
id x = x

More usefully, a function reverse :: [a] -> [a] must commute with any mapping over its element type. If f :: A -> B, then:

map f (reverse xs) == reverse (map f xs)

This theorem follows from the type alone. reverse cannot inspect the elements because it has no knowledge of what a is, so any rearrangement it performs is structurally independent of the elements’ values, which means element transformations commute with it. GHC exploits this: stream fusion and rewrite rules like map f . map g = map (f . g) are safe compiler optimizations precisely because parametricity guarantees they hold for any function with those types. Remove parametricity and these optimizations become unsound.
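The theorem is easy to check mechanically. Here is a minimal sketch in Rust, whose unconstrained generics are also parametric; my_reverse is a hypothetical stand-in for reverse, and the assertion is one instance of the free theorem, not a proof of it:

```rust
// A parametric reverse: it cannot inspect its elements, so any
// element-wise transformation must commute with it.
fn my_reverse<T>(xs: Vec<T>) -> Vec<T> {
    xs.into_iter().rev().collect()
}

fn main() {
    let xs = vec![1, 2, 3, 4];
    let f = |n: i32| n.to_string(); // f : A -> B

    // map f (reverse xs) == reverse (map f xs)
    let lhs: Vec<String> = my_reverse(xs.clone()).into_iter().map(f).collect();
    let rhs: Vec<String> = my_reverse(xs.into_iter().map(f).collect());
    assert_eq!(lhs, rhs);
    println!("free theorem holds: {:?}", lhs);
}
```

The equality holds for every f with that shape, for the structural reason the paragraph above gives: my_reverse rearranges positions, never values.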

What Zig Does Instead

Zig’s documentation on comptime describes comptime as a unified mechanism for generics, conditional compilation, and compile-time computation. At its center is a simple fact: types in Zig are first-class values of type type. You can pass them to functions, store them in variables, and compute with them at compile time.

The builtin @typeInfo takes a value of type type and returns a tagged union describing its structure: .Int with a signedness and bit width, .Float with a bit width, .Struct with a slice of fields each carrying a name and type, .Union, .Enum, .Pointer, and so on. A generic function with an anytype or comptime T: type parameter can call @typeInfo(T) and switch on the result to produce entirely different behavior for different types.

This is valid Zig, and it is the intended usage pattern:

fn printInfo(comptime T: type, value: T) void {
    const info = @typeInfo(T);
    switch (info) {
        .Int => std.debug.print("integer: {}\n", .{value}),
        .Float => std.debug.print("float: {}\n", .{value}),
        .Bool => std.debug.print("bool: {}\n", .{value}),
        else => @compileError("unsupported type for printInfo"),
    }
}

The function signature fn printInfo(comptime T: type, value: T) void looks structurally similar to a parametric generic. It is not. It branches on T, and two instantiations with different types produce different behavior. The type signature gives you no indication of that. Parametricity is gone.

Noel Welsh observes that this makes Zig’s generics remarkable from a type-theoretic standpoint: in a language with parametricity the type signature is a contract, while in Zig it is a shape. Those are meaningfully different things.

The std.fmt Case

The canonical example of what comptime buys you is std.fmt.print. Format strings in C are stringly typed: printf("%d", value) relies on runtime agreement between the format specifier and the argument type, and getting it wrong produces undefined behavior. Rust’s println! macro solves this by expanding to format_args!, a compiler built-in that validates format strings against argument types at compile time; the check is wired into the compiler, not expressed as a library feature.

Zig’s std.fmt.print is neither. It is written in ordinary Zig using comptime type inspection. The format function iterates over format string specifiers at compile time, calls @typeInfo on each argument type, and dispatches to the appropriate formatting logic. User code can define a format method on a custom type to control its serialization. This is entirely a library feature, not compiler magic, and the fact that it works at all is a direct consequence of types being first-class comptime values.

In Haskell you achieve something similar with the Show typeclass, but the constraint must appear in the function signature: show :: Show a => a -> String. In Rust you use std::fmt::Display as a trait bound: fn print_value<T: Display>(value: T). Both approaches keep the type-specific behavior constrained by an interface that is visible in the signature. Zig’s version requires no annotation. The std.fmt.print signature accepts anytype arguments, and all type-specific behavior lives inside the function body. This is more ergonomic for callers and more flexible for implementers, at the cost of making the signature uninformative about what the function accepts or how it behaves.

C++ Templates: The Precedent

Zig is not the first systems language to break parametricity. C++ templates have never been parametric. Template specialization allows a completely different implementation for a specific type instantiation:

template<typename T>
T process(T x) { return x; }

template<>
int process<int>(int x) { return x + 1; }

process<int> and process<double> are different functions with different behavior, connected only by name. if constexpr makes the branching inline and co-located with the general case:

template<typename T>
T process(T x) {
    if constexpr (std::is_same_v<T, int>) {
        return x + 1;
    }
    return x;
}

The C++ template system grew via accretion across decades of standards, absorbing type traits, SFINAE, concepts, and if constexpr as separate facilities. The metalanguage for template metaprogramming is famously distinct from ordinary C++, with different syntax, different error conventions, and a different mental model. Template errors spanning hundreds of lines are a running joke in the community precisely because the metalanguage is so distant from the object language.

Zig makes the same fundamental choice but with intentional design. There is no separate metalanguage. The same language that writes runtime code writes compile-time code, and the comptime keyword signals where execution happens. The trade-off with parametricity is the same as in C++, but the mechanism is coherent rather than a layer of features accumulated over thirty years.

Rust’s Explicit Cost

Rust occupies a different position. Its generics are parametric by default. An unconstrained generic function cannot inspect T without explicit opt-in:

fn identity<T>(x: T) -> T {
    x
}

When you need type-specific behavior, you declare a trait and add a bound:

use std::fmt::Display;

fn print_value<T: Display>(value: T) {
    println!("{}", value);
}

The bound T: Display is visible in the signature. A reader knows immediately that this function requires Display from its type argument, and the type-specific behavior is constrained by that interface, not by arbitrary inspection. The function is parametric within the constraint.

Rust provides escape hatches: std::any::TypeId enables runtime type comparison, const generics add limited value-level dispatch, and procedural macros generate type-specific code at compile time. Each has a syntactic cost; you must explicitly reach for them, and that reach is visible in the code. The syntactic cost is the signal that you are departing from the default parametric discipline.
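A minimal sketch of the TypeId escape hatch; describe is a hypothetical function, and the point is that the use std::any import and the Any bound make the departure from parametric discipline visible right in the source:

```rust
use std::any::{Any, TypeId};

// A generic function that deliberately breaks parametric discipline:
// it compares TypeIds at runtime to special-case i32. Unlike Zig's
// @typeInfo branching, the escape hatch is announced in the signature.
fn describe<T: Any>(_value: &T) -> &'static str {
    if TypeId::of::<T>() == TypeId::of::<i32>() {
        "a 32-bit integer"
    } else {
        "something else"
    }
}

fn main() {
    println!("{}", describe(&5i32));  // a 32-bit integer
    println!("{}", describe(&1.5f64)); // something else
}
```

Note that TypeId compares types at runtime, so this is strictly weaker than Zig’s comptime dispatch: the branch is not resolved away during compilation.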

The Rust community has been debating specialization via RFC 1210 since 2015. The proposal would allow a specific impl to override a general one, enabling type-specific behavior inside what appears to be a generic context. It has remained unstable for roughly a decade in part because impl selection under specialization can depend on lifetimes, which the compiler erases before code generation, so a naive implementation can pick the wrong impl and produce unsoundness. Maintaining parametricity while adding type-specific dispatch is a harder problem than it looks, and the long stall is evidence for that.

What the Trade-Off Actually Costs

The practical cost of losing parametricity is not academic. In a parametric language, a generic function’s type signature bounds its behavior. When reviewing unfamiliar code, the space of things a function can do is constrained by its type. A function f :: [a] -> [a] is some list transformation: it cannot double every integer element, because it does not know the elements are integers.

In Zig, a function fn process(comptime T: type, xs: []T) []T can do anything to its elements because it can inspect T and branch accordingly. Every generic function is a potential source of type-specific behavior. You cannot audit it by reading the signature; you must read the body. In a large codebase with many contributors, this compounds: a function written as a uniform transformation can acquire type-specific branches over time, and nothing in the type system flags the change as meaningful.

The discipline that prevents this accumulation must come from code review and documentation rather than from the compiler. Zig’s @compileError can document required interfaces:

comptime {
    if (!@hasDecl(T, "serialize")) {
        @compileError("type T must implement serialize");
    }
}

This is useful, but it is convention enforced by the author, not a structural property of the type system, and it does not appear in the function signature.
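For contrast, here is a sketch of the trait-based equivalent in Rust, using a hypothetical Serialize trait: the same requirement lives in the signature, where the compiler, not the author, enforces it:

```rust
// Hypothetical Serialize trait: the interface requirement is part of
// the function's type, not an author-maintained @compileError check.
trait Serialize {
    fn serialize(&self) -> String;
}

impl Serialize for i32 {
    fn serialize(&self) -> String {
        self.to_string()
    }
}

// Callers can see from the bound alone what `save` demands of T.
fn save<T: Serialize>(value: &T) -> String {
    value.serialize()
}

fn main() {
    println!("{}", save(&42)); // prints "42"
}
```

Calling save on a type without a Serialize impl is rejected at the call site with an error phrased in terms of the missing bound, rather than inside the function body at instantiation time.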

The exchange Zig makes is explicit: type signatures as behavioral specifications versus zero-cost compile-time dispatch in ordinary code. For systems programming, the second is often more valuable. Serialization, format strings, SIMD dispatch, protocol parsers: these all benefit from direct type inspection without the indirection of trait objects or runtime dispatch. The reasoning guarantees that parametricity provides matter most for functional pipelines and container abstractions where equational reasoning drives correctness and compiler optimization. Those are not Zig’s primary targets.

The deeper point is that “generic” is not a single property. Parametric polymorphism, Rust trait dispatch, C++ template instantiation, and Zig comptime are different mechanisms with different reasoning properties. Knowing which one you are working with changes what you can infer from a signature. Zig’s comptime does not pretend to a parametricity it does not provide, and the type inspection it enables is syntactically present in every function body that uses it. The free theorems are gone, but the mechanism is legible and uniform. For systems programming, that is a coherent trade.
