Zig Comptime Is Two-Stage Computation, and That Is Why Parametricity Does Not Apply
Source: lobsters
Noel Welsh’s piece on Zig’s comptime and parametricity makes a precise observation: comptime breaks parametric polymorphism, and therefore the free theorems that languages like Haskell derive from type signatures do not apply in Zig. The observation is correct, but the more interesting question is why the design ends up this way. The answer is not that Zig made a mistake or chose flexibility over correctness. It is that comptime belongs to a fundamentally different family of abstraction mechanisms, one whose theoretical roots are in staged computation and partial evaluation rather than in System F or the Hindley-Milner tradition.
Two-Stage Computation
The idea of multi-stage programming formalizes what many languages implement informally. A program with multiple stages has some computation that runs first (the first stage) and produces code or data that a second computation (the second stage) then uses. C macros are a crude version: textual transformation runs before compilation, and the resulting text is what the compiler sees. Template Haskell makes the staging explicit with quote and splice brackets. MetaML, a research language from Taha and Sheard in the late 1990s, provided a type-safe framework for writing programs in two or more stages with static guarantees about which expressions are evaluated at which stage.
Zig comptime fits this model directly. Every Zig program runs in two stages: the comptime stage, where types are resolved, generics are instantiated, and arbitrary code can run, and the runtime stage, where the compiled program executes. The comptime keyword marks which computations belong to the first stage. Types are values in the comptime stage, and the type type is the type of those values.
// This runs entirely at the first (comptime) stage
const MyInt = comptime blk: {
    const bits = 32;
    break :blk @Type(.{ .Int = .{ .signedness = .unsigned, .bits = bits } });
};

// The runtime program only sees the result: a u32
const x: MyInt = 42;
The Zig language reference describes comptime parameters as producing function instantiation for each unique set of comptime arguments, which is monomorphization, but the mechanism is more general than that: arbitrary code can produce types, and those types can feed into subsequent type computations.
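A minimal sketch of that instantiation behavior (the function below is illustrative, not taken from the language reference):

```zig
// Each distinct comptime T produces a separate, fully concrete specialization.
fn max(comptime T: type, a: T, b: T) T {
    return if (a > b) a else b;
}

// max instantiated at u32 and at f64 compiles to two independent functions.
const m1 = max(u32, 3, 5);
const m2 = max(f64, 1.5, 0.5);
```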
Why Partial Evaluation Gets You This
Partial evaluation is the process of specializing a program with respect to some of its inputs, producing a residual program that computes faster because part of the work is already done. The classic example is specializing an interpreter with respect to a fixed program, producing a compiled version. Peter Sestoft’s survey of partial evaluation covers this territory thoroughly; the key results are the Futamura projections, which show that specializing an interpreter to a program yields a compiled program, and that a self-applicable partial evaluator is a compiler generator.
Zig comptime is essentially a partial evaluator that runs during compilation. Given a generic function and a comptime type argument, it specializes the function for that type, producing a residual function that operates only on concrete runtime values. This is why type inspection is natural in comptime: a partial evaluator must inspect the structure of its static inputs to decide what to specialize. If the static input is a type, inspecting that type is how specialization happens.
In this framing, parametricity was never on the table. Partial evaluation is about collapsing the distinction between the static and dynamic stages, using everything known at compile time to reduce runtime work. Parametricity is about maintaining a barrier between the generic abstraction and the concrete type, ensuring the function behaves uniformly. These two goals are in direct tension, and Zig resolves the tension unambiguously in favor of the first.
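The specialization view can be made concrete with the textbook partial-evaluation example, exponentiation with a static exponent (this sketch is illustrative, not from the article): because `n` is a first-stage input, the loop unrolls at comptime and the residual function contains only multiplications.

```zig
// n is static (comptime); x is dynamic (runtime).
fn pow(x: u64, comptime n: u32) u64 {
    var result: u64 = 1;
    // inline for forces the loop to unroll during the comptime stage.
    inline for (0..n) |_| {
        result *= x;
    }
    return result;
}

// pow(x, 3) residualizes to roughly `x * x * x`, with no runtime loop.
```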
The Dependent Types Connection
There is another way to read comptime: as a restricted form of dependent typing. In a dependently typed language like Idris or Agda, types can depend on runtime values, and the same language is used for both type-level and value-level computation. A type Vec n a depends on the natural number n, and functions over vectors can branch on n in their type signatures.
Zig does not allow types to depend on runtime values, but it does allow types to depend on comptime values, which is a subset of the same idea. A function’s return type can be computed from its comptime arguments:
fn returnType(comptime T: type) type {
    if (@typeInfo(T) == .Int) {
        return T;
    } else {
        return void;
    }
}

fn conditionalReturn(comptime T: type, x: T) returnType(T) {
    if (@typeInfo(T) == .Int) {
        return x;
    }
}
The return type is a function of the comptime argument. This is exactly what dependent function types express in Idris, restricted to the comptime fragment. Once you have this, the uniform behavior that parametricity requires becomes impossible to maintain: the function’s output type can change depending on the type input, so the function obviously cannot be uniform across type instantiations.
The Zig standard library uses this pattern extensively in std.meta, which provides utilities for type-level computation: finding the tag type of a union, extracting the child type of a pointer, generating tuple types from field lists. These are all functions from types to types, computed at comptime.
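A small sketch of that pattern (the `std.meta` functions named here exist, but the combination is illustrative):

```zig
const std = @import("std");

// Extract the child type of a pointer: a function from types to types.
const Child = std.meta.Child(*u32); // u32

// Extract the tag type of a tagged union.
const Tag = std.meta.Tag(union(enum) { a: u8, b: u16 });

comptime {
    std.debug.assert(Child == u32);
}
```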
What You Can Build That Parametric Languages Cannot
The trade is real and it runs in both directions. Parametric polymorphism enables free theorems and compiler optimizations like stream fusion. Comptime enables a different class of abstraction.
The most immediate example is structural serialization. A serializer that handles integers, floats, structs, enums, and unions differently, recursing through struct fields automatically, requires type inspection:
fn serialize(comptime T: type, writer: anytype, value: T) !void {
    switch (@typeInfo(T)) {
        .Int, .Float => try writer.print("{}", .{value}),
        .Bool => try writer.print("{}", .{value}),
        .Struct => |info| {
            inline for (info.fields) |field| {
                try serialize(field.type, writer, @field(value, field.name));
            }
        },
        .Enum => try writer.print("{s}", .{@tagName(value)}),
        else => @compileError("serialize: unsupported type " ++ @typeName(T)),
    }
}
The inline for over info.fields iterates at comptime over the compile-time field list. Each iteration generates specialized code for that field’s type. The result is zero-overhead structural serialization with no runtime type dispatch and no vtable. The equivalent in Haskell requires either a typeclass instance that must be explicitly written or derived, or a generics library like GHC.Generics, which encodes structure in a representation type and requires deriving a Generic instance to tie the datatype to that representation. Both work. Zig’s version is more direct, and the cost is that the signature does not tell you the function branches on type structure.
A second class of application is interface generation. Zig has no runtime interfaces or virtual dispatch, but you can construct interface-style dispatch tables at comptime:
fn Allocator(comptime Impl: type) type {
    return struct {
        impl: Impl,

        pub fn alloc(self: @This(), n: usize) ?[]u8 {
            return self.impl.alloc(n);
        }
    };
}
This generates a wrapper struct at comptime that holds a concrete implementation and exposes a fixed interface. Zig’s standard library allocator design uses a similar pattern. The concrete implementation is monomorphized at the call site; no virtual dispatch occurs. Generics in parametric languages cannot generate new types from type arguments in this way; the type argument is opaque.
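Instantiating the wrapper might look like this (the implementation type is a hypothetical stub, added for illustration):

```zig
// A hypothetical concrete implementation exposing the expected alloc method.
const StubImpl = struct {
    pub fn alloc(self: StubImpl, n: usize) ?[]u8 {
        _ = self;
        _ = n;
        return null; // stub body for illustration only
    }
};

// Monomorphized at comptime: a distinct wrapper type per implementation.
const StubAllocator = Allocator(StubImpl);
```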
The Cost at Scale
The loss of parametricity matters most in one specific situation: reading unfamiliar code written by someone else. In a parametric language, the type signature of a generic function bounds its behavior. You do not need to read the body to know that reverse :: [a] -> [a] cannot manufacture elements or inspect their values. That constraint accelerates code review and supports equational reasoning in documentation.
In Zig, the signature fn process(comptime T: type, x: T) T does not tell you whether the function returns x unchanged, increments it if T is numeric, serializes it, or calls a method on it. You read the body. In a small codebase, this is a minor inconvenience. In a large one with many contributors and many generic utilities, the discipline that prevents unexpected type-specific branches must come from code review convention and documentation rather than from the type system.
The Zig community addresses this partly through @compileError annotations that generate explicit errors when a type lacks a required capability, and partly through documentation that describes what a function expects from its type parameter. These work, but they are softer than a type-checked constraint in a function signature. The Zig standard library’s writer interface documents its requirements in comments, but the compiler does not enforce that a caller reads them.
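The convention can be sketched like this (the function and requirement here are illustrative): check at comptime that the type parameter has the declaration the function needs, and emit a readable error when it does not.

```zig
// Fail at the comptime stage, with a named diagnostic, rather than deep
// inside the function body when the missing method is first called.
fn useWriter(comptime W: type, w: W) void {
    comptime {
        if (!@hasDecl(W, "writeAll")) {
            @compileError(@typeName(W) ++ " must provide a writeAll method");
        }
    }
    _ = w;
}
```

This is still a body-level check, not a signature-level one: the constraint is enforced, but a reader must open the function to discover it.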
The Framing That Resolves the Surprise
When Welsh describes comptime as bonkers from a type-theoretic perspective, he is measuring it against the parametric polymorphism tradition, where that characterization is accurate. Comptime is not a variant of parametric generics with some extra features bolted on. It is a different mechanism from a different theoretical tradition, one where types are first-class comptime values, generic functions are specializations produced by a partial evaluator, and the type argument is as inspectable as any other comptime value.
The languages closest to Zig in this design space are not Haskell or Rust. They are MetaML, Template Haskell, and to some extent Lisp macros: systems where the first stage of computation produces the second stage, and the first stage has full access to the structure of its inputs. The free theorems that parametric languages provide are real and valuable. They are also simply not the contract that comptime offers, and evaluating comptime against that contract produces the wrong conclusions in both directions.