Noel Welsh’s recent piece on comptime frames the issue through parametricity, the property from type theory that says a function polymorphic over a type must behave uniformly across all instantiations. Zig’s comptime violates this systematically, and the violation is intentional. The parametricity framing is useful, but there is another way to approach the same issue that reveals more about the practical consequences: thinking about what type signatures communicate, and what they do not.
Two Traditions
Generic programming has two intellectual lineages that have been developing in parallel for decades, and they reach different conclusions about what a type parameter means.
The first runs through ML, Haskell, and Rust. In Hindley-Milner type inference, a universally quantified type variable is genuinely opaque: the function has no mechanism to inspect which type it received. Philip Wadler’s 1989 paper “Theorems for Free!” formalized the consequence: a function’s type signature alone is sufficient to derive theorems about its behavior. Haskell’s typeclasses and Rust’s traits extend this with constrained polymorphism, where a function can require specific capabilities from its type arguments and those requirements must appear in the signature. The signature is a behavioral contract. It tells you what the function can and cannot do with its arguments.
The second lineage runs through C++ templates, D’s template constraints, and Nim’s generics. Here, generic functions are compile-time code generation. Each instantiation produces different code, and the code can vary arbitrarily based on the type argument. C++ built an entire metaprogramming ecosystem on this: if constexpr, SFINAE, type traits, and template specialization all depend on the ability to inspect types at compile time and produce different behavior. The function’s signature tells you the types that flow in and out, but not what the function does with them.
Zig’s comptime is a member of this second tradition, designed with more coherence than C++ accumulated. The Zig documentation presents comptime as a unified replacement for what C++ handles through templates, macros, and conditional compilation. The model is consistent and the syntax is readable, but the fundamental property is the same: compile-time specialization without parametric guarantees.
Anytype as Structural Duck Typing
The most direct expression of this in Zig is anytype. A function like:
fn printValue(writer: anytype, value: anytype) !void {
    try writer.print("{}\n", .{value});
}
accepts any type for both parameters and resolves which operations are valid at the call site. This is structurally identical to Python’s duck typing, except the check happens during compilation. The function makes no promises about what it will accept before you try to call it. If you call it with a type that does not support print, you get a compile error pointing into the function body, not to any declaration of what was required.
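As a sketch of what this looks like at call sites (repeating printValue so the block is self-contained; std.io.getStdOut is the pre-0.15 standard library API):

```zig
const std = @import("std");

// The duck-typed function from above, repeated for self-containment.
fn printValue(writer: anytype, value: anytype) !void {
    try writer.print("{}\n", .{value});
}

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();
    // Each call site produces a fresh instantiation for its argument types.
    try printValue(stdout, @as(u32, 42));
    try printValue(stdout, @as(f64, 2.5));
    // Uncommenting the next line fails to compile -- but the error points
    // at the try line inside printValue's body, not at any declared interface.
    // try printValue(@as(u8, 0), 42);
}
```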
The more constrained form uses comptime T: type, which looks like it adds structure but does not change the fundamental model:
fn process(comptime T: type, value: T) T {
    return value;
}
The signature fn process(comptime T: type, value: T) T is compatible with an identity function, a function that branches on specific types, a serializer, or anything else that takes a value and returns a value of the same type. The reader gains nothing from the signature beyond the shapes of the types that flow in and out.
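To make that concrete, here is a second body that satisfies the same signature; the special case is invisible to anyone reading only the declaration (a hypothetical sketch, not from any real library):

```zig
// Same signature as the identity function, different behavior: because
// T == u32 is comptime-known, only the taken branch is ever analyzed.
fn process(comptime T: type, value: T) T {
    if (T == u32) return value *% 2; // hidden special case for u32
    return value;
}
```

A parametric type system rules this body out by construction; in Zig, only code review catches it.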
In Rust, the equivalent has a different relationship with its signature:
fn process<T: Clone + Debug>(value: T) -> T {
    value
}
The bounds Clone + Debug are part of the public contract. Adding a bound is a breaking change. Removing one expands the set of valid callers. The compiler enforces this bidirectionally: the function body can only use capabilities declared in the bounds, and callers know in advance what types are accepted. The signature and the implementation are coupled by design.
The @hasDecl Workaround
Zig has a convention for working around the absence of formal interface declarations. Functions that require specific capabilities can validate them at compile time:
fn serialize(comptime T: type, writer: anytype, value: T) !void {
    comptime {
        if (!@hasDecl(T, "writeFields")) {
            @compileError("Type " ++ @typeName(T) ++ " must implement writeFields");
        }
    }
    // ...
}
The Zig standard library uses this pattern in several places. It generates a useful error message when a requirement is not met. What it does not do is surface the requirement in the function signature, where a reader scanning an API would naturally look for it.
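For illustration, a type that passes the check above might look like this (Point is a hypothetical example, not taken from the standard library):

```zig
const Point = struct {
    x: i32,
    y: i32,

    // Satisfies the @hasDecl(T, "writeFields") check in serialize.
    pub fn writeFields(self: Point, writer: anytype) !void {
        try writer.print("x={} y={}\n", .{ self.x, self.y });
    }
};
```

Calling serialize(Point, writer, p) compiles; calling serialize(i32, writer, 5) hits the @compileError. Nothing in serialize's signature distinguishes the two cases in advance.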
The pattern also scatters interface documentation through function bodies rather than consolidating it at the signature boundary. When a library evolves, a requirement can be silently added by inserting a new @hasDecl check inside an existing function. The change is invisible in the signature, which means any documentation of the API surface that does not reparse the implementation may be out of date without any obvious signal.
anytype is the most extreme version of this. A function accepting anytype makes no formal promises about what it will accept. Every call site is a fresh instantiation, and compatibility is discovered through compilation rather than through an explicitly declared interface. This works well for small utilities, internal adapters, and code where the author and the caller are the same person. It becomes harder to manage when an API is shared between teams or published as a library.
What Optimizers Can Derive from Parametricity
There is a less commonly discussed consequence of losing parametricity: compilers in the parametric tradition can use type signatures to justify optimizations that would be unsound otherwise.
GHC’s stream fusion relies on this. The rewrite rule map f . map g = map (f . g) is provably valid for any functions f and g because map is parametric: it cannot inspect its elements, so any transformation it applies to the list structure must commute with element transformations. GHC applies this rule during optimization without knowing what f and g do. The proof follows from the types alone, and it holds for every possible implementation.
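The theorem Wadler's paper opens with is the cleanest statement of why such proofs need no knowledge of the implementation: any function f with the rearrangement type ∀a. [a] → [a] must commute with element-wise mapping.

```latex
% Free theorem for f : forall a . [a] -> [a]  (Wadler 1989, opening example)
\forall g.\quad \mathrm{map}\ g \circ f \;=\; f \circ \mathrm{map}\ g
```

Instantiating f as reverse, tail, or any other element-agnostic rearrangement yields the commutation law for free; no inspection of f's body is required, because the type guarantees f cannot look at the elements.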
Zig’s optimizer cannot reason at this level. Each comptime instantiation is a concrete function, and the compiler reasons about it concretely via LLVM. This is not a serious practical limitation for most systems programming tasks, where the optimizer sees the fully specialized code and applies standard optimizations to it. But it means certain categories of algebraic rewrite rules, specifically the ones that hold because a function cannot distinguish between types or values, are not available as a source of optimization.
When the Tradeoff Pays Off
The cases where type inspection is genuinely necessary are not edge cases for Zig’s target domain. A generic serializer that handles integers, floats, structs, and enums differently is better expressed with @typeInfo than with any approach that maintains parametricity. Zig’s standard library std.fmt is the canonical demonstration: it evaluates format strings at compile time, generating specialized formatting code for each type it encounters, with no virtual dispatch and no trait object overhead.
Rust handles this through the Display trait for types that opt in, and through procedural macros for deriving implementations. Both work well, but they represent design decisions made in advance. If you want to format a type from a library that does not implement Display, you need a newtype wrapper. Zig’s comptime handles the case directly: if the type has the structure you need, you can serialize it regardless of whether its author anticipated your use case.
The Rust specialization RFC has been open since 2015, attempting to allow type-specific behavior within trait implementations. It remains unstable because specialization and parametricity interact in ways that can threaten soundness, and the interaction has produced real unsoundness bugs in nightly Rust. Zig does not have this problem because it does not have parametricity to preserve. The cost is fixed and known; the benefit is a coherent mechanism without the correctness traps that have stalled Rust’s approach for years.
Where the tradeoff is most visible is in large codebases with multiple contributors. Parametricity’s value as enforced documentation compounds with scale. In a parametric language, a generic function is known to be uniform: no type-specific branches, no hidden special cases. Auditing it means understanding its algorithm, not cataloging its type-specific behavior. Without parametricity, every generic function is a potential accumulation of special cases, and the discipline preventing that accumulation comes from code review and convention rather than from the compiler.
Noel Welsh’s framing, that comptime is “bonkers” from a type theory perspective, is accurate. The more complete picture is that Zig is making a deliberate bet: the expressiveness and simplicity of compile-time code execution with full type knowledge is worth more to its users than the formal behavioral contracts that parametricity provides. For systems programming tasks where you regularly need different code for different types, that bet is well-placed. For library APIs at scale, the cost is real, and the tools for managing it are conventions and documentation rather than language-enforced interfaces.