
Redox OS Bans LLM Code: A Policy Worth Taking Seriously

Source: hackernews

Redox OS — the Rust-based microkernel project aiming to be a genuinely secure, Unix-compatible operating system — has updated its contributing guidelines with two notable additions: a Developer Certificate of Origin (DCO) requirement and an explicit ban on LLM-generated code contributions.

The HN thread has the usual mix of takes, but I think this decision deserves more credit than the reflexive “gatekeeping” narrative gives it.

What They’re Actually Requiring

The DCO is the same mechanism the Linux kernel has used for years. By signing off on a commit, you’re certifying:

  • You wrote the code yourself, or have the right to submit it
  • It’s being contributed under the project’s license
  • You understand this sign-off is a legal declaration

That’s not onerous. That’s baseline accountability. The LLM ban makes the boundary explicit: code generated by a language model doesn’t qualify as “written by you” in the sense the DCO requires.
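Mechanically, the sign-off is a single flag. A minimal sketch, using a hypothetical repo and identity (assumes git is installed):

```shell
# Minimal DCO sign-off demo (hypothetical repo, name, and email).
git init -q dco-demo && cd dco-demo
git config user.name "Jane Doe"
git config user.email "jane@example.com"
echo "fn main() {}" > main.rs
git add main.rs
git commit -q -s -m "add main"   # -s appends the Signed-off-by trailer
git log -1 --format=%B           # ends with: Signed-off-by: Jane Doe <jane@example.com>
```

The trailer is what carries the legal weight: every commit in the history records who certified its origin.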

Why This Makes Sense for an OS Project

People tend to dismiss bans like this as moral panic, but the reasoning here is actually pretty concrete.

Systems code fails badly. A hallucinated API in a web app gives you a runtime error. A hallucinated memory safety assumption in a microkernel gives you a security vulnerability that survives boot. Redox’s entire value proposition is correctness and security. LLMs are confidently wrong at a rate that’s acceptable for many contexts and catastrophic for this one.

License contamination is unresolved. The legal status of LLM training data and the code it produces is still genuinely unsettled. A project that wants clear provenance on every line — especially one that might eventually be used in embedded or critical systems — has good reason to avoid that ambiguity entirely.

Rust already raises the floor. Redox benefits from Rust’s memory safety guarantees. The Rust compiler is a brutally honest reviewer. But LLM-generated Rust code can pass the compiler while still being logically wrong, algorithmically inefficient, or subtly incorrect in ways that only show up under specific conditions. Compiling isn’t the same as correct.
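To make that last point concrete, here is a hypothetical sketch of the failure mode (the function and values are invented for illustration, not from Redox): the code is memory-safe and compiles without warnings, yet the bounds logic is wrong in exactly the way a reviewer has to catch by reading, not by compiling.

```rust
// Hypothetical example: memory-safe, compiles cleanly, logically wrong.
// Intended to report whether the byte range `addr..addr + len` fits
// inside a region of `size` bytes. The off-by-one comparison accepts
// a range that runs one byte past the end, and `addr + len` can
// overflow for large inputs (a panic in debug builds, a wrap in release).
fn range_in_bounds(addr: usize, len: usize, size: usize) -> bool {
    addr + len <= size + 1 // should be: len <= size && addr <= size - len
}

fn main() {
    // Accepts a 1-byte access just past the end of a 4096-byte region.
    assert!(range_in_bounds(4096, 1, 4096));
    println!("compiled, ran, and accepted an out-of-bounds range");
}
```

Borrow checking and type checking never see this bug; in kernel code, that one byte is the difference between correct and exploitable.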

The Broader Question

This is going to keep coming up. As LLM-assisted development becomes the default for most developers, projects that care about provenance, copyright clarity, or code quality are going to have to make explicit choices. “We didn’t think about this” is going to be a worse answer every year.

I build Discord bots, not kernels — so my personal risk tolerance for LLM-assisted code is higher. I use it for scaffolding, for boilerplate, for first drafts I then actually read. But I also wouldn’t merge a chunk of LLM output into a bot’s message handler without understanding every line, because subtle logic errors in async event handling are the kind of thing that bites you three weeks later at 2am.

The principle scales. If I apply that scrutiny to a hobby bot, a security-focused OS project applying it to kernel contributions seems… obvious, actually.

Redox OS isn’t banning LLMs because they’re scared of the future. They’re banning them because they know what they’re building and what it needs to be. That’s not gatekeeping — that’s engineering judgment.
