There is a quiet but growing tension in open source communities between the convenience of AI-assisted coding and the demands of correctness in safety-critical software. Redox OS just picked a side.
The project recently updated its contributing guidelines to include both a Developer Certificate of Origin requirement and a blanket prohibition on LLM-generated code. No exceptions. If you used a language model to write or substantially assist with a patch, that patch is not welcome.
Why This Is Different From the Usual Debate
Most arguments about LLM code quality center on things like hallucinated APIs, subtle logic errors, or style inconsistencies. Those are real problems, but they are recoverable. You catch them in review, you fix them, you move on.
Redox OS is not writing CRUD apps. It is an operating system built in Rust, designed from the ground up around microkernel principles and memory safety guarantees. The code that goes into Redox is code that runs at the boundary between hardware and everything else. A subtle memory safety bug that slips through is not a bad day — it is a CVE and a broken trust model.
LLMs are, at their core, pattern matchers trained on vast amounts of code, much of which is mediocre or outright wrong. They are good at producing plausible-looking output. In systems programming, plausible-looking and correct are not the same thing, and the gap between them can be invisible until it is not.
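To make the "plausible versus correct" gap concrete, here is a small illustrative sketch (my own example, not Redox code): two midpoint calculations that look interchangeable and agree on ordinary inputs, but diverge at the edges of the integer range — the classic overflow bug that survived in binary search implementations for decades.

```rust
// Hypothetical illustration, not from the Redox codebase.

fn mid_plausible(lo: usize, hi: usize) -> usize {
    // Looks obviously right, and works for small inputs...
    // ...but lo + hi can overflow usize near the top of the range.
    (lo + hi) / 2
}

fn mid_correct(lo: usize, hi: usize) -> usize {
    // Never overflows when lo <= hi.
    lo + (hi - lo) / 2
}

fn main() {
    // For ordinary values the two agree, so review and tests pass:
    assert_eq!(mid_plausible(10, 20), 15);
    assert_eq!(mid_correct(10, 20), 15);

    // Near usize::MAX only the second version is correct; the first
    // panics in debug builds and wraps around in release builds.
    let lo = usize::MAX - 4;
    let hi = usize::MAX;
    assert_eq!(mid_correct(lo, hi), usize::MAX - 2);
}
```

In application code this class of bug is an annoyance; in kernel code, where such arithmetic might index into page tables or DMA buffers, "agrees on ordinary inputs" is exactly the kind of plausibility that hides the failure.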
The Auditability Problem
There is another angle here that does not get discussed enough: auditability. When a human writes a patch, you can ask them why. Why did you choose this approach? What did you consider and reject? What invariants are you relying on? The author can answer those questions because they went through the reasoning process.
With LLM-generated code, the author often cannot answer those questions. They ran a prompt, got output, reviewed it well enough to feel comfortable submitting it, and moved on. The reasoning is opaque by construction. For a project like Redox, where reviewers need to deeply understand the implications of every change, that opacity is a real cost.
The Developer Certificate of Origin requirement compounds this: the DCO is a legal attestation that you wrote the code yourself, or otherwise have the right to submit it under the project's license. LLM-generated code sits in a murky space with respect to copyright and provenance, and an author who pasted in model output cannot honestly make that attestation. Redox is being clear about where it stands.
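For readers unfamiliar with how DCO enforcement works in practice, the standard mechanism (used by the Linux kernel and many other projects; I am assuming Redox follows the same convention — check their contributing docs) is a Signed-off-by trailer that git appends for you:

```shell
# Sign off a commit; the -s flag appends a "Signed-off-by:" trailer
# taken from your configured git identity:
git commit -s -m "fix scheduler wakeup race"

# The resulting commit message ends with a line like:
#   Signed-off-by: Your Name <you@example.com>
# which is the attestation CI checks for before a patch is accepted.
```

The trailer is deliberately lightweight: there is no signature or ceremony, just a recorded claim of provenance that can be pointed to later.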
A Policy Worth Taking Seriously
I use AI tools in my own work constantly. They are genuinely useful for boilerplate, for exploring unfamiliar APIs, for getting unstuck. I am not going to pretend otherwise.
But I also work primarily on Discord bots and application-layer software where the blast radius of a mistake is limited. Redox is operating in a completely different regime. The people building it have made a judgment call that the productivity gains from LLM assistance are not worth the correctness and auditability risks in their context.
That is a reasonable judgment. In fact, I think it is the right one for a project with Redox’s goals. The interesting question is whether more safety-critical open source projects — embedded systems, cryptographic libraries, OS kernels — will adopt similar policies as the broader community accumulates more experience with where LLM-generated code actually fails.
The Redox team is not being reactionary. They are being precise about what their software requires, which is exactly the mindset you want from people building an operating system.