
Rust Wants Into Safety-Critical. The Language Is Ready. The Ecosystem Isn't.


There’s a fascinating gap between “this language prevents the bugs that kill people” and “this language is approved for use in systems that could kill people.” The Rust team’s latest Vision Doc post on shipping Rust in safety-critical software makes that gap painfully clear.

The engineers they interviewed — working across automotive, aerospace, industrial, and medical — all said roughly the same thing: Rust’s compiler-enforced guarantees are genuinely compelling. Memory safety, no undefined behavior, fearless concurrency. These aren’t marketing bullet points in this context. They’re exactly what Functional Safety Engineers spend careers trying to enforce manually through process, review, and static analysis.

But the moment you move from prototype to a higher-criticality component, the scaffolding falls away.

What’s Actually Missing

The post names three categories of gaps that keep coming up:

Toolchain qualification. In domains governed by standards like ISO 26262 (automotive) or DO-178C (aerospace), it’s not enough for your code to be correct — your compiler has to be certified or qualified for the integrity level you’re targeting. rustc isn’t there yet for most contexts.

RTOS and platform support. There’s no AUTOSAR Classic-compatible runtime with first-class Rust support. No OSEK-compliant scheduler. These aren’t gaps you fill with a weekend project. They’re multi-year, heavily regulated engineering efforts.

Tooling for certification. MATLAB/Simulink generates C code that feeds directly into established certification workflows. There’s no Rust equivalent. The model-based design tooling that automotive and aerospace shops have built entire pipelines around simply doesn’t have a Rust path.

The Interesting Part

What I find genuinely surprising is how close the language already is. The things Rust prevents by construction — data races, use-after-free, buffer overflows — are exactly the classes of defects that safety standards exist to eliminate. In that sense, Rust is arguably better suited for this work than C, which remains dominant purely because of ecosystem and institutional inertia.
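The post itself contains no code, but the "by construction" claim is concrete enough to sketch. The snippet below is illustrative only (the names and values are mine, not from the Vision Doc): the use-after-free variant is shown as a comment because the compiler rejects it outright, and the out-of-bounds access is handled through a checked API rather than silent memory corruption.

```rust
// Illustrative sketch of the defect classes safe Rust rules out.
// Hypothetical names; not from the Rust post.

fn consume(s: String) -> usize {
    // Taking `String` by value moves ownership into this function;
    // the caller's binding is dead afterward.
    s.len()
}

fn main() {
    // Use-after-free: `s` is moved into `consume`, ending its lifetime.
    let s = String::from("sensor reading");
    let len = consume(s);
    // println!("{s}"); // ERROR: borrow of moved value `s` -- rejected at compile time
    assert_eq!(len, 14);

    // Buffer overflow: out-of-bounds access via `get` returns None
    // instead of reading past the buffer; direct indexing would panic,
    // never silently corrupt memory.
    let buf = [0u8; 4];
    assert!(buf.get(10).is_none());

    // Data race: sharing mutable state across threads requires Send + Sync
    // (e.g. Arc<Mutex<T>>); two threads capturing the same `&mut` simply
    // does not compile.
}
```

Process, review, and static analysis in C aim to catch exactly these patterns after the fact; here they are type errors or checked operations before the binary exists.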

The irony is that Rust’s hardest remaining problem isn’t technical. It’s bureaucratic. Certification bodies move slowly. Standards evolve over years. And the companies that need certified tooling most are often the ones least equipped to fund its development themselves.

The post is honest about this. It’s not a roadmap announcement; it’s a “here’s what we learned.” But laying out the specific gaps clearly is meaningful. The Rust ecosystem has a track record of identifying hard problems and eventually solving them. Embedded no-std support was once similarly immature. async in constrained environments still is.

Safety-critical feels like it’s a few years away from an inflection point, assuming someone funds the RTOS work and the certification tooling. Worth watching if you care about where systems programming is heading.
