The conversation around C++ and memory safety has sharpened considerably over the past few years. Government agencies started naming names. The NSA published guidance recommending a move toward memory-safe languages. And the C++ standards committee responded by accelerating safety work, resulting in a set of features coming in C++26 that are genuinely meaningful. A post circulating on Lobsters recently walked through why these features won’t fully satisfy the critics, and the argument is worth engaging with seriously.
What C++26 Is Actually Adding
The safety work in C++26 is not cosmetic. Contracts give you first-class preconditions and postconditions:
int divide(int a, int b)
    pre(b != 0)
    post(r: r == a / b);
Safety profiles let you opt subsystems into stricter rules — no bounds violations, no dangling pointers, no type punning. These are real constraints with real enforcement. The committee has been doing serious work.
But here is where the structural argument comes in.
Opt-In vs. Opt-Out
Rust’s memory safety model is opt-out. You write safe code by default, and you reach for unsafe deliberately, with an explicit signal to every reader that something unusual is happening. The compiler enforces the safe subset; violations are errors, not warnings.
C++26 safety features are opt-in. You can enable profiles in a translation unit, annotate functions with contracts, and use bounds-checked accessors on your containers. But nothing stops you — or a library you depend on — from writing unrestricted pointer arithmetic three files away. The language has no mechanism to express “this codebase is operating entirely within the safe subset” in a way the compiler can verify holistically.
This matters because the guarantee people are asking for isn’t “we tried to be safe.” It’s “the compiler verified this is safe.” C++ can get you the former. It cannot yet deliver the latter for an entire codebase.
The Legacy Code Problem
Any large C++ project carries years of code written before these features existed. Profiles and contracts apply to new code you write and annotate. They do not retroactively make existing code safe, and they cannot quarantine unsafe code from the rest of your program the way Rust’s module system contains unsafe blocks.
Migrating an existing codebase to use safety profiles is a large, manual, error-prone process with no guarantee of completeness. You can do it, but it requires sustained engineering investment and produces no verification that you got it all.
What This Means in Practice
For greenfield systems-level code where memory safety is a hard requirement, Rust remains the more compelling choice precisely because the guarantee is structural rather than procedural. The compiler enforces the invariant; you don’t have to.
For existing C++ codebases, the C++26 features are genuinely useful. Contracts catch bugs at development time. Bounds-checked iterators eliminate a class of runtime error. Profiles give you a migration path toward stricter subsets. None of that is nothing.
But the argument that C++26 closes the gap with Rust on memory safety is overstated. The gap is real, and it’s architectural. C++ is a language that lets you do anything, including the unsafe things. Adding tools that make the unsafe things harder to do accidentally is valuable. Changing the fundamental nature of the language is a different project, and C++26 is not that project.
The committee knows this. The more honest voices in the C++ community acknowledge it too. The question is whether incremental safety tooling is sufficient for the environments pushing for memory-safe language adoption, and the answer for many of them is probably no.