The memory safety conversation around C++ has been heating up since government agencies started publishing advisories recommending a move to memory-safe languages. The C++ standards committee has responded, and C++26 includes real safety work: hardened libc++ with bounds-checked containers, std::mdspan for safer multidimensional access, and ongoing work on Profiles, a mechanism for enforcing subsets of the language with stronger safety guarantees. Critics looking in from outside the community are not impressed, and the structural reasons for that skepticism are worth taking seriously.
What C++26 Actually Proposes
The safety work in C++26 is not cosmetic. Bounds-checked modes for standard containers catch out-of-bounds access at runtime. The Profiles proposal, if it ships, would let you annotate a translation unit or function as conforming to a safety profile, with the compiler enforcing that constraint. Lifetime annotations are an ongoing effort to give the compiler enough information to flag dangling pointer bugs statically.
These are meaningful improvements. Hardened containers have already shipped in libc++ and are being used in production at scale. The tooling story around sanitizers (AddressSanitizer, MemorySanitizer, UBSan) has also matured significantly.
The Structural Problem
The critique is not that these features are useless. It’s that C++’s memory-safety problem is not primarily one of missing features.
First, the features are opt-in. Safety profiles require annotation. Hardened containers require a build flag or a library configuration. Lifetime analysis requires tooling investment. In a language with forty years of existing code, opt-in safety means most code never sees the improvement. The codebases most likely to contain exploitable memory bugs are legacy systems that will not be touched to add profile annotations.
Second, the safety guarantees are weaker than what a memory-safe language provides by construction. Rust’s borrow checker enforces ownership and lifetime rules at compile time, with no runtime overhead and no escape hatch short of an explicit unsafe block. C++’s approach involves runtime checks for some issues, static analysis for others, and compiler enforcement only within annotated regions. The result is a patchwork where the unsafe paths remain both accessible and the default.
Third, backward compatibility is a hard constraint. The committee cannot break existing code, which means the unsafe surface of the language cannot shrink. New idioms can be safer; old idioms stay legal. A new C++ programmer learning from a ten-year-old codebase or tutorial will absorb the same unsafe patterns regardless of what C++26 ships.
The Honest Position
The honest position is that C++26’s safety work is valuable for new code written by teams who adopt it deliberately, and that it does not materially change the safety posture of the existing C++ ecosystem. If your threat model is new code written with modern idioms, the improvements matter. If your threat model is the aggregate of C++ code running in production, they change relatively little.
This does not mean C++ is the wrong choice for all systems work. There are codebases where the rewrite cost is prohibitive, where C interoperability requirements constrain the options, or where the team’s expertise makes C++ the pragmatic answer. In those cases, adopting hardened builds, sanitizers, and Profile-conforming code where possible is clearly better than nothing.
But the outside critics are pointing at a real phenomenon. Language-level safety, enforced by the type system, is a different kind of guarantee than opt-in safety features layered onto an inherently unsafe foundation. C++26 makes the language better. It does not make the language safe in the sense that Rust is safe, and conflating the two is worth resisting.