Quasar Chunawala’s walkthrough of a custom vector implementation on isocpp.org frames the exercise as a way to understand the standard library. That is a good framing. But there is a more surprising observation hiding in the details: once you implement vector<T> and understand each decision, you can look at Rust’s Vec<T> implementation and find essentially the same design — three words, multiplicative growth, raw memory separated from constructed objects. This is not coincidence or imitation. It is the shape that a correct dynamic array converges to, regardless of language.
The Three-Word Layout
Both implementations use exactly three words of storage. In C++, the standard library typically stores three pointers:
// Simplified from libc++ and libstdc++ internals
T* begin_; // first element
T* end_; // one past last live element
T* end_of_storage_; // one past end of allocated buffer
size() is end_ - begin_, and capacity() is end_of_storage_ - begin_. Rust’s Vec<T> stores a pointer, a length, and a capacity:
// Simplified from std::vec internals
struct Vec<T> {
    ptr: NonNull<T>,
    len: usize,
    cap: usize,
}
These are equivalent representations. C++ encodes size and capacity as pointer differences; Rust stores them as usize values. Both optimize the common case: size() and capacity() are a single subtraction or a direct load, respectively. Nothing forces either choice: a C++ implementation could store a pointer plus two integers and match Rust’s layout exactly, although the three mainstream standard libraries all happen to store three pointers.
Three words is the minimum for a heap-backed dynamic array that supports size(), capacity(), and data() as O(1) operations. Any correct implementation converges here, and this is part of why the Rust reference for Vec and any C++ implementation guide describe the same fundamental structure.
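A minimal sketch of the three-pointer layout makes the equivalences concrete (TinyVec and its member names are hypothetical, modeled on libc++/libstdc++ internals):

```cpp
#include <cstddef>

// Hypothetical minimal sketch of the three-pointer layout.
template <typename T>
class TinyVec {
    T* begin_ = nullptr;           // first element
    T* end_ = nullptr;             // one past last live element
    T* end_of_storage_ = nullptr;  // one past end of allocated buffer
public:
    std::size_t size() const { return static_cast<std::size_t>(end_ - begin_); }
    std::size_t capacity() const { return static_cast<std::size_t>(end_of_storage_ - begin_); }
    const T* data() const { return begin_; }
};

// The whole header really is three machine words.
static_assert(sizeof(TinyVec<int>) == 3 * sizeof(int*));
```

The static_assert is the point: no hidden fields, no vtable, just three words regardless of T.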
Separating Allocation from Construction
The crucial insight when implementing vector<T> — the one that separates a working version from a broken one — is that allocating memory and constructing objects are two separate operations. new T[n] does both. vector<T> cannot use it.
The reason is capacity. A vector may allocate a buffer large enough for 16 elements while only constructing 8 of them. The remaining 8 slots hold uninitialized raw memory. Calling new T[16] would default-construct all 16 elements, which requires T to be default-constructible and does extra work that will be overwritten later.
In C++, the separation looks like this:
// Allocate raw memory
T* buf = static_cast<T*>(::operator new(capacity * sizeof(T)));
// Construct an element into an already-allocated slot
::new (static_cast<void*>(buf + index)) T(value); // placement new
// Destroy without freeing
buf[index].~T();
// Free the raw memory
::operator delete(buf);
Rust achieves the same separation through std::alloc::alloc and std::ptr::write:
// Allocate raw memory
let layout = Layout::array::<T>(capacity).unwrap();
let ptr = unsafe { alloc(layout) as *mut T };
// Write a value into an uninitialized slot (equivalent to placement new)
unsafe { ptr::write(ptr.add(index), value); }
// Drop without freeing
unsafe { ptr::drop_in_place(ptr.add(index)); }
// Free the raw memory
unsafe { dealloc(ptr as *mut u8, layout); }
The operations map directly. Both languages require you to be explicit about the lifecycle boundaries: allocate memory, construct objects, destroy objects, free memory. These are four distinct operations that new T[n] and delete[] p collapse into two, and that collapsing is incompatible with a variable-capacity container.
Multiplicative Growth
Both std::vector and Vec<T> use multiplicative growth: when capacity is exhausted, allocate a new buffer larger by some constant factor. GCC’s libstdc++, LLVM’s libc++, and Rust’s Vec all grow by 2x (Rust’s RawVec doubles the current capacity, though the documentation deliberately guarantees no particular growth strategy); MSVC uses approximately 1.5x.
The reason multiplicative growth is mandatory — not just a reasonable heuristic — is that it guarantees amortized O(1) push_back. With a fixed additive increment of k elements, inserting n elements triggers n/k reallocations, each copying proportionally more elements, for O(n²) total work. With a multiplicative factor f, the total elements copied across all reallocations is bounded by n + n/f + n/f² and so on, a geometric series that converges to n * f/(f-1). For f=2 that is 2n; amortized constant.
The choice of 1.5x over 2x is a memory reuse argument. With 2x growth, the current buffer is always larger than all previously freed buffers combined, so the allocator can never reuse old memory for the next allocation. With 1.5x (or anything below the golden ratio), freed memory eventually accumulates enough to satisfy a later growth. Facebook’s folly::fbvector documents this argument explicitly and uses 1.5x. The C++ standard leaves the growth factor unspecified, and Rust’s documentation likewise promises no particular growth strategy, so both factors are conforming; the tradeoff is allocator friendliness against fewer reallocations.
Where They Diverge: Iterator Invalidation
Here is the first genuine difference. C++ documents iterator invalidation rules: any push_back or insert that causes reallocation invalidates all iterators, pointers, and references into the vector. Violating these rules is undefined behavior, sometimes caught at runtime by debug iterators or AddressSanitizer, but silently wrong in ordinary release builds.
Rust’s borrow checker makes this class of bug impossible at compile time. When you hold a reference to a vector element, the borrow checker marks the Vec as immutably borrowed and refuses to compile any call that mutates it:
let mut v = vec![1, 2, 3];
let first = &v[0]; // immutable borrow begins
v.push(4); // error[E0502]: cannot borrow `v` as mutable because it is also borrowed as immutable
println!("{}", first);
In C++, the equivalent code compiles without warning and has undefined behavior if push_back triggers a reallocation:
std::vector<int> v = {1, 2, 3};
const int& first = v[0];
v.push_back(4); // may or may not reallocate
std::cout << first; // UB if reallocation occurred
This is one of the sharpest illustrations of what the borrow checker provides. The underlying memory model is identical; what differs is whether the rules governing it are enforced statically or documented for the programmer to follow. Both approaches encode the same constraint; only one enforces it.
Exception Safety vs. Panic Safety
C++ vector’s push_back provides the strong exception safety guarantee when T’s move constructor is marked noexcept. During reallocation, the vector wants to move elements into the new buffer rather than copy them. But if a move constructor throws partway through, the old buffer has already been partially vacated and cannot be restored.
The solution is std::move_if_noexcept: move the element only if its move constructor is noexcept (or if the type is not copyable at all, in which case a potentially throwing move is the only option); otherwise copy it. A copyable type without a noexcept move constructor therefore causes push_back to silently copy every element during every reallocation:
struct Expensive {
    Expensive() = default;
    Expensive(const Expensive&) { /* expensive copy */ }
    Expensive(Expensive&&) { /* moves, but not noexcept */ }
};
std::vector<Expensive> v;
v.push_back(Expensive{}); // reallocation copies existing elements, not moves
The performance implication is significant. If you define a move constructor and forget noexcept, reallocation becomes O(n) in copy cost instead of O(n) in move cost. For a type with an expensive copy, this matters. The rule is not obscure, but it is easy to miss.
Rust handles this differently for two reasons. First, a Rust move is a bitwise copy with no user code attached, so relocating elements into a new buffer cannot fail; there is no move_if_noexcept equivalent because nothing can throw mid-relocation. Second, Rust does not have exceptions; it has panics, and a Drop implementation that panics while another panic is already unwinding aborts the process. In practice, Drop is expected not to panic, and Rust’s standard library trusts that. The simplicity is real, but it is purchased partly by design — moves are always trivial — and partly by convention: the “don’t panic in Drop” rule is enforced by documentation and community norms, not the compiler.
Allocator Support
Both C++ and Rust support custom allocators, but the C++ allocator model is considerably more elaborate. std::vector<T, Allocator> takes an allocator type parameter, and all allocation and construction operations are routed through std::allocator_traits<Allocator>, which provides defaults for every optional method. C++17 added std::pmr::vector with runtime-switchable allocators via std::pmr::polymorphic_allocator, allowing pool allocators, arena allocators, and monotonic buffers without changing the container’s type.
Rust’s Allocator trait, still unstable behind the allocator_api feature as of this writing, is simpler: two required methods (allocate and deallocate) plus optional grow and shrink operations. Vec<T, A: Allocator> follows the same pattern of a type parameter defaulting to the global allocator.
The C++ allocator’s complexity reflects decades of requirements: shared-memory-segment allocators, arena allocators, pool allocators, and the propagate_on_container_* traits that control what happens to the allocator when the container is copied, moved, or swapped. Whether that accumulated complexity was worth the cost is a question the committee has revisited repeatedly. std::pmr is partly an acknowledgment that for most users, runtime-dispatched allocators are more practical than statically typed ones.
What the Implementation Exercise Shows You
Working through a custom vector<T> implementation makes clear that most of the design is not optional. The three-word layout is the minimum representation. The separation of allocation from construction is mandatory for correct capacity semantics. Multiplicative growth is the only way to amortize insertion cost across pushes. These constraints hold in any language, which is why Rust’s Vec, C++’s std::vector, Java’s ArrayList, and Go’s slice all share the same fundamental allocation structure.
The divergences — iterator invalidation enforcement, exception vs. panic safety models, allocator complexity — are where language philosophy and historical constraints take over. The borrow checker’s approach to iterator safety is not free; it requires restructuring code that C++ allows. C++’s noexcept move rule catches a real class of performance bugs, at the cost of a rule that is easy to forget. Neither approach is strictly superior; they represent different points on a tradeoff between static enforcement and flexibility.
The original article gives you the scaffolding. Following the same design to its conclusions in a second language shows you which parts of that scaffolding are mathematical necessities and which are choices your language made for you.