Most walkthroughs of implementing std::vector start by allocating with new T[capacity] and then consider the structure roughly in place. That starting point is functionally wrong, and the gap between it and a correct implementation is where most of the interesting C++ lives.
Quasar Chunawala’s piece on isocpp.org frames the exercise correctly: implementing your own vector<T> is a learning exercise, not a replacement for the standard library. What I want to explore here is the specific lesson that most informal implementations miss: the separation of allocation from construction, and the abstraction layer the standard uses to enforce that separation.
Why new T[n] is the wrong starting point
The array new expression new T[n] combines two operations that a vector needs to keep separate: it allocates a block of memory and default-constructs every element in it. A vector with capacity 16 and size 3 should contain exactly 3 live objects. The other 13 slots are raw storage, not default-constructed T instances. Reading from them is undefined behavior, and so is writing to them with assignment, because assignment requires a destination object to already exist.
The correct approach separates allocation from construction:
// Acquire raw bytes, no construction
void* raw = ::operator new(capacity * sizeof(T));
T* buf = static_cast<T*>(raw);
// Construct one object at a specific address (placement new)
::new(static_cast<void*>(buf + i)) T(value);
// Destroy one object without releasing memory
buf[i].~T();
// Release raw memory, no destruction
::operator delete(raw);
The pairing matters for more than style. Memory obtained with ::operator new must be released with ::operator delete, not delete[]. A vector that uses new T[capacity] and then placement-constructs over some elements is double-constructing: the array form already ran default constructors on all capacity elements, and placement new runs them again over live objects. Tracking which slots contain live objects and which are raw storage is not optional bookkeeping; it is the central invariant of the data structure.
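To make that invariant concrete, here is a minimal sketch of a growable buffer built on exactly the four primitives above. The type and member names are my own inventions for illustration, not anything from the standard; a real vector would also need copy control and exception safety:

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <utility>

// Sketch: raw storage and live objects are tracked separately. Only the
// first `size` slots ever contain constructed T objects.
template <typename T>
struct tiny_vec {
    T* buf = nullptr;
    std::size_t size = 0, cap = 0;

    void reserve_exact(std::size_t n) {
        T* fresh = static_cast<T*>(::operator new(n * sizeof(T)));
        for (std::size_t i = 0; i < size; ++i) {
            ::new (static_cast<void*>(fresh + i)) T(std::move(buf[i]));
            buf[i].~T();              // end the old lifetime, keep the memory
        }
        ::operator delete(buf);       // release raw bytes, no destruction
        buf = fresh;
        cap = n;
    }

    void push_back(const T& v) {
        if (size == cap) reserve_exact(cap ? cap * 2 : 1);
        ::new (static_cast<void*>(buf + size)) T(v);  // construct in a raw slot
        ++size;
    }

    ~tiny_vec() {
        for (std::size_t i = 0; i < size; ++i) buf[i].~T();  // live objects only
        ::operator delete(buf);
    }
};
```

Note that the destructor loops over size, not cap: destroying the raw slots would be exactly the kind of lifetime error the separation exists to prevent.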
The std::allocator_traits abstraction
The standard library does not call ::operator new directly. It routes all memory operations through std::allocator_traits, an abstraction layer parameterized on the allocator type the vector carries:
using Traits = std::allocator_traits<Allocator>;
T* data = Traits::allocate(alloc, n); // raw allocation
Traits::construct(alloc, data + i, args...); // placement new via allocator
Traits::destroy(alloc, data + i); // explicit destructor
Traits::deallocate(alloc, data, n); // raw deallocation
For the default std::allocator<T>, these calls reduce to ::operator new, placement new, explicit destructor invocation, and ::operator delete. That is the common case. The abstraction opens the door to pool allocators, arena allocators, shared-memory allocators, and the standard’s own polymorphic memory resource system.
What allocator_traits also provides is a set of static member functions with well-defined fallbacks. If your custom allocator does not define a construct method, allocator_traits::construct falls back to ::new(ptr) T(args...). If it does not define a max_size method, allocator_traits::max_size returns numeric_limits<size_type>::max() / sizeof(T). This means a minimal custom allocator only needs to implement allocate and deallocate; everything else has a reasonable default. Writing a vector that talks to allocator_traits instead of calling operators directly is what allows it to work with all these allocator variants without modification.
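The fallback behavior is easy to demonstrate. The allocator below (a hypothetical name of my own) defines only allocate, deallocate, and equality, yet works as the second template parameter of std::vector because allocator_traits fills in construct, destroy, max_size, and rebinding:

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <vector>

// Minimal sketch of a conforming allocator: everything not defined here is
// supplied by std::allocator_traits defaults.
template <typename T>
struct bare_allocator {
    using value_type = T;

    bare_allocator() = default;
    template <typename U>
    bare_allocator(const bare_allocator<U>&) {}  // rebind conversion

    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }

    friend bool operator==(const bare_allocator&, const bare_allocator&) { return true; }
    friend bool operator!=(const bare_allocator&, const bare_allocator&) { return false; }
};
```

Usage is just `std::vector<int, bare_allocator<int>> v;` and every push_back routes its placement news and destructor calls through the traits defaults.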
Polymorphic Memory Resources
C++17 introduced std::pmr (Polymorphic Memory Resource) as a standardized custom allocator interface. The foundation is std::pmr::memory_resource, an abstract base class with virtual do_allocate and do_deallocate methods. The standard ships three concrete implementations:
- monotonic_buffer_resource: a bump allocator over a fixed buffer; no per-object free, everything is released at once on destruction
- unsynchronized_pool_resource: a free-list pool for fixed-size allocations; fast recycling with no synchronization overhead
- new_delete_resource: a thin wrapper over operator new/delete, the default
std::pmr::vector<T> is a type alias for std::vector<T, std::pmr::polymorphic_allocator<T>>. The polymorphic_allocator accepts a memory_resource* at runtime, so two pmr::vector<int> instances using different memory resources have the same type. This is the key distinction from traditional custom allocators, where the allocator type is a template parameter and two vectors with different allocators are distinct types that cannot be compared or returned from the same function.
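The same-type property can be checked directly. Assuming a C++17 standard library with <memory_resource>, two vectors backed by different resources are assignable and comparable:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <memory_resource>
#include <type_traits>
#include <vector>

void same_type_demo() {
    std::array<std::byte, 1024> buf;
    std::pmr::monotonic_buffer_resource arena(buf.data(), buf.size());

    std::pmr::vector<int> a(&arena);                          // arena-backed
    std::pmr::vector<int> b(std::pmr::new_delete_resource()); // heap-backed

    // Different resources, identical type:
    static_assert(std::is_same_v<decltype(a), decltype(b)>);

    a = {1, 2, 3};
    b = {1, 2, 3};
    assert(a == b);  // directly comparable
}
```

With traditional allocators, a and b would be distinct instantiations of std::vector and the comparison would not compile.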
A concrete use case: parsing structured data into a hierarchy of vectors where the entire structure gets discarded at once.
std::array<std::byte, 64 * 1024> stack_buf;
std::pmr::monotonic_buffer_resource pool(stack_buf.data(), stack_buf.size());
std::pmr::vector<std::pmr::vector<int>> rows(&pool);
// All allocations, including inner vectors, draw from the pool
// Destroying the pool releases everything in O(1)
The monotonic resource bumps a pointer for each allocation and never frees individual objects. The allocator overhead drops to a single pointer comparison per allocation. Resetting the resource releases everything at once. For short-lived workloads with many small allocations, this eliminates per-allocation system overhead entirely. A new-based vector implementation cannot express this pattern at all because it hard-codes the allocator.
The containers-of-containers case also illustrates something subtle about allocator propagation. When rows.emplace_back() default-constructs a new inner pmr::vector<int>, uses-allocator construction hands that inner vector the same memory resource as rows; polymorphic_allocator is designed so elements inherit the outer container's resource. A related customization point, allocator_traits::select_on_container_copy_construction, governs which allocator a copy of a container receives; like the others, it has a sensible default that most custom allocators never need to override.
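The propagation is observable through get_allocator(). A short check, again assuming <memory_resource> is available:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <memory_resource>
#include <vector>

void propagation_demo() {
    std::array<std::byte, 64 * 1024> stack_buf;
    std::pmr::monotonic_buffer_resource pool(stack_buf.data(), stack_buf.size());

    std::pmr::vector<std::pmr::vector<int>> rows(&pool);
    rows.emplace_back();  // no resource passed explicitly

    // Uses-allocator construction gave the inner vector the outer resource:
    assert(rows[0].get_allocator().resource() == &pool);

    rows[0].push_back(42);  // this allocation also draws from the pool
    assert(rows[0][0] == 42);
}
```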
The constexpr complication
C++20 made std::vector usable in constant expressions, which required solving a problem with placement new. The expression ::new(static_cast<void*>(ptr)) T(...) goes through a void*, which the constant evaluator cannot track as a typed construction. The compiler needs to know which type is being constructed to verify lifetime and aliasing rules at compile time.
P0784 introduced std::construct_at and std::destroy_at to fill this gap:
std::construct_at(data + i, args...); // recognized by the constant evaluator
std::destroy_at(data + i); // symmetric lifetime end
The constant evaluator has built-in knowledge of these two functions. Placement new through void* remains illegal in constexpr contexts; std::construct_at is the only portable path for constructing objects in raw storage during constant evaluation. The standard’s allocator_traits::construct was updated in C++20 to call std::construct_at when evaluated in a constexpr context, which is how constexpr std::vector works without any special-casing in the allocator itself.
There is a constraint worth understanding: any allocation made during constant evaluation must be freed before the evaluation ends. A constexpr variable initializer that leaves a vector allocated would carry heap data into runtime, which the standard prohibits. Constexpr vector is most useful inside constexpr functions that return non-heap types, such as computing a std::array from compile-time data and returning it by value.
What the full picture looks like
Getting the allocator model right changes the shape of a vector implementation. The size tracking stays the same: three pointers for begin, end, and end-of-capacity. The common operations stay the same: push_back checks capacity, reallocates if needed, placement-constructs at end. What changes is that every allocation goes through allocator_traits, every construction uses construct, and every destruction uses destroy. The vector carries the allocator as a member, using Empty Base Optimization to store it in zero bytes when it is stateless.
The exception safety story flows from this foundation. Reallocation must provide the strong guarantee: if anything throws, the original vector is unchanged. The correct sequence is to allocate the new buffer, transfer elements one by one using std::move_if_noexcept, then swap the internal pointers. Moving is only safe if the element type's move constructor is noexcept; otherwise copying is required to preserve the ability to roll back. Types with throwing or unmarked move constructors silently force O(n) copies on every reallocation, with no diagnostic from the compiler.
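A sketch of that reallocation step, using raw operator new rather than allocator_traits to keep it short (the function name is my own). The roll-back branch is the whole point: if a copy constructor throws, the partially filled new buffer is destroyed and the old buffer is never touched:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <new>
#include <string>
#include <utility>

// Strong-guarantee transfer into a larger buffer. move_if_noexcept yields
// an rvalue only when T's move constructor is noexcept; otherwise it yields
// const T& and the loop copies, so the originals survive a throw.
template <typename T>
T* reallocate_strong(T* old_buf, std::size_t size, std::size_t new_cap) {
    T* fresh = static_cast<T*>(::operator new(new_cap * sizeof(T)));
    std::size_t i = 0;
    try {
        for (; i < size; ++i)
            ::new (static_cast<void*>(fresh + i)) T(std::move_if_noexcept(old_buf[i]));
    } catch (...) {
        while (i-- > 0) fresh[i].~T();  // destroy the partial transfer
        ::operator delete(fresh);
        throw;                          // old_buf is still fully intact
    }
    for (std::size_t j = 0; j < size; ++j) old_buf[j].~T();
    ::operator delete(old_buf);
    return fresh;
}
```

Only after every element has transferred successfully are the old objects destroyed and the old storage released, which is what makes the pointer swap in a real vector safe.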
A homegrown vector that handles all of this correctly will still be worse than std::vector. It will miss the trivially-copyable fast path (where the standard uses memcpy for bulk transfer when std::is_trivially_copyable_v<T> is true), the self-reference aliasing check in push_back, and years of edge cases the standard library worked through. The point of the exercise is not the result. After implementing this once, reading the libc++ source feels like recognizing familiar structure: __begin_, __end_, __end_cap_() map directly to the three quantities you tracked yourself, and functions like __construct_at_end and __uninitialized_allocator_move_if_noexcept have obvious purposes. That recognition is the actual payoff.