The concurrency landscape has a vocabulary problem. “Async,” “parallel,” “concurrent,” and “multithreaded” circulate in documentation, code reviews, and design discussions as if they were interchangeable. They are not synonyms, and conflating them leads to reaching for the wrong tool when a new problem arrives.
Lucian Radu Teodorescu’s Concurrency Flavours, published on isocpp.org in late December 2025, pulls these terms apart. It reads as a useful retrospective on terminology confusion that has persisted through decades of language evolution and shifting hardware.
The core distinction worth internalizing is between concurrency and parallelism. Concurrency is about structure: multiple logical tasks are in progress at the same time, but they may or may not run simultaneously. Parallelism is about execution: tasks genuinely run at the same instant on separate cores. You can have concurrent programs that are single-threaded, as with coroutines or event loops, and you can have parallel programs that are not “concurrent” in any meaningful structural sense, such as SIMD code applying one operation across a vector of data.
From my own experience writing async Discord bots, this distinction matters constantly. An async bot handles many requests concurrently using a single event loop thread. Adding threading doesn’t automatically improve throughput if the bottleneck is I/O, not compute. The model that fits is async concurrency, not parallelism, and reaching for threads first because they feel more “real” just adds synchronization overhead for no gain.
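The single-threaded flavor of concurrency is easy to demonstrate. Below is a minimal sketch (the `handle_request` coroutine and its timings are illustrative, not from any real bot): ten simulated I/O-bound requests run concurrently on one OS thread, and the total wall time is close to one request's latency, not ten.

```python
import asyncio
import threading
import time

async def handle_request(req_id: int) -> str:
    # Simulate an I/O wait (network call, database query). While this
    # coroutine sleeps, the event loop runs the other tasks.
    await asyncio.sleep(0.1)
    return f"request {req_id} handled on {threading.current_thread().name}"

async def main() -> list[str]:
    # Ten requests "in flight" at once, one OS thread:
    # concurrency without parallelism.
    return await asyncio.gather(*(handle_request(i) for i in range(10)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start

print(f"{len(results)} requests in {elapsed:.2f}s")  # roughly 0.1s, not 1.0s
```

Every coroutine here reports the same thread name, which is the point: the overlap comes from the event loop interleaving waits, not from extra threads.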
Systems programming adds more flavors. Data parallelism, where the same operation runs across a large data set simultaneously, fits different hardware and algorithms than task parallelism, where independent tasks run in parallel. Lock-based synchronization, lock-free structures, and software transactional memory each carry different trade-offs around correctness, contention, and composability. These are not interchangeable abstractions.
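The data-versus-task split shows up directly in the shape of the API you reach for. A rough sketch using `concurrent.futures` (the worker functions are made up for illustration; note that under CPython's GIL a `ProcessPoolExecutor` would be needed for genuinely parallel CPU-bound work, so the thread pool here only illustrates the structure):

```python
from concurrent.futures import ThreadPoolExecutor

# Data parallelism: one operation, many inputs.
def square(x: int) -> int:
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(square, range(8)))  # same function mapped over data

# Task parallelism: independent, heterogeneous tasks side by side.
def fetch_config() -> str:
    return "config loaded"

def warm_cache() -> str:
    return "cache warmed"

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(fetch_config), pool.submit(warm_cache)]
    results = [f.result() for f in futures]
```

The first pool is handed a data set; the second is handed unrelated jobs. Conflating the two hides real differences in how work is partitioned, scheduled, and load-balanced.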
What Teodorescu’s article gets right is the insistence on precision. Most engineers encounter concurrency problems through a specific technology first, whether that’s std::thread, async/await, or POSIX signals, and build their mental model around that technology. The model ends up too narrow. When a different problem arrives and the familiar tool doesn’t fit, it’s hard to even name what’s missing.
A few things worth carrying away from the article: coroutines and threads solve problems at different abstraction levels; the right synchronization primitive depends on the access pattern, not just the data type being protected; and concurrent code is not automatically correct concurrent code. Each flavor of concurrency has its own failure modes, and they don’t overlap neatly.
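The "concurrent is not automatically correct" point deserves a concrete shape. A classic lost-update bug: `counter += 1` is a read-modify-write, and two threads can interleave between the read and the write. A minimal sketch of the fix (the counter and thread counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:          # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000, deterministically, because every increment is locked
```

Remove the lock and the program still runs, still looks concurrent, and can silently lose updates: the code compiles and terminates, but the count may fall short. That gap between "runs" and "correct" is exactly the failure mode that differs from flavor to flavor.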
The C++ world tends to expose all of these layers simultaneously, which makes precise vocabulary more important than it is in languages that hide some of the complexity. Python’s GIL and JavaScript’s single-threaded model each obscure parts of the picture in ways that can mislead developers who move between ecosystems. C++ gives you access to nearly every flavor, which means knowing what you’re working with matters from the start.