Avery Pennarun’s recent post arguing that every review layer slows you down by 10x landed with nearly 500 upvotes on Hacker News, which suggests it’s touching something real. The math he’s describing is fundamentally queueing theory: each approval step introduces a waiting period, and waiting periods compound. But the number gets significantly worse for distributed teams, and that part of the story doesn’t get told often enough.
The Baseline: Why 10x Is Credible
Before getting to the timezone problem, it’s worth taking the 10x claim seriously on its own terms. A single review layer, done asynchronously, doesn’t just add the review time to your cycle. It introduces at minimum:
- The time to context-switch away from the work once it’s submitted
- The reviewer’s queue time (how long until they look at it)
- The review itself
- The time for the author to re-engage after feedback arrives
- Any revision cycle, which may repeat all of the above
The DORA State of DevOps research measures this as “lead time for changes,” and the gap between elite and low-performing teams is stark: elite teams achieve sub-hour lead times; low-performing teams measure in weeks. That’s not two or three layers of review causing a 2-3x slowdown. It’s compounding behavior.
Queueing theory gives you the mechanics. If your reviewer is at 80% utilization (reviewing things most of their day), average wait time roughly quadruples compared to a reviewer at 50% utilization: in a simple single-server queue, expected wait scales as ρ/(1−ρ), which is 4 at 80% utilization and 1 at 50%. Run that through two sequential approval steps with moderately busy reviewers, and you’re already at a 10x multiplier before you’ve considered any human behavioral effects.
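That quadrupling falls straight out of the standard single-server wait formula. A minimal sketch, assuming an M/M/1 queue (a simplification; real review arrivals are batchier, but the shape of the utilization curve is what matters):

```python
# Sketch: average queue wait in a single-server M/M/1 model, measured
# in units of the average review time. An assumption for illustration,
# not a model of any particular team's queue.
def avg_wait(utilization, review_time=1.0):
    """Expected time a change waits before review starts:
    Wq = rho / (1 - rho) * review_time."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization * review_time / (1 - utilization)

print(round(avg_wait(0.5), 2))  # -> 1.0 review-time of waiting
print(round(avg_wait(0.8), 2))  # -> 4.0, roughly quadruple
```

Note the blow-up as utilization approaches 100%: a reviewer who is "fully loaded" is exactly the reviewer whose queue grows without bound.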
What Happens When You Add a Time Zone
Now take that baseline and put your author in San Francisco and your reviewer in Amsterdam.
The author submits a PR at 2pm Pacific. Amsterdam runs nine hours ahead, so the reviewer’s working day ended hours before the PR even existed. The earliest that reviewer sees it is the next morning, Amsterdam time. Accounting for their morning routine and other priorities, let’s say they look at it at 10am Amsterdam time, which is 1am Pacific. That’s an 11-hour wait just for first contact, on a good day.
If the reviewer requests changes, the author won’t see those until they wake up in San Francisco. That’s another 8 hours of silence. A single revision cycle in this setup takes 24 hours minimum. Two revision cycles: 48 hours for what might be a 30-minute conversation if the parties were co-located.
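The round-trip arithmetic above is simple enough to sketch. The numbers below are the text’s example figures, assuming a flat 9-hour offset and a reviewer who reaches the queue around 10am local; this is an illustration, not a scheduling library:

```python
# Sketch of the first-contact wait for a cross-timezone review.
AMS_OFFSET = 9  # Amsterdam runs 9 hours ahead of Pacific

def hours_until_working(hour_local, start=10, end=17):
    """Whole hours until hour_local (0-23) falls inside [start, end)."""
    wait, h = 0, hour_local
    while not (start <= h < end):
        wait += 1
        h = (h + 1) % 24
    return wait

def first_contact_wait(submit_hour_pacific, reviewer_offset):
    """Hours from submission until the reviewer first sees the change."""
    reviewer_local = (submit_hour_pacific + reviewer_offset) % 24
    return hours_until_working(reviewer_local)

# PR submitted at 2pm Pacific; reviewer picks it up around 10am local.
print(first_contact_wait(14, AMS_OFFSET))  # -> 11
```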
This is not a small-team edge case. The Microsoft Research study on code review in distributed teams found that the median time to first review response exceeded six hours, and median time to merge exceeded 30 hours. That’s before any mandatory second approval or compliance sign-off.
Now add that second layer. Say you need a security team review, and that team is based in a third location. The 10x multiplier from the first review doesn’t simply add to the second layer’s delay; it compounds, because each revision cycle at the second layer can send the change back through the first, re-incurring its timezone gap. You’re not stacking 10x + 10x; you’re multiplying 10x by 10x. A change that should take a day ends up in a two-week holding pattern.
The Abandonment Cascade
What the numbers don’t capture is what happens to the code during that waiting period.
The author moves on. They start the next piece of work. By the time feedback arrives, they have rebuilt mental context around a different problem. Re-engaging with a stale PR is not free; depending on the complexity of the change, it can take 15-45 minutes just to remember what decisions were made and why. This is well-documented in research on developer context-switching costs, which puts the interruption recovery time at over 20 minutes on average.
Worse, the work continues without the reviewed change. Subsequent PRs get built on top of unmerged code, creating dependency chains. When review feedback requires structural changes to PR #1, PRs #2 and #3 need to be rebased or reconsidered. The cost of a single review delay propagates downstream.
In organizations with lots of in-flight work, this produces what you might call an abandonment cascade. Long-running unmerged branches become stale. Developers start to regard review as a formality rather than a useful signal, so they stop making thorough changes and start submitting things they expect will sail through without questions. The review layer that was supposed to improve quality starts selecting against thoroughness.
Research from LinearB found that PRs with more than a 24-hour wait for first review are significantly more likely to be abandoned outright. At scale, this means your most complex changes (which take longer to review) are also the ones most likely to stall, and the ones most likely to accumulate drift while waiting.
What Google Got Right, and Then Complicated
Google’s internal code review tool, Critique, enforces a model where every change requires approval from a code owner, plus a separate readability review when the author isn’t yet certified in the language. This sounds like it would be slow. In practice, Google has historically shipped at high velocity despite it, for a few specific reasons.
First, their code ownership system is granular. You don’t need a team-wide approval; you need the nearest appropriate owner. The queue is short because ownership is specific. Second, their culture treats a same-day turnaround as a professional obligation, not a favor. Third, their monorepo model means changes land as single commits to the main branch with no long-lived branch rot.
The lesson isn’t “Google has two review layers and survives,” it’s that their model is specifically designed to keep each layer’s latency close to zero. The 10x rule still applies, but they’ve built tooling and culture around holding each layer below one hour.
Most organizations adopt the bureaucratic shell of this model (multiple approvers, required security sign-off) without the supporting culture and tooling. The result is mandatory layers without latency controls, which is the worst of both worlds.
What Small Teams Do Right Without Knowing It
On small Discord bots and tools, where I’m often the only reviewer or am working with one or two people in overlapping time zones, review happens synchronously or near-synchronously by default. You look at each other’s changes over a call, or drop a message and get a response in minutes. The review layer costs roughly zero time beyond the review itself.
This is not some enlightened engineering practice; it’s just what happens when the team is small enough that coordination is cheap. The problem is that when teams scale, they formalize coordination into processes rather than improving the tooling that makes synchronous or near-synchronous review feasible. They add a second reviewer requirement rather than investing in smaller, more frequent PRs. They add a security checklist rather than building automated security checks that run in CI.
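To make the last point concrete, here is a hedged sketch of what an automated security check in CI can look like at the small end: a script the pipeline runs that fails the build on likely hardcoded secrets. The patterns and the `check_diff` entry point are hypothetical illustrations, not a complete scanner or any particular tool:

```python
# Sketch of a CI-side security check: reject a diff that appears to
# contain hardcoded secrets, instead of relying on a manual checklist.
# Patterns below are illustrative assumptions, not exhaustive.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # pasted private key
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def scan(text):
    """Return the patterns that matched; an empty list means it looks clean."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

def check_diff(diff_text):
    """Hypothetical CI entry point: True if clean, False if the build should fail."""
    return scan(diff_text) == []

print(check_diff("api_key = 'abcdefghijklmnopq'"))   # -> False
print(check_diff("def handler(event): return 200"))  # -> True
```

A check like this runs in milliseconds on every push, which is exactly the latency profile a mandatory human sign-off can never match.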
Trunk-based development, pair programming, and feature flags all address the root cause rather than adding more process. They keep change size small, keep review latency low, and avoid the long-branch rot that makes review feedback expensive to act on. The Accelerate book by Forsgren, Humble, and Kim provides the empirical backing here: trunk-based development is one of the most consistently predictive practices for high delivery performance, and it’s structurally incompatible with long review queues.
The Practical Upshot
Pennarun’s 10x estimate is probably calibrated to a reasonably coordinated team. For distributed teams with mandatory multi-stage approval and no timezone overlap between stages, the real multiplier is higher. The question worth asking for any review layer is not just “does this improve quality?” but “at what latency does it operate, and who controls that latency?”
A review step with 10-minute latency is a fundamentally different thing than one with 24-hour latency. The former is compatible with fast iteration. The latter turns every code change into a project.
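One way to see the difference is a toy model that charges each layer its queue latency once for the initial pass, plus one extra round trip per revision cycle. All the numbers here are illustrative assumptions, not measurements:

```python
# Toy model: total calendar time for a change passing through several
# review layers. Each revision charges one extra round trip (feedback
# out, re-review back) through its layer.
def cycle_time(work_hours, layers):
    """layers: list of (queue_latency_hours, revision_cycles)."""
    total = work_hours
    for latency, revisions in layers:
        total += latency * (1 + 2 * revisions)
    return total

# Two layers at 10-minute latency vs the same two layers at 24-hour
# latency, with one revision cycle and four hours of actual work.
fast = cycle_time(4, [(10 / 60, 1), (10 / 60, 0)])
slow = cycle_time(4, [(24, 1), (24, 0)])
print(round(fast, 1), round(slow, 1))  # -> 4.7 100.0
```

The point isn’t the exact figures; it’s that the 24-hour version is dominated almost entirely by waiting, not by work or review.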
Automated checks run in seconds. Pair programming has zero review latency. Small PRs get reviewed faster than large ones. The solution is not to remove oversight; it’s to make each layer fast enough that the 10x rule doesn’t compound into a 100x rule. That’s an engineering problem, not a process problem, and it has engineering solutions.