
When the Headline Number Misleads

Source: martinfowler

Martin Fowler’s latest fragments cover two things worth sitting with.

The first: a tech company fined $1.1 million for selling high-school students’ data. The headline number sounds significant. But Brian Marick’s point, which Fowler endorses, is correct: these stories need context. The only valuation Fowler could find for the company was $11 million, from 2017. A nine-year-old figure, but even on the most generous reading it puts the fine at roughly 10% of the company’s worth. For a company that turned a profit selling data, that arithmetic probably worked in its favor.
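The proportion is worth checking explicitly. A minimal sketch, using only the two figures from the fragment (and treating the 2017 valuation as a stale ceiling rather than a precise denominator):

```python
# Back-of-the-envelope check on the proportions discussed above.
# Both figures come from the post: the fine, and the only valuation
# Fowler could find ($11M, from 2017 -- a stale number).
fine = 1_100_000
valuation_2017 = 11_000_000

ratio = fine / valuation_2017
print(f"Fine as a share of the 2017 valuation: {ratio:.0%}")  # prints "10%"
```

Because the valuation is nine years old, the true ratio today could be far smaller, which only strengthens the point that the headline number overstates the penalty's bite.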

Data privacy fines have a structural problem: they tend to be calibrated to regulators’ understanding of the harm, rather than to the profit that came from causing it. A fine that a company can pay while still coming out ahead is a licensing fee. Enforcement changes behavior when it costs more than compliance does, and by that measure, $1.1 million may have cost very little.
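The "licensing fee" framing can be made concrete with a toy deterrence model: a fine changes behavior only when the expected penalty exceeds the profit from the violation. Everything below except the fine itself is a hypothetical placeholder, not a figure from the post:

```python
# Toy model of the "licensing fee" point. A rational violator compares
# profit against the *expected* penalty (fine discounted by the chance
# of actually being caught and fined).
fine = 1_100_000
profit_from_violation = 5_000_000  # hypothetical profit from selling the data
p_enforcement = 0.5                # hypothetical probability of enforcement

expected_penalty = p_enforcement * fine
deters = expected_penalty > profit_from_violation
print(f"Expected penalty: ${expected_penalty:,.0f}")  # prints "$550,000"
print(f"Deters? {deters}")                            # prints "Deters? False"
```

Under these (made-up) numbers the expected penalty is a small fraction of the profit, so the fine functions as a cost of doing business rather than a deterrent, which is exactly the structural problem described above.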

The second item is Charity Majors’ closing keynote at SRECon, which Fowler summarizes with clear enthusiasm. Her message: engage with AI rather than waiting for the wave to crash on you. The most useful part of her framing is the call to resist confirmation bias:

“The best advice I can give anyone is: know your nature, and lean against it.”

This is worth taking seriously across the full spectrum of developer opinion on generative AI. The naysayers, who have been burned by hype cycles before and pattern-match AI as another round of it, need to engage genuinely with what these tools do well, because some of it is real. The enthusiasts, who are building workflows around tools that hallucinate and produce subtly broken code at unpredictable rates, need to engage with the failure modes more honestly.

Both positions are largely driven by prior beliefs rather than accumulated evidence, and neither is doing good epistemic work.

My own experience building with AI tools is that they’re genuinely useful for certain categories of work and genuinely unreliable for others, and the boundary between those categories is hard to predict without direct experience. That makes the advice to engage rather than theorize correct regardless of where you started on the optimism-pessimism spectrum.

Both of Fowler’s fragments share a problem worth naming: the headline number misleads. A $1.1 million fine looks serious until you place it against what the company is worth. An AI demo looks impressive until you see the production failure rate. Getting the proportions right requires looking past the number that’s presented to the one that matters.
