Martin Fowler’s March 10 fragments cover two items that, in different ways, are about how poorly we tend to calibrate our responses to things already in motion.
The first: a tech firm fined $1.1 million by California for selling high-school students’ data. Brian Marick’s observation, which Fowler endorses, is that no story like this should run without contextualizing the fine against the company’s revenue, profit, or valuation. The company’s last known valuation was $11 million, in 2017. That puts the fine at roughly 10% of that figure, and likely at a much smaller fraction of whatever the business is worth today. The point is that fines function as deterrents only when they hurt. When lawbreaking is priced in as a low-risk cost of doing business, regulators have not fixed the incentive structure; they’ve just published a rate card.
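The contextualization Marick asks for is a one-line ratio, so here is a minimal back-of-the-envelope sketch. The fine and the 2017 valuation come from the story; every later valuation in the loop is an assumption I made up for illustration, since the current figure is not public.

```python
# Express the fine as a fraction of what the company is worth. The fine
# and the 2017 valuation are from the story; the later valuations are
# assumptions, since the current figure is not public.

FINE = 1_100_000             # California fine, USD
VALUATION_2017 = 11_000_000  # last known valuation, USD (2017)

def fine_share(fine: float, basis: float) -> float:
    """Return the fine as a percentage of a financial basis (valuation, revenue, ...)."""
    return 100 * fine / basis

print(f"Against the 2017 valuation: {fine_share(FINE, VALUATION_2017):.1f}%")  # 10.0%

# If the business has grown since 2017 (plausible but unverified), the
# effective penalty shrinks accordingly:
for assumed in (25_000_000, 50_000_000, 100_000_000):
    print(f"Against an assumed ${assumed:,} valuation: {fine_share(FINE, assumed):.1f}%")
```

The direction of the assumption matters more than the exact numbers: every plausible adjustment makes the fine look smaller, not larger.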
The second item is more interesting to me professionally. Charity Majors gave last year’s closing keynote at SRECon, encouraging engineers to engage with generative AI. Fowler summarizes what her message would be in 2026: stop treating this as something that might happen to you and start treating it as something already underway. Swim out to meet the wave rather than waiting for it to crash on you.
The part that stayed with me is her advice on confirmation bias. She tells people to know their nature and lean against it. If you are a reflexive skeptic, force yourself to find something worth wondering at. If you are an optimist who assumes things will improve, force yourself to sit with the cautionary cases.
This is hard advice to follow in practice. I lean toward skepticism in hype cycles, which means I have well-worn grooves for collecting evidence that AI tools are overhyped and underdelivering. That evidence is real. But so is the other kind, and if I am being honest, I weight the first more heavily without always noticing that I am doing it.
What Majors is describing is epistemic discipline. Recognizing that your priors shape what you notice is the first step. Actively seeking disconfirming evidence is the second. A lot of people stop at the first step, feeling satisfied that they have been self-aware, while their actual behavior stays unchanged.
The two stories are not entirely unrelated. In both cases, numbers that should change behavior fail to land because people do not contextualize them. A fine looks significant until you see the revenue comparison. AI looks either transformative or overblown depending on which examples you let in. The work, in both cases, is getting calibrated enough to see what is actually there.