The observation Unmesh Joshi makes in his article on Martin Fowler's site is deceptively simple: LLMs are useful tools, but using them to skip steps in the learning loop makes their help illusory over time. Published in November 2025, the piece identifies something real, but the argument becomes more interesting when you trace exactly where in the loop the damage happens and why.
Looking back at it now, a few months on, the claim has only become more relevant as LLM-assisted coding has normalized further. Most developers I talk to use Copilot or Claude or Cursor daily. Few have thought carefully about what that pattern costs.
The Learning Loop Is Not Just “Practice”
The most common misread of Joshi’s argument is treating it as “you should practice more instead of using shortcuts.” That framing makes it sound like discipline advice, and discipline advice is easy to dismiss. The actual claim is more structural.
David Kolb’s experiential learning cycle, formalized in 1984, describes learning as moving through four phases: concrete experience, reflective observation, abstract conceptualization, and active experimentation. The phases feed each other. You do something, you observe what happened, you build a mental model, you test it. Skipping any phase degrades the model you end up with.
LLMs interrupt this cycle at a specific seam: between concrete experience and reflective observation. You encounter a problem (concrete experience), and before you can sit with what you don’t understand about it, the LLM hands you a solution. Reflection gets skipped. Conceptualization, which depends on reflection, gets skipped too. You go from “I have a problem” directly to “here is working code,” and the working code creates the illusion that the loop completed.
It didn’t. You have an output, not a model.
The Generation Effect and Why Reading Code Doesn’t Teach You to Write It
Cognitive psychology has a well-documented phenomenon called the generation effect, established through work by Norman Slamecka and Peter Graf in the 1970s and since replicated many times. Information you actively generate yourself is retained significantly better than information you passively read, even when the content is identical. The act of production encodes more deeply than the act of consumption.
This is directly relevant to how LLMs are typically used. When you ask an LLM to write a function, you read the result and evaluate it. When you write the function yourself, even badly, even slowly, the cognitive machinery of retrieval and construction is engaged. The struggle to produce is not friction around the real work. It is the real work, from a learning perspective.
This is also why reading other people’s code, while valuable, is a weak teacher compared to writing your own and debugging your own failures. LLMs accelerate the reading side dramatically. They do not help the writing side, and in many workflows, they actively crowd it out.
Desirable Difficulties: Why Friction Is the Signal
Robert Bjork at UCLA has spent decades studying what he calls desirable difficulties: learning conditions that feel harder in the short term but produce stronger retention and transfer over time. The canonical examples are spaced practice, interleaved practice, and retrieval practice. The common thread is that each of them forces the brain to work to reconstruct knowledge rather than simply re-encounter it.
The difficulty is not incidental. It is the mechanism. When retrieval is hard, the encoding that results from successful retrieval is stronger. When you work through a problem you can almost but not quite solve, the resolution lands deeper than a problem you found trivial.
Manu Kapur at ETH Zurich has documented a related phenomenon under the term productive failure. Students who attempt problems before receiving instruction, and who often fail those attempts, outperform students who receive instruction first and then attempt the problems. The failure is doing something. It primes the conceptual structures that make instruction land.
LLMs, used carelessly, eliminate productive failure. You never sit in the discomfort of not knowing. You never develop the wrong mental model that will be corrected. You get the right answer without the wrong answer that teaches you why it’s wrong.
The Metacognitive Blindspot
There is a second-order effect that is subtler and, I think, more damaging over a career. When you struggle with something, you discover the shape of your ignorance. You learn that you thought you understood how async context propagation worked, but you actually didn’t. That discovery is itself valuable: it maps a gap and motivates you to fill it.
When an LLM fills the gap for you before you’ve found it, the gap remains but becomes invisible. You don’t know what you don’t know, and worse, you have mild evidence that you do know it, because the code worked. The metacognitive calibration that comes from productive struggle never fires.
This matters especially early in a domain. Senior developers who use LLMs heavily have usually already built the mental models that let them evaluate LLM output, catch subtle errors, and recognize when the generated code is idiomatic versus superficially plausible. Junior developers who offload to LLMs before building those models may never build them. They end up with a workflow dependency on tooling that fills gaps they cannot see.
What Responsible Use Actually Looks Like
None of this argues against using LLMs. The argument is about timing and posture.
The most useful heuristic I have found is “struggle first.” When encountering a problem, attempt it for a meaningful stretch before opening the LLM. Not as a performance of effort, but because the attempt is when learning happens. The LLM then becomes a check on your solution or an explanation of where you went wrong, not a replacement for the attempt itself.
A second pattern: use LLMs for explanation rather than generation when you are learning. Instead of asking “write me a function that does X,” ask “explain why this approach fails” or “what concept am I missing here.” The LLM as interlocutor is a different tool than the LLM as code emitter, and it preserves more of the generative work on your side.
A third: treat LLM-generated code as code review material, not as code you wrote. Read it critically. Understand every line. Rewrite it from scratch without looking at the generated version. This last step sounds excessive, and for production work it usually is, but in learning contexts it closes the generation loop.
Joshi’s framing in the original article is that the learning loop is an essential part of professional practice. That word “professional” is doing real work. There is a difference between using tools to accomplish tasks and using tools to become more capable of accomplishing tasks. LLMs are excellent at the former. For the latter, they require careful management.
The developers who will be most capable in five years are not the ones who avoided LLMs or the ones who used them most. They are the ones who were deliberate about when to use them, preserving the effortful parts of work that look like inefficiency but are actually the learning.
That distinction is worth keeping clear.