The Robot Is Not a Junior Developer. It Is a Senior Developer Caught in Groundhog Day
I frequently encounter the notion that, when it comes to programming, AI is like a well-read junior developer.
I don’t think that is a great analogy any more. At this point, Claude (my preferred provider at the moment) feels much more like a talented senior developer. But crucially, a senior developer who is experiencing a perpetual day one on the team.
There are two ways of getting better and more specific output from an LLM: fine-tuning and retrieval-augmented generation (RAG).
Fine-tuning is like making a student study a curriculum for a long time. Over time, they internalize a worldview. Certain assumptions simply become “how things are.”
RAG is more like allowing the student to bring notes during an exam.
In one sense, the former is much more powerful. But in many real situations, the latter is more effective. If the question is narrow, and the relevant piece of knowledge is very specific, it often matters more that the right note is present in front of the student than that the student has spent years internalizing the entire curriculum.
Scanning a codebase, reading READMEs, and opening individual files is exactly this kind of exam-time context injection. When an AI does it, that is RAG, but obviously humans do it as well.
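Mechanically, that exam-time injection can be sketched like this. This is a toy keyword-overlap retriever standing in for the embedding search a real RAG pipeline would use; the notes and names are invented for illustration:

```python
# Minimal RAG sketch: retrieve the most relevant "note" for a query and
# inject it into the prompt. A real system would use embeddings and a
# vector store; keyword overlap is a deliberately naive stand-in.

def retrieve(query: str, notes: list[str], k: int = 1) -> list[str]:
    """Return the k notes sharing the most words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        notes,
        key=lambda n: len(q_words & set(n.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, notes: list[str]) -> str:
    """Place the retrieved notes in front of the question,
    like notes brought into an exam."""
    context = "\n".join(retrieve(query, notes))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical institutional knowledge that lives nowhere in the code:
notes = [
    "Payments are retried three times with exponential backoff.",
    "All dates in the billing module are UTC.",
    "The legacy importer is deprecated; use ImportServiceV2.",
]
print(build_prompt("Why did my payments retry fail?", notes))
```

The point is not the retrieval algorithm but the shape of the workflow: the right note sitting in front of the model at answer time substitutes for years of internalized curriculum.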
When an experienced engineer is introduced to a codebase, their first task is not writing code. It is reconstructing local reality. They scan the architecture and attempt to infer conventions. But even after doing that, their first PRs will need course-correcting by the other team members. They will have assumed that standards are followed and that visible patterns are intentional, both of which are frequently wrong.
Standards go unfollowed because the developer who started something didn’t know about them, or knew and decided to be clever, for good or bad reasons. And even clear patterns in the codebase may reflect past decisions that were revised just last week.
The real rules live in commit history, Slack or email threads, half-remembered discussions, and the team’s collective memory. A human absorbs this over time. An AI doesn’t. Every session, it wakes up on day one. Now, in theory, fine-tuning would be the equivalent here, but at least with our current tooling, this isn’t an option.
AI-Assisted Programming Is Not Delegation. It’s Context Construction.
Given this perspective, how can we best work with LLMs as software developers?
Well, we need to give the robot as much help as possible which means, somewhat paradoxically, adding more English to our code: comments, READMEs (I’m considering putting one in every folder), agent files with detailed descriptions.1 This creates an “exoskeleton” for the LLM to work with. Ask it to keep this up to date.
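As a concrete example, a per-folder README or agent file might look something like this. The folder, conventions, and names are all hypothetical, made up to show the shape of the thing:

```
# billing/ — agent notes

Purpose: invoice generation and payment retries.

Conventions (enforced by review, not by tooling):
- All timestamps are UTC; never use local time here.
- Do not touch LegacyImporter; it is deprecated in favor of ImportServiceV2.
- Retry logic lives in retry.py only. Do not add ad-hoc retries elsewhere.

Keep this file updated whenever a convention changes.
```

Note that it records exactly the kind of knowledge that otherwise lives only in commit history and Slack threads: what is deprecated, what is intentional, and what is merely historical accident.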
However, as tempting as it might be to document every idiosyncrasy of the codebase in an agent file, this will result in large prompts full of information, most of which will be irrelevant to most tasks, with a real risk of confusing the LLM. Even if the context window is large enough, you still want to be conscious of what you put in there. In an uncanny, human-like fashion, the primacy and recency effects, where most attention is paid to the first and last bits of information presented, have been observed in LLMs!
Thus, the primary job of the human engineer in an AI-assisted workflow is not to write code. It is to compile the world the code will be written inside.
For every non-trivial change, you have to decide:
What assumptions about this part of the system is the LLM likely to make
Which of those assumptions are true and matter for this change
Which assumptions the model is likely to get wrong
And which slice of local reality must be injected to prevent that
In other words: what would you need to tell a talented developer with zero institutional memory so that they can operate effectively?
You are not “using AI.” You are compiling context for non-persistent intelligence.
And that is now the real job.
Adding lots of English to your code is the antithesis of Clean Code. I was never a believer in that to begin with, and with LLMs I am even less so. Some people will disagree.

