Vivek Haldar

LLMs eat entry-level SWEs

(This is an edited write-up of my rambling video on the same topic.)

A key point missing in the discourse around “is AI going to make coding redundant?” is that there is a hierarchy to programmers, and there is a hierarchy to the difficulty and ambiguity in programming tasks. It seems obvious to state, but: AI/LLMs will attack the easier side of that spectrum to start with, and in fact, they already are.

I predict that entry-level junior software engineers are going to get eaten by large language models that can generate code.

Now my take is actually a moderate one. If you listen to folks like Matt Welsh, he takes a much stronger position: that all software engineering is going to get eaten by LLMs and AI. And maybe that’s true, but I’m making the softer claim that it’s going to happen more immediately and more obviously for entry-level positions, really soon.

We have to start by first understanding the level progression of software engineers. If you look across the industry, the levels are similar at most tech companies. The first three levels are L3, L4, and L5. L3 is entry-level software engineer: usually people new to software engineering or recent graduates. L4 is mid-level. L5 is senior. The general scope and responsibilities for an entry-level L3 software engineer are to design, implement, debug, and deploy software that is fairly well understood and well defined in scope. Usually this takes the form of someone a bit more senior, an L4 or L5, doing the higher-level or more complex design for a system and breaking it down into well-defined tasks. L3 software engineers then do some more detailed design, plus the well-defined, well-scoped implementation tasks needed to build that larger system.

Basically, what an entry-level software engineer does is take well-defined, well-scoped tasks and be proficient enough at coding and debugging to implement them quickly, correctly, and cleanly, deploy them to production, and debug them if need be. Once you get to mid-level or senior level, it’s more about being independent: larger scope, larger ownership, leadership, all that stuff. But today we’re just talking about entry-level software engineers.

So given that description of what an entry-level software engineer does, it’s not hard to see how the code-generating LLMs of today are creeping very close to it. And if you wait another year, which is practically infinity on machine-learning time scales, it will be pretty easy for LLMs to accomplish it. With context windows getting larger and larger, you can fit more and more of the surrounding code, so you can ask the LLM to write code for something very specific to your system, your codebase, your module, your implementation. The model can then generate code that understands what the surrounding classes, methods, and abstractions are doing and actually uses them, as opposed to writing generic code against publicly available APIs it was trained on.

If you look at recent increases in context window lengths, Gemini 1.5, for example, went all the way up to a million tokens, with even larger sizes in research. A million tokens is pretty huge: roughly 750,000 words. A lot of even medium-complexity GitHub repos could fit entirely within that window. Now, if you’re working in a closed-source enterprise codebase, it’s probably a lot larger than that, but a million tokens might still be enough to fit the module or subsystem that an entry-level software engineer is working on. And you can put all of that into the context window, plus your instructions for what you want done.
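To make that concrete, here is a minimal sketch of the back-of-the-envelope check: does a given source tree fit in a million-token window? It assumes a rough heuristic of about 4 characters per token (real tokenizers vary by language and content), and the function names and file-extension list are my own illustrative choices, not anything from a real tool.

```python
import os

# Assumption: roughly 4 characters per token for English text and code.
# Real tokenizers (BPE, SentencePiece, etc.) vary, so treat this as an estimate.
CHARS_PER_TOKEN = 4

def estimate_tokens(root, exts=(".py", ".md", ".txt")):
    """Walk a source tree and estimate how many tokens it would occupy."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8") as f:
                        total_chars += len(f.read())
                except (UnicodeDecodeError, OSError):
                    continue  # skip binary or unreadable files
    return total_chars // CHARS_PER_TOKEN

def fits_in_window(root, window=1_000_000):
    """At ~4 chars/token, a 1M-token window holds roughly 4 MB of source."""
    return estimate_tokens(root) <= window
```

By this estimate, anything up to a few megabytes of source, which covers many small-to-medium repos, squeezes into a million-token window with room left over for instructions.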

Today the “hit rate” for accomplishing a task right away is not high. Often a human software engineer has to take that first draft and tweak it a little bit to make it work.

The writing is on the wall. What are the implications? Are you just going to have senior software engineers do the design, spell out the architecture, and drive the LLM, so to speak: give it direction, look at its output, nudge it in the right direction?

It gets a bit more speculative and vague as you go up the stack from junior to mid-level to senior and beyond. But at least for the kinds of things that entry-level software engineers do, I don’t think it’s a far stretch at all to say that code-writing LLMs can do that kind of work. They can do a lot of that kind of work even today, and very soon they’ll be able to do most, if not all of that entry-level work.

The open question is what that means for the overall hiring pipeline, and career development more generally. If you just skip this entry-level step in career development, how does one even get to mid-level or senior level? How do you get the experience needed to reach those levels? It becomes a bootstrapping problem. And I don’t have any good answers to that. I don’t think anyone does. I think all this is happening so fast that everyone is pretty much making it up as they go along.

That’s an important question for computer science and programming graduates entering the tech hiring market in the next few years. They’re going to need a lot more than coding and implementation chops to stand out in the hiring pool. And once they do get hired, again, they’re going to need more than coding and implementation chops to stand out among their colleagues.

At the very least, what’s clear is that if you’re graduating now or soon, you have to be extremely familiar and fluent with AI-assisted programming. You have to understand how to prompt an LLM: to get specific kinds of code, to help you debug, to critique and review your code. You have to know various ways of prompting LLMs and how to work with them. And I think that skill set will become as basic as knowing how to use an IDE or how to run an edit-compile-debug cycle.
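The prompting skills above can be sketched as reusable templates. This is purely illustrative: the wording of the templates and the `build_prompt` helper are my own assumptions about what generate/debug/review prompts might look like, not any real library’s API.

```python
# Hypothetical prompt templates for common entry-level tasks.
# Each template takes the task plus surrounding code as context, so the
# model uses existing abstractions instead of inventing generic ones.
PROMPTS = {
    "generate": (
        "You are working in the codebase below. Implement {task} using the "
        "existing classes and helpers rather than generic library calls.\n\n"
        "Codebase context:\n{context}"
    ),
    "debug": (
        "The following code raises this error:\n{error}\n\nCode:\n{code}\n\n"
        "Explain the root cause and propose a minimal fix."
    ),
    "review": (
        "Review this diff for correctness, style, and edge cases. "
        "List concrete issues with line references.\n\n{diff}"
    ),
}

def build_prompt(kind, **fields):
    """Fill in a template; raises KeyError if a required field is missing."""
    return PROMPTS[kind].format(**fields)
```

The point isn’t these particular strings; it’s that each kind of request (generate, debug, review) benefits from its own structure and its own context, and knowing that structure is part of the skill set.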