Vivek Haldar

The Programmer's Climb

As a computer science undergrad in the mid-90s, I took an “Introduction to Programming” course in my very first semester that was modeled after the legendary “Structure and Interpretation of Computer Programs” (SICP) by Abelson and Sussman.

For a quarter-century at MIT, Abelson, Sussman, and colleagues used Scheme to teach freshmen how to manufacture abstractions almost artisanally, from scratch. Rooted in the Lisp ethos, the premise was that by understanding a handful of primitives and mastering a small set of combinators, students could construct a towering edifice of abstractions, a veritable cathedral of computation, and still reason about every brick. The approach was to first create abstractions that modeled the problem domain, effectively designing a custom language in which the final solution could then be expressed cleanly, almost self-evidently.
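To get a flavor of that style, here is a rough Python transcription of SICP's fixed-point and average-damping example (section 1.3 of the book): a few primitives and combinators build a vocabulary in which the final definition reads almost declaratively.

```python
def compose(f, g):
    """Combinator: the function x -> f(g(x))."""
    return lambda x: f(g(x))

def repeated(f, n):
    """Combinator: f applied n times in succession."""
    result = lambda x: x  # identity
    for _ in range(n):
        result = compose(f, result)
    return result

def average_damp(f):
    """Transformer: average f's output with its input, to tame oscillation."""
    return lambda x: (x + f(x)) / 2

def fixed_point(f, guess, tolerance=1e-9):
    """Iterate f until successive values agree within tolerance."""
    while abs(f(guess) - guess) > tolerance:
        guess = f(guess)
    return guess

# With that vocabulary in hand, "square root of a" is nearly self-evident:
# it is the fixed point of the damped map x -> a/x.
def sqrt(a):
    return fixed_point(average_damp(lambda x: a / x), 1.0)

print(sqrt(2))  # ~1.41421356
```

Each layer is small enough to reason about completely, which is exactly the point: the solution is expressed in a language you built and therefore fully understand.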

At that point I’d been programming in BASIC (including more modern variants like Visual Basic) and Pascal for a few years through middle and high school. Functional programming was alien and jarring to me at first. But it soon clicked, and this almost algebraic approach to programming grew on me.

When MIT retired its Scheme-based introductory CS course (6.001) in favor of a Python-based one in 2007, it was news in academic CS circles. It signaled how programming was changing, or already had changed, by the early 2000s.

This wasn’t a sudden coup or a rejection of SICP’s profound ideas. To their credit, Abelson and Sussman themselves had advocated for a change, recognizing that the landscape of programming, and therefore the essential skills required of engineers, had fundamentally shifted.

This transition, like the earlier move from assembly to high-level languages, offers a valuable lens for understanding the evolution of our craft. Another shift of the same kind is happening before our eyes today with the rise of AI-assisted coding.

The motivation

As Gerald Sussman himself explained in a 2009 talk:

In 1980, good programmers spent a lot of time thinking, and then produced spare code that they thought should work. Code ran close to the metal, even Scheme – it was understandable all the way down. Like a resistor, where you could read the bands and know the power rating and the tolerance and the resistance and V = IR and that’s all there was to know. 6.001 had been conceived to teach engineers how to take small parts that they understood entirely and use simple techniques to compose them into larger things that do what you want.

But programming now isn’t so much like that… Nowadays you muck around with incomprehensible or nonexistent man pages for software you don’t know who wrote. You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts. This is a fundamentally different job, and it needed a different course.

The original SICP approach emphasized deep understanding, achieved by carefully composing systems layer by layer from fundamental building blocks like recursion and higher-order procedures, allowing theoretical mastery of the entire stack.

However, the reality of professional software development had shifted dramatically. It was becoming less about constructing every component yourself and more about integrating large, complex libraries—often opaque black boxes with imperfect documentation. Consequently, the crucial skill evolved from pure logical construction towards empirical investigation—needing to perform “basic science” just to discover how these external systems behaved. To continue teaching only purity and first principles started to feel disconnected; the core task was becoming one of interrogating and orchestrating complex, pre-existing tools, not just building everything from scratch.
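A tiny illustration of that “basic science,” using Python’s built-in round() as the specimen: rather than deducing its behavior from first principles, you feed it inputs and observe what comes back.

```python
# Probing a library empirically: what does round() do at ties?
# Run the experiment and record the observations.
for x in [0.5, 1.5, 2.5, 3.5]:
    print(x, "->", round(x))

# Observed: 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4.
# Conclusion: ties go to the nearest even integer ("banker's
# rounding"), not always upward as many programmers assume.
```

The loop is a miniature experiment: hypothesis, trial inputs, observed outputs, conclusion. That is the investigative skill Sussman was describing, applied to code instead of nature.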

Hal Abelson agreed. Both authors had been lobbying to replace 6.001 for years before the final curtain in Fall 2007.

The choice of Python

The replacement course, 6.01, was deliberately centered on the challenges of programming robots to navigate the physical world. This focus was chosen because robots serve as the antithesis of the idealized, predictable components studied in SICP; their wheels slip, sensors yield noisy or inaccurate data, environments change unexpectedly, and all predictive models are mere approximations. Tackling robotics therefore forced students to grapple with a different kind of engineering complexity, emphasizing the need to build systems incorporating robustness, feedback mechanisms, and probabilistic reasoning to handle inherent real-world uncertainty and failure, rather than focusing solely on Platonic elegance.

And why Python? Sussman’s explanation was pragmatic: it had readily available libraries for interfacing with the robotics hardware they were using. This highlights Python’s “batteries included” philosophy – its strength lies in its vast ecosystem of libraries for tackling real-world tasks, even if one doesn’t understand the internals of every library used. The focus shifted towards using available tools to solve practical problems, mirroring the professional landscape.
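As a small, hedged sketch of that “batteries included” strength (the sensor readings here are made-up numbers, not from the actual course): a realistic task gets solved entirely with standard-library modules, without reading a line of their internals.

```python
# "Batteries included": summarize noisy sensor readings and emit JSON,
# using only modules that ship with Python itself.
import json
import statistics

readings = [2.5, 3.1, 2.9, 3.0]  # hypothetical noisy sensor data
summary = {
    "mean": statistics.mean(readings),
    "stdev": statistics.stdev(readings),
}
print(json.dumps(summary))
```

Neither statistics nor json needs to be understood internally to be used confidently, which is precisely the trade the new curriculum embraced.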

Lessons for today

The earlier transition from hand-written assembly to high-level languages and compilers is another example of a major change in the way programs were created. It automated low-level machine details via compilers. It raised the level of abstraction, allowing programmers to focus more on logic and less on managing registers and memory addresses. Back then, programmers initially resisted compilers fiercely, fearing deskilling, loss of efficiency and control. But compilers opened programming to a broader audience, reshaping the role of programmers and greatly expanding software’s scope and impact.

Similarly, the transition from Scheme to Python (even in an academic context) marked another profound phase shift, driven by practical necessity as programming evolved from theoretical elegance toward empirical complexity. Today, we’re in the middle of yet another such shift—AI-assisted programming.

Just as Python freed programmers from reinventing wheels, AI-powered coding tools such as GitHub Copilot, Cursor, and Aider liberate us further from mechanical coding details. This arguably pushes the programmer’s role further up the abstraction stack. The “basic science” Sussman described might now involve prompting for, verifying, debugging, and integrating AI-generated code, and focusing even more on high-level design, architecture, and requirements. Tool literacy is beginning to beat syntax mastery.

Yet, just as with previous shifts, programmers today voice similar concerns: loss of control, deskilling, uncertainty around code quality, and fear of diminished roles.

But history offers reassurance: each phase shift in programming hasn’t diminished the profession, but elevated it. Assembly programmers feared compilers but ended up focusing on high-level problem-solving instead of machine instructions. Scheme purists might have lamented Python’s practicality-over-purity approach, yet today’s programmers grapple with much richer complexities, enabled by the vast ecosystems of Python and other modern languages.

Academia is also beginning to grapple with this, with every major CS department scrambling to understand how to teach “Intro to CS/Programming” (and even intermediate courses) when the latest LLM one-shots into the top quartile of the class. Pedagogy must follow practice. If freshmen arrive with ChatGPT open in another tab, pretending it doesn’t exist is as futile as ignoring Python and its “batteries-included” packages in 2007.

Ultimately, each technological leap—from compilers automating machine code to Python’s libraries simplifying integration, and now to AI generating code—has freed human programmers to operate at higher levels of abstraction and tackle more complex problems. The future is programmers amplified.