Vivek Haldar

When Compilers Were the 'AI' That Scared Programmers

Video version of this.

The early history of high-level languages and compilers is fascinating. It is a tale of the pioneers of computer science charting new territory, but also of deep resistance to new ways of doing old things. And I think it has a lot to teach us about how our tribe of programmers is wrestling right now with the idea of AI and LLMs writing code.

Early Compilers

In the early decades of computing, programming was done at the most granular level: hand-coding in machine code or assembly language. This was painstaking work. Programmers had to manage operation codes and memory addresses directly, which one early pioneer likened to hand-to-hand combat with the machine.

By the mid-1950s, however, a revolutionary shift was underway. High-level languages and their compilers began to automate low-level details, allowing programmers to write instructions in more human-readable forms. Developed at IBM by John Backus’ team, Fortran was the first widely used high-level language. It enabled scientists and engineers to write formulas and algorithms in a more natural mathematical notation, which the compiler would then convert to machine instructions. Fortran dramatically reduced the number of statements required. A program that might have needed 1,000 assembly instructions could now be done in about 50 Fortran statements.
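To make that leap concrete, here is an illustrative sketch, written in modern Fortran syntax for readability (the 1957 dialect differed in details, such as SQRTF in place of SQRT). A single statement expresses the quadratic formula directly:

    PROGRAM QUAD
      REAL :: A = 1.0, B = -3.0, C = 2.0, ROOT
      ! One line of mathematical notation; the compiler expands it
      ! into the loads, multiplies, square-root call, and stores
      ! that would otherwise be written by hand in assembly.
      ROOT = (-B + SQRT(B**2 - 4.0*A*C)) / (2.0*A)
      PRINT *, ROOT
    END PROGRAM QUAD

Each piece of that expression would have been several hand-placed machine instructions, with the programmer tracking every register and memory address along the way. Multiplied across an entire program, that is where the roughly twenty-fold reduction came from.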

The Fortran compiler quickly proved that compiled code could approach the efficiency of hand-coded assembly. Early skeptics who insisted that high-level code could never be as efficient as hand-crafted machine code were confounded when Backus’ team delivered an optimizing compiler that made Fortran programs run nearly as fast as assembly.

Adoption was rapid. A 1958 survey found that over half of all code being run on IBM computers was generated by the Fortran compiler. It was a stunning validation of the new approach.

Resistance

Even then, many working programmers were skeptical or resistant. They had deep expertise in assembly language and were proud of the arcane skills needed to wring efficiency out of early machines. John Backus later described this culture as a priesthood of programming. In the 1950s, programmers often regarded themselves as guardians of arcane knowledge, possessing skills and knowledge of mysteries far too complex for ordinary mortals.

One of the most frequent arguments against compilers was that compiled code could not be as efficient or compact as handwritten assembly. Programmers would insist they could do better in assembly, offering a whole litany of reasons to avoid high-level languages. And the objection was not entirely unfounded. The earliest compilers sometimes did produce verbose or suboptimal machine code. A 1965 Honeywell management briefing noted candidly that a highly skilled assembly programmer could still beat COBOL’s output in efficiency. But it also questioned how many such highly skilled programmers were available, or even needed, at the average installation.

In practice, compiler optimizations improved quickly. IBM’s Fortran team introduced the first optimizing compiler and proved that compiled code could approach or match the performance of hand-coded assembly in many cases. As these successes became evident, the purely performance-based objections began to wane.

The next argument was control. Many assembly programmers were accustomed to having intimate control over memory and CPU instructions. Surrendering this control to a compiler felt risky. There was a sentiment of: if I don’t code it down to the metal, how can I trust what’s happening? In some cases, this was about efficiency. In others, it was about debuggability and understanding program behavior. However, as compilers matured, they began providing diagnostic output and listings that actually improved understanding.

More programmers

There was an implicit fear that making programming easier might reduce the prestige or necessity of the seasoned programmer. High-level languages opened the door for newcomers to write code without years of experience in binary or assembly. The priesthood culture Backus described suggests that some experts guarded this domain closely.

Grace Hopper encountered this attitude when promoting compilers. Management and colleagues initially thought the idea of automatic programming was crazy, fearing it would make programmers obsolete. Hopper and others had to repeatedly demonstrate that these tools would augment programmers’ productivity, not replace the need for skilled people.

In fact, far from eliminating jobs, high-level languages led to an explosion in demand for programmers, because they let many more people enter the field. In the 1940s and early 50s, programming was the domain of a small elite with specialized training, often in mathematics or electrical engineering. By the 1960s, that pool had widened significantly. Fortran allowed scientists and engineers to write programs for their own problems directly, and COBOL did something similar for business: it enabled individuals with domain expertise, such as in accounting or inventory, to learn programming without needing a deep computer science background. As one retrospective put it, COBOL democratized coding. Companies could take everyday people and train them to be useful COBOL programmers in a few months, bringing a new influx of talent into the field.

The result was a dramatic expansion of the programmer workforce. The role was now seen as more akin to logical, clerical work than to electrical engineering. This set the stage for programming to become a mainstream profession rather than a niche craft.

Going higher

As compilers automated the gritty aspects of coding, the skill set for programmers evolved. There was less need to memorize machine instruction sets or calculate memory addresses by hand; these were tasks the compiler would handle flawlessly.

Instead, the emphasis shifted to problem analysis and program design. This is reflected in the terminology: the rise of the term “analyst” in job titles during the 1960s. A high-level language freed programmers to think more about what the program should accomplish rather than how to implement it at the bit level. One manager in 1963 noted that with the advent of COBOL, he valued employees who understood the business logic and could structure a program clearly. The compiler would take care of the low-level details if the high-level logic was sound.

In short, the introduction of high-level languages elevated the abstraction level at which programmers operated. By making programming more efficient and accessible, high-level languages enabled many more applications to be developed in a given time. Businesses could automate more of their operations, which in turn drove demand for more computers.

This expanded the scope of automation in society. Entire new domains for computing emerged because software could be written fast enough to respond to business needs. High-level languages were the catalyst that helped move computing from a limited experimental phase into a major industry of its own.

AI

The shift from assembly language to high-level languages with compilers in the 1950s and 1960s has striking parallels to today’s transition from manual coding to AI-assisted programming with tools like GitHub Copilot, Cursor, Aider, and others.

In both cases, a fundamental change in how code is written is occurring, leading to initial skepticism, gradual adoption, and ultimately, a redefinition of what it means to be a programmer.

The debate playing out today about what it means to be a programmer when LLMs can churn out large amounts of working code is of exactly the same shape. Let’s learn from it and not make the same mistakes.
