Dijkstra would’ve (incorrectly) HATED vibe-coding
Edsger W. Dijkstra was one of the greats of computing, responsible for large parts of the way we write programs today. He was an early champion of structured programming. His missive “Go To Statement Considered Harmful” now seems blindingly obvious, but it went against the grain of programming at the time (which is why he had to write it!).
In one of his missives, “On the foolishness of ‘natural language programming’”, he makes the case against using natural language for programming:
Formal language/notation is precise! And it has precise rules for composition which enable stating complex ideas clearly.
> The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.
Math took off only after it shed the ambiguity of natural language
> Greek mathematics got stuck because it remained a verbal, pictorial activity… and the modern civilized world could only emerge … when Western Europe could free itself from the fetters of medieval scholasticism … thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole.
The trap of unknown imprecision. Natural language lets us make statements that sound reasonable, but don’t hold up to rigorous scrutiny.
> When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.
In other words, a rigorous formal notation forces clarity and precision, which is hard work, but the return is a reliable, dependable and correct system.
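A toy illustration of this point (mine, not Dijkstra’s): even a request as innocent-sounding as “remove the duplicates from this list” hides decisions that natural language never surfaces, but that a formal program is forced to make explicit.

```python
# "Remove the duplicates from this list" -- sounds precise, but the
# sentence never says which copy survives or whether order is preserved.
# A program cannot leave those questions open.

def dedupe_keep_first(xs):
    """Reading 1: keep the first occurrence of each item, preserve order."""
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def dedupe_unordered(xs):
    """Reading 2: a set drops duplicates but promises nothing about order."""
    return list(set(xs))

print(dedupe_keep_first([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Both functions satisfy the English sentence; only the formal notation reveals that they are different programs.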
Will natural language actually be easier?
> a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.
This is a subtle but crucial point. When using any interface between human and computer, there is some work that both parties have to do. For example, with a structured programming language, the human must do the work of converting their thoughts into a syntactically correct program, and the computer (compiler) does the work of translating it into machine instructions and executing it.
But the nature of the interface itself also creates an overhead for both parties to manage and interpret the communication happening over that interface. This overhead includes:
- Specification Cost: The effort needed to clearly articulate intent across the interface.
- Interpretation/Translation Cost: The computational or cognitive effort to parse, understand, and translate instructions received via the interface.
- Verification Cost: The effort to ensure the communication was successful and the intended semantics were preserved.
- Ambiguity Resolution Cost: The effort spent dealing with underspecified or potentially misinterpreted communications.
At first glance, the move to using natural language seems like it would drastically reduce the human’s effort (they wouldn’t have to write code anymore!) at the expense of drastically increasing the computation required.
Dijkstra’s argument is that the reduction in precision, the increase in ambiguity, and the resultant increase in the human’s burden to manage that ambiguity and verify correctness mean that the overall human effort could easily increase, potentially outweighing any perceived benefit from not having to learn a formal syntax. (This argument echoes Lisanne Bainbridge’s “Ironies of Automation”: increasing the automation in a machine can increase the human operator’s cognitive load.)
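Here is one concrete way that verification cost shows up in practice (a sketch of mine, with a hypothetical `parse_date` helper standing in for assistant-generated code): a natural-language request like “parse the date 03/04/2025” is silently ambiguous, so the human ends up writing a formal artifact anyway, e.g. a test, to pin down which reading was meant.

```python
from datetime import date

def parse_date(s):
    # Hypothetical assistant-generated helper. It happens to assume
    # US-style MM/DD/YYYY -- nothing in the prose request forced that.
    month, day, year = s.split("/")
    return date(int(year), int(month), int(day))

# The test is where precision re-enters: it states, formally,
# that "03/04/2025" means March 4th, not April 3rd.
assert parse_date("03/04/2025") == date(2025, 3, 4)
```

The "easy" natural-language interface didn’t eliminate the formal work; it moved it from specification time to verification time.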
That’s a pretty damning critique if applied to the vibe-coding style of programming today. He would have absolutely hated vibe-coding.
I agree completely with Dijkstra’s argument when it comes to the advantages of formal notation, but I still think vibe-coding (or AI-assisted coding, which is largely natural-language driven) is a net benefit for building applications today.
Dijkstra is talking about correctness, which rightly was the focus of early computer science. When writing code for algorithms (sorting, graphs) or basic systems (OS kernels, compilers), one needed to be precise and rigorous to gain confidence in the correctness of the code implementing the algorithm.
The basic high-stakes layers of computing (CPU instruction sets, operating systems, compilers, interpreters) are fairly settled by now, and were built with a lot of rigor. They are as “correct” and reliable as is possible to make them with human engineering.
(Aside: note that the modern computing stack got quite reliable, but largely without formal proofs. C.A.R. Hoare writes about it here (video), and attributes it to treating programming as an engineering discipline, with thorough reviews and lots of testing.)
But the apps we’re building today do not have a clear notion of correctness. Often “correctness” is discovered during building, testing and releasing the app. Think of social media apps, or CRUD enterprise apps full of tangled biz logic. How would you define “correctness” for such applications? There is a notion of “meeting the specification”, but:
The business specification for modern software is in natural language, not a formal one. Product managers and business owners write their “specs”, such as they are, in natural language. There is no formalism in which to ground the process. That imprecise natural language is taken by system architects and software engineers and turned into software, which does have a formal, executable language.
The specifiers (i.e. product managers or business owners) and the builders (designers, programmers) perform an iterative dance amongst themselves and the application to arrive at the behavior they collectively deem acceptable in their context.
Consider the absurdity of the counterfactual: if modern applications did have a clear notion of correctness, product managers would throw a PRD over the wall to engineering, and would never need to be spoken with again.
The modern activity of software engineering is more about discovery and evolution than building to a static specification. It also largely consists of snapping components together (the long-cherished dream of software engineering): databases and web frameworks already exist; you need to smush your business logic in the middle and build a differentiated user experience.
For such a scenario, AI-assisted coding that lets you quickly build an MVP, quickly change it, and do all that with a natural-language driven interface, is a boon. As folks like Claire Vo and Sahil Lavingia are pointing out, the well-known roles of PM, eng, UX are collapsing into one. An end-to-end “product engineer” can spec, design and build an application, using a fraction of the headcount (usually just themselves!) and time previously required.
The argument for the necessity of formal notation and the benefits of AI-assisted vibe-coding apply to very different layers in the computing stack. Use appropriately.