The Assembly Language of Knowledge Work
The work most of us are doing right now (the clicking, the tabbing between windows, the copy-pasting, the endless typing interspersed with bursts of genuine cognition) will soon seem as archaic as programming in assembly language. It is the low-level instruction set of knowledge work, and it is about to be automated away.
The Atom of Work: The Read-Cognify-Write Loop
Break down any task performed by a knowledge worker, and you find the same atomic structure repeating itself:
- Read: You ingest information. This could be reading an email, listening in a meeting, analyzing a spreadsheet, or even reviewing the output of your previous task.
- Cognify: You process that information. This is the black box of thought where synthesis, analysis, extrapolation, or creativity happens, and where you form a judgment, a decision, or the next step.
- Write: You persist that new insight. This could mean writing an email, updating a document, creating a presentation slide, or writing a piece of code.
Complex projects are a long series of these read-cognify-write steps. A massive undertaking like launching a new product is composed of thousands of smaller RCW loops. Sometimes they’re strung together in bursts carried out in a few minutes; other times there are weeks-long pauses between them.
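As a minimal sketch (the names are illustrative, not from any real framework), a single RCW step can be modeled as three functions composed in sequence, and a larger task as a chain of such steps:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Artifact:
    """Anything a Write step persists: an email, a document, a slide, a piece of code."""
    name: str
    content: str

def rcw_step(read: Callable[[], str],
             cognify: Callable[[str], str],
             write: Callable[[str], Artifact]) -> Artifact:
    """One atom of knowledge work: Read -> Cognify -> Write."""
    context = read()            # Read: ingest information
    insight = cognify(context)  # Cognify: judgment, synthesis, analysis, creativity
    return write(insight)       # Write: persist the new insight

# A larger task is a chain of these atoms, each one reading the previous output.
def run_chain(initial: Artifact, steps: list[tuple[Callable, Callable]]) -> Artifact:
    artifact = initial
    for cognify, write in steps:
        artifact = rcw_step(lambda a=artifact: a.content, cognify, write)
    return artifact
```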
The RCW steps need a structure to live within. First, you must create a project plan. To do that, you break the task down: “analyze market data” (a series of RCW loops), “draft a budget” (more loops), “assign team roles” (even more loops). At the end, you perform a final synthesis step, combining the outputs of all these threads into the final artifact—the project plan.
That sounds like the basic plan-and-execute structure of most AI agents today, and it is.
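A minimal sketch of that shape, with hard-coded stand-ins where a real agent would call a model (the subtasks and names are purely illustrative):

```python
def plan(goal: str) -> list[str]:
    # Planning: decompose the goal into subtasks, each itself a series of RCW loops.
    # A real agent would derive these with a model call; they are hard-coded here.
    return ["analyze market data", "draft a budget", "assign team roles"]

def execute(subtask: str) -> str:
    # Execution: run the RCW loops for one subtask and return its output.
    return f"output of '{subtask}'"

def synthesize(goal: str, outputs: list[str]) -> str:
    # Synthesis: combine the outputs of all the threads into the final artifact.
    return f"Project plan for {goal}:\n" + "\n".join(f"- {o}" for o in outputs)

goal = "launching the new product"
print(synthesize(goal, [execute(t) for t in plan(goal)]))
```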
For decades, humans have been the sole engine capable of this entire process. We hold the high-level intent for a project over weeks, months, or even years, manually “compiling” that intent into countless atomic RCW tasks. We are the planners, the executors, and the synthesizers. But this is changing.
Agents, however, have been progressing to hold intent over longer and longer tasks. The resulting curve, the length of task an AI can complete over time, shows up as a straight line on a log chart, i.e. a clean exponential.
The AI “Compiler” for Knowledge Work
The latest generation of reasoning AI models can now perform both the low-level execution and the high-level planning and orchestration. They can not only perform an individual RCW step but also do the planning required to decompose a complex goal into those smaller steps and then synthesize the final output.
AI is becoming like a compiler for knowledge work. Tiny RCW loops are the target instruction set, the assembly language, that these compilers emit.
When a programmer writes in a high-level language like Python, they are expressing their intent. They don’t worry about memory allocation, CPU instruction sets, or register management. The compiler handles that, translating the simple, human-readable intent into efficient, low-level machine code. Yet, there was a time when humans had to painstakingly handle moving values between memory and registers, manage the stack, and write assembly language.
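You can see the gap between the two levels directly: CPython’s dis module prints the bytecode its compiler emits for a function, the stack-shuffling instructions no Python programmer writes by hand.

```python
import dis

def total(prices, tax_rate):
    # High-level intent: sum the prices and apply tax.
    return sum(prices) * (1 + tax_rate)

# Print the low-level instructions the compiler generated from that intent.
dis.dis(total)
```

Almost nobody reads that output in day-to-day work; they simply trust the translation.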
Similarly, we will soon stop managing the low-level “assembly language” of work. Instead of manually executing hundreds of RCW loops, we will state our high-level intent—“Create a go-to-market strategy for Project X”—and the AI agent will compile it, orchestrating the necessary steps to produce the final output.
The Bitter Lesson and the Black Box
Ethan Mollick makes a similar point about eschewing this low-level work in favor of clearly stating intent, drawing an analogy to Sutton’s “The Bitter Lesson.” The lesson is that we shouldn’t try to force AI to replicate the exact steps a human would take. Instead, we should give it a clear goal and a robust way to measure success, then let it find its own, potentially alien, path to the solution.
This is exactly how we treat compilers. A programmer rarely, if ever, inspects the assembly code their compiler generates. They trust it because it’s proven to be efficient and correct. We must adopt a similar mindset for AI agents. The key is not to micromanage their internal plan but to focus on two things:
- Clearly Stated Intent: The quality of your output depends entirely on the clarity of your prompt. Our skill will shift from execution to direction.
- Inspectable Artifacts & Evals: We don’t need to know how the AI did it, but we need to trust the result. This means demanding outputs with citations, verifiable reasoning, and clear diffs showing changes.
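As a minimal sketch of what such a check might look like (the citation format and rules here are invented for illustration), an eval can run over the artifact alone, never over the agent’s internal plan:

```python
import re

def check_artifact(text: str, allowed_sources: set[str]) -> list[str]:
    """Cheap, inspectable checks on an agent's output; we never look at how it got there."""
    problems = []
    # Every bracketed citation like [source-name] must point at a known source.
    for cite in re.findall(r"\[([^\]]+)\]", text):
        if cite not in allowed_sources:
            problems.append(f"unknown citation: [{cite}]")
    # Every paragraph should carry at least one citation.
    for i, para in enumerate(p for p in text.split("\n\n") if p.strip()):
        if "[" not in para:
            problems.append(f"paragraph {i + 1} has no citation")
    return problems

report = "Market grew 12% last year [q3-market-report].\n\nCompetitor X raised a new round."
print(check_artifact(report, {"q3-market-report"}))
# -> ['paragraph 2 has no citation']
```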
Optimizations for Knowledge Work
Pushing the compiler analogy further, you could “optimize” knowledge work much as a compiler optimizes code when emitting assembly. AI agents will apply the same principles to knowledge-work workflows.
Consider these classic compiler optimizations and their application to corporate knowledge work (a toy sketch of the first two follows the list):
- Dead Code Elimination: A human, working from habit, might waste a day generating a report that no one uses in the final presentation. An AI agent, seeing the full dependency graph of the project, would identify the task as “dead code”—its output is never used—and eliminate it before it even begins.
- Common Subexpression Elimination: A human might analyze a dataset for a summary email, then someone else might re-analyze the same raw data for a detailed slide deck. An agent would perform the core analysis once, cache the results (the “common subexpressions”), and then reuse them.
- Instruction Reordering & Parallelization: A human preparing a competitive analysis would research funding, then product features, then press releases, one after another. An agent would see these as independent tasks and execute them in parallel, cutting the time-to-completion dramatically.
- Loop Unrolling: A human sending 50 personalized follow-up emails performs a slow, repetitive loop with high cognitive overhead for each iteration. An agent would “unroll the loop,” pulling all the data in one batch operation and generating all 50 drafts simultaneously.
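Here is the promised toy sketch of the first two, on an invented task graph: treat the project as a dependency graph, schedule only tasks whose output the deliverable actually consumes, and memoize shared analysis so it runs once.

```python
from functools import cache

# Toy task graph: each task lists the tasks whose outputs it consumes.
tasks = {
    "raw_data":       [],
    "core_analysis":  ["raw_data"],
    "summary_email":  ["core_analysis"],
    "slide_deck":     ["core_analysis"],
    "legacy_report":  ["raw_data"],      # produced out of habit; nobody consumes it
    "final_package":  ["summary_email", "slide_deck"],
}

# Dead code elimination: keep only tasks reachable from the deliverable.
def live_tasks(graph: dict[str, list[str]], deliverable: str) -> set[str]:
    live, stack = set(), [deliverable]
    while stack:
        task = stack.pop()
        if task not in live:
            live.add(task)
            stack.extend(graph[task])
    return live

print(live_tasks(tasks, "final_package"))
# 'legacy_report' is never scheduled: its output is never used.

# Common subexpression elimination: run each shared task once and reuse the result.
@cache
def run(task: str) -> str:
    inputs = [run(dep) for dep in tasks[task]]
    return f"{task}({', '.join(inputs)})"

run("final_package")  # 'core_analysis' executes once; both consumers reuse it.
```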
From Assembly to Intent
The knowledge work we do today is a mesh of high-level strategic thought and low-level, manual execution. We are the thinkers and the typists, the strategists and the assemblers. The AI compiler is coming for the assembly part of the job.
Our value and focus will shift entirely to the realm of intent. Our core tasks will be to ask the right questions, to define goals with clarity and precision, and to build the robust evaluation systems that ensure the quality and reliability of the output. The painstaking, manual process of bringing those ideas to life will be abstracted away, just as compilers abstracted assembly language away behind high-level languages.