CPU pipelines and the structure of work
I want to expand on what I meant in the following tweet a few days ago.
"The design of CPU pipelines can teach us some deep things about how work should be structured." — Vivek Haldar (@vivekhaldar), May 13, 2013
I think the modern CPU is an excellent parable for the nature of modern work itself. In a sense, it is the purest expression of Taylorism.
(Prerequisite: none of this will make sense without a basic knowledge of computer architecture, and of how CPUs are structured at a high level as a series of stages — a pipeline. The canonical reference is Computer Architecture, by Hennessy and Patterson.)
The goals of structuring the work are efficiency (utilizing resources optimally), latency (minimizing the time it takes from start to finish), and throughput (maximizing the work completed per unit time).
In order to achieve these goals, work is structured into a sequence of stages, with each stage performing a narrow, well-defined sub-task that is relatively easy and simple.
Work should not pile up. Inventory is anathema. There is a well-defined and bounded buffer between stages to hold intermediate results.
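The structure described so far — narrow stages connected by bounded buffers — can be sketched in a few lines. This is a toy model, not how any real CPU is implemented; the stage names and the transformations are invented for illustration. The key detail is the bounded queues: if a downstream stage falls behind, `put()` blocks, so inventory cannot pile up unboundedly.

```python
import queue
import threading

def stage(work, inbox, outbox):
    # Pull items from the bounded inbox, perform this stage's narrow,
    # well-defined sub-task, and push results into the bounded outbox.
    while True:
        item = inbox.get()
        if item is None:        # sentinel: no more work, pass it on
            outbox.put(None)
            return
        outbox.put(work(item))

# Bounded buffers between stages: a slow consumer exerts backpressure
# on its producer, because put() blocks when the queue is full.
q_in = queue.Queue(maxsize=2)
q_mid = queue.Queue(maxsize=2)
q_out = queue.Queue(maxsize=2)

# Two stages, each doing one simple, specialized transformation.
threading.Thread(target=stage, args=(lambda x: x * 2, q_in, q_mid)).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q_mid, q_out)).start()

for item in [1, 2, 3]:
    q_in.put(item)
q_in.put(None)

results = []
while (r := q_out.get()) is not None:
    results.append(r)
print(results)   # [3, 5, 7]
```

The same shape scales to any number of stages: each one only knows its inbox, its outbox, and its one narrow job.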
Latency and throughput are limited by the slowest stage in the pipeline. Thus, it is important that each stage be roughly equal in complexity and complete its work in about the same time. This also helps to minimize temporary inventory.
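A toy calculation makes the bottleneck effect concrete. The stage times below are made-up illustrative numbers, not measurements of any real CPU; the point is only that a clocked pipeline must tick at the pace of its slowest stage.

```python
# Hypothetical per-stage times, in nanoseconds (illustrative numbers only).
stage_times = {"fetch": 1.0, "decode": 1.0, "execute": 3.0, "writeback": 1.0}

# A clocked pipeline ticks at the pace of its slowest stage.
clock = max(stage_times.values())      # 3.0 ns per cycle

# Throughput: once the pipeline is full, one item completes per cycle.
throughput = 1.0 / clock               # items per ns

# Latency: each item passes through every stage, one cycle each.
latency = clock * len(stage_times)     # 12.0 ns

print(clock, throughput, latency)
```

Note that the total work here is 6 ns, but the unbalanced `execute` stage forces a 3 ns clock. Rebalancing the same work into four 1.5 ns stages would halve the clock, doubling throughput and halving latency — which is exactly why balanced stages matter.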
Specialization is key. As a corollary of all the above points, each stage is naturally pushed to a narrow specialization.
Complexity in the overall system is acceptable, and indeed often inevitable, because the above rules gravitate naturally towards a deep pipeline with a large number of stages. That is fine, as long as each individual stage stays simple.