The Upside of Complexity
Consider the task-scheduling function of the Linux kernel, back in version 1.2, released in 1995. It starts with this comment:
```c
/*
 *  'schedule()' is the scheduler function. It's a very simple and nice
 * scheduler: it's not perfect, but certainly works for most things.
 * The one thing you might take a look at is the signal-handler code here.
 *
 *   NOTE!! Task 0 is the 'idle' task, which gets called when no other
 * tasks can run. It can not be killed, and it cannot sleep. The 'state'
 * information in task is never used.
 *
 * The "confuse_gcc" goto is used only to get better assembly code..
 * Dijkstra probably hates me.
 */
```
After that, it consists of 82 lines of code. The entire file, sched.c, weighs in at only 777 lines of code. The directory that contains it, "kernel", has 16 C files, totaling just over 5,000 lines of code.
Fast forward sixteen years to 2011 and look at the same function in version 3.0. Amazingly, it's still about 80 lines of code, but that's just a front. The file sched.c now has 9,337 lines of code, nearly double what the entire kernel directory contained back in 1.2. And the directory as a whole now holds a whopping 174,000-plus lines of code!
What is the difference between these two? Why is the second one so much larger and more complex? Is that a good thing?
Let’s take a look at all the things that the second scheduler does, that the first one doesn’t. These are a few that pop into my mind, and I’m probably missing a whole bunch.
- The kernel is preemptible.
- It runs on multi-processor systems.
- It uses a much more sophisticated scheduling algorithm.
- Control groups can be used to isolate resource consumption.
- It has tracing infrastructure.
- It has profiling infrastructure.
…and the list goes on and on.
Looked at this way, complexity is a sign of growth and vitality. In some sense, complexity is the result of progress. The steady increase of complexity only looks bad until you compare it with the opposite scenario: a steady decrease in complexity. Imagine what your system would look like if it always decreased in complexity with time. What would the logical culmination of that be?
As engineers, we constantly struggle with complexity. The number one thing engineers whine about is the complexity of the systems and tools they work with. But it is engineers themselves who brought the system to that level of complexity. It is engineers who add the next feature, the one that will incrementally increase the complexity of the system.
There are two types of complexity, and it is important to know which one you're dealing with, so that you can fight against the right one.
There is “real” complexity. This is the complexity that comes from real-world constraints and requirements. Dealing with this type of complexity, and choosing to take it on, is almost always an economic decision, one that weighs the cost of leaving the system as it is against the cost of building and maintaining the additional complexity. For example, you could simply beef up your hardware and continue to run your simple, unoptimized program, or you could pay for the manpower to optimize it, and maintain the resulting program. Which side of the equation saves you money depends on your particular situation.
And then there's "artificial" complexity. This is complexity that's created out of whole cloth, not in response to any real constraint or requirement. This is the complexity of grand visions and cute designs. This is the complexity that often arises from having too many resources, so many that you never have to choose or cut down or budget. This is the complexity we should all be fighting against.