The rise of fimperative programming
I love functional programming just as much as the next guy, but I’m wary when someone unabashedly declares it the savior from our software crisis.
For the longest time, it was thought that functional programming would save us from the multicore crisis, because, well, if there is no shared state then every concurrent execution can just blaze ahead at full speed, can’t it? But the devil really is in the details. Persistent data structures are not always performant. Sometimes it is cheaper to update in place (Eeek! Assignment!).
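To make that tradeoff concrete, here is a minimal Python sketch (the function names are mine, purely for illustration): a functional-style update copies the structure to preserve the old version, while an imperative update mutates it in place.

```python
def persistent_set(vec, i, value):
    """Functional-style update: copy the list, change one slot, return the copy.
    Cost here is O(n) per update; real persistent vectors use trees to do
    better, but still pay allocation and pointer-chasing overhead."""
    new_vec = list(vec)
    new_vec[i] = value
    return new_vec

def inplace_set(vec, i, value):
    """Imperative update: O(1), no allocation -- but the old version is gone."""
    vec[i] = value
    return vec

v0 = [0, 0, 0]
v1 = persistent_set(v0, 1, 42)
print(v0)  # [0, 0, 0] -- the original is preserved
print(v1)  # [0, 42, 0]
inplace_set(v0, 1, 42)
print(v0)  # [0, 42, 0] -- the original is overwritten
```

The persistent version buys you cheap sharing between versions (handy for concurrency), at the price of copying; the in-place version is faster but destroys history.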
Also, as Cedric Beust points out, in the meantime the imperative camp did a great job of making concurrency work, by putting out libraries that solved the hard problems and yet exposed easy enough APIs. Paradigms win and lose not just by their theoretical beauty but by how much engineering effort goes into making them viable. As an example, this is what it takes to get a high-performance concurrent dictionary.
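A production concurrent dictionary runs to thousands of lines; as a toy illustration of just one of its core ideas, lock striping, here is a Python sketch. The class and its API are my own invention for this post, not any real library:

```python
import threading

class StripedDict:
    """Toy concurrent dictionary using lock striping: the key space is
    partitioned across several locks so operations on unrelated keys
    don't contend for a single global lock. Real implementations (e.g.
    java.util.concurrent.ConcurrentHashMap) go much further: lock-free
    reads, per-bin locking, cooperative resizing, and so on."""

    def __init__(self, num_stripes=16):
        self._locks = [threading.Lock() for _ in range(num_stripes)]
        self._buckets = [dict() for _ in range(num_stripes)]

    def _stripe(self, key):
        # Map each key to one lock/bucket pair.
        return hash(key) % len(self._locks)

    def put(self, key, value):
        i = self._stripe(key)
        with self._locks[i]:
            self._buckets[i][key] = value

    def get(self, key, default=None):
        i = self._stripe(key)
        with self._locks[i]:
            return self._buckets[i].get(key, default)

d = StripedDict(num_stripes=4)
d.put("answer", 42)
print(d.get("answer"))  # 42
```

Even this simplification shows the point: the wins come from careful imperative engineering (locks, partitioning, memory layout), not from the paradigm itself.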
But all the same, the functional camp has much to be proud of. The trend of mainstream languages incorporating functional features has gained a lot of momentum and shows no signs of reversing. Anonymous lambda functions. First-class functions. List comprehensions. A decade ago they were FP-only, and now they are part of most programmers’ vocabularies. Even C++ has lambda functions now! Maybe all languages are destined to asymptotically approach Lisp.
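For readers who haven’t met all three features, here they are side by side in Python, which absorbed them from the FP world long ago:

```python
# Anonymous lambda function: a function with no name, defined inline
square = lambda x: x * x

# First-class functions: functions passed around as ordinary values
def apply_twice(f, x):
    return f(f(x))

# List comprehension: build a list declaratively, no loop-and-append
squares = [square(n) for n in range(5)]

print(apply_twice(square, 3))  # 81
print(squares)                 # [0, 1, 4, 9, 16]
```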
We’re seeing functional-inspired features creep into mainstream imperative object-oriented languages. Richard Minerich calls it expression-oriented programming. I half-jokingly call it fimperative programming. But what’s driving this isn’t so much the need for performance or dealing with multicore, but just more succinct and cleaner code.
Sidenote: while functional programming is often presented as the antithesis of the now-dominant object-oriented model, immutability and getting rid of state (i.e., no assignment!) were central to Alan Kay’s early ideas of what objects should be like.