Vivek Haldar

The cost of memory safety

John Regehr asks:

If the cost of memory safety bugs in C/C++ codes is significant, and if solutions are available, why aren’t we using them in production systems?

He offers several possible reasons:

  • The cost of enforcement-related slowdowns is greater than the cost of vulnerabilities.
  • The cost due to slowdown is not greater than the cost of vulnerabilities, but people act like it is because the performance costs are up-front whereas security costs are down the road.
  • Memory safety tools are not ready for prime time for other reasons, like maybe they crash a lot or raise false alarms.
  • Plain old inertia: unsafety was good enough 40 years ago and it’s good enough now.

I’ve been thinking about this too. This was a hot research topic not so long ago, but seems to have faded away. I think there are a few additional factors at play here:

  • The cost of memory unsafety is much higher for client software than for server software. As our world moves towards thin and light clients (browsers, apps) that access heavy backend services, the urgency of the memory safety problem has diminished. Paying a small overhead (say, 5-10%) for memory safety on a laptop or desktop does not sound like much, but adding 5% CPU load across an entire datacenter is material. At the other extreme, on phones and tablets, battery life is precious, and any additional drain on it needs a strong justification.
  • In tandem, other security problems, such as cross-site scripting and SQL injection, have become important to tackle.
  • Tools like AddressSanitizer have done a pretty good job of weeding out the vast majority of memory safety bugs early in development (via unit tests), shifting the overhead from production runtime to compile/build/test time (a brief sketch follows this list). The tradeoff is giving up soundness: you are not guaranteed to catch every bug, only the ones your tests actually exercise.

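To make that last point concrete, here is a minimal sketch of the workflow. The file name and the bug are invented for illustration; the sanitizer flag itself is the standard one supported by both Clang and GCC.

    /* overflow.c -- hypothetical test case for illustration */
    #include <stdlib.h>

    int main(void) {
        int *a = malloc(10 * sizeof(int));
        int x = a[10];   /* out-of-bounds read, one element past the allocation */
        free(a);
        return x;
    }

Built plainly, this read would most likely go unnoticed. Built with "clang -fsanitize=address -g overflow.c -o overflow" (or the equivalent GCC invocation) and run as part of a test suite, AddressSanitizer aborts at the bad read with a heap-buffer-overflow report pointing at the faulting line. The overhead is paid in the instrumented test build rather than in the binary shipped to production, and a bug that no test happens to trigger is still missed, which is the soundness tradeoff mentioned above.
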
All of these combined haven’t “solved” the C/C++ memory safety problem, but they have brought it down to a level where it is manageable, and its current cost is judged to be comparatively low.