Many IT professionals have wondered why so much software, especially software from major vendors, is in such wretched security condition. IT staff members sometimes find their days consumed with applying patches to fix bugs that should never have been allowed to see the light of day. This can be extraordinarily frustrating, especially when one considers that a good deal of a company's IT investment goes toward purchasing software - software that had purportedly been well tested before being sold to the public.
The answer to the $64,000 - or is it $64,000,000? - question, "Why is so much software from so many large companies full of so many security holes?" is the topic of this article.
Let's begin by examining some easy answers:
"All software has bugs." Yes, it's true. Just as there are no perfect people, there is no perfect software. It has even been claimed that the number of bugs per thousand lines of code isn't a whole lot less today than it was a generation ago. But the mere existence, or shall we say persistence, of bugs in software does not in itself explain why there are so many security problems.
Software QA is lame. People who work in quality assurance as software testers have never had much respect. They generally earn less than the people who create the bugs in the first place, i.e., the programmers. Software testers are often told they have only so many days to issue a passing grade on a release. Failing to certify a product for release carries strong negative implications within many companies, so the course of least resistance is often simply to approve the product. Many salaries and bonuses depend on shipping on time, and QA people are perceived as not being team players (the kiss of death in most corporations) if they don't "do their job right."
Security is different. Many companies claim that their customers don't understand security or are unwilling to "pay extra" for it. There is certainly some truth to that, which will be explored later. Security is also somewhat different in that lots of people are looking for software errors that can be exploited to gain unauthorized access, escalate privileges, crash a machine, or cause any of several other nasty consequences. Yes, lots of hackers, many of them skilled in software, security and the Internet, are spending lots of hours looking for security weaknesses in commercial software. They're not looking for the garden-variety, non-security-related bugs that so many end users encounter day after day.
All of that's fine, but it doesn't really get to the crux of the matter: Just why are so many security bugs present in so much commercial software? (Consider that in the nine days prior to writing the first draft of this paper, Microsoft had released no fewer than five security bulletins (MS00-081 to MS00-085) - more than one every other day.)
The answer, as you might suspect, has a lot to do with money. Imagine a graph where the X axis represents money spent on reducing security-related bugs and the Y axis represents the ability of the product to resist security attacks. The curve looks much like the S-shaped growth curve you may remember from biology class: Initially there isn't much return per invested security dollar. But once investment increases beyond a certain point, programmers are taught to write exploit-resistant code; the testing department is beefed up and becomes more skilled at finding security weaknesses; management is more willing to stretch out development times and ship dates to produce a better (in a security sense) product. This appears as a rapid rise on the curve, as the software becomes far more secure.
Eventually there comes a point where increasing investment in a product's security fails to provide as much security improvement (per dollar invested) as it used to. This is the famous law of diminishing returns. On our graph, the curve is still rising, but getting much flatter. In other words, spending more money to improve security still helps, but not as much as it used to.
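The shape of this curve can be sketched with a toy model. A logistic function is one common way to draw such an S-curve; the ceiling, midpoint, and steepness below are purely illustrative assumptions, not figures from this article:

```python
import math

def security_level(spend):
    """Toy S-curve: a product's resistance to attack as a function of
    dollars spent on reducing security bugs. The ceiling (100),
    midpoint (500) and steepness (0.01) are illustrative guesses."""
    return 100 / (1 + math.exp(-0.01 * (spend - 500)))

def marginal_return(spend, delta=1.0):
    """Extra security gained per extra dollar at a given spend level."""
    return (security_level(spend + delta) - security_level(spend)) / delta

# Early dollars buy little, mid-range dollars buy the most, and
# late dollars illustrate the law of diminishing returns.
for spend in (100, 500, 900):
    print(f"spend={spend}: level={security_level(spend):5.1f}, "
          f"marginal={marginal_return(spend):.4f}")
```

Running this shows the marginal return per dollar rising toward the midpoint and then falling again, even though the overall security level keeps climbing - exactly the flattening described above.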
Most companies would produce much better software, security-wise, if they invested until they hit the law of diminishing returns. However, in reality, many companies stop well before they approach the inflection point associated with those diminishing security returns. To understand why, we need to look at another curve.
Imagine a second graph where, again, the X axis represents money spent on reducing security-related bugs, but the Y axis now represents the company's bottom line: some overall measure of its return on investor equity, profitability, or sales growth rate. Unfortunately, this curve doesn't look like the first one. At some point, each incremental dollar spent on improving the security of a product begins to subtract from the company's bottom line. At least that's the way most companies calculate their bottom line, labeling money spent on improving security as an expense, not an investment.
The curve looks something like an upside-down U. And the top of the U occurs before the security of the product reaches the diminishing returns region. In other words, each additional dollar that might be spent on improving product security is made to appear as if it were directly subtracting from the company's bottom line.
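This inverted U can be sketched the same way. The profit model below is a pure assumption - security produces some revenue benefit (fewer incidents, better reputation) while every security dollar is booked as a straight expense - and the weightings are chosen only to show the shape, not to reflect real figures:

```python
import math

def security_level(spend):
    # The same kind of S-shaped security curve described earlier;
    # all parameters are illustrative guesses.
    return 100 / (1 + math.exp(-0.01 * (spend - 500)))

def bottom_line(spend):
    """Toy profit model: security yields some revenue benefit, but
    every dollar spent on it is booked as an expense. The weights
    (2.0 and 0.2) are assumptions chosen to produce the shape."""
    return 2.0 * security_level(spend) - 0.2 * spend

# Profit traces an upside-down U: it rises, peaks, then falls,
# even though security itself keeps improving with more spending.
spends = range(0, 1001, 10)
peak = max(spends, key=bottom_line)
print(f"profit peaks near spend={peak}")
print(f"security at the profit peak: {security_level(peak):.1f}")
print(f"security at maximum spend:   {security_level(1000):.1f}")
```

Past the peak, every additional dollar still raises the product's security, but lowers the modeled bottom line - which is precisely why a profit-maximizing company stops spending there.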
And the sad point is, that's probably true. The way software is developed, marketed, sold, installed, used and patched makes it appear (to people who measure such things) that it really is far more profitable to produce software with many security bugs. In other words, at many companies, security bugs are more a public relations problem than anything else. It's simply less expensive to issue patches (when necessary) than to develop more robust software in the first place.
But how does this explain why some organizations have a reputation for producing products with relatively few security vulnerabilities (e.g., the OpenBSD project, RSA Security), while other companies (which for legal reasons will not be named) have a reputation for virtually ignoring security? The short answer is that companies that do their best to sell very secure software have that fact built right into their business model, and the peak of their U curve occurs much farther to the right than at less security-minded companies.
In essence, some companies would suffer significant financial damage if their products were not highly secure. Companies that sell security bug-infested software have managed to create a much lower set of expectations. They are able to externalize the costs they would otherwise have to bear (to produce better software) onto their customers. The customers, i.e., most of the software-using public, end up footing the bill.
So, you might ask, "Why does this have to be this way, and what can I do about it?" There really isn't a simple answer. As long as most companies and individuals buy the latest software as soon as it comes out, with little regard for security, we usually get what we pay for: new software and new security bugs. We make an implicit decision. We are willing to purchase some software because we feel we have to: Our customers are using it; our competitors are using it; the old version runs too slowly; we need new features; the price was very competitive.
What can we actually do about this? It sounds easy, but it's really very difficult: We have to start holding companies to higher standards. We have to stop buying software from companies with reputations for low security standards; we at least have to resist the urge to buy the latest and greatest. If enough people demand better quality, and purchase from companies that offer it, we will eventually have it.
SecurityPortal is the world's foremost on-line resource and services provider for companies and individuals concerned about protecting their information systems and networks.
The Focal Point for Security on the Net (tm)