Several years ago, in a nod to Linux creator Linus Torvalds, software developer Eric S. Raymond coined what he called Linus's Law:
"Given enough eyeballs, all bugs are shallow."
So goes the standard argument in favor of open source – that more "eyeballs" make for better quality control and better security. It has become the rallying cry for open-source enthusiasts, particularly in the aftermath of Edward Snowden's revelations last year about NSA spying and government infiltration of technology. Reports surfaced that Microsoft, Google, Yahoo, and other tech heavies were compromised. According to the open-source narrative, the Snowden documents proved that commercial software couldn't be trusted.
Is open-source software free of NSA backdoors?
"There have long been rumors in the networking community about possible backdoors in major networking vendors' firmware and network stacks," Nicholas Merrill, executive director of The Calyx Institute, told Enterprise Networking Planet in an interview last year. "I would suggest…that people strongly consider open-source solutions since their source code is open for peer review and auditing."
Government snoops, however, apparently have no qualms about attempting to hide vulnerabilities in plain sight. For instance, during a keynote panel discussion at this year's LinuxCon, Linus Torvalds was asked if the federal government had ever asked him to insert a backdoor into the Linux kernel. Torvalds verbally told the audience "No" – while nodding his head yes.
Additionally, among the Snowden leaks was confirmation that the NSA had inserted a self-serving vulnerability into a pseudorandom number generator and then worked to get it adopted as an international standard.
Although it has been confirmed that the US government pressures and works with commercial vendors to insert backdoors into their software, it also, apparently, participates in open-source efforts. After all, if open-source development is "open" to everyone, it's just as open to the government and to anyone else who wishes to weaken software security.
Other factors demonstrate that Linus's Law is just plain false. In his 2003 book Facts and Fallacies of Software Engineering, Robert L. Glass levels numerous criticisms against the "law," writing that, according to research, the law of diminishing returns is at work when it comes to code review. Specifically, research suggests that having more than two to four code reviewers is not particularly useful.
"[W]e shouldn't think that a Mongolian horde of debuggers, no matter how well motivated they are, will produce an error-free software product," writes Glass, "any more than any of our other error removal approaches will."
Glass goes on to point out that no scientific evidence exists to show that open source is safer, more reliable, or less buggy. He also observes that the bugs found by the many "eyeballs" may not be the most serious. Other commentators have explicitly posited that security bugs are among the least likely to be found in open-source software because security review is more boring and more difficult than tending to features.
Indeed, who's to say that those debuggers – those eyeballs – are necessarily thorough, competent, or even trustworthy?
Expert eyeballs and the fall of TrueCrypt
In the wake of the NSA scandal, one tech academic had this same question.
Just over a year ago, Matthew Green, a cryptography and computer science professor at Johns Hopkins University, called for thorough, professional audits of major open-source solutions like OpenSSL and TrueCrypt.
"[O]pen code like OpenSSL needs more expert eyes," wrote Green. "Unfortunately there's been little interest in this, since the clever researchers in our field view these problems as 'solved' and thus somewhat uninteresting."
Through a substantial crowdfunding effort that raised more than $70,000, Green and others managed to get the ball rolling on a professional audit of TrueCrypt. The fallout since has been devastating.
TrueCrypt initially appeared to earn a tentative, partially passing grade in April, when preliminary analysis of TrueCrypt's bootloader revealed "no evidence of backdoors or otherwise intentionally malicious code" therein. Still, the auditors found eleven vulnerabilities in the bootloader alone, at least four of which were of "Medium" severity. They were also highly critical of the code quality, finding it "lax" and noting that it would 1) make it difficult to find bugs in the future and 2) potentially make it difficult and/or discouraging for others to join or meaningfully participate in the TrueCrypt project.
Then, as the audit progressed more deeply, TrueCrypt vanished.
On May 28, only 44 days after the initial audit report was released, the official TrueCrypt download page was replaced with a warning declaring that "[u]sing TrueCrypt is not secure as it may contain unfixed security issues." No further explanation was provided, leaving pundits to speculate and leaving the trustworthiness of TrueCrypt indefinitely tainted at best. The audit proceeds.
Meanwhile, the TrueCrypt website's warning includes instructions for TrueCrypt users on how to migrate their data…to BitLocker, a Microsoft product.
Bugs in plain sight: Heartbleed and Shellshock
It bears remembering that, free and open-source as the product may be, it took a major grassroots publicity effort and tens of thousands of dollars to bring this much attention to what may be the tip of the iceberg of TrueCrypt's weaknesses, and it took several months to do so. Other open-source vulnerability discoveries that have recently grabbed headlines were around for several years.
The Heartbleed bug, a severe vulnerability resulting from no more than a missing bounds check, persisted undetected in OpenSSL for nearly two and a half years after a volunteer coder inadvertently introduced the bug while working on the project on New Year's Eve, 2011.
Shellshock, a bug detected in September (less than six months after the discovery of Heartbleed), is an even more severe open-source vulnerability. Present in Bash, a widely used Unix shell, Shellshock allows an attacker to remotely execute commands. Shellshock was introduced to Bash in August 1989, making it older than many MBAs, having gone undetected for over 25 years.
It would, of course, be silly to say that any of the above is cold, hard proof that open source is inherently worse than proprietary code. The open-source community's messaging that open source is inherently better and more secure, however, is flatly disingenuous.
Joe Stanganelli is a writer, attorney, and communications consultant. He is also principal and founding attorney of Beacon Hill Law in Boston. Follow him on Twitter at @JoeStanganelli.