Operating systems are a perennial weak spot in network security because they are riddled with bugs that lead to security vulnerabilities. But why is it so hard to write secure operating systems, and what can be done to make them more secure?
The answer comes down to math and history.
Historically, all enterprise operating systems -- such as Windows, Linux, and UNIX variants (including Solaris and FreeBSD) -- have had huge hunks of code at their centers. These run in kernel mode, where they have unrestricted access to all system memory, instructions and attached devices. Now here's the dilemma: code running in kernel mode has the potential to do much more damage to the smooth and secure running of a system than code running in more restricted user mode, but it can also increase the performance of a system -- especially one with limited resources.
Many years ago, when UNIX and Windows were created, the lack of resources was acute -- hardware was slow by today's standards and memory was severely limited. So to get acceptable performance it made sense to stick device drivers and other system components, such as Microsoft's Graphics Device Interface (GDI), right into the kernel. Today, though, performance is much less of an issue: hardware is far more powerful and has far more resources than was common in the past.
Yet device drivers and other "extra code" are still in the kernel -- to preserve compatibility and continuity. That's a problem because, as mentioned above, bugs in the kernel can do more damage to a system than bugs in code running in userland. So the bigger the kernel, the less reliable a system becomes. The problem is compounded because reliability and security are intimately linked: a bug may cause a system to crash, but it might also open a security vulnerability. A buffer overflow in code running in the kernel may be more easily exploitable, leading to arbitrary code execution and a complete system takeover by a hacker.
Now here's the math. Carnegie Mellon University's CyLab Sustainable Computing Consortium reckons that commercially produced code has between 20 and 30 bugs per 1,000 lines of code. Open source code may have fewer -- research by Stanford University computer science researchers suggests there may even be one or two orders of magnitude fewer. But with about 5 million lines of code in the Windows and Linux kernels, that still leaves somewhere between thousands and hundreds of thousands of bugs sitting in the kernels of common server operating systems, waiting to be exploited.
What can be done about this? The obvious answer is to take advantage of the powerful hardware available today by removing as much as possible from the kernel, according to Andy Tanenbaum, a computer science professor at the Vrije Universiteit in the Netherlands and creator of Minix, the Unix-like operating system designed for teaching. He believes that the stability and security benefits of a "microkernel" approach far outweigh the performance benefits of an operating system with a monolithic kernel that includes device drivers and other operating system code. "I would hope that such a system would be much more reliable. There is still a performance gain to be had from a monolithic kernel, but most people don't care about performance that much anymore," he says. Microkernels are already used in many embedded operating systems, but not in general purpose desktop and server operating systems.
Right now Tanenbaum can point to Minix 3 -- the latest version of Minix. It has a microkernel of just 5,000 lines of code running in kernel mode -- about 0.1% of the size of the Windows kernel. Device drivers run above the kernel in user mode, each as a separate process restricted to accessing only its own memory. He points out that with just 5,000 lines of code there may be fewer than 100 bugs in the kernel, which could gradually be found and eliminated. In fact there could be far fewer: he says drivers typically have between three and seven times as many bugs per 1,000 lines of code as the rest of the system, so moving the drivers out of kernel space removes the buggiest kernel code.
Tanenbaum is currently embarking on a project to produce a stable and secure operating system based on a similar microkernel architecture, which he intends to design with a POSIX interface (perhaps extended with Linux system calls) so that it will run UNIX (and Linux) software "without too much effort." (An alternative approach is to run a hypervisor in kernel mode, emulating a virtual machine running its own OS in user mode. But as operating systems are often paravirtualized to run in these virtual machines, and the hypervisor is adapted with an extensive API to provide services to the virtual machines, the distinction between a hypervisor and a microkernel becomes blurred, he says.)
And he is not the only person doing research into microkernel-based OSes: Microsoft has looked into it and produced an experimental one called Singularity.
It will be quite a few years yet before OSes based on this type of research find their way onto your network, but if and when they do, they should make the systems that run them more reliable, and therefore more secure -- which will make it harder (though certainly not impossible) for hackers around the world to compromise machines on your network.