An IT Manager's Strategy Guide to Solaris

Wednesday Dec 17th 2008 by Charlie Schluting

OpenSolaris 2008.11 has arrived, loaded with improvements. Is it ready for the enterprise?

The largest trend in Solaris deployment these days is to wait for end-of-life and then replace the server with Linux, where possible. But now that OpenSolaris exists, should this trend continue? Or should we be looking at OpenSolaris instead of Linux?

Last week I took the stance that OpenSolaris could in fact replace Linux. In theory—in the future, if Sun plays its cards properly—this is possible. Asking if I’d deploy OpenSolaris at work, however, is a very different question.


The kernel of the issue seems to be whether you want to use the features provided by the Linux kernel or the Solaris kernel. A few things quickly come to mind when thinking about each:

Solaris gives us DTrace and ZFS. Linux provides GFS, DRBD and many options for high availability. Not strictly kernel-related, but building on the Linux toolset, Red Hat is also developing oVirt, essentially a VMware ESX replacement. Both Red Hat and SUSE have cluster services offerings; so does Sun, though in a much more limited capacity.

Alright, we aren’t going to get anywhere by comparing features. Linux definitely has more choice, but Solaris still has benefits. If you want an extremely high performance and high capacity NAS device, the Sun x4500 series (AKA Thumper and Thor) is where it’s at right now. Because of ZFS, the throughput on these is stunning. Linux can’t touch its performance without something like ZFS, and your only other option is a NetApp. I bring this up because at my day job, we’re running three Thumpers. We frequently yearn for DRBD and other Linux technologies to build services that are more resilient without resorting to Sun Cluster, but running Linux on these is out of the question.
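To make that concrete, here is a rough sketch of what a Thumper-style ZFS setup looks like. This is an illustration only: the disk names, the pool name “tank” and the raidz2 layout are my assumptions, one arrangement among many, not a recommendation from Sun.

```shell
# Hypothetical sketch: building a large raidz2 pool on a Thumper-class box.
# Disk names (c0t0d0 ...) and the pool name "tank" are illustrative only.
zpool create tank \
    raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0

# Carve out an NFS-shared dataset with compression enabled.
zfs create tank/exports
zfs set compression=on tank/exports
zfs set sharenfs=on tank/exports

# Verify the pool layout and health.
zpool status tank
```

The point is how little work stands between raw disks and a serving filesystem; the pool, the redundancy and the NFS export are all handled by two commands and a few properties.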

Aside from the huge financial systems and niches that require Solaris for various reasons, we’re left with the majority of the IT infrastructure looking for a home. “Linux or OpenSolaris” is the question.

Management Issues

There is not much difference between managing an OpenSolaris server and a Linux server. The fundamentals are the same, and skill sets are quite transferable, especially since OpenSolaris has a GNU userland. The differences come in automation and installation, which OpenSolaris has mostly addressed at this point with PXE support.

When all is said and done, your sysadmins will spend their time either fighting to get everything installed and working, or quickly deploying solutions and then automating them and building highly available services.

OpenSolaris: Stable?

OpenSolaris is surely stable; it is the Solaris kernel, after all. This is not the type of stability I am talking about, however. Let’s say you deploy ten OpenSolaris servers today. You install everything you need, and perhaps even compile a few of your own packages, because—let’s face it—the repositories are quite lean at the moment. You set up an automated install environment, add the configurations of each machine to Puppet or Cfengine, and call it good. Just for giggles you decide to deploy five more servers to verify that everything is automated. Check; you’re done.
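As a rough sketch of that workflow, a minimal Puppet manifest for one of those hypothetical servers might look like the following. The package name, file path and module source here are assumptions for illustration, not anything from a real deployment:

```puppet
# Hypothetical Puppet manifest: keep a web server installed,
# configured from a central copy, and running.
package { 'apache22':
  ensure => installed,
}

file { '/etc/apache2/httpd.conf':
  ensure  => file,
  source  => 'puppet:///modules/web/httpd.conf',
  require => Package['apache22'],
  notify  => Service['apache2'],
}

service { 'apache2':
  ensure  => running,
  enable  => true,
  require => Package['apache22'],
}
```

Once every machine’s role is described this way, deploying five more servers really is just a PXE boot and a Puppet run, which is exactly the test described above.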

In six months, you must upgrade; maybe there is a security issue afoot. In this fast-changing OpenSolaris world, as likely as not you will find that the latest version of OpenSolaris has broken many things you depend on. Say you compiled and packaged some PHP modules you need, and the PHP version has changed, requiring you to rebuild them. That’s a simple example and an easy fix, but often the breakage is more drastic: SMF might change, or some services might be renamed. It’s hard to guess, but you get the idea.
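Concretely, the PHP case usually means recompiling each locally built extension against the new PHP. A hedged sketch of that rebuild, with a made-up extension name and source path:

```shell
# Hypothetical rebuild of a locally built PHP extension after an OS upgrade.
# First check the module API version the new PHP expects:
php -i | grep 'PHP Extension =>'

# If it no longer matches what the extension was built against,
# rebuild from source against the new headers:
cd /usr/local/src/php-myext   # hypothetical source directory
phpize                        # regenerate build files for this PHP version
./configure && make
make install                  # then repackage for your install environment
```

Multiply that by every locally packaged piece of software on every upgrade cycle, and the cost of a lean repository becomes clear.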

In the Linux world, things are much simpler. Things change just as often, but you rarely need to compile your own software, thanks to the large number of supported packages available. I hate to harp on packages, but it’s such an important issue, much more so than people realize.

Is OpenSolaris Ready for the Enterprise?

At this point, it’s difficult to say. I would not run it except in a very limited capacity. It’s not going to be like my Linux servers, which start out with an automated base load and then morph into many different types of servers. That requires stability, a large pool of available open source packages, and the knowledge that the OS environment will not change too often. Sun’s “binary compatibility” guarantee is nice, but moot most of the time: it doesn’t ensure that anything you’ve done will continue working, only that the binary format will not change. Linux’s binary format hasn’t changed in many years, either.

The trend in the industry seems to position Solaris servers as purpose-specific machines: they run the software that requires Solaris, or they serve as Thumper storage appliances. They are certainly not multi-purpose, however.

OpenSolaris can change this.

ZFS, DTrace, SMF and the rest are all very nice to have, but without the multi-purpose flexibility I alluded to, they are not reason enough to switch. My sysadmins’ time is better spent figuring out the best way to build a high-performance MySQL failover cluster, not fighting things that are easy in Linux.

For OpenSolaris to change this, it must get the Linux community on-board. Linux’s power is that you have choice. Don’t like one way of clustering services? Then just use the other. Soon, the original is abandoned and everyone flocks to the new, better method. That is what has been missing from Solaris all these years. If something doesn’t work, you wait, sometimes years. The community is great at making decisions based on what matters, not based on how much was invested in a technology.

Are 15,000 packages in a repository enough to get IT shops to abandon Linux? No. It also takes an innovative community creating things like GFS, iSCSI high availability, and the like. One company, regardless of the cool stuff it produces, is not large enough to provide enough options for everyone.

When he's not writing for Enterprise Networking Planet or riding his motorcycle, Charlie Schluting is the Associate Director of Computing Infrastructure at Portland State University. Charlie also operates, and recently finished Network Ninja, a must-read for every network engineer.
