The enterprise is rightly interested in containers, which promise to further leverage organizations' investments in virtualization and to help organizations capitalize on emerging trends like Big Data and microservices.
But organizations already dealing with the networking challenges presented by virtualization may be excused for looking before they leap into containers. After all, enterprise components do not work in isolation. The traffic generated by rising numbers of containers will have to go somewhere. That will undoubtedly place additional burdens on network infrastructure.
Wall Street is already picking winners for the expected network expansion. Wells Fargo is putting its money behind well-known industry players like Cisco, Brocade, and Juniper, but also behind more specialized players like F5 Networks and Radware. The reasoning is simple: increased application density will spur demand for everything from high-speed Ethernet switches and application delivery controllers to security platforms and improved automation and orchestration. Wells Fargo noted that in the first wave of virtualization, roughly 2009 to 2012, network-related segments saw growth on the order of 20 percent. The container revolution could match that rate, particularly since it coincides with the broader shift toward software-defined networking.
The problem is that enterprises will be less willing to adopt another server-utilization product if it simply results in more money being spent on networking. Container platforms like Docker do have some rudimentary networking capabilities, but they usually require a fair amount of coding during app development. This is the primary reason Docker acquired SocketPlane last April: to build a friendlier networking ecosystem, one better suited to the dynamic, scalable workloads that containers will have to contend with in production environments. The company’s latest effort is the development of “libnetwork,” an open source, multi-platform library that features a Container Network Model (CNM) to enable networking for any container runtime. Expect libnetwork to make its way into Docker starting with the 1.7 release, due any day now.
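The CNM that libnetwork defines is built around three constructs: a Sandbox (a container's isolated network stack), an Endpoint (the connection joining a sandbox to a network), and a Network (a group of endpoints that can reach one another). A toy Python model can make those relationships concrete; the class names follow the CNM, but the reachability logic here is purely illustrative, not libnetwork's actual code:

```python
# Toy model of libnetwork's Container Network Model (CNM).
# The three class names mirror the CNM's constructs; everything
# else is an illustrative sketch, not Docker's implementation.

class Sandbox:
    """A container's isolated network stack (interfaces, routes, DNS)."""
    def __init__(self, container_id):
        self.container_id = container_id
        self.endpoints = []

class Network:
    """A group of endpoints that can communicate directly."""
    def __init__(self, name):
        self.name = name
        self.endpoints = []

    def connects(self, sb_a, sb_b):
        # Two sandboxes can talk over this network only if each
        # has an endpoint joined to it.
        members = {ep.sandbox for ep in self.endpoints}
        return sb_a in members and sb_b in members

class Endpoint:
    """Joins one sandbox to one network, akin to a veth pair."""
    def __init__(self, name, network, sandbox):
        self.name = name
        self.network = network
        self.sandbox = sandbox
        network.endpoints.append(self)
        sandbox.endpoints.append(self)

# Two containers joined to the same network can reach each other;
# a third container with no endpoint on that network cannot.
backend = Network("backend")
web, db, batch = Sandbox("web"), Sandbox("db"), Sandbox("batch")
Endpoint("web0", backend, web)
Endpoint("db0", backend, db)
print(backend.connects(web, db))    # True
print(backend.connects(web, batch)) # False
```

The point of the model is that network plumbing becomes declarative: developers attach containers to named networks rather than scripting connectivity by hand in the application code.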
Meanwhile, multiple management systems have already hit the channel, featuring various schemes to address networking and other facets of container-based architecture. Google’s Kubernetes platform, for example, is designed to foster the kind of cluster-based management that hyperscale users demand. It does this by disabling Docker’s IP masquerading technique, which limits direct access to individual containers, in favor of assigning a unique subnet to each Docker host. In this way, containers become routable on the physical network, and groups of containers can be assigned their own IP addresses, with the host serving as a container router. The upshot is a more streamlined network architecture capable of service-based load balancing, as well as other functions that will come in handy in highly dynamic data environments.
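The per-host subnet scheme is easy to sketch: carve a cluster-wide address range into one slice per Docker host, so every container gets a routable address and the host can forward traffic for its slice. The specific range (10.244.0.0/16) and host names below are illustrative assumptions, not anything Kubernetes mandates:

```python
# Sketch of per-host subnet assignment: divide a cluster-wide CIDR
# into one /24 per Docker host. Each host's Docker bridge takes the
# first address in its slice (the value you might pass to the
# daemon's --bip flag), and containers on that host draw from the
# rest, making them routable rather than masqueraded.
import ipaddress

cluster_cidr = ipaddress.ip_network("10.244.0.0/16")   # illustrative range
hosts = ["node-a", "node-b", "node-c"]                 # hypothetical hosts

# Hand each host the next /24 out of the cluster range; slices
# are non-overlapping by construction.
subnets = dict(zip(hosts, cluster_cidr.subnets(new_prefix=24)))

for host, subnet in subnets.items():
    bridge_ip = next(subnet.hosts())  # first usable address for the bridge
    print(f"{host}: bridge {bridge_ip}/24, containers in {subnet}")
```

With every host's slice known cluster-wide, the other hosts (or the physical fabric) simply need a route per subnet pointing at the owning host, which is what lets the host act as a container router.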
Container technology is a revolutionary concept for abstract data architectures, but it doesn’t look poised to be all that disruptive in the traditional sense. At first blush, it seemed like containers were the new virtualization, but it is evident now that they will likely augment legacy virtual infrastructure rather than replace it.
For this reason, container platforms like Docker have a vested interest in conforming to the broader data ecosystem. That includes the advanced virtual network constructs that are poised to transform the enterprise. The goal right now is to make sure containers fit comfortably with both legacy and emerging data architectures, but it won’t be long before experiences in the field lead to a raft of optimization projects. Then we’ll see how truly revolutionary a container-based data architecture can be.
Photo courtesy of Shutterstock.