It’s funny how technology acronyms seem to come and go even though the underlying concepts remain the same.
There was a time when the industry was abuzz over SOA, the Service Oriented Architecture. Functions would no longer be tied to hardware, you see, but defined as services on an abstract layer of software where they could be moved and mixed and matched to do all sorts of amazing things. Then we had virtual local area networks (VLANs), which were somehow different from the true objective, software defined networking (SDN), as well as software defined storage (SDS) and an endless litany of "as a Services" (XaaS).
Nowadays, we have the Software Defined Data Center (SDDC), the culmination of virtually all of the virtualization initiatives of the past decade or more: an entire data environment that can be ported across abstract resources in the data center, the cloud, the edge, and anywhere else it needs to go. According to Orion Market Research, the SDDC sector is growing at nearly 22 percent per year, creating demand for compute, networking, storage, security, and a wide range of related technologies.
The newest entrant in this field comes from Cisco. Having already released the Application Centric Infrastructure (ACI) portfolio and the Intent-Based Networking (IBN) concept, the company is now touting Data Center Anywhere (DCA), its concept of a fully portable application and data environment that can traverse on-premises and cloud-based infrastructure. To achieve this, the enterprise will have to employ three key Cisco offerings: ACI, which is now extendable to Amazon Web Services and Microsoft Azure; the CloudCenter Suite of application lifecycle management software; and the HyperFlex (HX) converged infrastructure portfolio.
The sticking point in all of this is the fact that the HX component was engineered for Cisco's hyperconverged UCS platform. While the company has extended broad compatibility with hypervisors, container management systems and bare metal hardware, it is still a proprietary Cisco environment, with all of the profit margins and lock-in baggage that go with it. Is this necessarily a bad thing? Well, it depends on how much money you are willing to part with in order to build an SDDC vs. how much aggravation you think you can put up with in the implementation and operational phases.
The other option, of course, is to go with an open source networking solution. While that may or may not provide a cost benefit, integrating open operating systems and other tools into generic hardware is not for the technologically uninclined. At the moment, open source solutions like Arrcus' new ArcOS and Pluribus Networks' Netvisor ONE are gaining in popularity, but they have yet to reach the point where they can provide a fully integrated virtual data environment. That means the enterprise still has a lot of work to do integrating virtualization platforms, storage solutions and all the other elements needed to implement a working data center across hybrid infrastructure.
This dilemma between open and proprietary systems is an old one. Many enterprises have embraced numerous open platforms while still maintaining a strong reliance on integrated vendor solutions.
When it comes to the SDDC, however, the more federated it is the better. As increasingly intelligent management solutions seek out and deploy the lowest-cost, highest-performing infrastructure, there is a distinct disadvantage to being limited to a select vendor’s preferred partners. At the same time, smooth operations demand tight integration up and down nearly the entire IT stack, not just at the hardware or OS levels.
It seems that no matter the acronym, the challenge in deploying effective data infrastructure remains the same.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering Enterprise IT, telecommunications and other hi-tech industries.