Editor's Note: Occasionally, Enterprise Networking Planet runs guest posts from authors in the field. Today Michel Burger, lead architect for the service provider group at API company Apigee, shares his thoughts on how network operators can bring value to the new generation of distributed data centers.
By Michel Burger
Thanks to mobility, the Internet of Things, and the march towards 5G, network operators find themselves uniquely positioned to provide network-embedded IT resources like compute and storage to cloud services and applications. This provides an opportunity to remain integral to the IT value chain, instead of being left behind by cloud service providers and relegated to the role of transport network provider.
Today, cloud service providers (infrastructure, consumer, even enterprise) find themselves facing two problems: how to deal with network latency and how to improve backend efficiency. Network operators and application developers alike have roles to play in solving these problems.
How cloud service providers' latency and backend efficiency solutions diminish network operators' roles
Most mobile apps rely on cloud services, accessible through an API. No matter how optimized the network, the cloud services are at least 10 milliseconds away from the device. In an effort to bring the data and computing closer to the device and thus minimize latency, cloud service providers can deploy a few tricks. They can provide blades for network operators to install within their networks, install boxes on premises, and/or force more logic onto devices. This last solution unfortunately causes increased battery consumption.
Additionally, new mobile apps trigger small workloads in data centers, either to pre-render content or to observe device activities in order to preempt and optimize actions, such as downloading the full content of a book after the first chapter was read.
Given the relatively fixed size of virtualization and cloud overhead compared to the size of the workload, these small workloads negatively impact the efficiency of a data center. Here too, cloud providers typically try to mitigate the effects by offloading the small workloads onto on-premises boxes, network blades, or the devices themselves, resulting in an additional increase in battery consumption.
As a result, on-premises boxes, network blades, and end user devices are becoming a distributed extension of the data center. And with data centers becoming increasingly distributed, many of today’s cloud service providers are looking to network virtualization and Software Defined Networking (SDN) to create an overlay on top of the operator’s network, treating that network as a tunnel-based transport facility.
The network operator’s role thus shrinks to the management of dumb tunnels, while cloud service providers push application-level signaling inside these tunnels to manage the remote blades. Application-level signaling provides high value because it carries the messaging between an application and a cloud service, or between cloud services, communicating what is done and what is needed.
What will matter in the future: APIs and elasticity
Meanwhile, two powerful trends are enabling developers to take best advantage of these distributed data centers. Network operators would do well to stay on top of these trends.
APIs are vital to the new data center paradigm. Software development has evolved from a world of sub-routines, to local libraries, to remote libraries, culminating today in cloud services exposed with APIs. An application that calls an API is calling a remote service, so building applications with APIs forces a distributed solution. Today’s applications are therefore built with a set of independent components that can be distributed on local or remote IT resources.
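A minimal, hypothetical sketch can make this concrete: because components interact only through API calls, the caller never cares where a component runs. All names below (`ApiClient`, `render_chapter`, and so on) are invented for illustration; a real system would make network calls rather than local dispatch.

```python
# Hypothetical sketch: an application built from independent components,
# each reachable only through an API. In production, ApiClient.call would
# be an HTTPS round trip; here we dispatch locally so the sketch runs.

class ApiClient:
    def __init__(self, registry):
        self.registry = registry  # maps service name -> callable endpoint

    def call(self, service, **params):
        # The caller does not know (or care) whether the service runs on
        # the device, at the network edge, or in a back-end data center.
        return self.registry[service](**params)

# Two independent components, distributable separately.
def render_chapter(book_id, chapter):
    return f"content of {book_id}, chapter {chapter}"

def recommend_next(book_id):
    return [f"{book_id}-sequel"]

api = ApiClient({"render": render_chapter, "recommend": recommend_next})

page = api.call("render", book_id="b1", chapter=1)
suggestions = api.call("recommend", book_id="b1")
```

The point of the sketch is the indirection: swapping the registry for remote endpoints moves a component into the network without changing any caller.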
Elasticity is also critical. One of the main characteristics of a cloud service is the degree to which it can adapt to workload changes by provisioning and de-provisioning resources automatically to match the current demand. Today’s elastic requests are mostly driven by a central controller or broker responding to load by offering resources, often within a specific data center.
Loads, analytics-driven events, and the characteristics or states of the resources themselves will all drive tomorrow’s elastic responses. Tomorrow’s brokers will be able to provision resources from across a hybrid datacenter composed of public and private data center resources and resources in a vastly distributed virtualized software-defined network.
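Today’s load-driven model can be sketched in a few lines: a central controller scales the instance count to track demand. The class names, capacity figure, and thresholds below are assumptions for illustration, not any vendor’s implementation.

```python
# Minimal sketch of controller-driven elasticity: a central controller
# provisions and de-provisions instances to match the current load.
# capacity_per_instance (100 requests/s) is an assumed figure.

def desired_instances(load, capacity_per_instance=100):
    """Number of instances needed to serve `load` requests per second."""
    return max(1, -(-load // capacity_per_instance))  # ceiling division

class Controller:
    def __init__(self):
        self.instances = 1

    def reconcile(self, load):
        target = desired_instances(load)
        delta = target - self.instances  # >0 provision, <0 de-provision
        self.instances = target
        return delta

ctrl = Controller()
ctrl.reconcile(350)  # provisions 3 more instances (1 -> 4)
ctrl.reconcile(80)   # de-provisions 3 instances (4 -> 1)
```

Note that the only input here is load; the scenarios below extend the trigger to analytics events and resource states.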
Elasticity, in the cloud and on-premises
Two scenarios indicate what’s possible with the next level of elasticity, one in a distributed, hybrid network tying together private and public resources, the other in a private datacenter.
- Elastic provisioning based on analytics: The emergence of the cloud provides the ability to select IT resources from diverse public and private environments. This, coupled with the rise of analytics as a driver for the provisioning and de-provisioning of resources, allows for requests for resources to be more abstract and dynamic than before: “Give me a resource that complies with a specific country regulation” or “Give me a resource that has an access latency of X.”
- Elastic provisioning based on resource states: A 2010 Facebook patent filing describes a datacenter cooling system that can not only adjust fans to manage airflow but can also redistribute the workload across servers to shift compute activity away from “hot spots” inside racks. The Facebook system includes a central controller dictating the movement of the workload. It is also possible to implement a datacenter without fans and without a central controller. Elasticity would allow the workloads to move themselves in a coordinated fashion within the datacenter. Here, instead of load, the elasticity trigger is the heat of the blade; the workload requests new blades using proximity and temperature as search criteria. In this scenario, the moving heat source creates convection and airflow. Voilà! A fanless datacenter, with corresponding energy savings.
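Both scenarios reduce to the same mechanism: a workload hands the broker a set of abstract criteria and a ranking, and the broker returns the best matching resource. The sketch below is hypothetical; the blade inventory, attribute names, and criteria are all invented to illustrate the idea.

```python
# Hedged sketch of criteria-driven placement: a workload asks a broker
# for a resource matching abstract criteria (region and latency, or
# temperature and proximity). All resource data below is invented.

blades = [
    {"id": "edge-1", "region": "DE", "latency_ms": 4,  "temp_c": 55, "rack": 1},
    {"id": "edge-2", "region": "DE", "latency_ms": 6,  "temp_c": 35, "rack": 1},
    {"id": "dc-1",   "region": "US", "latency_ms": 40, "temp_c": 30, "rack": 9},
]

def request_resource(criteria, rank):
    """Return the best blade satisfying every predicate in `criteria`."""
    candidates = [b for b in blades if all(p(b) for p in criteria)]
    return min(candidates, key=rank) if candidates else None

# "Give me a resource in a specific country with access latency under X."
choice = request_resource(
    criteria=[lambda b: b["region"] == "DE", lambda b: b["latency_ms"] < 10],
    rank=lambda b: b["latency_ms"],
)

# Heat-driven migration: a hot workload looks for a cooler blade nearby.
cooler = request_resource(
    criteria=[lambda b: b["rack"] == 1, lambda b: b["temp_c"] < 50],
    rank=lambda b: b["temp_c"],
)
```

Because the criteria are just predicates, a country regulation, a latency bound, or a blade temperature are all expressed the same way; this is what makes the requests “more abstract and dynamic than before.”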
Any analytic event could trigger elasticity. A cloud service asks a broker for resources with a certain set of criteria based on an event; the broker responds with appropriate and optimized resources, also based on data it has about the available distributed resources. Imagine an emergency cloud service moving in response to forecasted severe weather events to minimize mobile application latency.
For an analytics-driven and dynamic solution to work, however, applications need to be defined as a set of small components, each of them accessible through an API.
Modern applications that are built using APIs, with data and analytics in mind, and composed of a set of smaller components, can best leverage distributed networks and use available IT resources. Legacy applications, on the other hand, will require either further decoupling (identifying APIs within the solution) or tools that observe the solution’s behavior, in order to migrate completely or partially.
Tying it all together: the role of the network operator
Given what today’s cloud service providers need, what can network operators do to provide value?
- Make the network cheaper and more agile. This is what many vendors are offering through Software Defined Networking (SDN) and Network Function Virtualization (NFV). In the long term, however, this does not change the status quo, in which the network operator is a commodity transport network.
- Transform the network into a massively distributed data center in which IT resources are embedded in the network, specifically at the edge of the network, one hop (under 5 ms) away from the device. IT resources at the edge of the network have certain constraints, but network-embedded IT resources remain a better solution than pushing workloads onto devices or installing additional boxes on-premises. NFV enables the creation of these IT resources, and SDN enables the creation of paths to access them.
There’s some irony here. While these two approaches use the same technologies, the results are totally different. In the first, hardware capacity must be optimized to keep costs down; in the second, hardware must be deliberately over-allocated to create distributed, network-embedded IT resources.
By providing massively distributed datacenters, network providers can offer the developers who create cloud services a continuum of IT resources between the device and the back-end datacenter.
Accomplishing this means bridging Network and IT within the network operator’s enterprise. Two elements are needed to build this bridge. The first is a hybrid cloud broker that can mediate IT resources, in the beginning between public and private providers and later between back-end datacenters and the network. The second is a strong API strategy and implementation that influences developers to create discrete elements with which to build applications and cloud services that can take advantage of the elastic, hybrid cloud.
The network carrier of tomorrow can do more than carry IP packets from source to destination. It can provide the network-embedded IT resources, such as computing and storage, that modern cloud services and applications need. To be fully part of this transformation, network operators must embrace IT, because IT is definitively the future of the network.
Michel Burger joined Apigee in mid-2013. Previously, he managed end-to-end architecture to deliver digital services at Vodafone, where he was appointed Chief Architect for the R&D division in 2011. As the CTO for Microsoft's communications sector, Michel defined the long-term vision and technical strategy for SaaS and service delivery solutions. Prior to his time at Microsoft, he was CTO at Embrace Networks and director of innovation at Sapient. He also spent 12 years in a variety of technical positions at Nortel Networks.