
Understanding the State of Container Networking

Tuesday Sep 4th 2018 by Sean Michael Kerner

Containers have revolutionized the way applications are developed and deployed, but what about the network?

Container networking is a fast-moving space with lots of different pieces. In a session at the Open Source Summit, Frederick Kautz, principal software engineer at Red Hat, outlined the state of container networking today and where it is headed in the future.

Containers have become increasingly popular in recent years, particularly the use of Docker containers, but what exactly are containers?

Kautz explained that containers make use of the Linux kernel's ability to provide multiple isolated user-space areas. The isolation is enabled by two core kernel features: control groups (cgroups), which limit and isolate the resource usage of groups of processes, and namespaces, which partition key kernel structures for processes, hostnames, users and networking.
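Both primitives can be observed directly on any modern Linux host, without a container runtime; a minimal, read-only sketch:

```shell
# Each entry under /proc/<pid>/ns is one namespace the process belongs to;
# a containerized process gets private copies of (some of) these.
ls -l /proc/self/ns

# The cgroup membership of the current process; container runtimes place
# each container's processes into their own cgroup subtree so resource
# limits can be applied per container.
cat /proc/self/cgroup
```

Comparing the namespace inode numbers of two processes (for example, a shell on the host and a process inside a container) shows which namespaces they share.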

Container Networking Types

While there are different container technologies and orchestration systems, when it comes to networking, Kautz said there are really just four core networking primitives:

Bridge
In bridge mode, a container's networking is attached to a specific bridge device, and every endpoint connected to that bridge sees the traffic passing across it.

Host
Kautz explained that in host mode the container uses the same networking space as the host. As such, whatever IP addresses the host has are shared directly with the containers.

Overlay
In an overlay networking approach, a virtual network sits on top of the underlay, the physical networking hardware.

Underlay
The underlay approach makes direct use of the core network fabric and hardware.

To make matters somewhat more confusing, Kautz said that multiple container networking models are often used together; for example, a bridge together with an overlay.

Network Connections

Additionally, container networking models can benefit from MACVLAN and IPVLAN, which tie containers to specific MAC or IP addresses for additional isolation.

Kautz added that SR-IOV is a hardware mechanism that ties a physical network interface card (NIC) to containers, providing direct access to the hardware.
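Whether a NIC actually supports SR-IOV can be checked from sysfs; a read-only sketch (it prints nothing on machines or VMs without SR-IOV-capable devices):

```shell
# For each network device, report how many SR-IOV virtual functions (VFs)
# it can expose; the sriov_totalvfs file exists only for capable PCI NICs.
for f in /sys/class/net/*/device/sriov_totalvfs; do
  [ -e "$f" ] && echo "${f}: $(cat "$f") VFs supported"
done
true  # purely informational; succeed even when no device matches
```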

SDNs

On top of the different container networking models are different approaches for software-defined networking (SDN). For the management plane, there are functionally two core approaches at this point: the Container Networking Interface (CNI) used by Kubernetes and the libnetwork interface used by Docker.
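CNI plugins are configured with small JSON files that name the plugin type and describe how addresses are assigned; a minimal sketch of a bridge-plugin configuration (the network name, bridge device and subnet below are hypothetical):

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The orchestrator hands this configuration to the plugin each time a container is added to or removed from the network.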

Kautz noted that with Docker recently announcing support for Kubernetes, it's likely that CNI support will be following as well.

Among the different technologies for container networking today are:

  • Contiv - backed by Cisco and provides a VXLAN overlay model
  • Flannel/Calico - Flannel provides an overlay network between hosts, allocating a separate subnet per host; Calico is backed by Tigera
  • Weave - backed by Weaveworks; uses a standard port number for containers
  • Contrail - backed by Juniper Networks and open sourced as the Tungsten Fabric project; provides policy support and gateway services
  • OpenDaylight - open source effort that integrates with OpenStack Kuryr
  • OVN - open source effort that creates logical switches and routers

Upcoming Efforts

While there are already multiple production-grade solutions for container networking, the technology continues to evolve. Among the newer approaches is eBPF (extended Berkeley Packet Filter) for networking control, which is used by the Cilium open source project.

Additionally, there is an effort to use shared memory, rather than physical NICs, to help enable networking.

Kautz also highlighted the emerging area of service mesh technology, in particular the Istio project, which is backed by Google. With a service mesh, networking is offloaded to the mesh, which provides load balancing, failure recovery and service discovery, among other capabilities.
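In Istio, that offloaded behavior is expressed declaratively. A minimal sketch of a VirtualService that splits traffic between two versions of a service, illustrating the load-balancing side of a mesh (all names and weights are hypothetical, and the subsets would be defined in a separate DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews              # the service whose traffic this rule governs
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90         # 90% of requests go to version v1
    - destination:
        host: reviews
        subset: v2
      weight: 10         # 10% go to v2, e.g. for a canary rollout
```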

Organizations today typically choose a single SDN approach that will connect into a Kubernetes CNI, but that could change in the future thanks to the Multus CNI effort. With Multus CNI, multiple CNI plugins can be used, enabling multiple SDN technologies to run in a Kubernetes cluster.
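In Multus's early configuration style, the plugin itself is configured as a CNI plugin whose JSON lists the delegate plugins to invoke; a sketch based on the project's examples (field names and values are illustrative and vary by Multus version):

```json
{
  "name": "multus-demo-network",
  "type": "multus",
  "delegates": [
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": { "isDefaultGateway": true }
    },
    {
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }
  ]
}
```

Here a pod would get its default interface from Flannel and an additional MACVLAN interface tied to the host's eth0.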

Sean Michael Kerner is a senior editor at EnterpriseNetworkingPlanet and InternetNews.com. Follow him on Twitter @TechJournalist.
