Boost Reliability with Ethernet Bonding and Linux

Monday Aug 27th 2007 by Carla Schroder

Best of ENP: The Linux kernel comes with what you need to do Ethernet bonding. It takes a few steps to implement, but the payoff comes in the form of boosted bandwidth and improved reliability.

An easy, inexpensive way to double up Ethernet interfaces for more bandwidth and reliability is called Ethernet bonding. While Gigabit Ethernet is all exciting and the hot new fad, you can get a lot of mileage out of Ethernet bonding, giving your existing gear a nice boost without spending much extra money. Just stuff two ordinary 10/100 Ethernet interfaces into a machine, tweak a few configuration files, and you're in business: if one interface fails you won't lose connectivity. It's a good, cheap upgrade for your servers. You'll have several options for configuring load balancing and failover, and with the right gear you'll get an instant bandwidth boost by combining the bandwidth of the two interfaces.

To increase performance by combining the bandwidth of both NICs, you need either a switch that supports link aggregation (802.3ad) or a special option in the kernel's bonding driver. The driver option does some ARP (Address Resolution Protocol) trickery to combine the bandwidth of the two NICs. You'll see a bit of a CPU hit, but it shouldn't be much. This mode must be supported by the interface's driver; if it isn't, then your only option is a smart or managed switch that supports 802.3ad.
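Whether the driver option will work depends on which driver sits behind each NIC, since balance-alb (covered below) requires that the driver can change an interface's hardware address while the link is up. Here's a hedged sketch for finding out which driver an interface uses; it parses a captured sample of `ethtool -i` output (the e100 values are just examples) so it runs anywhere, and a comment shows the real invocation:

```shell
# Sketch: identify the driver behind a NIC. balance-alb needs a driver
# that can change the MAC address of an open interface, so it helps to
# know which driver is in play.
# Sample 'ethtool -i eth0' output; in real use: info=$(ethtool -i eth0)
info="driver: e100
version: 3.5.17-k2
bus-info: 0000:02:08.0"

# Extract the value of the "driver:" line
driver=$(printf '%s\n' "$info" | sed -n 's/^driver: //p')
echo "eth0 driver: $driver"
```

With the driver name in hand you can check its documentation (or just try balance-alb and watch the logs) before committing to a mode.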

The Linux kernel has everything you need for this, and it's not very difficult to set up. Fedora and Debian each have their own special ways of configuring Ethernet bonding. (For you fine noobs wondering "But what about Ubuntu?" don't worry. Ubuntu is derived from Debian, so these instructions will work for them too.)

You're not limited to bonding two interfaces; you can go nuts and bond as many as you like, and you can even bond machines and subnets. Clusters use channel bonding to create super-fat pipelines. You can even bond Gigabit Ethernet, which is very fun and not difficult, but today we shall limit ourselves to boosting the network performance of standalone servers.

2.6 kernels should have everything you need. If your system is missing some pieces, refer to the Documentation/networking/bonding.txt file in the kernel documentation. This is the most comprehensive documentation on Ethernet bonding in Linux. It has one flaw: it assumes a Red Hat-type system. No worries, because we'll cover the Debian way today. If you don't have the kernel documentation on your system, a quick Web search will find this file.

Pre-flight Check

You'll need both mii-tool (part of the net-tools package) and ethtool installed. Then take a look in your kernel's config file and make sure that bonding is enabled as a module:

$ grep -i bonding /boot/config-2.6.20-16

Substitute your own kernel version in the filename; you're looking for CONFIG_BONDING=m. Bonding must be a loadable kernel module so that you can pass in various command options. Then verify that your NICs are working and have a good connection to the network:

# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
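Since you'll repeat this check whenever links change, it's worth scripting. Here's a small sketch that scans mii-tool-style output and warns unless every interface reports "link ok". The sample output is hardcoded so the snippet runs anywhere; in real use you'd capture it from the live command, as the comment notes:

```shell
# Pre-flight sketch: every interface must report "link ok" before bonding.
# Sample mii-tool output; in real use: output=$(mii-tool)
output="eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok"

# Count the lines that do NOT contain "link ok"
bad=$(printf '%s\n' "$output" | grep -vc 'link ok')
if [ "$bad" -eq 0 ]; then
    echo "all links ok"
else
    echo "WARNING: $bad interface(s) without link"
fi
```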

Then make sure the required kernel module is present on your system:

$ modprobe --list | grep bonding

The remaining steps are different on Debian and Fedora, so let's see how Debian does it.

Ethernet Bonding in Debian

Your next step is to install ifenslave-2.6, which means "interface slave":

# aptitude install ifenslave-2.6

Now load the bonding module:

# modprobe bonding mode=balance-alb miimon=100

This works some special magic, so let's take a quick detour to talk about it. balance-alb means adaptive load balancing. This is the special option that plays tricks with ARP: the bonding driver rewrites the hardware address in outgoing ARP replies so that different peers talk to different NICs, spreading the load across both without any switch support.

miimon=100 tells the driver to check the MII link status of each interface every 100 milliseconds. This is important because the bonding driver provides automatic failover if one of the links goes down, so their state must be continually monitored.

bond0 is your new logical bonded interface name. You'll configure this pretty much like ordinary eth0, eth1 etc. interfaces. For now create it temporarily with ifconfig, substituting an address and netmask from your own network (the values here are examples):

# ifconfig bond0 192.168.1.50 netmask 255.255.255.0 up

Then run ifconfig to make sure it worked. You'll get an error message if it didn't, but being too careful never hurts:

# ifconfig
bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.1.50 Bcast:192.168.1.255 Mask:255.255.255.0

And now, the big moment: assign your slave devices. Both interfaces must be down and not have IP addresses assigned to them:

# ifenslave bond0 eth0 eth1

If everything works, your new bonded interface is operational and you'll be able to ping back and forth between your new interface and the other nodes on your network. You can use iperf to test throughput, and a really fun test is to disconnect one of the interfaces while ping is running. You shouldn't see any interruptions.
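One way to make the unplug test objective is to read the loss figure out of ping's summary line: zero packet loss while a cable is pulled means the failover was transparent. A sketch, using a sample summary string so it runs anywhere (in real use, capture it from a ping to some host on your LAN, as the comment shows):

```shell
# Failover test sketch: check ping's summary for packet loss.
# Sample summary; in real use: summary=$(ping -c 60 192.168.1.1 | tail -2)
summary="60 packets transmitted, 60 received, 0% packet loss, time 59003ms"

# Pull the numeric loss percentage out of the summary line
loss=$(printf '%s\n' "$summary" | sed -n 's/.*[ ,]\([0-9][0-9]*\)% packet loss.*/\1/p')
if [ "$loss" -eq 0 ]; then
    echo "failover was transparent: no packets lost"
else
    echo "lost $loss% of packets during failover"
fi
```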

Starting Everything At Boot

To load the module with the options you want at boot, edit /etc/modprobe.d/arch/i386 and add these lines. Ignore any documentation that tells you to use a different file because that is wrong:

alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Then enshrine your settings in /etc/network/interfaces, using your own addresses of course:

auto bond0
iface bond0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        up /sbin/ifenslave bond0 eth0 eth1
        down /sbin/ifenslave -d bond0 eth0 eth1

bond0 Status

Take a look at the contents of /proc/net/bonding/bond0 to see how your new interface is faring:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.1.1 (September 26, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: eth0
Currently Active Slave: eth1
MII Status: up
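That file is easy to parse, which is handy for monitoring scripts. Here's a sketch that extracts the currently active slave and the MII status; the sample string mirrors the output above so the snippet runs anywhere, and the real file read is noted in a comment:

```shell
# Monitoring sketch: pull key fields from /proc/net/bonding/bond0.
# Sample contents; in real use: status=$(cat /proc/net/bonding/bond0)
status="Bonding Mode: adaptive load balancing
Primary Slave: eth0
Currently Active Slave: eth1
MII Status: up"

# Strip the field labels to get the bare values
active=$(printf '%s\n' "$status" | sed -n 's/^Currently Active Slave: //p')
mii=$(printf '%s\n' "$status" | sed -n 's/^MII Status: //p')
echo "active slave: $active, MII status: $mii"
```

Dropped into cron or a monitoring system, a check like this will tell you when the bond has failed over, since a failover otherwise happens silently.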

Now what? In the second part you'll learn some additional configuration options for different roles such as round-robin or failover-only, how to configure bonding on Fedora, some tips on network topology, and how to troubleshoot problems.

