An easy, inexpensive way to double up Ethernet interfaces for more bandwidth and reliability is called Ethernet bonding. While Gigabit Ethernet is the exciting hot new fad, you can get a lot of mileage out of Ethernet bonding to give your existing gear a nice boost without spending much extra money. Just stuff two ordinary 10/100 Ethernet interfaces into a machine, tweak a few configuration files, and you're in business. If one fails, you won't lose connectivity. It's a good, cheap upgrade for your servers: you'll have several options for configuring load balancing and failover, and with the right gear you'll get an instant bandwidth boost by combining the bandwidth of the two interfaces.
To increase performance by combining the bandwidth of each NIC, you need a switch that supports link aggregation, or you can use a special option in the kernel's bonding driver that performs some ARP (Address Resolution Protocol) trickery to combine the bandwidth of the two NICs. You'll see a bit of a CPU hit, but it shouldn't be much. The ARP method must be supported by the interface's driver; if it isn't, your only option is a smart or managed switch that supports 802.3ad.
The Linux kernel has everything you need for this, and it's not very difficult to set up. Fedora and Debian each have their own special ways of configuring Ethernet bonding. (For you fine noobs wondering "But what about Ubuntu?" don't worry: Ubuntu is derived from Debian, so these instructions will work for it too.)
You're not limited to bonding two interfaces; you can go nuts and bond as many as you like, and you can even bond machines and subnets. Clusters use channel bonding to create super-fat pipelines. You can even bond Gigabit Ethernet interfaces, which is great fun and not difficult, but today we shall limit ourselves to boosting the network performance of standalone servers.
2.6 kernels should have everything you need. If your system is missing some pieces, refer to the Documentation/networking/bonding.txt file in the kernel documentation. This is the most comprehensive documentation on Ethernet bonding in Linux. It has one flaw: it assumes a Red Hat-type system. No worries, because we'll cover the Debian way today. If you don't have the kernel documentation on your system, a Google search will find this file quickly.
You'll need both mii-tool and ethtool installed. Then take a look in your kernel's config file and make sure that bonding is enabled as a module:
$ grep -i bonding /boot/config-2.6.20-16
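On a stock distribution kernel you should see something like the following; the config filename always matches your running kernel version, so the sample output here is illustrative rather than exact:

```shell
# Check the running kernel's config for the bonding driver;
# CONFIG_BONDING=m means it is built as a loadable module.
grep -i bonding "/boot/config-$(uname -r)"
# Typical output on a distribution kernel:
# CONFIG_BONDING=m
```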
It must be a loadable kernel module so that you can pass in various command options. Then verify with mii-tool that your NICs are working and have a good connection to the network:
# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
Then make sure the required kernel module is present on your system:
$ modprobe --list | grep bonding
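If the module is available, modprobe prints its path; the kernel version in this sample is just an illustration, so expect yours to differ:

```shell
# List available modules and filter for the bonding driver
modprobe --list | grep bonding
# Example output (the path varies with your kernel version):
# /lib/modules/2.6.20-16/kernel/drivers/net/bonding/bonding.ko
```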
The remaining steps are different on Debian and Fedora, so let's see how Debian does it.
Ethernet Bonding in Debian
Your next step is to install ifenslave-2.6, which means "interface slave":
# aptitude install ifenslave-2.6
Now load the bonding module:
# modprobe bonding mode=balance-alb miimon=100
This works some special magic, so let's take a quick detour to talk about it. balance-alb means adaptive load balancing. This is the special option that rewrites ARP replies so that the other hosts on your network see your two NICs as a single interface.
miimon=100 tells the driver to check via MII every 100 milliseconds whether each link is up. This is important because the bonding driver provides automatic failover if one of the links goes down, so their state must be continually monitored.
bond0 is your new logical bonded interface name. You'll configure this pretty much like ordinary eth0, eth1 etc. interfaces. For now create it temporarily with ifconfig:
# ifconfig bond0 192.168.1.101 netmask 255.255.255.0 up
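To make this configuration survive a reboot on Debian, the ifenslave-2.6 package lets you describe the bond in /etc/network/interfaces. This is only a sketch: the exact directive spellings (bond-slaves vs. slaves, and so on) have varied between ifenslave versions, so check your package's documentation before relying on it:

```
# /etc/network/interfaces -- sketch; directive names may vary
# with your ifenslave version
auto bond0
iface bond0 inet static
    address 192.168.1.101
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-alb
    bond-miimon 100
```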
Then run ifconfig to confirm; you would have seen an error message if it hadn't worked, but there's no harm in being careful:
bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST MASTER MULTICAST MTU:1500 Metric:1
And now, the big moment: assign your slave devices. Both interfaces must be down and not have IP addresses assigned to them:
# ifenslave bond0 eth0 eth1
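With the slaves attached, you can also check the bonding driver's own status report under /proc. The output below is an abridged, illustrative sample, not a verbatim capture from a real machine:

```shell
# The bonding driver publishes per-bond status here
cat /proc/net/bonding/bond0
# Abridged sample output:
# Bonding Mode: adaptive load balancing
# MII Status: up
# MII Polling Interval (ms): 100
# Slave Interface: eth0
# MII Status: up
# Slave Interface: eth1
# MII Status: up
```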
If everything works, your new bonded interface is operational and you'll be able to ping back and forth between your new interface and the other nodes on your network. You can use iperf to test throughput, and a really fun test is to disconnect one of the interfaces while ping is running. You shouldn't see any interruptions.
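A simple iperf session might look like this; the server address 192.168.1.50 and the 30-second duration are placeholders you'd replace with your own:

```shell
# On another machine on the LAN, start an iperf server:
iperf -s

# On the bonded host, run a 30-second throughput test against it:
iperf -c 192.168.1.50 -t 30

# Failover test: from a third node, start a continuous ping to the
# bond's address, then unplug one NIC -- the replies should continue.
ping 192.168.1.101
```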