
Cut costs by sharing a load balancing server

With an already tight IT budget, many admins are looking for ways to stretch their dollars as far as possible. One way is to share load balancing servers. Dave Mays lays down the foundation of a penny-pincher’s dream come true.


Load balancers are expensive pieces of equipment that are rarely pushed to their limits by a single application. Unless you work for Yahoo.com or consistently push more than 100 Mbps of data, you can share this valuable resource with other departments or with multiple customers in a service provider model.

In this Daily Feature, I’ll cover the basic concepts of how this can be accomplished on dedicated load balancing servers. Exact configuration details are particular to each vendor and server model, but the basic concepts we’ll cover here apply to virtually any setup.

Is sharing right for your network?
Before you consider sharing your load balancer, you should determine whether users on multiple systems will play together nicely. If you are in a single company, subnetting a network and sharing the load balancer is quite simple; all you’ll need to do is set up multiple interfaces and point your servers to those gateways. If you are dealing with multiple users from different untrusted organizations, you’ll need to add a component, a virtual local area network (VLAN), to address security concerns. I’ll focus on this setup here, since it’s the most likely scenario in a business setting.

Virtual local area networks
The major underlying attribute that allows you to slice your load balancer into segments is VLAN tagging. VLANs let you allocate a virtual network to each application, group of servers, or customer. Almost all of today’s load balancers, such as the F5 BIG-IP or the Foundry ServerIron, can use Layer 2 VLANs to segment traffic.

However, keep in mind that some vendors now use the term VLAN to mean different things on different devices. For example, on an Extreme Networks switch, a VLAN can be a proprietary Layer 3 construct that the switch uses to tag traffic based on IP address. When the VLAN is named, it is considered a Layer 3 VLAN and will talk only to specific devices. If you don’t name the VLAN, it is considered a Layer 2 VLAN, which can talk to a wider range of devices. So check your documentation to see whether a VLAN refers to Layer 2 or Layer 3.

By using a VLAN on a segment of your various networks, you can create a trunk back to your load balancer and have a single uplink to each load balancer, instead of a port per network. This configuration will help you prevent packet sniffing and other problems that may arise when multiple users are sharing the same resource.
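
As a rough sketch of what this looks like on the switch side, here is how a single tagged uplink toward the load balancer might be configured on a Cisco Catalyst switch running IOS. The port numbers, VLAN IDs, and names below are invented for illustration; check your own switch’s documentation for the exact syntax.

! Illustrative IOS configuration; VLAN IDs and ports are examples only
vlan 10
 name CustomerAlpha
vlan 20
 name CustomerBeta
!
interface FastEthernet0/1
 description Single tagged uplink to the load balancer
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20

With this in place, one physical port carries both customer VLANs to the load balancer, and the trunk carries only the VLANs you explicitly allow on it.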

Redundant redundancy
Since you are providing for a group of multiple users, you’ll need to add a few things to the configuration of your load balancer. If you don’t already have a redundant standby load balancer, get one. Most load balancers are just PCs with some smart software running on them. They can and will fail.

Every load balancer I’ve deployed has had a twin that can match it in heartbeat and failover. This approach is a necessity in a shared setting. If you are providing load balancing only because of bad programming and your application is constantly croaking, a single load balancer might be OK. But remember that the load balancer will stop working someday; it’s only as reliable as any other PC, and no PC is perfect. With multiple people relying on the availability of the load balancing service, it has to be both hardware and software redundant. In short, spend the money and get the redundant unit.

The setup
At a minimum, you’ll need the following equipment to provide redundant load balancing to your customers:
  • Two load balancers with at least three Ethernet ports available. (For this example, I named them Ozzie and Harriet.)
  • Two Ethernet switches, labeled Red and Blue, that support 802.1q tagging. (Also check how many VLANs your switch supports; some 48-port switches support only 16 VLANs.)
  • A couple of servers, labeled Alpha and Beta, running your application: HTTP, SMTP, or a similar service. (Having multiple Ethernet ports on each server so you can run uplinks to both switches is also a good idea.)

Run through this configuration in a lab environment first, not on your production network. Your load balancing hardware provider should be able to give you some loaner equipment or access to a lab where you can try slicing up your load balancer.

Refer to Figure A as we go through the following configuration steps.

Figure A
A simple, load-balanced network.


Connect uplink 1 from application server Alpha to the Red switch and uplink 2 to the Blue switch. Do the same for application server Beta. Next, connect a link between the two load balancers, Ozzie and Harriet; this will be your trunk for crosstalk. Finally, connect both Ozzie and Harriet to the Red and Blue switches.

After you complete the physical connections, set up the cross-connect between load balancers Ozzie and Harriet as a tagged VLAN trunk. This setup allows traffic to traverse from one load balancer to the other if there’s a physical outage on either the incoming or outgoing side. It’s a good idea to balance your groups across the two load balancers; I recommend putting odd-numbered VLANs on one load balancer and even-numbered VLANs on the other. This tactic also helps when it comes time to do diagnostics.

The next step is to activate your links to the switches in the same manner, creating tagged trunks to Red and Blue. On the switches, configure the ports where Ozzie and Harriet are connected as VLAN trunks. If you are using a Cisco switch, you’ll need to specify that you want 802.1q trunking; by default, Cisco uses Inter-Switch Link (ISL) tagging, so if your load balancer is also from Cisco, it will be able to use ISL trunking with no changes. Assign a VLAN to each of your application server ports; you can treat the Alpha server as one customer and Beta as another. By using virtual servers on each machine, you can load balance to each machine to simulate multiple servers for each customer (remember, this is just lab work, so it’s okay to simulate).
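
On a Cisco Catalyst switch, forcing 802.1q on the trunk port and dropping each server port into its customer’s VLAN might look like the sketch below. Again, the interface numbers and VLAN IDs are only placeholders for this lab example.

! Force 802.1q tagging on the port facing load balancer Ozzie
! (on switches that support both ISL and 802.1q, ISL is the default)
interface FastEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! Put each customer's server port in its own VLAN
interface FastEthernet0/2
 description Server Alpha (customer one)
 switchport mode access
 switchport access vlan 10
!
interface FastEthernet0/3
 description Server Beta (customer two)
 switchport mode access
 switchport access vlan 20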

As far as the servers are concerned, they just need to know the gateway and the correct TCP/IP settings. So, back on the load balancers, create the gateways and the lists, or groups, of servers. Example groups would be Alpha1 and Alpha2 on VLAN10, and Beta1 and Beta2 on VLAN20.
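
On the server side, that boils down to an address on the right subnet and a default gateway pointing at the load balancer’s interface for that VLAN. A hypothetical example for a Linux server in the Alpha group (the addresses are made up; substitute your own subnets):

# Hypothetical addressing for a server on VLAN10
ifconfig eth0 10.0.10.11 netmask 255.255.255.0 up
route add default gw 10.0.10.1    # the load balancer's gateway address on VLAN10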

From the server perspective, ping your gateway and then ping the other server. If everything is set up correctly, you should be able to get out to the Web and to the other group via the gateway. I also recommend running a packet sniffer, such as the free Ethereal, to make sure that you don’t see traffic from the other group’s hosts.
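
The checks themselves are just a couple of commands from each server. The addresses below continue the hypothetical VLAN10/VLAN20 numbering used above, and Ethereal accepts the same capture-filter syntax shown here for tcpdump.

# From a server in the Alpha group (VLAN10)
ping 10.0.10.1       # the gateway on the load balancer
ping 10.0.10.12      # the other server in the same group

# Capture anything from the Beta group's subnet; a correctly
# segmented VLAN should show no packets at all here
tcpdump -n -i eth0 net 10.0.20.0/24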

Now disable one of the Web sites on Alpha, and you’ll see that the site continues to respond through the load balancers. Run the same test on Beta. You’ve now achieved your goal of redundant load balancing.
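
A simple way to watch the failover from a client is to poll the virtual address in a loop while you disable the site on Alpha. The virtual address below is hypothetical; substitute the one you configured on the load balancers.

# Print the HTTP status code returned by the virtual server once a second
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" http://10.0.10.100/
  sleep 1
done

If the failover is working, the status codes keep coming back as 200 with at most a brief hiccup when you pull the site down.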

Why share?
We all like to have our own toys, but by sharing an expensive, underutilized resource, you can save some valuable cash and get more out of your investment. If you don’t have the budget for a load balancer but need the benefits that load balancing provides, find a friend or department that shares your thoughts and get started. Just be sure to segment; good fences make good neighbors.
