As infrastructure teams are pushed to deliver more service-based technology, site-to-site bandwidth doesn't always keep pace with the advances in other areas. IT pro Rick Vanover outlines some tips and pointers for dealing with this challenge.
It is probably some sort of urban myth that goldfish will grow to fit their tank. However, I feel that IT bandwidth is subject to the same phenomenon. The unfortunate fact is that you cannot always get great bandwidth to a remote site. Factors such as cost, distance, provider availability, and redundancy all roll into the connectivity decision for a remote site.
The never-ending battle: Remote connectivity
For me, this becomes a challenge and an opportunity. In cases where a remote site must have an infrastructure footprint, the opportunity is to architect a full-service IT offering with the connectivity available. If the remote site is well-connected (I’ll come back to what that means momentarily), it is entirely possible to provide the following:
- 100% virtualized servers at remote site
- Automatic off-site data protection without tape
- Failover options without ‘hard’ changes
There are too many factors to itemize, but for a typical remote site, in my experience, 15 megabits per second is what I would call a "well-connected" remote site. In this situation, it is entirely possible to have a handful of virtual machines running on a small host footprint at the remote site and protected to your main site. This can be done through a number of virtual machine protection tools or storage area network (SAN) solutions. Many SAN solutions offer volume replication over Ethernet that can protect an entire logical unit number (LUN) to a remote site (or the main site). This is one reason I make it a practice to focus on Ethernet-based storage protocols.
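To see what "well-connected" buys you in practice, here is a minimal sanity-check sketch of whether a day's changed data can replicate back to the main site over such a link. The 40 GB daily change figure and the 70% link-efficiency factor are illustrative assumptions, not measurements:

```python
# Rough feasibility check: can a remote site's daily changed data
# replicate to the main site over the available WAN link?
# All figures below are illustrative assumptions.

def replication_hours(changed_gb, link_mbps, efficiency=0.7):
    """Hours needed to move `changed_gb` over a `link_mbps` link,
    assuming `efficiency` of the nominal rate is usable for
    replication traffic (protocol overhead, competing traffic)."""
    bits = changed_gb * 8 * 1000**3            # decimal GB to bits
    usable_bps = link_mbps * 1000**2 * efficiency
    return bits / usable_bps / 3600

# Example: 40 GB of daily change on a 15 Mbps "well-connected" link
hours = replication_hours(40, 15)
print(f"{hours:.1f} hours")                    # roughly 8.5 hours
```

If the result fits comfortably inside an overnight window, the link is pulling its weight; if not, that is the signal to look at change rates, throttling schedules, or more bandwidth.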
The best part is that, depending on retention requirements, this can all be done without the use of tape media. Today's protection tools are incredibly smart and offer a number of controls, including bandwidth throttling, to fit the protection workload into the network connectivity available.
One silver lining is that if you can architect a protection solution without tape, those potential cost savings can be channeled towards increased bandwidth and advanced storage. Take into consideration tape drive costs, tape media costs, and off-site media storage services. If you can design a solution in-house that protects your data to your requirements on your own storage and your own network, this is effectively a private cloud. Take into consideration public or provider cloud offerings, and your options expand even further.
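The cost argument above is easy to put numbers on. A back-of-the-envelope sketch (every price here is a hypothetical assumption, not a quote) of the annual tape spend a disk-to-disk replication design could redirect toward bandwidth and storage:

```python
# Back-of-the-envelope: annual tape costs a tapeless replication
# design could free up. All figures are illustrative assumptions.

tape_drive_amortized = 1500      # hypothetical drive cost spread over 3 years
tape_media = 12 * 40             # 12 tapes per year at an assumed $40 each
offsite_pickup = 12 * 150        # assumed monthly courier/vault service

annual_tape_cost = tape_drive_amortized + tape_media + offsite_pickup
print(f"Redirectable budget: ${annual_tape_cost:,}/year")   # $3,780/year
```

Even with conservative assumptions, a few thousand dollars a year can cover a meaningful bandwidth upgrade at many remote sites.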
There are a number of products that can optimize WAN traffic. Unfortunately, most Windows file/print traffic is an incredible consumer of bandwidth: transferring a 2MB file will often consume double, triple, or even more on a WAN link when you watch the to-and-from transfer bytes. Other strategies include WAN link aggregation, which can offload traffic such as Web browsing or POP email onto a number of less expensive connections if you use VPN for site-to-site connectivity.
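To make that overhead concrete, here is a small sketch of what a chatty file-transfer protocol does to the wire and to transfer time. The 2.5x multiplier is an assumed midpoint of the double-to-triple range above, not a measured figure:

```python
# Illustrates the overhead claim: a 2 MB file over chatty Windows
# file/print traffic can put far more than 2 MB on the WAN link.
# The multiplier is an assumption within the range stated above.

file_mb = 2
overhead_multiplier = 2.5        # assumed midpoint of the 2x-3x range
wire_mb = file_mb * overhead_multiplier

link_mbps = 15
seconds = wire_mb * 8 / link_mbps
print(f"{wire_mb} MB on the wire, ~{seconds:.1f} s at {link_mbps} Mbps")
```

This is precisely the gap WAN optimizers attack: by caching and deduplicating the redundant chatter, they push the effective multiplier back toward 1x.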
Thin client computing
The perspective can also be reversed in some situations by keeping a near-zero footprint at the remote site. Technologies such as virtual desktop infrastructure (VDI) that use an advanced display protocol can deliver a full-service experience to remote sites without a remote server infrastructure. Even less sophisticated options, such as a terminal server farm with remote desktop, can get the job done in the smallest of cases. This benefits IT infrastructures by keeping all client-server traffic local to the "main" datacenter, or at least on higher-speed connections.
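The sizing question for this approach is how many display sessions the remote link will carry. A rough sketch, with the caveat that the per-session figure is an assumption for typical office work and real display protocols vary widely with workload:

```python
# Rough sizing: how many remote-display (VDI/terminal server)
# sessions fit on a remote site's link? Per-session bandwidth is an
# assumed average for office work; multimedia needs far more.

link_mbps = 15
per_session_kbps = 300           # assumed average per office session
headroom = 0.8                   # leave 20% for other site traffic

sessions = int(link_mbps * 1000 * headroom / per_session_kbps)
print(f"~{sessions} concurrent sessions")   # ~40 on a 15 Mbps link
```

The same "well-connected" 15 Mbps link that protects a handful of servers can therefore also carry a respectable number of thin-client sessions, which is what makes the near-zero-footprint model viable.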
What kind of tricks do you have for getting the most out of your remote sites? Share your comments below.