
Why WAN acceleration is one of the hottest projects in IT

Learn what WAN acceleration can and can't do, how it works, the ROI case, and why it's such a hot project for many IT departments.

WAN acceleration made my list of the top IT trends to watch in 2010 because it can provide clear ROI and benefits that employees and company leaders will notice immediately. So let's take a look at what WAN acceleration can and can't do, how it works, and the ways it can benefit your organization.


What is WAN acceleration?

Well, the first thing to realize is that what we're going to call WAN acceleration in this episode goes by several different names in the industry. You'll hear vendors call it things like WAN optimization, application acceleration, and bandwidth acceleration. Cisco even refers to it as Wide Area Application Services, or WAAS.

My favorite term for it is WAN caching. Of course, no one in the industry actually calls it that, but that's the crux of what's going on here.

WAN acceleration can drastically improve the speed of file transfers and the performance of many applications for your branch offices and remote workers. And since at least half, and by some estimates up to two-thirds, of all workers are located OUTSIDE the company's central office, this can be a big win.

How does it work?

WAN acceleration involves placing an appliance between your WAN router and your servers at the headquarters or primary data center and then another appliance in the same spot at each of the branch offices. The remote appliances then cache the large files that get sent repeatedly over the WAN and only replicate the small changes to the files. The appliances also do some compression and make some tweaks to optimize the networking protocols.
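To make the caching idea concrete, here is a minimal sketch in Python of the chunk-and-dedupe technique described above. It is a toy model under simplifying assumptions (fixed-size chunks, an in-memory cache, invented class and function names), not any vendor's actual implementation; real appliances use variable, content-defined chunk boundaries and persistent disk stores.

    import hashlib

    CHUNK_SIZE = 8192  # bytes; real appliances use variable, content-defined chunks

    def chunk_hashes(data):
        """Split data into fixed-size chunks and fingerprint each one."""
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            yield hashlib.sha256(chunk).hexdigest(), chunk

    class WanAppliance:
        """Toy model of one site's appliance: a cache mapping hash -> chunk payload."""
        def __init__(self):
            self.cache = {}

        def send(self, data, peer):
            """Ship data to the peer appliance, sending full payload bytes only
            for chunks the peer has not already cached. Returns bytes on the wire."""
            bytes_on_wire = 0
            for digest, chunk in chunk_hashes(data):
                if digest not in peer.cache:
                    peer.cache[digest] = chunk   # new chunk: full payload crosses the WAN
                    bytes_on_wire += len(chunk)
                bytes_on_wire += len(digest)     # known chunks cost only this short reference
            return bytes_on_wire

    # A repeated transfer of a mostly unchanged file costs only the changed chunks:
    hq, branch = WanAppliance(), WanAppliance()
    original = b"A" * 100_000
    cold = hq.send(original, branch)      # cold cache: ~100 KB crosses the WAN
    edited = b"B" * 100 + original[100:]  # small edit near the start of the file
    warm = hq.send(edited, branch)        # warm cache: ~9 KB (one chunk plus references)
    print(cold, warm)

The second transfer is roughly a tenth the size of the first, which is the effect the appliances exploit every time the same file moves over the WAN again.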

The result is that most files and applications will perform about five to ten times faster, and in some cases up to 100 times faster. That will make employees much happier and more productive. The other benefit is that WAN acceleration can decrease WAN usage by 60-90%. That's where you'll see the ROI, because in many cases it can reduce the amount of bandwidth you need to purchase at some branch offices.
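To see where that ROI might come from, here is a hypothetical back-of-the-envelope calculation. Every figure (office count, link size, bandwidth price) is an assumption for illustration, not vendor pricing:

    # All numbers are illustrative assumptions, not quotes.
    branch_offices = 10
    mbps_per_office = 10            # current WAN capacity per branch
    cost_per_mbps_month = 100       # assumed cost in dollars per Mbps per month
    reduction = 0.60                # low end of the 60-90% range cited above

    freed_mbps = branch_offices * mbps_per_office * reduction
    annual_savings = freed_mbps * cost_per_mbps_month * 12
    print(f"Freed: {freed_mbps:.0f} Mbps; potential savings: ${annual_savings:,.0f}/year")
    # -> Freed: 60 Mbps; potential savings: $72,000/year

Weigh savings like that against the price of the appliances themselves to estimate the payback period.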

What's the catch?

The best part of WAN acceleration is that you can install these appliances without disrupting your current network, and you'll start seeing the benefits as soon as the appliances cache the first file transfers. However, WAN acceleration will NOT speed up real-time, latency-sensitive applications such as video conferencing, Voice over IP, or real-time collaboration.

You may also wonder how WAN acceleration will affect road warriors and telecommuters who aren't located in a branch office. The good news is that many of the WAN acceleration vendors also offer software solutions that can be installed on laptops and PCs and provide most of the same benefits.

Speaking of vendors, the companies to watch in this space are WAN acceleration specialists Riverbed and Blue Coat and networking giants Cisco and Juniper.

All in all, WAN acceleration can have a big impact on file transfers, Microsoft Exchange, corporate databases, and many business-specific applications that rely on static files.

About

Jason Hiner is the Global Editor in Chief of TechRepublic and Global Long Form Editor of ZDNet. He is an award-winning journalist who writes about the people, products, and ideas that are revolutionizing the ways we live and work in the 21st century.

Comments

D2010MOB

TCP acceleration is a very compelling technology. Given the limitations on capex and overall spending across enterprise IT departments, and the limits on user adoption when a client download is required, we're seeing demand for TCP acceleration delivered via a service model (pay based on actual usage) without the capital equipment requirements associated with some WAN acceleration solutions. Implementation must be easy from the customer standpoint and deliver results appreciable by the end user. At Internap, our TCP acceleration service, Accelerated IP (which is delivered as a service with no capex requirements), has improved download speed by up to 4x. In terms of ROI, we've seen one company extend the reach of their U.S. data center to effectively support a European customer using its SaaS application, without expanding its data center footprint or purchasing an expensive WAN acceleration solution (with associated hardware).

AG4IT

While WAN acceleration involves many aspects, one of the many features of WAN accelerators is to accelerate RDP. Besides appliances, there are also software solutions available. One of these is Ericom Blaze, a software-based RDP acceleration and compression product that provides improved performance over WANs and congested LANs. Besides delivering higher frame rates and reducing screen freezes and choppiness, Ericom Blaze accelerates RDP performance by up to 10-25 times, while significantly reducing network bandwidth consumption over low-bandwidth/high-latency connections. Ericom Blaze works with any standard RDP host, including VDI, Terminal Servers, and remote physical machines. You can read more about Blaze and download a free evaluation at: http://www.ericom.com/ericom_blaze.asp?URL_ID=708 Or view a video demo at: http://www.ericom.com/blaze_youtube.asp?URL_ID=708

Adam

ASheikh_81

Jason, once again spot on. How does it determine which static data is frequently used? And do these devices have enough processing power that they don't slow down dynamic data?

sekuop64

The problem is not the network; there is a lot of bandwidth available. But software developers still don't write their applications in a way that is suitable for remote use and able to cope with low bandwidth. There is light at the end of the tunnel, though! Mobile phone applications are hot, small, and work with little bandwidth, so why not use the same technology and design rules for (remote) applications on desktops? Not long ago we had to deal with programs of 32K size and 56K bandwidth, and it worked well!

PhilippeV

CDNs are also great solutions for improving access to large content, or live content (video conferences). Deploying private solutions within private networks through VPNs is not only unproven to be more secure, it will almost always be slower. Large CDNs have infrastructures that almost no company or organization (even ones as large as governments) can build at equally reasonable cost. CDNs can efficiently deliver content over cheap links such as the Internet, and they are easy to integrate into a private infrastructure (through VPN appliances that secure these links transparently, or through simple software installed on the PCs used by mobile or isolated users, or by remote workers connecting from home). CDN offers also frequently come bundled with assistance services from ISPs.

But most often you don't need a VPN, only a PKI-based infrastructure: documents can be safely transmitted in encrypted form over the Internet after an authorization key exchange (think HTTPS or FTPS, which work through an SSL/TLS connection to a fast CDN), and then decrypted with certificates delivered separately. Internal email is the most difficult thing to integrate and optimize with such a solution, but there's absolutely no problem with videoconferencing.

For VoIP, it is certainly not worthwhile to manage it yourself in your infrastructure. That's the job of telcos and ISPs; it's reliable and as secure as the law permits. Just study the voice subscription plans: over the Internet with ISPs, it's almost free at a flat rate for almost everybody. To optimize the cost of your mobile communications, consider investing in solutions that can relay communication through Internet WAN routers (mobile phones now exist that can connect by WiFi when they are near an approved private hotspot). WiFi hotspots are not only cost-effective, they are also a lot faster and more reliable, with faster response times than public mobile networks.

Also, I see absolutely no point in deploying an enterprise DNS server, even at the central office. Outsource it to an ISP of your choice (and keep only a secondary authoritative server in your central office). ISPs will provide much better performance, and most information delivered by DNS is not critical in terms of privacy or organizational secrets; it is just used to help locate the services that actually need the security. So buy a domain name for your company and manage your private zone within it; it does not need to be on the same servers as your applications. (Remember that the DNS system used on the Internet is already proven to be the most scalable system in the world. In fact, almost all communications now depend on it directly or indirectly, except possibly the very costly networks used by militaries and government secret agencies for their own internal needs. And the Internet DNS has backups everywhere in the world, accessible by all your users.)

If you're not an ISP, you don't need private links: replace them with Internet links. The private links used by ISPs for their deployments form a very complex mesh of peering links and gateways; you can't build such a mesh with the same availability without extreme additional costs. Abandon these links, notably the oldest ISDN/X.25/Frame Relay links (unless they are the only solution that can guarantee a minimum bandwidth and response time for your remote offices), and replace them with generic broadband Internet links (using the same technologies available to home users, except that you may request an enterprise-wide assistance plan from the ISP, with a guaranteed response time and known responsible contacts).

And finally, don't trust a single ISP for your whole infrastructure (even if it looks cost-effective). It's still best to have an alternative in case of emergency, even a slower one, so that your business can remain operational long enough for you to change your subscription options or renegotiate with another ISP. Changing ISPs should also not require reconfiguring your entire network or your applications, just setting a few parameters in your border gateways. (The new ISP should also be able to help you avoid downtime during the migration, notably if it ultimately requires changing some border hardware, and it should help you keep your DNS naming scheme and possibly optimize your IP numbering and routing. Consider adopting IPv6 early: it can easily manage and integrate your existing internal IPv4 numbering plan, independently of your possibly multiple ISPs, and "6to4" relays can be integrated into your border gateways or routers much more simply than IPv4 NAT/PAT, i.e. without renumbering or per-port, per-host administration on your private network, so your applications and your IPv4 network management will not have to be modified.)

PhilippeV

Caching is just a small part of the problem. The real problem is making the best use of the available bandwidth, including during idle time. That's where we can improve response time: by developing software capable of performing asynchronous requests and automatically managing transfer priorities, while still allowing transactions to be managed for longer than the user-visible interaction. That's where we need distributed transactions, AND an application protocol that can trigger application events to handle all the cases that may occur when a partially validated transaction can't be completed and consolidated.

And no, this does not solve the case of virtual desktops. To solve that problem you have to make choices in the GUI designs of your applications, so that less important features can be delayed or never transmitted at all while the applications remain usable with a fast response. For that you need the collaboration of the GUI frameworks or OSes that display and render these virtual desktops, plus smarter management of cacheable resources (icons, background images, colors, desktop themes, fonts, possibly even allowing a remote client to use its own local theme instead of the theme provided by the virtual desktop).

More generally, we need new network-centered OSes where processing can be transparently distributed. This argues for the broader development of virtual machines (.NET, Java) and a networking OS that manages the distribution of components and the consolidation of transactions. You could call this type of transparent deployment a "computing grid," except that it manages and distributes not only the applications but also the user interfaces and roaming user profiles. In other words, applications must no longer be deployed on hosts at fixed addresses, but over a network that becomes THE single host (with the additional benefits of redundancy, automatic recovery after failover, and transparent scalability). In such a pattern the WAN becomes a minor point: just one of the many links that make up a working system (alongside hardware buses between devices in the same host, or even the buses within a single chip). What has been done for processors (yes, with caches, but also with the many possibilities of rescheduling working threads or fibers) must be extended to the whole network, including LAN and WAN links. It is more complex here because these links need to be more tightly secured: there's a general performance problem due to the necessary encryption and authentication, the need to constantly monitor and re-evaluate whether the connected components/hosts/devices are still secure, what to do if one of them is flagged with a problem, and how to reconnect through another access point when the users are clueless about doing so securely according to the security needs of the organization running the system.

In multi-homed organizations that need WAN solutions, it is extremely common for applications to be deployed to places and users that absolutely cannot manage recovery themselves when a failure occurs: the system will have disconnected them from the network (possibly because of an external attack, or because that site has been compromised). How can you investigate the problem at these sites? This requires local support, and local support has a cost. The alternative is to deploy the system not as a monolithic application but as several layers (and of course to keep the ability to continue working through other means, including phones and faxes, to cover the most urgent needs). But organizations will not avoid the need to send a qualified person to solve the problem in situ. After all, this is what all ISPs in the world do to reconnect their customers: they contract with local third parties (including competitors), or they invest in training their users to self-assist just enough to recover the minimum needed for more complete remote assistance.

I think training is the most cost-effective solution. This includes documenting the systems, selecting some local users as the best local resources for working with central assistance, plus a long-term plan to train every user as much as possible (yes, including the use of "old-aged" solutions such as phones and faxes, but also helping them evaluate what is really urgent and what can temporarily be handled by slower means like postal services). The other way to reduce the cost of downtime is to be able to locate where the failures happened and to isolate that place as much as possible, in a very short time, without affecting operations for the rest of the system. The earlier you detect and the smaller the parts you isolate, the more time you gain for using slower alternate solutions. Here again, this requires training.

Finally, organizations should be less dependent on very distant remote sites. Outsourcing may be good for avoiding complete disasters, but outsourcing abroad is much riskier: WAN links are already quite slow, and they can easily become saturated and unusable without notice (and you won't have any way to accelerate recovery if it depends on people you have never had contact with, such as foreign ISPs or governments).

All these solutions have nothing in common with the basic devices proposed in the article, which are just cheap caching proxies. They offer absolutely no help when the documents needed are new and have never been accessed before, yet those are exactly the documents your remote sites will work with the most. What does this mean? Reduce the size of the documents and allow them to be referred to by a numeric or symbolic identity. You don't actually need the whole document with all its details: split documents up, add metadata that can be indexed separately, implement and deploy search engines and smarter (more selective) databases, and don't expose everything to remote people who don't actually need it for their work. If someone needs more details and is authorized to get them, design and deploy a working strategy that delivers those details in time, but possibly later. And make the most of your WAN infrastructure: use automatic schedulers and allow transfers during off-peak hours or days. Create a console that lets users see the list of their scheduled tasks and their current completion status, integrate it with their working agendas and contact lists, and add tools that monitor the length of these lists, so that you can assist sites whose queues are constantly full: investigate, invest in better WAN links, train and hire local people, or contract with a local support service.

Maybe (and in fact most frequently) the problem is not even at the remote site (with its limited local resources, it usually tries to do the best it can) but within the central site itself (most often caused by bad management or bad policies there, a lack of communication between people, or completely unrealistic strategies that will ultimately have severe financial and social impacts on the company's results).

abghodke

How will Wi-Fi Direct affect WAN acceleration?

wavmeister

Would "bit cache", "bit changes", or "bit updates" be a better term? It's been long called bit backups in the backup world where only the parts of a file that changed are backed up in order to conserve bandwidth. Why continually reinvent terms?

jeffsilverm

Look at F5's WANJet and also the WebAccelerator module for LTM.

jg.devries

We are a distributor of Viprinet in the Netherlands and Belgium. Viprinet is a German multichannel router concept backed by a worldwide patent. Where all the products mentioned try to pump more data through the same line, Viprinet goes the other way: increasing bandwidth by bundling cheap consumer-style broadband in a very innovative way, increasing reliability, and automatically creating secure tunnels over the Internet. This makes the drawbacks mentioned easier to overcome. Streaming media performs exceptionally well over WAN links, and there is enough bandwidth for remote locations or remote workers. And the pricing, for both the product and the labor, is much better than the products mentioned. I speak for Europe, where good broadband penetration is everywhere; I can't speak for the US situation. In my opinion this is the way to go, since 'pumping data through a line' increases latency and doesn't improve redundancy or security. Viprinet does all that better and cheaper.

bobwinners

Sounds to me more like leveling bandwidth. There will be a lot of background file checking between 'appliances.' So it's a temporary measure to be used instead of actual bandwidth increases. Kind of like keeping the bathtub full with a dripping faucet instead of filling it each time you need it. I suppose all this is to make us 'ready' for remote application serving and the huge bandwidth requirements that will necessitate. But wait! Those apps will really be on an 'appliance' in our remote closets. Say, isn't 'appliance' another word for 'server'?

FatNGristle

Our parent company implemented the Cisco solution and had nothing but problems with it (years ago). We used Riverbed and have been quite happy. There have been a few glitches requiring us to reboot the Riverbeds or dump the cache, maybe 4 times in 3 years.

greg.hruby

Should I be looking for a technology that writes off the WAAS/WAN accelerator's "files" as my backup? Looks like an architecture change coming!

rkeegan

How secure is the device? How is security managed on it?

QAonCall

Is that not accomplishing the same thing?

Chug

This article seems to be written from the standpoint of assuming that the remote office does not already have its own file server storage. How about considering that WAN accelerators will allow a company to get rid of the servers in its remote offices and centralize that data? It's a great idea, but I'm not convinced we can maintain the performance we get from local servers.

The 'G-Man.'

That would mean a measurable return versus the money spent. How would you calculate this in relation to WAN acceleration?

Michael L.

A lot of good points made. There certainly are many things to consider when choosing a WAN Optimization solution. I discuss some of them at this blog: http://bit.ly/2bUh5

jmarkovic32

Haven't you learned anything over the years? IT is all about buzzwords and marketing. Why do you think we've gone from ASP to Web 2.0 to SaaS?

singhraj

We've had Riverbed Steelheads for a few years now, and we're implementing a few more over the course of the next quarter.

Turd Furgeson

No need to backup anything more than the config of the appliance.

jmarkovic32

The TCO of an appliance is almost always lower. Most appliances are fire and forget. Sure I have to upgrade the firmware every six months or so, but that's much less than a full-featured server where many things can go wrong. Most importantly, I can install an appliance anywhere. Most of them are plug and play. You should have seen some of the branch offices at my last job. The server closet was shared with a water heater and hazardous waste. I'd rather put an appliance in there than a full-fledged server.

Turd Furgeson

The Riverbed-type devices can handle email, web, app, and other traffic.

greg.hruby

Our document management tool also uses differential file transfer. Same concept: only send the changes, not the whole bloody file.

keith

I was wondering the same thing. I have a customer that has an AT&T point-to-point link and no servers on the other side. They are running file shares, RDP, and AS/400 traffic. They are thinking of going VPN over cable (50 Mbps down, 5 Mbps up) and getting either a Riverbed appliance or Windows Server 2008 DFS. Windows will not help RDP or AS/400 traffic, but it will save them some money.

mjmurdza

We did this same analysis. We decided on WAN acceleration appliances at 180 locations because the TCO is lower on the appliances than that of servers. We also eliminated two staffers by reducing the workload of the server support group. Pretty tough to pass up ROI like that.

Turd Furgeson

If you are serious about it: we did it with the first generation of Cisco devices (WAFS), and the newer devices from Cisco and Riverbed are much better. We will be moving to one of those shortly. Obviously there are lots of variables, but the user experience is pretty much the same. First you have to calculate the savings. We did it in conjunction with going from NT to W2K, so we removed at minimum three servers from each of our branch offices, considering we had at least a PDC, BDC, and file server in each location. We got rid of all that hardware, software, and backup support (including local tape drives and offsite costs). The people support at these sites also went down; you essentially just need desktop support.

AstroCreep

WAN optimization devices aren't made to replace file servers or the like; they're made to make resources on a different network easier to access. I can have a file/print server at each of my branches, but I likely can't install a mission-critical client/server application at each office: it would be too costly in support and upgrade fees, and my data would then be decentralized. I work at a financial services firm (accounting, auditing, tax, etc.), and as anyone who has ever had to support the applications in that field knows, these apps are usually quite large (CCH's ProSystem fx Tax and Thomson's CS suite), and without these WAN boxes the time it takes to open any of them across a WAN link is excruciating. For example, the tax program takes a good ten minutes to OPEN...to simply OPEN. With the Riverbed units it's cut down to three minutes. Sure, an app like this we published as a RemoteApp on Terminal Server, but there are other apps that we aren't allowed to install in a TS environment due to licensing, and some where users need to download information from the server directly to their laptops so they can work on it at a client's location ("check out" the data). An app like that I can't use efficiently in a TS environment, because if the location they're at doesn't allow them to use the internet, they can't access the data they need. A lot of these devices also have options that can be bolted on, such as a DNS server and DHCP server/relay, if they are required at the remote office.

alexisgarcia72

We had a Riverbed accelerator in the past (not in production now) and the benefit was amazing. File transfers, replicas between servers, Citrix sessions, and remote application speeds increased by 30 to 90%. I noticed the change in speed when I moved a full Windows Server ISO from NY to Mexico in less than 10 minutes over a single E1. It's really amazing. Remote Citrix and RDP sessions with WAN acceleration work as if they were local.

Stewart Levin

Whichever vendor you choose to test, be sure to test on live links, not just in the lab. Not all WAN optimization vendors are equal. The large players bought their way into the market and have had difficulties integrating the technology into their otherwise excellent network offerings. Also make sure that the vendors cover all IP traffic, not just mail and file transfer. Most vendors ignore thin-client traffic or have significant problems with it (Cisco, Riverbed, etc.). An excellent ROI calculator for WAN optimization can be found at: http://www.expand.com/roi/ (Oh, yes. I work for Expand.)

greg.hruby

Cost to the organization for the improvement versus savings explicitly tied to the improvement. So if staff process more "data" in a shorter time, person-hours per task go down and productivity and profit go up. That's one measure.

sheryl.buscheck

I'm seconding Stewart's comment, and with the rise of virtual desktops (hosted in the datacenter, streamed, etc.), don't forget to ensure your WAN opt solution can help with the additional traffic virtual desktops will create (I work for Citrix). We have also produced an ROI calculator (in case you want multiple sources) at www.citrix.com/branch-repeater-roi - it takes 5 to 10 minutes to see your potential savings.

Turd Furgeson

Getting rid of remote servers, you get these savings: hardware support for multiple servers; software support for multiple servers; backup support (including tapes, hardware, licenses, and offsite storage); and people support, since once you remove the servers you need only desktop support. Assuming you use virtualization or clustering attached to a SAN or NAS device with snapshots and replication, the only downtime is a WAN link going down, depending on your redundancy there.
