Whether an organization has an e-commerce site or provides services that are dependent on information delivered through web applications, optimal application performance is critical for employees, partners and customers.
However, when user traffic spikes and servers are bogged down by compute-intensive SSL transactions, large file requests, memory overloads, or flood attacks, site and application performance can slow or grind to a halt. And when users cannot access a company's applications, or face long waits when they can, the resulting downtime can cripple a business, at least temporarily.
It is easy to throw more hardware at the problem, but that is not always the best strategy when traffic isn't directed to the best-performing resource, nor is it economical for many businesses.
So how do IT managers ensure web and application uptime and speed to get optimal performance out of datacenter equipment — while keeping costs to a minimum?
Five benefits of application delivery controller (ADC) solutions
The costs of ensuring high availability, fast access, and secure operation of web application infrastructure can be dramatically reduced using application delivery controllers (ADCs) and server load balancers. ADC solutions monitor traffic loads on busy servers and re-route requests to servers with spare capacity, preventing crashes and keeping traffic flowing.
Below are some of the benefits that IT managers can realize by leveraging ADCs to optimize web and application infrastructure while streamlining IT costs.
#1 Improve performance by distributing traffic among multiple servers
Server load balancing is the basic functionality of ADCs. The ADC accepts traffic on behalf of the servers and selects the server to which each request is forwarded. ADCs provide multiple Layer 4, IP-based methods for distributing user traffic to servers, including allowing the administrator to assign a "weight" to each server to better control traffic distribution. For example, if two servers differ substantially in performance capacity, you can give the higher-performing server a higher weight, sending it 2, 10, or 100 times more traffic.
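The weighted distribution described above can be sketched in a few lines of Python. This is an illustrative model, not any particular ADC's implementation; the server names and weights are hypothetical.

```python
import random

# Hypothetical server pool: names mapped to administrator-assigned weights.
# A server with weight 3 receives roughly 3x the traffic of a weight-1 server.
servers = {"server-a": 3, "server-b": 1}

def pick_server(pool):
    """Weighted random selection: probability proportional to weight."""
    names = list(pool)
    weights = [pool[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Over many requests, server-a should receive about 75% of the traffic.
counts = {name: 0 for name in servers}
for _ in range(10_000):
    counts[pick_server(servers)] += 1
```

Real ADCs typically combine weighting with deterministic schemes such as weighted round-robin or least-connections, but the effect on traffic share is the same.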
#2 Optimize resources by efficiently allocating traffic based on application types
Layer 7 content switching is a higher-level load balancing method that enables an ADC to decide where to send traffic based on information in the request itself. The ADC examines the request content (such as the URL) and "switches" the request to the appropriate server. For example, an ADC can send all graphics or multimedia requests to a group of optimized servers, while other requests go to servers optimized for transaction processing. Dedicating servers to increasingly specialized tasks makes them more efficient and provides greater performance tuning and application flexibility.
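A minimal sketch of this kind of URL-based switching might look like the following. The pool names and file extensions are assumptions for illustration, not a specific product's configuration.

```python
# Hypothetical server pools: one tuned for static media, one for transactions.
MEDIA_POOL = ["img-1", "img-2"]
APP_POOL = ["app-1", "app-2", "app-3"]

# File extensions treated as static media (an illustrative list).
MEDIA_EXTENSIONS = (".jpg", ".png", ".gif", ".mp4")

def choose_pool(url_path):
    """Inspect the requested URL (Layer 7) and switch to the right pool."""
    if url_path.lower().endswith(MEDIA_EXTENSIONS):
        return MEDIA_POOL
    return APP_POOL
```

A production ADC would match on many more attributes (headers, cookies, HTTP method), but the decision structure is the same: inspect the request, then pick a pool before picking a server.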
#3 Ensure application and data-access consistency
When deploying load balancers or ADCs, some types of applications need to maintain "stickiness" to a single server to ensure that the application session is completed. An example is an online shopping cart: if mid-session the customer is switched to a server that does not hold that user's shopping cart data, the cart will be lost, and potentially the final sale along with it.
Load balancers/ADCs can provide "stickiness" using two types of methods. Layer 4 persistence uses the source IP address to keep users "stuck" to the appropriate server, but it has proven unreliable because proxy servers and network address translation (NAT) make it difficult to correlate an IP address to an individual user. Layer 7 persistence enables ADCs to inspect data at the application layer, using browser headers and other application protocol elements to uniquely identify users. Today's ADCs, using Layer 7 persistence or a combination of Layer 7 and Layer 4 persistence, can ensure that users have a reliable, consistent application and website experience.
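The two approaches can be contrasted in a short sketch. This is a simplified model under stated assumptions: the pool, the `lb-server` cookie name, and the hash scheme are all illustrative, not any vendor's mechanism.

```python
import hashlib

SERVERS = ["cart-1", "cart-2", "cart-3"]  # hypothetical pool

def l4_persistence(source_ip):
    """Layer 4: hash the client IP so the same address always maps to the
    same server. Breaks down when many users share one address behind a
    proxy or NAT gateway -- they all land on the same server."""
    digest = hashlib.md5(source_ip.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

def l7_persistence(cookies, newly_assigned):
    """Layer 7: a cookie the ADC set on the first response identifies the
    individual user, regardless of which IP the request arrives from."""
    server = cookies.get("lb-server")
    return server if server in SERVERS else newly_assigned
```

Note the asymmetry: the Layer 4 scheme is stateless but coarse (one decision per IP), while the Layer 7 scheme pins each user individually at the cost of inspecting application-layer data.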
#4 Improve users’ experience and reduce server overhead with SSL acceleration
If your site contains transactional elements, you likely use SSL to encrypt and secure those transactions, a CPU-intensive process that can quickly deplete a server's resources. SSL acceleration moves this workload off the servers and onto the ADC. For SSL-encrypted requests, the ADC decrypts the request and uses the HTTP header information to decide where to send it. Using specialized SSL-processing ASICs, ADCs significantly increase SSL throughput while simplifying SSL certificate management.
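As a rough illustration of SSL offload, an HAProxy-style configuration fragment (certificate path, addresses, and names are placeholders, and syntax details vary by version) terminates TLS at the load balancer and forwards plain HTTP to the back-end servers:

```
frontend https_in
    bind *:443 ssl crt /etc/ssl/private/site.pem   # decrypt here, once
    default_backend web_servers                    # forward plain HTTP

backend web_servers
    balance roundrobin
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

Because decryption happens at the front end, the HTTP headers are visible to the load balancer, which is what makes the Layer 7 switching and persistence described above possible even for encrypted traffic.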
#5 Reduce single points of failure
The ADC performs health checks on the servers and automatically takes a server (or an application on that server) offline if it becomes unresponsive or fails, re-routing users to the remaining functioning servers. Since all inbound traffic passes through the ADC, a failure of the ADC itself could take down the entire server farm and site. For this reason, ADCs can be deployed in a redundant, high-availability (HA) configuration.
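The health-check behavior above can be sketched minimally as follows. Real ADCs run these probes continuously on timers (TCP connects, HTTP GETs against a status page, and so on); the pool and the simulated failure here are hypothetical.

```python
def healthy_servers(pool, probe):
    """Return only the servers whose probe succeeds; failed servers are
    excluded from selection, which is what re-routes users around them."""
    return [server for server in pool if probe(server)]

# Simulated probe: pretend "web-2" has stopped responding.
down = {"web-2"}
probe = lambda server: server not in down

active = healthy_servers(["web-1", "web-2", "web-3"], probe)
```

When a failed server later passes its probes again, the ADC simply puts it back in the pool, so recovery requires no manual re-routing.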
As businesses collect, store, and access ever more data, servers are being asked to carry heavier loads, and organizations will have to find creative, cost-effective ways to ensure uptime and speed. Employing an ADC solution to manage traffic and optimize web and application performance is one way to stay ahead of the curve and avoid the downtime that can severely injure a company.
About the Author:
Peter Melerud is the cofounder and vice president of product development for KEMP Technologies.