People often throw the wrong resources at performance problems, yet tools exist to gauge the user experience and improve it at an acceptable cost, says Bob Tarzey.
Twenty-five years ago, if you wanted to know whether the users of a given IT application were happy with their experience, you could usually just wander along the corridor and ask them. They were mostly in the same central location as the computer running the application, using a VDU linked directly to that machine - one of only a few in the building.
It is all very different today. Users of most applications are widely dispersed, and they may use a range of devices to access numerous applications. The growth in the number of users is not just because many more employees have direct access to IT as part of their day-to-day jobs; it is also because applications are increasingly open to use by outsiders.
These outsiders are either active, such as supply chain partners, online shoppers and users of internet banking, or passive, for example, viewing video displays in stores or passing through ticket readers at train stations. If external users receive a poor experience they will at best be disgruntled and form a poor opinion of the organisation whose service has let them down, and at worst go to a competitor.
If internal users are dissatisfied, they may put up with it. However, that dissatisfaction can reduce the efficiency of business processes, lead employees to bypass procedures - which can harm compliance - and provide an excuse for inactivity and distraction. Whether the users are internal or external, you cannot rely on them to report a poor experience.
Gauging and improving
It is therefore essential for any organisation to be able to gauge the experience of all users and take effective action to improve it when it is not good enough. The service users receive depends on three things: their location, the network that connects them to the relevant applications and the run-time environment of the application itself.
The last of these, the run-time environment, has become even more problematic in the past 10 years with the increasing use of virtualisation and cloud-based services. Both have many benefits, but they also divorce applications from the hardware that ultimately drives performance - hardware that may be shared with other applications or, in the case of public cloud, with other organisations.
However, with the right tools, firms can collect the data needed to understand and improve the user experience. This includes data gathered from a wide range of network and security devices - routers, load balancers and content filters, for example - as well as application performance data collected by strategically placed performance monitors.
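At its simplest, a performance monitor of the kind described here repeatedly times an operation from the user's vantage point and compares the results against an acceptable threshold. The sketch below illustrates that idea in Python; it is a minimal illustration, not any vendor's product, and the function names, sample count and 500ms threshold are all assumptions chosen for the example.

```python
import time
from statistics import mean


def measure_latency(operation, samples=5):
    """Time a callable several times, returning per-sample latencies in ms.

    In a real monitor, 'operation' would be a transaction against the
    application - a page request, a database query - run from a probe
    placed near the users whose experience is being gauged.
    """
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        operation()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies


def summarise(latencies, threshold_ms=500.0):
    """Reduce raw timings to the figures an operations team would track."""
    worst = max(latencies)
    return {
        "avg_ms": mean(latencies),
        "worst_ms": worst,
        "acceptable": worst <= threshold_ms,  # flag for alerting
    }
```

A monitoring loop would call `measure_latency` on a schedule from several locations, feed the summaries into a dashboard, and alert when `acceptable` turns false - catching the poor experience that, as noted above, users themselves rarely report.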
There are a number of vendors that provide such tools. These include Visual Performance Manager from Visual Network Systems - VNS, née Fluke Networks - which combines both network and application performance monitoring to provide...
Bob Tarzey is a director at user-facing analyst house Quocirca. As part of the Quocirca team, which focuses on technology and its business implications, Tarzey specialises in route to market for vendors, IT security, network computing, systems management and managed services.