Cache complex Web service objects with ColdFusion

Sometimes software development in an enterprise requires reconciling what is desired with what is doable. See how the flexibility of ColdFusion MX and caching was used to get the best Web service possible even with a legacy system.


In previous articles, I explained how to set up, configure, and optimize Web services with ColdFusion MX (CFMX). The process is nowhere near as simple as Web service hypesters claim it to be. However, utilizing CFMX's new Web services-specific functionality was an interesting experience.

I ran into some issues initially setting up the site to communicate with the WebLogic servers, but, after a bit of research, we worked it all out. The next minor setback was with speed. The hardware guys conferred with the software development guys, and soon we had that problem beat too. It was at this time that we received our last request for the project.

We took two separate environments, one legacy and one Web, and we made them communicate. What was once not possible was now happening on demand in around 30 seconds—of course, we were told that was 30 seconds too long.

The request
Management, being the visionaries they're supposed to be, wanted to further reduce the amount of time it took to run the Web service. In fact, they didn't want to just reduce the run time; they wanted to eliminate it completely. Looking over the history of the project, we had managed to continually cut run time. Now, we needed to cut it out altogether.

Our very first test run came in around five minutes. After fixing the obvious issues, we brought the time down to two and a half minutes. Shortly thereafter, we brought the call down to under 30 seconds. Being tech geeks, we were understandably proud of our accomplishments. As employees, we had to get over that pride quickly and optimize even further. No rest for the weary, as they say.

Management wanted all three Web services to take no longer than an average page load; an average non-Web services page, that is. Luckily for us, two fit into that category with no modifications: one ran under three seconds, and the other under one second. The 30-second service was the one that stumped us.

The problem
The interesting thing about the enterprise world is the way management can get gung-ho on a particular directive. "We will be bringing timely, up-to-the-second data for our customers." This was what was conveyed, so that is what they received (albeit with a 30-second delay). Management's response was, "We can't go live with a delay like that every time they hit that page. Isn't there something we can do?"

The coders' response: "Not if you want to deliver up-to-the-second data."

After a pause, management came back with, "Well…what if it didn't have to be up to the second?" That is classic enterprise management.

Now, with this new tidbit of information, we had options. The most obvious choice was caching, which would provide instant feedback (i.e., zero load time) after an initial 30-second page load. The problem was how to cache it. Up to this point in the application, client variables had been the primary storage mechanism, and the ability to store them all in a database had proved useful. Client variables, however, wouldn't work with the complex objects returned by the Web services. This left us two options: converting the complex object to a query or storing it in a session variable.

The problem with converting the complex object into a query was that there was no simple way to do so. The object was an array of structs, with further arrays and structs embedded inside the main object. What appeared to be a column in the object could hold anything from an undefined value to an array of more structs, which ruled out a simple translation to a query.
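To give a sense of the shape, here's a hypothetical slice of such an object (all field names invented for illustration):

    <!--- Hypothetical shape of the Web service return: an array of structs --->
    <cfset result = ArrayNew(1)>
    <cfset result[1] = StructNew()>
    <cfset result[1].accountID = "A-1001">
    <!--- On some rows, a "column" is itself an array of structs... --->
    <cfset result[1].lineItems = ArrayNew(1)>
    <cfset result[1].lineItems[1] = StructNew()>
    <cfset result[1].lineItems[1].amount = 125.50>
    <!--- ...while on other rows that same "column" is simply undefined --->
    <cfset result[2] = StructNew()>
    <cfset result[2].accountID = "A-1002">

A ColdFusion query needs every row to expose the same simple-valued columns, so a structure like this has no clean mapping.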

The main concern with using session variables was what would happen if a machine went down and the hardware load balancer sent the user to a different machine. The session variable (i.e., the cached complex object from the previous server) wouldn't be available on the new server, so that user would have to start over.

The solution
We opted for session caching. We figured the occasional extra 30-second hit to the user was far better than the amount of time we'd have to spend converting the complex object into easily parsable queries.

First, we had to enable session management in the application. We did this by adding the cfapplication tag shown in Listing A to the Application.cfm file.
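A minimal sketch of such a tag, with a placeholder application name and a session timeout assumed to match the 20-minute cache window discussed below, might look like this:

    <!--- Application.cfm: enable session management so complex objects can be cached per session --->
    <cfapplication
        name="legacyPortal"
        clientmanagement="Yes"
        sessionmanagement="Yes"
        sessiontimeout="#CreateTimeSpan(0, 0, 20, 0)#">

The clientmanagement attribute is included here only because client variables were already the app's primary storage mechanism.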

We set up the cache to expire every 20 minutes. The logic here was that 20 minutes is more than enough time for the users to do what they need to do with the information. If they take longer than 20 minutes, the data should be refreshed, since they may have walked away from the computer or may be checking constantly for new updates. The data itself changed only once or twice a day.

The second thing we did was add the caching code after the Web service call. This is merely the simple assignment statement shown in Listing B.
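As a sketch, assuming the Web service call returns into a variable named wsResult and the cache lives in two hypothetical session keys, it amounts to:

    <!--- Cache the complex object returned by the Web service, plus a timestamp for the expiry check --->
    <cfset session.svcCache = wsResult>
    <cfset session.svcCacheTime = Now()>

The timestamp isn't strictly part of the assignment, but the expiry logic in the next step needs some record of when the object was cached.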

Next, we had to add a bit of logic to handle the cache. We wrote a function (shown in Listing C) to check the cache status. If the cached data existed, we wanted to use that data. If it didn't exist (e.g., it had expired or this was the user's first time through), we wanted to call the Web service and cache the complex object. Note how we've wrapped the caching statement with this new logic.
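A sketch of such a function, using the same hypothetical session keys and an assumed application.wsURL variable holding the service's WSDL location, might read:

    <cffunction name="getServiceData" returntype="any">
        <!--- Use the cached object if it exists and is younger than 20 minutes --->
        <cfif StructKeyExists(session, "svcCache")
              AND DateDiff("n", session.svcCacheTime, Now()) LT 20>
            <cfreturn session.svcCache>
        </cfif>

        <!--- Cache miss: call the Web service, then run the caching statement from the previous step --->
        <cfinvoke webservice="#application.wsURL#"
                  method="getData"
                  returnvariable="wsResult">
        <cfset session.svcCache = wsResult>
        <cfset session.svcCacheTime = Now()>
        <cfreturn session.svcCache>
    </cffunction>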

That's it, right? Or is it? In our initial excitement at delivering the goods, we thought so. It passed unit testing and went on to QA testing. It was there that we discovered a major logic bug. Our application was working exactly how we programmed it to. It would call the Web service, cache the results, and use that data until the 20 minutes were up.

Unfortunately, if two users shared a machine, this could be disastrous. One person could log in, check their data, and log out before the cache expired. A second user could then log in and, according to the app, still qualify for using the cache since the 20 minutes hadn't expired. This problem was fixed by adding a second session parameter that stored the user's ID. This second parameter, shown in Listing D, provided a way to check for user identity in the function.
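In sketch form, and assuming the app tracks the logged-in user in a session.userID variable, the extra parameter is stamped alongside the cache, and the function's test grows one condition:

    <!--- When caching, record which user the object belongs to --->
    <cfset session.svcCacheUser = session.userID>

    <!--- In the function, the cache is only valid for the same user --->
    <cfif StructKeyExists(session, "svcCache")
          AND session.svcCacheUser EQ session.userID
          AND DateDiff("n", session.svcCacheTime, Now()) LT 20>
        <cfreturn session.svcCache>
    </cfif>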

Wrap-up
After this last-minute request, we finally went live with the project. For all of their madness and flip-flopping on requests, management proved their worth. The users loved the new service. Users who hadn't accessed the Web site in years (yes, the site is that old) were calling in to talk about the new tool and the benefits it provided them. This, in turn, led users to make new requests for enhancements.

The tool was so well received, and users felt compelled to suggest further enhancements, because it loaded quickly and provided access to data that made them more productive. If the tool had rolled out with our initial load time, we might have gotten phone calls, but they wouldn't have been positive. More likely than not, the users simply would have left without telling us why, which is the worst possible outcome.

Web services are an important part of the Web's future. This is easily seen inside large, corporate enterprise environments. There is so much money spent on legacy systems that it's easier to hook up Web services to open up those systems, extending their useful lives, than it is to develop new tools to replace them. I hope this series of articles will prove helpful when you find yourself opening up new channels to these valuable legacy systems.
