Around the holiday season, I had a large number of packages delivered, and I noticed that delivery companies' tracking Web sites gave the impression of real-time data. It looked like the moment a package was scanned, the latest status would appear on the screen once you refreshed it. In reality, the scan status often wouldn't appear for hours after the package was scanned. In fact, I received one package before the system said it was shipped.
Thinking back, I've encountered many similar systems; banks in particular come to mind. Banks' dependence upon batch processing in the middle of the night seems quaint at first glance; however, customers often feel there is a double standard when withdrawals are evaluated for overdraft fees the moment they occur, while deposits usually do not post until the evening batch at the earliest.
Some systems put a disclaimer on these kinds of views that reads, "These results may not reflect recent activity." This helps alleviate users' frustration with the system, since the user (assuming the user actually reads the disclaimer) knows not to expect real-time information.
While this disclaimer is accurate and reflects the reality of a batch processing system, it is no longer satisfactory to end users. Many new systems without the legacy baggage use transactional databases rather than batch processing and handle just as many items per day. Users see that and wonder why all companies can't provide real-time data. Of course, we know why: A company still using batch processing may be saddled with a million-line application originally written in 1982, and it would take a decade to rewrite it. But to the end user who is comparing your product or service to a competitor's, it can be a significant factor in his or her decision.
Search engines are a great example of a batch processing system. It used to take weeks, if not months, for a new page or site to appear in search results. Webmasters would keep a close eye on their server logs for AltaVista or Lycos to come around, and then check out their latest rankings. Search engines would post guidelines about how long after submitting a page to expect to see it in results. Then, some search engines started indexing sites sooner and faster, particularly major sites and sites with frequent updates. Users soon realized that if they had just heard about some new craze or viral whatever, some search engines were going to have that information and others were not. In other words, moving closer to a real-time system by reducing the time between batches became a major competitive advantage.
I recognize the reality that not every system can be transformed from a batch processing model to a real-time model. Sometimes the issue is legacy code; other times the program architecture or physical architecture causes data delays (some systems are very resource intensive, and caching data or results is important to take some of the load off them). But anything you can do to bring your systems' data closer to real time will be a major advantage in the marketplace.
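One incremental step between a nightly batch and a fully transactional system is simply shrinking the batch interval: drain whatever has accumulated every few seconds instead of once a day. Below is a minimal sketch of that idea in Python; the `micro_batch` function, the queue of pending events, and the `post` callback are all hypothetical names for illustration, not any particular vendor's API.

```python
import queue
import time


def micro_batch(events: queue.Queue, interval_s: float, max_batches: int, post) -> None:
    """Drain all pending events every `interval_s` seconds instead of once nightly.

    events:      queue of pending updates (e.g., package scans, deposits)
    interval_s:  time between batch runs; shrinking this moves closer to real time
    max_batches: how many batch cycles to run (bounded here so the sketch terminates)
    post:        callback that receives each non-empty batch, e.g. a DB write
    """
    for _ in range(max_batches):
        batch = []
        while True:
            try:
                batch.append(events.get_nowait())
            except queue.Empty:
                break  # queue drained; this batch is complete
        if batch:
            post(batch)  # publish the batch downstream
        time.sleep(interval_s)


# Example usage: five scans queue up, and the first cycle posts them all at once.
pending = queue.Queue()
for scan in ["scan-1", "scan-2", "scan-3", "scan-4", "scan-5"]:
    pending.put(scan)

posted = []
micro_batch(pending, interval_s=0.0, max_batches=2, post=posted.append)
```

The point of the sketch is that nothing about the downstream `post` step has to change; only the scheduling does, which is why reducing the time between batches is often a cheaper first move than rewriting a legacy system around a transactional database.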
J.Ja

Disclosure of Justin's industry affiliations: Justin James has a working arrangement with Microsoft to write an article for MSDN Magazine. He also has a contract with Spiceworks to write product buying guides.
Justin James is the Lead Architect for Conigent.