My previous post, “Cache is King: Learn more about website caching policies,” described three types of website caching and the benefits and disadvantages of each. This article delves into several web caching policy considerations that are useful when instituting your own web caching strategy.
A URL is cached as it is requested – All URLs that clients request through the network are cached. When those URLs are requested again, the cached copies are served from within the network.
Cache must replace content when the cache is full – When the cache reaches its configured maximum size, a replacement policy must kick in to determine which content is swapped out.
Common cache replacement policies include Least Recently Used (LRU) and Least Frequently Used (LFU).
- LRU replaces the least recently used page. It maintains a linked list of pages: each newly fetched page is placed at the head of the list, and when the maximum cache size is exceeded, the page at the tail is deleted. LRU exploits temporal locality, requires no per-page counters, and performs better than LFU in practice.
- LFU replaces the least frequently used page, which can be an optimal policy if all pages are the same size and page popularity does not change. Its disadvantages are that it reacts slowly to popularity changes, must keep a hit counter for every page, and does not consider page size.
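The two policies above can be sketched in a few lines of Python. This is a minimal illustration of the eviction logic only, not a production cache; the class and method names are my own.

```python
from collections import Counter, OrderedDict

class LRUCache:
    """Evicts the least recently used page when capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # ordering tracks recency of use

    def get(self, url):
        if url not in self.pages:
            return None
        self.pages.move_to_end(url)  # mark as most recently used
        return self.pages[url]

    def put(self, url, content):
        if url in self.pages:
            self.pages.move_to_end(url)
        self.pages[url] = content
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # drop the LRU page

class LFUCache:
    """Evicts the least frequently used page; keeps a hit counter per page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}
        self.hits = Counter()

    def get(self, url):
        if url not in self.pages:
            return None
        self.hits[url] += 1
        return self.pages[url]

    def put(self, url, content):
        if url not in self.pages and len(self.pages) >= self.capacity:
            victim = min(self.pages, key=lambda u: self.hits[u])
            del self.pages[victim]
            del self.hits[victim]   # note: real LFU caches often retain
        self.pages[url] = content   # history, which this sketch does not
        self.hits[url] += 1
```

Note how the LFU version must maintain the per-page `hits` counter that LRU avoids, which is exactly the bookkeeping cost mentioned above.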
Other factors that can be folded into the caching policy include the requested page size, client importance, and the distance of the cache from the originating web server.
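One way to fold those extra factors in is to score each page as an eviction candidate. The function and weights below are purely illustrative assumptions on my part, not a policy from the article: an old, large page from a nearby origin scores highest, while a page from a distant origin scores lower because it is costlier to re-fetch.

```python
def eviction_score(page, now, w_age=1.0, w_size=0.001, w_rtt=0.5):
    """Higher score = better candidate for eviction.
    page: dict with last_access (seconds), size_bytes, origin_rtt_ms.
    Weights are illustrative and would need tuning in practice."""
    age = now - page["last_access"]
    return (w_age * age                      # stale pages go first
            + w_size * page["size_bytes"]    # large pages free more room
            - w_rtt * page["origin_rtt_ms"]) # distant origins are kept longer
```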
Cache Consistency Management
Also known as cache coherence, this refers to the consistency of data stored in local caches of a shared resource, such as a web server. The problem with caching is twofold: first, cached pages may be modified at the server; second, clients accessing cached pages may receive a stale copy of the content.
Several approaches to keeping cached content consistent include:
Finite “time-to-live” – An example is adding an “Expires” header. A time-to-live can also be configured on the cache server for specific URLs, expressed in days and hours. The proxy caches the page and serves subsequent client requests until the time-to-live expires. After that, the proxy sends an “If-Modified-Since” query to the web server on the next request, and the web server returns the page only if it has been modified.
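The fresh/stale/revalidate cycle described above can be sketched as follows. This is a simplified model, assuming a `fetch_from_origin(url, since)` callback that stands in for the real HTTP request and returns `None` when the page has not been modified since the given timestamp (the 304 Not Modified case); all names are my own.

```python
import time

class TTLCache:
    """Serves cached pages until their time-to-live expires, then
    revalidates against the origin with an If-Modified-Since-style check."""
    def __init__(self, ttl_seconds, fetch_from_origin):
        self.ttl = ttl_seconds
        self.fetch = fetch_from_origin
        self.store = {}  # url -> (content, cached_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(url)
        if entry and now - entry[1] < self.ttl:
            return entry[0]                  # fresh: serve from cache
        # Stale or missing: ask the origin "if modified since cached_at".
        since = entry[1] if entry else None
        content = self.fetch(url, since)     # None means "not modified"
        if content is None:
            content = entry[0]               # reuse the cached body
        self.store[url] = (content, now)     # restart the TTL clock
        return content
```

Passing `now` explicitly keeps the sketch testable; a real proxy would use the wall clock and honor the origin's own `Expires` or `Cache-Control` headers rather than a single fixed TTL.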
Cache Invalidation – The web server knows where its pages are cached, and when a page is modified it sends an invalidation message to those caches. The cache then marks the page as stale, and any subsequent requests are fetched fresh from the web server. To limit invalidation traffic, the server can set up invalidation contracts, limiting the number of invalidations by setting a contract time interval. Each contract has an expiration date and time, and the web server notifies only those caches holding valid, non-expired contracts.
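The contract mechanism can be sketched like this. It is a minimal model of the idea described above, assuming an in-process notification for simplicity; the class and method names (`OriginServer`, `EdgeCache`, `register`) are illustrative, not from any real system.

```python
class EdgeCache:
    """A cache that marks pages stale when the origin invalidates them."""
    def __init__(self):
        self.stale = set()

    def invalidate(self, url):
        self.stale.add(url)  # next request for url re-fetches from origin

class OriginServer:
    """Tracks which caches hold each page under a time-limited
    invalidation contract; only caches with unexpired contracts
    are notified when a page changes."""
    def __init__(self, contract_seconds):
        self.contract_seconds = contract_seconds
        self.contracts = {}  # url -> list of (cache, expires_at)

    def register(self, url, cache, now):
        # The cache takes out a contract when it first stores the page.
        self.contracts.setdefault(url, []).append(
            (cache, now + self.contract_seconds))

    def modify_page(self, url, now):
        # Notify only caches whose contracts have not yet expired;
        # caches with lapsed contracts fall back to their own TTL logic.
        for cache, expires_at in self.contracts.get(url, []):
            if now < expires_at:
                cache.invalidate(url)
```

The contract interval is the server's lever for bounding invalidation traffic: a shorter interval means fewer caches to notify per change, at the cost of more frequent contract renewals.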