Paging for Multicore Processors
Paging for multicore processors extends the classical paging problem to a setting in which several processes share the cache simultaneously. Recently, Hassidim studied cache-eviction policies for multicores under the traditional competitive-analysis metric, showing that LRU is not competitive against an offline policy that has the power of delaying request sequences arbitrarily to its advantage. In this paper the authors study caching under the more conservative model in which requests must be served in the order they arrive. They consider the objective of minimizing the number of faults and derive bounds on the competitive ratios of natural cache-management strategies.
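To make the conservative model concrete, the following is a minimal sketch of fault counting when several processes share a single LRU cache and requests must be served as they arrive. The round-robin interleaving, the function name, and the per-process page tagging are illustrative assumptions, not details from the paper:

```python
from collections import OrderedDict

def shared_lru_faults(sequences, cache_size):
    """Count faults of one LRU cache shared by several processes.

    `sequences` is a list of per-process page-request lists. Requests
    arrive in round-robin order and must be served immediately (no
    delaying, unlike Hassidim's offline adversary). Pages of different
    processes are kept distinct by tagging them with the process index.
    """
    cache = OrderedDict()  # keys ordered by recency of use
    faults = 0
    iters = [iter(s) for s in sequences]
    active = list(range(len(sequences)))
    while active:
        for i in list(active):
            try:
                page = (i, next(iters[i]))  # tag page with its process
            except StopIteration:
                active.remove(i)            # this process is finished
                continue
            if page in cache:
                cache.move_to_end(page)     # hit: refresh recency
            else:
                faults += 1                 # fault: bring the page in
                if len(cache) >= cache_size:
                    cache.popitem(last=False)  # evict least recently used
                cache[page] = True
    return faults
```

For example, two processes that each cycle through a three-page working set thrash badly in a shared cache of size 4 (every request faults), while a cache of size 6 holds both working sets after the initial misses. This is the kind of interference between processes that makes the multicore setting harder than classical paging.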