Topology-Aware I/O Caching for Shared Storage Systems

Executive Summary

The main contribution of this paper is a topology-aware storage caching scheme for parallel architectures. In a parallel system with multiple storage caches, these caches form a shared cache space, and effective management of this space is a critical issue. Of particular interest is data migration (i.e., moving data from one storage cache to another at run-time), which can reduce the distance between a data block and its consumers. Because data access and sharing patterns change during execution, data can be migrated within the shared cache space to reduce access latencies.
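To make the migration idea concrete, here is a minimal sketch (not the paper's actual algorithm; all names and the cost model are assumptions): each block tracks which nodes recently accessed it, and the block is migrated to the cache node on a 2-D mesh that minimizes the total Manhattan distance to those accessors, weighted by access count.

```python
# Illustrative sketch of topology-aware cache migration.
# Assumptions (not from the paper): a 2-D mesh topology, Manhattan
# distance as the latency proxy, and per-block access counters.
from collections import Counter


def manhattan(a, b):
    """Hop distance between two (x, y) mesh coordinates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def placement_cost(node, accessors):
    """Total weighted distance from a cache node to a block's accessors.

    accessors: Counter mapping node coordinate -> recent access count.
    """
    return sum(cnt * manhattan(node, acc) for acc, cnt in accessors.items())


def maybe_migrate(block_home, accessors, candidate_nodes, threshold=0):
    """Return the cache node the block should live on.

    Migrates only when the best candidate beats the current home by more
    than `threshold`, to avoid thrashing on small gains.
    """
    target = min(candidate_nodes, key=lambda n: placement_cost(n, accessors))
    if placement_cost(block_home, accessors) - placement_cost(target, accessors) > threshold:
        return target   # migrate the block closer to its consumers
    return block_home   # keep the current placement
```

For example, a block homed at node (3, 3) but read five times from (0, 0) and once from (3, 3) would be migrated to (0, 0), since the weighted distance drops from 30 to 6. As access patterns shift at run time, re-running this decision lets placement follow the block's current consumers.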
