Provided by: Cornell University
Date Added: Feb 2012
Modern high-performance memory subsystems support a high degree of concurrency, primarily by increasing the number of independent channels and/or the number of independent banks per channel. The authors propose a systematic and general approach to designing self-optimizing memory schedulers that can target arbitrary figures of merit (e.g., performance, throughput, energy, and fairness). Using their framework, they instantiate three memory schedulers targeting three important metrics: the performance and the energy efficiency of parallel workloads, and the throughput/fairness of multiprogrammed workloads.
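To make the idea of a scheduler that targets an arbitrary figure of merit concrete, here is a minimal, hypothetical sketch (not the paper's actual design): a scheduler that ranks pending DRAM requests with a pluggable scoring function. All names (`Request`, `schedule`, `perf_fair`) and the specific scoring weights are illustrative assumptions.

```python
# Hypothetical sketch: a memory scheduler parameterized by a
# figure-of-merit scoring function. Not the paper's algorithm.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Request:
    bank: int   # DRAM bank the request targets
    row: int    # row address (a row-buffer hit if it matches the open row)
    age: int    # cycles the request has been waiting

def schedule(pending: List[Request],
             open_rows: Dict[int, int],
             score: Callable[[Request, Dict[int, int]], float]) -> Request:
    """Pick the pending request with the highest figure-of-merit score."""
    return max(pending, key=lambda r: score(r, open_rows))

# One illustrative figure of merit: strongly favor row-buffer hits
# (performance), breaking ties by request age (a simple fairness proxy).
def perf_fair(r: Request, open_rows: Dict[int, int]) -> float:
    hit = 1.0 if open_rows.get(r.bank) == r.row else 0.0
    return hit * 1000 + r.age

pending = [Request(bank=0, row=5, age=3),
           Request(bank=0, row=7, age=10),
           Request(bank=1, row=2, age=1)]
open_rows = {0: 7, 1: 4}            # currently open row per bank
best = schedule(pending, open_rows, perf_fair)
print(best.bank, best.row)          # the row-buffer hit in bank 0 wins
```

Swapping in a different `score` function (e.g., one penalizing energy-costly row activations) retargets the same scheduler to a different metric, which is the flexibility the abstract describes.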