An Efficient Data Access Policy in Shared Last Level Cache

Provided by: WSEAS
Topic: Hardware
Format: PDF
Future multi-core systems will execute massive memory-intensive applications with significant data sharing. On-chip memory latency increases as more cores are added, since the diameter of most on-chip networks grows with the number of cores; this makes it difficult to implement caches with a single uniform access latency, leading to Non-Uniform Cache Architectures (NUCA). Data movement and its management further impact memory access latency and consume power. The authors observed that previous D-NUCA designs have used a costly data access scheme to search for data in the NUCA cache in order to obtain significant performance benefits.
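To illustrate why such a search can be costly, the sketch below models a D-NUCA-style lookup in which a requesting core probes banks of a mesh in order of increasing hop distance until the line is found. This is only a minimal illustration under assumed parameters (a 4x4 mesh, made-up hop and bank latencies, a tiny per-bank tag store); it is not the access policy proposed in the paper.

```c
/*
 * Illustrative sketch of a sequential D-NUCA-style bank search.
 * NOT the paper's access policy.  Assumptions: a 4x4 mesh of last-level
 * cache banks, each modelled as a tiny list of resident line addresses;
 * the requesting core probes banks in order of increasing hop distance,
 * paying per-hop network latency plus a fixed bank access latency.
 * All sizes and latencies are invented for demonstration only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdbool.h>

#define MESH_DIM        4   /* 4x4 mesh of NUCA banks (assumption) */
#define LINES_PER_BANK  4   /* tiny tag store per bank (assumption) */
#define HOP_LATENCY     2   /* cycles per network hop (assumption) */
#define BANK_LATENCY    6   /* cycles per bank tag lookup (assumption) */

typedef struct {
    uint64_t lines[LINES_PER_BANK]; /* addresses of lines resident in the bank */
    int      count;
} Bank;

static Bank banks[MESH_DIM][MESH_DIM];

/* Manhattan distance between the requesting core's tile and a bank tile. */
static int hops(int cx, int cy, int bx, int by)
{
    return abs(cx - bx) + abs(cy - by);
}

/* Does this bank currently hold the requested line? */
static bool bank_holds(const Bank *b, uint64_t addr)
{
    for (int i = 0; i < b->count; i++)
        if (b->lines[i] == addr)
            return true;
    return false;
}

/*
 * Sequential search: probe banks in order of increasing hop distance from
 * core (cx, cy).  Every probe costs a round trip over the network plus one
 * bank access, which is what makes an exhaustive search expensive.
 * Returns the accumulated latency in cycles, or -1 on a cache miss.
 */
static int nuca_access(int cx, int cy, uint64_t addr)
{
    int latency = 0;
    for (int dist = 0; dist <= 2 * (MESH_DIM - 1); dist++) {
        for (int bx = 0; bx < MESH_DIM; bx++) {
            for (int by = 0; by < MESH_DIM; by++) {
                if (hops(cx, cy, bx, by) != dist)
                    continue;
                latency += 2 * dist * HOP_LATENCY + BANK_LATENCY;
                if (bank_holds(&banks[bx][by], addr))
                    return latency;   /* hit after probing nearer banks first */
            }
        }
    }
    return -1;                        /* miss: every bank was probed */
}

int main(void)
{
    /* Place one line in the bank farthest from core (0, 0) to show the cost. */
    banks[3][3].lines[banks[3][3].count++] = 0xDEADBEEF;

    printf("hit latency:  %d cycles\n", nuca_access(0, 0, 0xDEADBEEF));
    printf("miss latency: %d cycles\n", nuca_access(0, 0, 0x1234));
    return 0;
}
```

Even in this toy model, a hit on a distant bank or a miss accumulates one probe per bank searched, which motivates cheaper data access policies for shared last-level D-NUCA caches.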