Traditionally, much of I/O system architecture and design was done with the assumption that I/O data rates and latencies were very large compared to the throughput and latencies observed inside a CPU core or the memory subsystem. This has changed with the advent of higher data rates such as 10 GbE, where packet arrival times on the wire approach cache-miss latencies. Current device-to-CPU communication does not take full advantage of system capabilities such as the cache hierarchy, nor does it use system resources (memory bandwidth, etc.) effectively. Relevant system metrics include CPU utilization, memory bandwidth, latency, and power. This paper discusses how providing explicit cache management hints can significantly reduce I/O-to-memory bandwidth utilization, system interconnect bandwidth, and the associated power consumption.
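As a rough illustration of the kind of explicit cache management hint the abstract refersends to, the sketch below copies a DMA-style receive buffer while issuing software prefetch hints so payload cache lines are pulled toward the core before they are consumed. This is a minimal, hypothetical example, not the paper's mechanism: `copy_with_hints`, the buffer names, and the prefetch distance are all illustrative assumptions, and `__builtin_prefetch` is a GCC/Clang builtin rather than anything specified by the paper.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: consume an I/O receive buffer while hinting the
 * cache hierarchy. __builtin_prefetch(addr, rw, locality) asks the
 * hardware to fetch the line ahead of use; rw=0 means read, and a low
 * locality value (1) marks the data as unlikely to be reused soon. */
static void copy_with_hints(uint8_t *dst, const uint8_t *src, size_t n)
{
    const size_t line = 64;  /* typical x86 cache-line size (assumption) */

    for (size_t i = 0; i < n; i += line) {
        /* Hint: start fetching the line we will need a few
         * iterations from now, overlapping the miss with the copy. */
        if (i + 4 * line < n)
            __builtin_prefetch(src + i + 4 * line, 0, 1);

        size_t chunk = (n - i < line) ? n - i : line;
        memcpy(dst + i, src + i, chunk);
    }
}
```

On compilers without the builtin the hint can be compiled out with no change in behavior; the copy itself is unaffected, which is the point of a hint as opposed to a functional operation.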