Institute of Electrical and Electronics Engineers
To achieve higher memory bandwidth in network-based multiprocessor architectures, multiple dynamic random access memories (DRAMs) can be accessed simultaneously. In such architectures, resource utilization and latency are critical issues, and a reordering mechanism is also required to deliver the response transactions of concurrent memory accesses in order. In this paper, the authors present a memory-efficient on-chip network architecture that addresses these issues. Each node of the network is equipped with a novel Network Interface (NI) that handles out-of-order delivery, and with a priority-based router that reduces network latency.
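To illustrate the reordering problem the NI must solve, the sketch below shows one common way to restore in-order delivery: tag each outgoing request, buffer responses that arrive early, and release them only in issue order. This is a minimal, hypothetical sketch for illustration; the names and structure are assumptions and do not reflect the paper's actual NI design.

```python
class ReorderBuffer:
    """Minimal reorder buffer: delivers memory responses in the order
    their requests were issued, even if they arrive out of order.
    (Illustrative sketch only; not the NI design from the paper.)"""

    def __init__(self):
        self._next_tag = 0   # tag assigned to the next outgoing request
        self._expected = 0   # tag whose response must be delivered next
        self._pending = {}   # tag -> response data that arrived early

    def issue(self):
        """Tag a new outgoing memory request."""
        tag = self._next_tag
        self._next_tag += 1
        return tag

    def receive(self, tag, data):
        """Accept a response (possibly out of order); return the list of
        responses that can now be delivered in issue order."""
        self._pending[tag] = data
        delivered = []
        while self._expected in self._pending:
            delivered.append(self._pending.pop(self._expected))
            self._expected += 1
        return delivered
```

For example, if requests tagged 0, 1, 2 are issued and response 2 arrives first, it is held back until responses 0 and 1 have been delivered; receiving response 1 last then releases both 1 and 2 together.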