While high-level data-parallel frameworks like MapReduce simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms, and so can lead to inefficient learning systems. To help fill this critical void, the authors introduced the GraphLab abstraction, which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, they extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees.
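To make the abstraction concrete, the following is a minimal, single-machine sketch of graph-parallel computation in the GraphLab style, using PageRank as the example. It is an illustration of the programming model described in the abstract, not the authors' implementation: the function and variable names (`pagerank_update`, `dynamic_engine`, `tol`) are invented here, and the sequential work queue only mimics the effect of GraphLab's dynamic scheduler, in which a vertex is reprocessed only when an update changes its value enough to matter.

```python
from collections import deque

def pagerank_update(v, graph, rank, damping=0.85, tol=1e-4):
    """GraphLab-style vertex update (illustrative, not the paper's code):
    recompute v's rank from its in-neighbors, then return the out-neighbors
    to reschedule if v's value changed by more than the tolerance."""
    new_rank = (1 - damping) + damping * sum(
        rank[u] / len(graph[u]) for u in graph if v in graph[u]
    )
    changed = abs(new_rank - rank[v]) > tol
    rank[v] = new_rank
    return graph[v] if changed else []

def dynamic_engine(graph):
    """Sequential stand-in for a dynamic scheduler: vertices are re-queued
    only when a neighbor's update changed noticeably, so computation
    concentrates where values are still moving."""
    rank = {v: 1.0 for v in graph}
    queue, queued = deque(graph), set(graph)
    while queue:
        v = queue.popleft()
        queued.discard(v)
        for w in pagerank_update(v, graph, rank):
            if w not in queued:
                queue.append(w)
                queued.add(w)
    return rank
```

Contrast this with a MapReduce formulation, which would recompute every vertex in every synchronous round regardless of whether its inputs changed; the dynamic, asynchronous scheduling sketched above is the efficiency gap the abstract refers to.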