Spark: Cluster Computing with Working Sets

Executive Summary

MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. The authors propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce.

  • Format: PDF
  • Size: 205.2 KB
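
The working-set reuse the summary describes can be illustrated with a minimal sketch in plain Python (not Spark's API; the data, learning rate, and iteration count are illustrative assumptions). An iterative algorithm such as gradient descent scans the same dataset on every pass; under an acyclic data flow model each pass would reload the data from stable storage, whereas keeping the working set in memory is the reuse pattern Spark targets:

```python
# Conceptual sketch (plain Python, not Spark's API): an iterative
# algorithm that reuses the same "working set" on every pass.
# Under an acyclic-data-flow model, each iteration would reload
# `points` from stable storage; caching it in memory avoids that.

points = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (x, y), y = 2x + 1

def gradient_descent(data, lr=0.05, iterations=500):
    """Fit y = w*x + b by repeatedly scanning the cached data."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(iterations):  # every iteration reuses `data`
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = gradient_descent(points)  # w ≈ 2.0, b ≈ 1.0
```

In a MapReduce-style system, each of the 500 passes would be a separate job re-reading the input; the in-memory loop above is the behavior Spark generalizes to a cluster setting.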