More than ever, new kinds of highly scalable, data-intensive computing systems are needed so that the next generation of scientific computing and analytics can effectively perform complex tasks on massive amounts of data, such as clustering, matrix computation, data mining, and information extraction. Map-Reduce, introduced by Google, is a well-known model for programming commodity computer clusters to perform large-scale data processing in a single pass. Hadoop, the most popular open-source implementation of the Map-Reduce model, provides a simple abstraction for large-scale distributed algorithms and has become a widely used distributed computing and data-analysis paradigm in recent years.
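To illustrate the programming model the abstract describes, here is a minimal single-process sketch of the Map-Reduce pattern applied to word counting, the canonical example. The function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not part of the Hadoop API; a real Hadoop job would distribute these steps across a cluster.

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for each word in the input record
    for word in document.split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reducer: sum the counts emitted for each word
    return (key, sum(values))

docs = ["map reduce map", "reduce cluster map"]
pairs = [p for d in docs for p in map_phase(d)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # -> {'map': 3, 'reduce': 2, 'cluster': 1}
```

Because the mapper and reducer operate on independent keys, the framework can run many instances of each in parallel, which is what makes the model suitable for commodity clusters.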