Parallel Coordinate Descent Methods for Big Data Optimization

In this paper, the authors show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup over the serial method, measured in the number of iterations needed to approximately solve the problem with high probability, is a simple expression that depends on the number of parallel processors and on a natural, easily computable measure of separability of the smooth component of the objective function.
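
As a rough illustration of the problem class described above, the sketch below runs a synchronous parallel (here simply vectorized) randomized coordinate descent on the concrete instance f(x) = 0.5*||Ax - b||^2 + lam*||x||_1, where the least-squares term is partially separable when each row of A touches only a few coordinates and the l1 term is a simple separable function. The function name parallel_cd, the sampling size tau, and the damping factor beta are illustrative assumptions made for this sketch, not the paper's exact method or step-size constants.

```python
# Minimal sketch of synchronous parallel randomized coordinate descent,
# assuming the instance f(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
# tau and beta below are illustrative, not the paper's exact constants.
import numpy as np

def parallel_cd(A, b, lam, tau=4, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    tau = min(tau, n)
    # Coordinate-wise Lipschitz constants of the smooth part (guard against zero columns).
    L = np.maximum((A ** 2).sum(axis=0), 1e-12)
    # Degree of partial separability: largest number of coordinates any single row touches.
    omega = max(1, int((np.abs(A) > 0).sum(axis=1).max()))
    # Damping factor motivated by the separability measure; assumed here for illustration.
    beta = 1.0 + (tau - 1) * (omega - 1) / max(n - 1, 1)
    x = np.zeros(n)
    r = A @ x - b                                   # residual Ax - b, kept up to date
    for _ in range(iters):
        S = rng.choice(n, size=tau, replace=False)  # random block of tau coordinates
        g = A[:, S].T @ r                           # partial gradients for the selected coordinates
        # Independent prox-gradient (soft-thresholding) steps, one per coordinate;
        # in a real implementation each would run on a separate processor.
        step = x[S] - g / (beta * L[S])
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / (beta * L[S]), 0.0)
        r += A[:, S] @ (x_new - x[S])               # keep the residual consistent with x
        x[S] = x_new
    return x
```

When A is sparse, so that omega is small relative to n, increasing tau barely inflates the damping factor, which is roughly the regime in which the abstract's near-linear parallel speedup in iteration count is expected.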

Provided by: University of Economics, Prague | Topic: Data Centers | Date Added: Dec 2012 | Format: PDF
