A Fast Distributed Proximal-Gradient Method

The authors present a distributed proximal-gradient method for minimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components, but they share a common non-differentiable component whose structure permits efficient computation of the proximal operator. In their method, each agent iteratively updates its estimate of the global minimizer by taking steps on its local objective and by exchanging estimates with its neighbors over the network.
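To make the iteration concrete, below is a minimal sketch (not the authors' exact algorithm) of a distributed proximal-gradient scheme for minimizing (1/n)·Σᵢ fᵢ(x) + g(x), where each fᵢ is a smooth local objective and g is the shared non-differentiable term, here assumed to be λ‖x‖₁ so that its proximal operator is soft-thresholding. The mixing matrix `W` models one round of communication; the paper allows it to be time-varying, but it is held fixed here for simplicity.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (assumed shared non-smooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(grads, W, x0, step=0.1, lam=0.01, iters=200):
    """Hypothetical sketch of a distributed proximal-gradient iteration.

    grads: list of gradient functions of the smooth local objectives f_i
    W:     doubly stochastic mixing matrix (one communication round)
    x0:    array of shape (n_agents, dim) with each agent's initial estimate
    """
    x = x0.copy()
    for _ in range(iters):
        mixed = W @ x                              # consensus: average with neighbors
        for i, grad_i in enumerate(grads):
            y = mixed[i] - step * grad_i(mixed[i])   # local gradient step
            x[i] = soft_threshold(y, step * lam)     # shared proximal step
    return x
```

With simple quadratic local objectives fᵢ(x) = ½‖x − aᵢ‖² and a complete communication graph, the agents' average converges to the soft-thresholded mean of the aᵢ, the minimizer of the global composite objective.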

Provided by: Massachusetts Institute of Technology | Topic: Networking | Date Added: Oct 2012 | Format: PDF
