A Fast Distributed Proximal-Gradient Method

Provided by: Massachusetts Institute of Technology
Topic: Networking
Format: PDF
The authors present a distributed proximal-gradient method for minimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components but share a common non-differentiable component whose structure admits efficient computation of the proximal operator. In their method, each agent iteratively updates its estimate of the global minimizer by optimizing its local objective and exchanging estimates with its neighbors over the network.
