Distributed Stochastic Optimization for Constrained and Unconstrained Optimization
In this paper, the authors analyze the convergence of a distributed Robbins-Monro algorithm for both constrained and unconstrained optimization in multi-agent systems. The algorithm searches for local minima of a (possibly non-convex) objective function that is assumed to be a sum of local utility functions, one per agent. The algorithm under study alternates two steps: a local stochastic gradient descent step at each agent and a gossip step that drives the network of agents toward consensus. It is proved that the agents reach agreement on the value of the estimate and that the algorithm converges to the set of Kuhn-Tucker points of the optimization problem.
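The two-step structure described above (a local stochastic gradient step followed by a gossip averaging step) can be illustrated with a minimal sketch. This is not the paper's algorithm as stated, only a generic decentralized stochastic gradient scheme under simplifying assumptions: scalar estimates, quadratic local utilities f_i(x) = (x - c_i)^2 invented for illustration, a ring communication graph, a fixed doubly stochastic gossip matrix W, and Robbins-Monro step sizes proportional to 1/t.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents = 5
targets = rng.normal(size=n_agents)  # agent i holds the (hypothetical) utility f_i(x) = (x - c_i)^2

# Doubly stochastic gossip matrix for a ring graph: each agent averages
# with its two neighbors. Row and column sums equal 1, so gossip preserves
# the network average and drives the agents toward consensus.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = rng.normal(size=n_agents)  # initial local estimates, one per agent
for t in range(1, 5001):
    step = 0.5 / t                            # Robbins-Monro steps: sum diverges, sum of squares converges
    noise = 0.1 * rng.normal(size=n_agents)   # stochastic perturbation of the local gradients
    grad = 2.0 * (x - targets) + noise        # noisy gradient of each agent's local utility
    x = W @ (x - step * grad)                 # local SGD step, then one gossip round

# The minimizer of the sum of the f_i is the mean of the c_i; all agents
# should end up close to it, and close to each other.
print(x, targets.mean())
```

In this toy setting the gossip matrix is fixed and synchronous; the paper's analysis covers the stochastic-approximation regime more generally, but the interplay visible here (decreasing step sizes for the gradient noise, repeated averaging for consensus) is the same mechanism.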