Designing Games for Distributed Optimization
In this paper, the authors address the problem of distributed optimization using tools from game theory. In particular, they derive a systematic methodology for designing local agent objective functions that guarantees both equivalence between the resulting Nash equilibria and the optimizers of the system-level objective, and that the resulting game possesses an inherent structure that can be exploited in distributed learning, e.g., potential games. The control design can then be completed using any distributed learning algorithm that guarantees convergence to a Nash equilibrium for the attained game structure. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
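As a concrete illustration of this design pattern, the sketch below uses marginal-contribution (wonderful-life) utilities, a standard choice in this literature: each agent's payoff is the change in global welfare caused by its own presence. With such utilities the game is an exact potential game whose potential is the global objective, so simple round-robin best-response dynamics converges to a Nash equilibrium. The resource-coverage setting, resource values, and agent count here are illustrative assumptions, not taken from the paper.

```python
import random

# Illustrative setup (assumed, not from the paper): 4 agents each select one
# resource; the system-level objective is the total value of covered resources.
RESOURCES = [3.0, 2.0, 1.0]   # value of each resource
N_AGENTS = 4

def welfare(action):
    """Global objective: total value of resources covered by at least one agent."""
    return sum(v for r, v in enumerate(RESOURCES) if r in action)

def utility(i, action):
    """Agent i's marginal contribution: welfare with i present minus without.
    This choice makes welfare() an exact potential function for the game."""
    return welfare(action) - welfare(action[:i] + action[i+1:])

def best_response_dynamics(seed=0):
    """Round-robin best response; terminates because each strict improvement
    raises the potential (= global welfare) in a finite game."""
    rng = random.Random(seed)
    a = [rng.randrange(len(RESOURCES)) for _ in range(N_AGENTS)]
    changed = True
    while changed:
        changed = False
        for i in range(N_AGENTS):
            best = max(range(len(RESOURCES)),
                       key=lambda r: utility(i, a[:i] + [r] + a[i+1:]))
            if utility(i, a[:i] + [best] + a[i+1:]) > utility(i, a):
                a[i] = best
                changed = True
    return a

eq = best_response_dynamics()
print("equilibrium coverage:", sorted(set(eq)), "welfare:", welfare(eq))
```

In this toy instance every Nash equilibrium covers all three resources (a duplicated agent earns zero and would deviate to any uncovered resource), so the dynamics reaches the system optimum; in general, equilibrium efficiency depends on the utility design, which is precisely the question the paper's methodology addresses.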