The authors introduce a distributed actor-critic method for coordinating multiple agents solving a general class of Markov decision problems, and establish its convergence. The method builds on a centralized single-agent actor-critic algorithm and uses a consensus-like scheme to update the agents' policy parameters. To validate the approach, they study a reward collection problem as an instance of multi-agent coordination in a partially known environment subject to dynamical changes and communication constraints.
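The consensus-like parameter update described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the doubly stochastic mixing matrix `W`, the step size `alpha`, and the local gradient estimates are all assumed names for illustration.

```python
import numpy as np

def consensus_actor_step(theta, W, grads, alpha=0.01):
    """One consensus-style policy update (illustrative sketch):
    each agent mixes its neighbors' parameters using the weight
    matrix W (assumed doubly stochastic), then takes a local
    actor (gradient-ascent) step with its own gradient estimate."""
    mixed = W @ theta             # consensus averaging across agents
    return mixed + alpha * grads  # local policy-gradient step

# Toy usage: 3 agents, each with 2 policy parameters
theta = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
W = np.full((3, 3), 1 / 3)        # complete-graph averaging weights
grads = np.zeros_like(theta)      # zero gradients: pure consensus
theta_next = consensus_actor_step(theta, W, grads)
# With zero gradients, all agents move toward the parameter average
```

With uniform weights and no gradient signal, the agents' parameters converge to their common average, which is the coordination effect the consensus step provides.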