Cooperative Multi-Agent Reinforcement Learning with Partial Observations. (arXiv:2006.10822v2 [cs.LG] UPDATED)

In this paper, we propose a distributed zeroth-order policy optimization
method for Multi-Agent Reinforcement Learning (MARL). Existing MARL algorithms
often assume that every agent can observe the states and actions of all the
other agents in the network. This can be impractical in large-scale problems,
where sharing the state and action information with multi-hop neighbors may
incur significant communication overhead. The advantage of the proposed
zeroth-order policy optimization method is that it allows the agents to compute
the local policy gradients needed to update their local policy functions using
local estimates of the global accumulated rewards that depend only on partial
state and action information and can be obtained using consensus.
Specifically, to calculate the local policy gradients, we develop a new
distributed zeroth-order policy gradient estimator that relies on one-point
residual feedback and, compared to existing zeroth-order estimators that also
rely on one-point feedback, significantly reduces the variance of the policy
gradient estimates, thereby improving the learning performance. We show
that the proposed distributed zeroth-order policy optimization method with
constant stepsize converges to a neighborhood of a policy that is a
stationary point of the global objective function. The size of this
neighborhood depends on the agents’ learning rates, the exploration parameters,
and the number of consensus steps used to calculate the local estimates of the
global accumulated rewards. Moreover, we provide numerical experiments that
demonstrate that our new zeroth-order policy gradient estimator is more
sample-efficient than other existing one-point estimators.
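As a rough illustration of why residual feedback reduces variance, the Python
sketch below compares a standard one-point zeroth-order estimator, which scales
a single function query by d/delta, with a one-point residual-feedback variant
that scales only the difference between the current and the previous query. The
toy quadratic objective, the fixed evaluation point, and all names used here
(f, one_point, residual_feedback, delta) are illustrative assumptions and not
the paper's MARL implementation or consensus scheme.

import numpy as np

# Toy sketch: variance of one-point vs. one-point residual-feedback
# zeroth-order gradient estimates at a fixed point of a quadratic objective.
rng = np.random.default_rng(0)
d = 10        # decision-variable dimension
delta = 0.1   # exploration (smoothing) parameter

def f(x):
    # Smooth test objective standing in for the accumulated reward.
    return 0.5 * float(np.dot(x, x))

def one_point(x, u):
    # Standard one-point estimator: (d / delta) * f(x + delta * u) * u.
    return (d / delta) * f(x + delta * u) * u

def residual_feedback(x, u, prev_value):
    # Residual-feedback estimator: only the residual between the current
    # and the previous function query is scaled by (d / delta).
    value = f(x + delta * u)
    return (d / delta) * (value - prev_value) * u, value

x = np.ones(d)
n = 5000

std_est = np.array([one_point(x, rng.standard_normal(d)) for _ in range(n)])

rf_est = []
prev_value = f(x + delta * rng.standard_normal(d))
for _ in range(n):
    g, prev_value = residual_feedback(x, rng.standard_normal(d), prev_value)
    rf_est.append(g)
rf_est = np.array(rf_est)

print("one-point variance (mean over coordinates):", std_est.var(axis=0).mean())
print("residual-feedback variance (mean over coordinates):", rf_est.var(axis=0).mean())

In this sketch the standard estimator multiplies the full function value by
d/delta, while the residual estimator multiplies only the small difference
between two nearby queries, which is why its reported variance is much lower;
the same effect carries over during learning as long as the decision variables
change slowly between queries.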
