Policy Gradient Methods for Risk-Sensitive Distributional Reinforcement Learning with Provable Convergence

By: Minheng Xiao, Xian Yu, Lei Ying Posted: May 24, 2024
arXiv:2405.14749v1 Announce Type: cross
Abstract: Risk-sensitive reinforcement learning (RL) is crucial for maintaining reliable performance in many high-stakes applications. While most RL methods aim to learn a point estimate of the random cumulative cost, distributional RL (DRL) seeks to estimate its entire distribution. The distribution provides all necessary information about the cost and leads to a unified framework for handling various risk measures in a risk-sensitive setting. However, developing policy gradient methods for risk-sensitive DRL is inherently more complex, as it requires differentiating a probability measure. This paper introduces a policy gradient method for risk-sensitive DRL with general coherent risk measures, where we provide an analytical form of the probability measure's gradient. We further prove the local convergence of the proposed algorithm under mild smoothness assumptions. For practical use, we also design a categorical distributional policy gradient algorithm (CDPG) based on categorical distributional policy evaluation and trajectory-based gradient estimation. Through experiments on a stochastic cliff-walking environment, we illustrate the benefits of considering a risk-sensitive setting in DRL.
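The abstract mentions evaluating coherent risk measures on a categorical (atom-based) cost distribution, as in the CDPG algorithm. The sketch below is not the paper's method; it is a minimal, hypothetical illustration of one common coherent risk measure, conditional value-at-risk (CVaR), computed from a categorical distribution over cost atoms. The atom values, probabilities, and tail level `alpha` are all assumed for the example.

```python
def cvar(atoms, probs, alpha):
    """CVaR_alpha of a categorical COST distribution: the expected cost in the
    worst (highest-cost) alpha-probability tail, with fractional atom handling.

    atoms: list of cost values; probs: matching probabilities summing to 1.
    This is an illustrative helper, not the algorithm from the paper.
    """
    # Walk atoms from the highest cost downward, consuming tail mass alpha.
    pairs = sorted(zip(atoms, probs), key=lambda zp: zp[0], reverse=True)
    remaining = alpha
    total = 0.0
    for z, p in pairs:
        take = min(p, remaining)  # an atom may be only partially in the tail
        total += take * z
        remaining -= take
        if remaining <= 1e-12:
            break
    return total / alpha


# Hypothetical categorical cost distribution over four atoms.
atoms = [1.0, 2.0, 3.0, 4.0]
probs = [0.25, 0.25, 0.25, 0.25]

print(cvar(atoms, probs, alpha=1.0))  # alpha = 1 recovers the plain mean, 2.5
print(cvar(atoms, probs, alpha=0.5))  # mean of the worst half {3, 4}: 3.5
```

Because CVaR interpolates between the mean (alpha = 1) and the worst case (alpha → 0), sweeping `alpha` is one simple way to trade expected cost against tail risk in a risk-sensitive objective.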
Provided by:

DoctorMorDi

Moderator and Editor