Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting. (arXiv:2306.13085v1 [cs.LG])
By: Zhang-Wei Hong, Pulkit Agrawal, Rémi Tachet des Combes, Romain Laroche. Posted: June 23, 2023
Most offline reinforcement learning (RL) algorithms return a target policy
maximizing a trade-off between (1) the expected performance gain over the
behavior policy that collected the dataset, and (2) the risk stemming from the
out-of-distribution-ness of the induced state-action occupancy. It follows that
the performance of the target policy is strongly related to the performance of
the behavior policy and, thus, the trajectory return distribution of the
dataset. We show that in mixed datasets consisting of mostly low-return
trajectories and a minority of high-return trajectories, state-of-the-art offline RL
algorithms are overly restrained by low-return trajectories and fail to exploit
high-performing trajectories to the fullest. To overcome this issue, we show
that, in deterministic MDPs with stochastic initial states, the dataset
sampling can be re-weighted to induce an artificial dataset whose behavior
policy has a higher return. This re-weighted sampling strategy may be combined
with any offline RL algorithm. We further show that the opportunity for
performance improvement over the behavior policy correlates with the
positive-sided variance of the returns of the trajectories in the dataset. We
empirically show that while CQL, IQL, and TD3+BC achieve only a part of this
potential policy improvement, these same algorithms combined with our
reweighted sampling strategy fully exploit the dataset. Furthermore, we
empirically demonstrate that, despite its theoretical limitation, the approach
may still be effective in stochastic environments. The code is available at
https://github.com/Improbable-AI/harness-offline-rl.
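
The re-weighted sampling strategy described in the abstract can be sketched in a few lines. The following is an illustrative return-weighted trajectory sampler only; the exponential weighting, the temperature alpha, and the helper names are assumptions for illustration, not the paper's or the repository's exact scheme.

# Illustrative sketch: return-weighted trajectory sampling for a mixed
# offline RL dataset. The softmax weighting and `alpha` are hypothetical
# choices, not necessarily those used in the paper.
import numpy as np

def trajectory_weights(returns, alpha=1.0):
    # Softmax over min-max-normalized trajectory returns: higher-return
    # trajectories are sampled more often, which raises the return of the
    # behavior policy implied by the re-weighted dataset.
    r = np.asarray(returns, dtype=np.float64)
    r = (r - r.min()) / max(r.max() - r.min(), 1e-8)
    w = np.exp(alpha * r)
    return w / w.sum()

def sample_batch(trajectories, returns, batch_size, alpha=1.0, rng=None):
    # Sample trajectories by weight, then a random transition within each,
    # so the sampler can plug into the minibatch loop of any offline RL
    # algorithm (e.g., CQL, IQL, TD3+BC).
    rng = rng or np.random.default_rng()
    w = trajectory_weights(returns, alpha)
    idx = rng.choice(len(trajectories), size=batch_size, p=w)
    return [trajectories[i][rng.integers(len(trajectories[i]))] for i in idx]
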
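The abstract also ties the room for improvement over the behavior policy to the positive-sided variance of the dataset's trajectory returns. As a rough sketch, a common definition of that quantity is E[max(G - E[G], 0)^2]; the paper's exact estimator may differ.

# Positive-sided (upper semi-) variance of trajectory returns, using the
# common definition E[max(G - E[G], 0)^2]; assumed here for illustration.
import numpy as np

def positive_sided_variance(returns):
    g = np.asarray(returns, dtype=np.float64)
    return np.mean(np.maximum(g - g.mean(), 0.0) ** 2)
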