Low Variance Off-policy Evaluation with State-based Importance Sampling

By: David M. Bossens, Philip S. Thomas Posted: May 8, 2024
arXiv:2212.03932v5 Announce Type: replace-cross
Abstract: In many domains, the exploration required by reinforcement learning is too costly because it involves trying out suboptimal policies. This creates a need for off-policy evaluation, in which a target policy is evaluated using data collected from a known behaviour policy. In this context, importance sampling estimators estimate the expected return by weighting each trajectory by the ratio of its probability under the target policy to its probability under the behaviour policy. Unfortunately, such estimators have high variance and therefore a large mean squared error. This paper proposes state-based importance sampling estimators, which reduce variance by dropping certain states from the computation of the importance weight. To illustrate their applicability, we demonstrate state-based variants of ordinary importance sampling, weighted importance sampling, per-decision importance sampling, incremental importance sampling, doubly robust off-policy evaluation, and stationary density ratio estimation. Experiments in four domains show that the state-based methods consistently yield lower variance and improved accuracy than their traditional counterparts.
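To make the core idea concrete, below is a minimal sketch of ordinary importance sampling alongside a state-based variant that skips a chosen set of states when forming the importance weight. Everything here is an illustrative assumption rather than the paper's implementation: trajectories are lists of (state, action, reward) tuples, pi_e and pi_b are callables giving action probabilities under the target and behaviour policies, and dropped_states is a user-supplied set; the paper's actual criterion for selecting which states to drop is not reproduced here.

```python
import numpy as np

def ordinary_is_estimate(trajectories, pi_e, pi_b):
    """Ordinary importance sampling: weight each trajectory's return by the
    product of per-step probability ratios pi_e(a|s) / pi_b(a|s)."""
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for s, a, r in traj:
            weight *= pi_e(a, s) / pi_b(a, s)
            ret += r  # undiscounted return, for simplicity
        estimates.append(weight * ret)
    return np.mean(estimates)

def state_based_is_estimate(trajectories, pi_e, pi_b, dropped_states):
    """Sketch of a state-based variant: states in `dropped_states` are
    skipped when forming the importance weight, so the weight is a product
    over fewer ratios and its variance shrinks."""
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for s, a, r in traj:
            if s not in dropped_states:
                weight *= pi_e(a, s) / pi_b(a, s)
            ret += r
        estimates.append(weight * ret)
    return np.mean(estimates)
```

Dropping a state removes its probability ratio from the product, so the weight accumulates fewer multiplicative terms and varies less across trajectories; the usual trade-off is that skipping ratios which are not close to one can introduce bias, which is why the choice of states to drop matters.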
Provided by: DoctorMorDi, Moderator and Editor