Evaluating Online Bandit Exploration In Large-Scale Recommender System. (arXiv:2304.02572v2 [cs.IR] UPDATED)
By: Hongbo Guo, Ruben Naeff, Alex Nikulkov, Zheqing Zhu. Posted: June 23, 2023
Bandit learning has become an increasingly popular design choice for
recommender systems. Despite the strong interest in bandit learning from the
community, multiple bottlenecks still prevent many bandit learning approaches
from being deployed in production. One major bottleneck is how to test the
effectiveness of a bandit algorithm fairly and without data leakage.
Unlike supervised learning algorithms, bandit learning algorithms
place great emphasis on the data collection process through their explorative
nature. Such explorative behavior may induce unfair evaluation in a classic A/B
test setting. In this work, we apply the upper confidence bound (UCB) algorithm
to our large-scale short-video recommender system and present a test framework
for the production bandit learning life-cycle with a new set of metrics.
Extensive experimental results show that our experiment design is able to
fairly evaluate the performance of bandit learning in the recommender system.
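For readers unfamiliar with UCB, the sketch below is a minimal, generic UCB1 implementation in Python that illustrates the explore-exploit behavior the abstract refers to. It is not the paper's production system; the class name, the exploration constant, and the fixed item set are illustrative assumptions only.

```python
import math

class UCB1:
    """Generic UCB1 bandit over a fixed set of items (illustrative sketch,
    not the paper's production implementation)."""

    def __init__(self, n_items: int, c: float = 2.0):
        self.counts = [0] * n_items    # times each item has been shown
        self.values = [0.0] * n_items  # running mean reward per item
        self.c = c                     # exploration strength (assumed constant)

    def select(self) -> int:
        # Show every item at least once before trusting the confidence bounds.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        total = sum(self.counts)
        # Choose the item with the highest upper confidence bound:
        # mean reward plus an exploration bonus that shrinks with exposure.
        ucb = [
            v + math.sqrt(self.c * math.log(total) / n)
            for v, n in zip(self.values, self.counts)
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, item: int, reward: float) -> None:
        # Incremental update of the running mean reward for the shown item.
        self.counts[item] += 1
        self.values[item] += (reward - self.values[item]) / self.counts[item]
```

In an online setting, `select` would be called to pick the next item to recommend and `update` would be called with the observed engagement signal; because exploration changes which data is collected, naive A/B comparisons between such a policy and a non-exploring baseline can be unfair, which is the evaluation problem the paper addresses.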