Improving Proactive Dialog Agents Using Socially-Aware Reinforcement Learning. (arXiv:2211.15359v2 [cs.CL] UPDATED)
By: Matthias Kraus, Nicolas Wagner, Ron Riekenbrauck, Wolfgang Minker. Posted: June 23, 2023
The next step for intelligent dialog agents is to escape their role as silent bystanders and become proactive. Well-designed proactive behavior may improve human-machine cooperation, as the agent takes a more active role during the interaction and relieves the user of responsibility. However, proactivity is a double-edged sword: poorly executed pre-emptive actions may have a devastating effect not only on the task outcome but also on the relationship with the user. For designing adequate proactive dialog strategies, we propose a novel approach that incorporates both social and task-relevant features of the dialog. The primary goal is to optimize proactive behavior so that it is task-oriented, i.e., achieves high task success and efficiency, while also being socially effective by fostering user trust. Including both aspects in the reward function for training a proactive dialog agent with reinforcement learning demonstrated the benefit of our approach for more successful human-machine cooperation.
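The abstract does not specify how the task-related and social components are combined in the reward. A minimal sketch of one plausible formulation is shown below, assuming a simple weighted sum over task success, a turn-count efficiency penalty, and an estimated trust score; the weights and feature names (task_success, num_turns, trust_estimate) are illustrative assumptions, not the authors' actual design.

```python
# Sketch of a socially-aware reward for a proactive dialog agent.
# Assumption: the reward is a weighted sum of a task term, an efficiency
# penalty, and a social (trust) term. The abstract only states that both
# task-related and social aspects enter the reward function.

from dataclasses import dataclass

@dataclass
class DialogOutcome:
    task_success: float    # 1.0 if the task was completed, else 0.0
    num_turns: int         # dialog length, penalized to reward efficiency
    trust_estimate: float  # estimated user trust in [0, 1]

def socially_aware_reward(
    outcome: DialogOutcome,
    w_task: float = 1.0,
    w_eff: float = 0.05,
    w_trust: float = 1.0,
) -> float:
    """Combine task-oriented and social components into a scalar reward."""
    task_term = w_task * outcome.task_success
    efficiency_term = -w_eff * outcome.num_turns  # fewer turns is better
    trust_term = w_trust * outcome.trust_estimate
    return task_term + efficiency_term + trust_term

# Example: a successful but somewhat long dialog with fairly high trust.
r = socially_aware_reward(DialogOutcome(task_success=1.0, num_turns=12, trust_estimate=0.8))
print(f"reward = {r:.2f}")  # 1.0 - 0.60 + 0.80 = 1.20
```

Under this kind of formulation, a proactive action that shortens the dialog but erodes trust can still yield a lower return than a more conservative action, which is the trade-off the socially-aware reward is meant to capture.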