Please use this identifier to cite or link to this item:
Title: End-to-end latent-variable task-oriented dialogue system with exact log-likelihood optimization
Authors: Xu, H.
Peng, Haiyun
Xie, H.
Cambria, Erik
Zhou, L.
Zheng, W.
Keywords: Engineering::Computer science and engineering
Issue Date: 2020
Source: Xu, H., Peng, H., Xie, H., Cambria, E., Zhou, L. & Zheng, W. (2020). End-to-end latent-variable task-oriented dialogue system with exact log-likelihood optimization. World Wide Web, 23, 1989-2002.
Journal: World Wide Web
Abstract: We propose an end-to-end dialogue model based on a hierarchical encoder-decoder, which employs a discrete latent variable to learn underlying dialogue intentions. The system is able to model the structure of utterances, governed by the statistics of the language, and the dependencies among utterances in dialogues, without manual dialogue state design. We argue that the discrete latent variable captures the intentions that guide machine response generation. We also propose a model that can be refined autonomously with reinforcement learning, since intention selection at each dialogue turn can be formulated as a sequential decision-making process. Our experiments show that the exact-MLE-optimized model is much more robust than neural variational inference in terms of dialogue success rate, with only a limited sacrifice in BLEU.
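The core idea in the abstract — exact log-likelihood optimization over a discrete latent variable rather than a variational lower bound — amounts to marginalizing the intention variable z out in closed form, which is tractable when z ranges over a small finite set of intentions. The following is a minimal numpy sketch of that marginalization (not the authors' implementation; the function names and toy inputs are illustrative):

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax over a 1-D array of logits.
    x = x - np.max(x)
    return x - np.log(np.sum(np.exp(x)))

def exact_log_likelihood(prior_logits, decoder_log_probs):
    """Exact log p(response | context) with a discrete latent intention z.

    prior_logits:      (K,) unnormalized scores for p(z | context)
    decoder_log_probs: (K,) log p(response | z, context) for each of K intentions

    Marginalizes z exactly: log sum_z p(z | context) * p(response | z, context),
    computed with the log-sum-exp trick for numerical stability. With a small
    discrete K this is tractable, so no variational approximation is needed.
    """
    joint = log_softmax(prior_logits) + decoder_log_probs  # log p(z, r | c)
    m = np.max(joint)
    return m + np.log(np.sum(np.exp(joint - m)))           # logsumexp

# Toy example: K = 3 candidate intentions.
prior_logits = np.array([1.0, 0.0, -1.0])
decoder_log_probs = np.array([-2.0, -1.0, -3.0])
ll = exact_log_likelihood(prior_logits, decoder_log_probs)
```

Because the sum is taken exactly, this objective is always at least as large as the single-sample ELBO used by neural variational inference, which is one plausible reading of the robustness gap the abstract reports.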
ISSN: 1386-145X
DOI: 10.1007/s11280-019-00688-8
Rights: © 2019 Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles
