Title: Automatic evaluation of end-to-end dialog systems with adequacy-fluency metrics
Authors: D'Haro, Luis Fernando; Banchs, Rafael E.
Keywords: Engineering::Computer science and engineering
Issue Date: 2018
Source: D'Haro, L. F., Banchs, R. E., Hori, C. & Li, H. (2018). Automatic evaluation of end-to-end dialog systems with adequacy-fluency metrics. Computer Speech and Language, 55, 200-215. https://dx.doi.org/10.1016/j.csl.2018.12.004
Journal: Computer Speech and Language
Abstract: End-to-end dialog systems are gaining interest due to the recent advances of deep neural networks and the availability of large human–human dialog corpora. However, despite being of fundamental importance for systematically improving the performance of this kind of system, automatic evaluation of the generated dialog utterances remains an unsolved problem. Indeed, most of the proposed objective metrics show low correlation with human evaluations. In this paper, we evaluate a two-dimensional evaluation metric designed to operate at the sentence level, which considers the syntactic and semantic information carried by the answers generated by an end-to-end dialog system with respect to a set of references. The proposed metric, when applied to outputs generated by the systems participating in track 2 of the DSTC-6 challenge, shows a higher correlation with human evaluations (up to 12.8% relative improvement at the system level) than the best of the alternative state-of-the-art automatic metrics currently available.
URI: https://hdl.handle.net/10356/151218
ISSN: 0885-2308
DOI: 10.1016/j.csl.2018.12.004
Rights: © 2018 Elsevier Ltd. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Journal Articles
Updated on May 27, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.