Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/184562
Title: Conversational explanations: discussing explainable AI with non-AI experts
Authors: Zhang, Tong
Zhang, Mengao
Low, Wei Yan
Yang, Jessie X.
Li, Boyang Albert
Keywords: Computer and Information Science
Issue Date: 2025
Source: Zhang, T., Zhang, M., Low, W. Y., Yang, J. X. & Li, B. A. (2025). Conversational explanations: discussing explainable AI with non-AI experts. 30th International Conference on Intelligent User Interfaces (IUI '25), 409-424. https://dx.doi.org/10.1145/3708359.3712143
Project: NRF-NRFF13-2021-0006
Conference: 30th International Conference on Intelligent User Interfaces (IUI '25)
Abstract: Explainable AI (XAI) aims to provide insights into the decisions made by AI models. To date, most XAI approaches provide only one-time, static explanations, which cannot cater to users' diverse knowledge levels and information needs. Conversational explanations have been proposed as an effective method to customize XAI explanations. However, building conversational explanation systems is hindered by the scarcity of training data. Training with synthetic data faces two main challenges: lack of data diversity and hallucination in the generated data. To alleviate these issues, we introduce a repetition penalty to promote data diversity and exploit a hallucination detector to filter out untruthful synthetic conversation turns. We conducted both automatic and human evaluations on the proposed system, fEw-shot Multi-round ConvErsational Explanation (EMCEE). For automatic evaluation, EMCEE achieves relative improvements of 81.6% in BLEU and 80.5% in ROUGE compared to the baselines. EMCEE also mitigates the degeneration of data quality caused by training on synthetic data. In human evaluations (N = 60), EMCEE outperforms baseline models and the control group in improving users' comprehension, acceptance, trust, and collaboration with static explanations by large margins. Through a fine-grained analysis of model responses, we further demonstrate that training on self-generated synthetic data improves the model's ability to generate more truthful and understandable answers, leading to better user interactions. To the best of our knowledge, this is the first conversational explanation method that can answer free-form user questions following static explanations.
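Note: The following is a minimal sketch, not the paper's implementation. It illustrates the two data-quality techniques the abstract names: (1) a repetition penalty during sampling to promote diversity in synthetic conversation turns, and (2) a hallucination filter that drops untruthful turns. It assumes a Hugging Face-style generate() API; the model name and the is_truthful() detector are illustrative placeholders, not the components EMCEE actually uses.

```python
# Minimal sketch of synthetic-data generation with a repetition penalty
# and a hallucination filter, as described in the abstract. All concrete
# choices below (model, threshold, detector) are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model, not the one used in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def generate_turn(prompt: str) -> str:
    """Sample one synthetic conversation turn with a repetition penalty."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        repetition_penalty=1.3,  # values > 1.0 down-weight repeated tokens
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

def is_truthful(context: str, turn: str) -> bool:
    """Crude stand-in for a hallucination detector: keep a turn only if
    most of its longer words are supported by the source context. A real
    system would use a trained faithfulness/NLI model instead."""
    words = {w for w in turn.lower().split() if len(w) > 4}
    if not words:
        return True
    supported = sum(w in context.lower() for w in words)
    return supported / len(words) >= 0.5

def build_synthetic_turns(prompts, context):
    """Generate candidate turns; keep only those that pass the filter."""
    kept = []
    for prompt in prompts:
        turn = generate_turn(prompt)
        if is_truthful(context, turn):
            kept.append(turn)
    return kept
```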
URI: https://hdl.handle.net/10356/184562
ISBN: 9798400713064
DOI: 10.1145/3708359.3712143
Schools: College of Computing and Data Science 
Rights: © 2025 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:CCDS Conference Papers

Files in This Item:
File: 3708359.3712143.pdf · Size: 3.9 MB · Format: Adobe PDF

