Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/180616
Title: May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability
Authors: Zhang, Tong; Yang, Jessie X.; Li, Boyang
Keywords: Computer and Information Science
Issue Date: 2024
Source: Zhang, T., Yang, J. X. & Li, B. (2024). May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability. International Journal of Human-Computer Interaction. https://dx.doi.org/10.1080/10447318.2024.2364986
Project: NRFNRFF13-2021-0006
Journal: International Journal of Human-Computer Interaction
Abstract: Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off, static explanations, which cannot cater to users' diverse backgrounds and levels of understanding. In this paper, we investigate whether free-form conversations can enhance users' comprehension of static explanations in image classification, improve acceptance of and trust in the explanation methods, and facilitate human-AI collaboration. We conduct a human-subject experiment with 120 participants. Half serve as the experimental group and engage in a conversation with a human expert regarding the static explanations, while the other half form the control group and read the materials about the static explanations independently. We measure participants' objective and self-reported comprehension, acceptance, and trust of the static explanations. Results show that conversations significantly improve participants' comprehension, acceptance, trust, and collaboration with static explanations, while reading the explanations independently does not have these effects and even decreases users' acceptance of explanations. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.
URI: https://hdl.handle.net/10356/180616
ISSN: 1044-7318
DOI: 10.1080/10447318.2024.2364986
Schools: School of Computer Science and Engineering; College of Computing and Data Science
Rights: © 2024 Taylor & Francis Group, LLC. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1080/10447318.2024.2364986.
Fulltext Permission: embargo_20250815
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
May I Ask a Follow-up Question Understanding the Benefits of Conversations in Neural Network Explainability.pdf | Under embargo until Aug 15, 2025 | 11.11 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.