Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/180616
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhang, Tong | en_US
dc.contributor.author | Yang, Jessie X. | en_US
dc.contributor.author | Li, Boyang | en_US
dc.date.accessioned | 2024-10-15T04:23:31Z | -
dc.date.available | 2024-10-15T04:23:31Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Zhang, T., Yang, J. X. & Li, B. (2024). May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability. International Journal of Human-Computer Interaction. https://dx.doi.org/10.1080/10447318.2024.2364986 | en_US
dc.identifier.issn | 1044-7318 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/180616 | -
dc.description.abstract | Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off and static explanations, which cannot cater to the diverse backgrounds and understanding levels of users. With this paper, we investigate if free-form conversations can enhance users’ comprehension of static explanations in image classification, improve acceptance and trust in the explanation methods, and facilitate human-AI collaboration. We conduct a human-subject experiment with 120 participants. Half serve as the experimental group and engage in a conversation with a human expert regarding the static explanations, while the other half are in the control group and read the materials regarding static explanations independently. We measure the participants’ objective and self-reported comprehension, acceptance, and trust of static explanations. Results show that conversations significantly improve participants’ comprehension, acceptance, trust, and collaboration with static explanations, while reading the explanations independently does not have these effects and even decreases users’ acceptance of explanations. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations. | en_US
dc.language.iso | en | en_US
dc.relation | NRFNRFF13-2021-0006 | en_US
dc.relation.ispartof | International Journal of Human-Computer Interaction | en_US
dc.rights | © 2024 Taylor & Francis Group, LLC. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1080/10447318.2024.2364986. | en_US
dc.subject | Computer and Information Science | en_US
dc.title | May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.school | College of Computing and Data Science | en_US
dc.identifier.doi | 10.1080/10447318.2024.2364986 | -
dc.description.version | Submitted/Accepted version | en_US
dc.identifier.scopus | 2-s2.0-85200984129 | -
dc.subject.keywords | Explainable AI (XAI) | en_US
dc.subject.keywords | Conversation | en_US
dc.description.acknowledgement | This work has been supported by the Nanyang Associate Professorship and the National Research Foundation Fellowship (NRFNRFF13-2021-0006), Singapore. | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | embargo_20250815 | -
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
May I Ask a Follow-up Question Understanding the Benefits of Conversations in Neural Network Explainability.pdf | Until 2025-08-15 | 11.11 MB | Adobe PDF (Under embargo until Aug 15, 2025)

Page view(s): 67 (updated on Feb 10, 2025)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.