Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/174272
Title: EID: facilitating explainable AI design discussions in team-based settings
Authors: Zhang, Jiehuang; Yu, Han
Keywords: Computer and Information Science
Issue Date: 2023
Source: Zhang, J. & Yu, H. (2023). EID: facilitating explainable AI design discussions in team-based settings. International Journal of Crowd Science, 7(2), 47-54. https://dx.doi.org/10.26599/IJCS.2022.9100034
Project: Alibaba-NTUAIR2019B1; AISG2-RP-2020-019; A20G8b0102; FCP-NTU-RG-2021-014
Journal: International Journal of Crowd Science
Abstract: Artificial intelligence (AI) systems have many applications of tremendous current and future value to human society. As AI systems penetrate everyday life, there is a pressing need to explain their decision-making processes in order to build trust and familiarity among end users. In high-stakes fields such as healthcare and self-driving cars, AI systems are required to meet a minimum standard of accuracy and to provide well-designed explanations for their output, especially when they impact human life. Although many techniques have been developed to make algorithms explainable in human terms, no design methodologies have been established that allow software teams to systematically draw out and address explainability-related issues during AI design and conception. In response to this gap, we propose the explainability in design (EID) methodological framework for addressing explainability problems in AI systems. We explored the literature on AI explainability and narrowed the field down to six major explainability principles that aid designers in brainstorming around the relevant metrics and guide the critical thinking process. EID is a step-by-step guide to AI design that has been refined over a series of user studies and interviews with experts in AI explainability. It is devised for software design teams to uncover and resolve potential issues in their AI products and to refine and explore the explainability of their products and systems. The EID methodology is a novel framework that supports the design and conception stages of the AI pipeline and can be delivered in the form of a step-by-step card game. Empirical studies involving AI system designers have shown that EID can lower the barrier to entry and reduce the time and experience required to make well-informed decisions about integrating explainability into their AI solutions.
URI: https://hdl.handle.net/10356/174272
ISSN: 2398-7294
DOI: 10.26599/IJCS.2022.9100034
Schools: School of Computer Science and Engineering
Research Centres: Alibaba-NTU Singapore Joint Research Institute
Rights: © The author(s) 2023. The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Journal Articles
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 10159627.pdf | | 1.48 MB | Adobe PDF |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.