Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/164493
Title: From medical imaging to explainable artificial intelligence
Authors: Tjoa, Erico
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Tjoa, E. (2022). From medical imaging to explainable artificial intelligence. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/164493
Project: RIE2020 AME Programmatic Fund, Singapore (No. A20G8b0102)
Abstract: The Deep Neural Network (DNN) has recently been recognized as one of the most powerful models, capable of performing tasks at and beyond human capacity. With millions, even billions, of parameters, DNNs have attained remarkable performance on hitherto difficult tasks in computer vision, natural language processing, and many other complex domains (including personalized healthcare and complex video games), and their capability has only grown with better architecture designs and more computing power. Yet, while DNNs have been said to usher in a new era of artificial intelligence, they come with several problems. Besides their massive resource consumption, one important problem remains a challenge: the DNN is a blackbox model. It is difficult to understand; it is not entirely clear how each neuron or parameter contributes to performance, whether individual parameters are even relevant, or whether there is a universally meaningful way to understand the network's inner workings at all. In response, researchers have proposed many methods to study blackbox models, variously categorized as post-hoc methods, model-agnostic methods, visualizations, and so on. These are studied under the broad umbrella of eXplainable Artificial Intelligence (XAI). While some methods rest on sound mathematical principles, many others are heuristic, and the meaning of "explanation" is sometimes muddled by subjectivity.
We began our exploration of the topic by observing, almost blindly, how such methods apply to a medical imaging problem. Since those early experiments did not yield a satisfactory explanation, we approached the problem from different perspectives. We first tested the viability of common XAI methods by designing a computer vision experiment on a synthetic dataset with very clear and obvious features, which the methods should capture accurately. Our results show that heatmap-based methods did not perform well (a minimal sketch of one such method appears after the abstract). We therefore designed methods that place interpretability and explainability at the very highest priority. More precisely, we experimented with the following:
(1) General Pattern Theory (GPT), used to systematically capture object features in a component-wise manner; more precisely, we aim to represent object components with generators.
(2) Interpretable universal approximation: SQANN and TNN (defined later) are designed as universal approximators whose universal approximation property is provable in a clear-cut, humanly understandable manner, whereas existing proofs rely on heavy mathematical abstraction.
(3) Self Reward Design: we leverage neural network components to solve reinforcement learning problems in an extremely interpretable manner; each neuron in the design is assigned a meaning, yielding a level of transparency that existing methods rarely match.
Apart from these novel designs, we also experimented with common methods. With augmentative explanations, we study how much common methods improve a model's predictive accuracy. We further evaluate XAI methods with respect to a popular Weakly Supervised Object Localization (WSOL) metric, MaxBoxAcc (sketched below), and test the effect of the Neural-Backed Decision Tree on the same metric. Finally, kaBEDONN is designed as a partial upgrade of SQANN, intended to provide easy-to-understand and easy-to-adjust explanations inspired by research on influential examples.
In conclusion, the project strikes a good balance between novel designs and incremental improvements on (and validation of) existing XAI methods.
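
To make concrete what the abstract calls heatmap-based methods, the following is a minimal sketch of one common representative, vanilla gradient saliency, written in PyTorch. It is an illustrative stand-in, not the thesis's experimental code: the model, input size, and synthetic input below are placeholders.

    # Minimal sketch of a gradient-based saliency heatmap, one of the
    # heatmap-style XAI methods the abstract refers to. The model and
    # input are hypothetical placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                 # stand-in classifier
        nn.Conv2d(3, 8, 3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )
    model.eval()

    x = torch.rand(1, 3, 64, 64, requires_grad=True)   # synthetic image
    score = model(x)[0].max()        # score of the predicted class
    score.backward()                 # d(score)/d(input pixels)

    # Saliency: per-pixel importance = max absolute gradient over channels.
    heatmap = x.grad.abs().max(dim=1)[0].squeeze(0)     # shape (64, 64)
    print(heatmap.shape, float(heatmap.max()))

Methods such as Grad-CAM, LRP, and integrated gradients follow the same contract (image in, relevance heatmap out); a synthetic-dataset experiment of the kind described above checks whether the resulting heatmap actually lands on the known features.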
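The MaxBoxAcc metric mentioned in the abstract can likewise be sketched. This is a simplified, assumed rendering of the WSOL metric of Choe et al. (CVPR 2020): the official protocol draws a box around the largest connected component of the thresholded heatmap, whereas this sketch, for brevity, boxes all above-threshold pixels.

    # Hedged sketch of the MaxBoxAcc idea: threshold a heatmap, draw a
    # tight box around the activated region, and report box accuracy at
    # the best-performing threshold.
    import numpy as np

    def box_from_heatmap(h, tau):
        ys, xs = np.where(h >= tau)
        if len(xs) == 0:
            return None
        return (xs.min(), ys.min(), xs.max(), ys.max())  # x0, y0, x1, y1

    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)
        def area(r):
            return (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
        return inter / (area(a) + area(b) - inter)

    def max_box_acc(heatmaps, gt_boxes, taus=np.linspace(0.05, 0.95, 19),
                    delta=0.5):
        best = 0.0
        for tau in taus:
            hits = sum(
                1 for h, gt in zip(heatmaps, gt_boxes)
                if (est := box_from_heatmap(h, tau)) is not None
                and iou(est, gt) >= delta
            )
            best = max(best, hits / len(heatmaps))
        return best

    # Toy usage: one heatmap whose bright block matches the ground truth.
    h = np.zeros((64, 64)); h[10:30, 20:40] = 1.0
    print(max_box_acc([h], [(20, 10, 39, 29)]))   # -> 1.0

Taking the maximum over thresholds removes sensitivity to threshold tuning, which is what makes the metric attractive for comparing heatmap methods fairly.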
URI: https://hdl.handle.net/10356/164493
DOI: 10.32657/10356/164493
Schools: Interdisciplinary Graduate School (IGS) 
Research Centres: Alibaba-NTU Singapore Joint Research Institute
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: IGS Theses

Files in This Item:
File: Erico - thesis.pdf
Description: Erico's thesis (final)
Size: 28.5 MB
Format: Adobe PDF

Page view(s): 392 (updated on Mar 28, 2024)
Download(s): 158 (updated on Mar 28, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.