Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/150315
Title: Boosting knowledge distillation and interpretability
Authors: Song, Huan
Keywords: Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Song, H. (2021). Boosting knowledge distillation and interpretability. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/150315
Abstract: Deep Neural Networks (DNNs) can be applied to classification tasks in many fields and can achieve high accuracy. However, a DNN is a black box: it is hard to explain directly how the network arrives at a specific classification. The generally accepted interpretable model is the decision tree. Although a decision tree's classification accuracy is not as good as a deep neural network's, it is a more intuitive and interpretable model. By combining a deep neural network with a decision tree, it is possible to expose the inner structure of the model without loss of accuracy. Distilling the knowledge from a DNN into a decision tree helps explain why certain inputs lead to specific outputs.
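The distillation idea described in the abstract can be illustrated with a minimal sketch: a "teacher" model labels the data with its own predictions, and an interpretable "student" is fit to match those predictions rather than the ground truth. The thesis itself is not available here, so all names and details below are illustrative assumptions; the teacher is a fixed logistic function standing in for a trained DNN, and the student is a depth-1 decision tree (a stump) fit by brute force.

```python
import math

# Hypothetical 1-D toy data (illustrative; not from the thesis).
data = [-2.0, -1.5, -0.4, 0.3, 1.1, 2.2]

# "Teacher": a fixed logistic model standing in for a trained DNN.
def teacher_prob(x):
    return 1.0 / (1.0 + math.exp(-3.0 * x))  # P(class 1 | x)

# Distillation step: label each point with the teacher's soft
# prediction instead of any ground-truth label.
soft_labels = [teacher_prob(x) for x in data]

# "Student": a depth-1 decision tree (a stump). Choose the split
# threshold whose hard predictions best match the teacher's soft
# labels, using squared error as a simple matching criterion.
def stump_loss(threshold):
    loss = 0.0
    for x, p in zip(data, soft_labels):
        pred = 1.0 if x > threshold else 0.0
        loss += (pred - p) ** 2
    return loss

# Candidate thresholds: midpoints between consecutive sorted points.
candidates = [(a + b) / 2 for a, b in zip(data, data[1:])]
best = min(candidates, key=stump_loss)
print(f"student rule: predict class 1 if x > {best:.2f}")
```

The resulting single-threshold rule is directly readable, which is the interpretability gain the abstract refers to; a real implementation would use a deeper tree and a trained network, but the teacher-labels-student structure is the same.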
URI: https://hdl.handle.net/10356/150315
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: Boosting Knowledge Distillation and Interpretability.pdf (Restricted Access, 1.74 MB, Adobe PDF)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.