Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/174298
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Guertler, Leon (en_US)
dc.date.accessioned: 2024-03-26T00:47:45Z
dc.date.available: 2024-03-26T00:47:45Z
dc.date.issued: 2024
dc.identifier.citation: Guertler, L. (2024). TeLLMe what you see: using LLMs to explain neurons in vision models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/174298 (en_US)
dc.identifier.uri: https://hdl.handle.net/10356/174298
dc.description.abstract: As the role of machine learning models continues to expand across diverse fields, the demand for model interpretability grows. This is particularly crucial for deep learning models, which are often referred to as black boxes due to their highly nonlinear nature. This paper proposes a novel method for generating and evaluating concise explanations of the behavior of specific neurons in trained vision models, an important step towards better understanding decision-making in neural networks. Our technique draws inspiration from a recently published framework that used GPT-4 to interpret language models. Here, we extend the method to vision models, offering interpretations based on both neuron activations and weights in the network. We illustrate our approach on an AlexNet model and a ViT trained on ImageNet, generating clear, human-readable explanations. Our method outperforms the current state of the art in both quantitative and qualitative assessments, while also demonstrating superior capacity for capturing polysemic neuron behavior. The findings hold promise for enhancing transparency, trust, and understanding in the deployment of deep learning vision models across various domains. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Nanyang Technological University (en_US)
dc.relation: SCSE23-0758 (en_US)
dc.subject: Computer and Information Science (en_US)
dc.title: TeLLMe what you see: using LLMs to explain neurons in vision models (en_US)
dc.type: Final Year Project (FYP) (en_US)
dc.contributor.supervisor: Luu Anh Tuan (en_US)
dc.contributor.school: School of Computer Science and Engineering (en_US)
dc.description.degree: Bachelor's degree (en_US)
dc.contributor.supervisoremail: anhtuan.luu@ntu.edu.sg (en_US)
dc.subject.keywords: Explainable AI (en_US)
dc.subject.keywords: LLM (en_US)
dc.subject.keywords: Vision network (en_US)
item.grantfulltext: restricted
item.fulltext: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File: FYP.pdf (Restricted Access) — Size: 4.44 MB — Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.