Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/142317
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mopuri, Konda Reddy | en_US |
dc.contributor.author | Garg, Utsav | en_US |
dc.contributor.author | Babu, R. Venkatesh | en_US |
dc.date.accessioned | 2020-06-19T03:09:48Z | - |
dc.date.available | 2020-06-19T03:09:48Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Mopuri, K. R., Garg, U., & Babu, R. V. (2019). CNN fixations : an unraveling approach to visualize the discriminative image regions. IEEE Transactions on Image Processing, 28(5), 2116-2125. doi:10.1109/TIP.2018.2881920 | en_US |
dc.identifier.issn | 1057-7149 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/142317 | - |
dc.description.abstract | Deep convolutional neural networks (CNNs) have revolutionized computer vision research and have seen unprecedented adoption for multiple tasks, such as classification, detection, and caption generation. However, they offer little transparency into their inner workings and are often treated as black boxes that deliver excellent performance. In this paper, we aim to alleviate this opaqueness of CNNs by providing visual explanations for the network's predictions. Our approach can analyze a variety of CNN-based models trained for computer vision applications, such as object recognition and caption generation. Unlike existing methods, we achieve this via unraveling the forward pass operation. The proposed method exploits feature dependencies across the layer hierarchy and uncovers the discriminative image locations that guide the network's predictions. We name these locations CNN fixations, loosely analogous to human eye fixations. Our approach is a generic method that requires no architectural changes, additional training, or gradient computation, and computes the important image locations (CNN fixations). We demonstrate through a variety of applications that our approach is able to localize the discriminative image locations across different network architectures, diverse vision tasks, and data modalities. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | IEEE Transactions on Image Processing | en_US |
dc.rights | © 2018 IEEE. All rights reserved. | en_US |
dc.subject | Engineering::Computer science and engineering | en_US |
dc.title | CNN fixations : an unraveling approach to visualize the discriminative image regions | en_US |
dc.type | Journal Article | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.identifier.doi | 10.1109/TIP.2018.2881920 | - |
dc.identifier.pmid | 30452367 | - |
dc.identifier.scopus | 2-s2.0-85056698918 | - |
dc.identifier.issue | 5 | en_US |
dc.identifier.volume | 28 | en_US |
dc.identifier.spage | 2116 | en_US |
dc.identifier.epage | 2125 | en_US |
dc.subject.keywords | Explainable AI | en_US |
dc.subject.keywords | CNN Visualization | en_US |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
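The abstract above sketches the core idea: rather than computing gradients, the forward-pass evidence for the predicted class is traced back through the layer hierarchy to the input. The snippet below is a minimal, hypothetical illustration of that backtracking on a toy fully connected network (made-up activations and weights; it is not the authors' released code or exact algorithm): at each layer, the units whose activation-times-weight contribution to the selected unit above is positive and largest are kept, and the trace repeats layer by layer.

```python
# Hypothetical sketch of CNN-fixations-style backtracking on a toy FC network.
# All activations and weights are random placeholders, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy network state: activations at layers 0, 1, 2 and the weights
# connecting consecutive layers.
acts = [rng.random(16), rng.random(8), rng.random(4)]
weights = [rng.standard_normal((8, 16)), rng.standard_normal((4, 8))]

def backtrack(unit, layer, keep=3):
    """Indices in the layer below contributing most to `unit` at `layer`."""
    contrib = weights[layer - 1][unit] * acts[layer - 1]  # elementwise products
    order = np.argsort(contrib)[::-1]                     # largest first
    return [int(i) for i in order[:keep] if contrib[i] > 0]

# Start from the most active top-layer unit (stand-in for the predicted class)
# and unravel one layer at a time, collecting the supporting units below.
fixations = {2: [int(np.argmax(acts[2]))]}
for layer in (2, 1):
    below = set()
    for u in fixations[layer]:
        below.update(backtrack(u, layer))
    fixations[layer - 1] = sorted(below)

print(fixations)  # discriminative units traced back to the input layer
```

In the paper the same tracing is carried through convolutional, pooling, and other layer types, so the units reached at the input correspond to image pixel locations, i.e. the CNN fixations.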
Appears in Collections: SCSE Journal Articles
Citations: Scopus 33 (updated Mar 18, 2023) · Web of Science 27 (updated Mar 20, 2023)
Page view(s): 161 (updated Mar 22, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.