Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/144391
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Zhang, Chi (en_US)
dc.contributor.author: Lin, Guosheng (en_US)
dc.contributor.author: Liu, Fayao (en_US)
dc.contributor.author: Yao, Rui (en_US)
dc.contributor.author: Shen, Chunhua (en_US)
dc.date.accessioned: 2020-11-03T02:29:27Z
dc.date.available: 2020-11-03T02:29:27Z
dc.date.issued: 2019
dc.identifier.citation: Zhang, C., Lin, G., Liu, F., Yao, R., & Shen, C. (2019). CANet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/CVPR.2019.00536 (en_US)
dc.identifier.uri: https://hdl.handle.net/10356/144391
dc.description.abstract: Recent progress in semantic segmentation is driven by deep Convolutional Neural Networks and large-scale labeled image datasets. However, data labeling for pixel-wise segmentation is tedious and costly. Moreover, a trained model can only make predictions within a set of pre-defined classes. In this paper, we present CANet, a class-agnostic segmentation network that performs few-shot segmentation on new classes with only a few annotated images available. Our network consists of a two-branch dense comparison module, which performs multi-level feature comparison between the support image and the query image, and an iterative optimization module, which iteratively refines the predicted results. Furthermore, we introduce an attention mechanism to effectively fuse information from multiple support examples under the setting of k-shot learning. Experiments on PASCAL VOC 2012 show that our method achieves a mean Intersection-over-Union score of 55.4% for 1-shot segmentation and 57.1% for 5-shot segmentation, outperforming state-of-the-art methods by a large margin of 14.6% and 13.2%, respectively. (en_US)
dc.description.sponsorship: AI Singapore (en_US)
dc.description.sponsorship: Ministry of Education (MOE) (en_US)
dc.language.iso: en (en_US)
dc.relation: AISG-RP-2018-003 (en_US)
dc.relation: RG126/17 (S) (en_US)
dc.rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/CVPR.2019.00536 (en_US)
dc.subject: Engineering::Computer science and engineering (en_US)
dc.title: CANet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning (en_US)
dc.type: Conference Paper (en)
dc.contributor.school: School of Computer Science and Engineering (en_US)
dc.contributor.conference: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (en_US)
dc.identifier.doi: 10.1109/CVPR.2019.00536
dc.description.version: Accepted version (en_US)
dc.subject.keywords: Retrieval (en_US)
dc.subject.keywords: Segmentation (en_US)
dc.citation.conferencelocation: Long Beach, CA, USA (en_US)
dc.description.acknowledgement: G. Lin's participation was partly supported by the National Research Foundation Singapore under its AI Singapore Programme [AISG-RP-2018-003] and an MOE Tier-1 research grant [RG126/17 (S)]. R. Yao's participation was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61772530. We would like to thank NVIDIA for the GPU donation. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore. (en_US)
item.grantfulltext: open
item.fulltext: With Fulltext
Appears in Collections: SCSE Conference Papers

Files in This Item:
File: gusoheng paper1 cvpr 2019.pdf (1.39 MB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.