Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/144391
Title: CANet: class-agnostic segmentation networks with iterative refinement and attentive few-shot learning
Authors: Zhang, Chi; Lin, Guosheng; Liu, Fayao; Yao, Rui; Shen, Chunhua
Keywords: Engineering::Computer science and engineering
Issue Date: 2019
Source: Zhang, C., Lin, G., Liu, F., Yao, R., & Shen, C. (2019). CANet: class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/CVPR.2019.00536
Project: AISG-RP-2018-003; RG126/17 (S)
Abstract: Recent progress in semantic segmentation is driven by deep Convolutional Neural Networks and large-scale labeled image datasets. However, data labeling for pixel-wise segmentation is tedious and costly. Moreover, a trained model can only make predictions within a set of pre-defined classes. In this paper, we present CANet, a class-agnostic segmentation network that performs few-shot segmentation on new classes with only a few annotated images available. Our network consists of a two-branch dense comparison module, which performs multi-level feature comparison between the support image and the query image, and an iterative optimization module, which iteratively refines the predicted results. Furthermore, we introduce an attention mechanism to effectively fuse information from multiple support examples under the setting of k-shot learning. Experiments on PASCAL VOC 2012 show that our method achieves a mean Intersection-over-Union score of 55.4% for 1-shot segmentation and 57.1% for 5-shot segmentation, outperforming state-of-the-art methods by large margins of 14.6% and 13.2%, respectively.
URI: https://hdl.handle.net/10356/144391
DOI: 10.1109/CVPR.2019.00536
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/CVPR.2019.00536
Fulltext Permission: open
Fulltext Availability: With Fulltext
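The abstract names two mechanisms that lend themselves to a compact illustration: pooling support features over the annotated mask and comparing the result densely against query features, and softmax attention for fusing k support examples in the k-shot setting. The PyTorch sketch below illustrates these general techniques under assumed tensor shapes; all names (`masked_average_pool`, `DenseComparison`, `attention_fuse`) are hypothetical and this is not the authors' released implementation.

```python
# Hypothetical sketch of two ideas described in the abstract; shapes and
# names are assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def masked_average_pool(feat, mask):
    """Pool support features over the foreground mask.

    feat: (B, C, H, W) support feature map
    mask: (B, 1, H, W) binary foreground mask at feature resolution
    returns: (B, C, 1, 1) class prototype vector
    """
    area = mask.sum(dim=(2, 3), keepdim=True).clamp(min=1e-6)
    return (feat * mask).sum(dim=(2, 3), keepdim=True) / area


class DenseComparison(nn.Module):
    """Compare a support prototype densely against every query location."""

    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, query_feat, support_feat, support_mask):
        # Down-sample the pixel-level mask to the feature resolution.
        mask = F.interpolate(support_mask, size=support_feat.shape[-2:],
                             mode="nearest")
        proto = masked_average_pool(support_feat, mask)   # (B, C, 1, 1)
        proto = proto.expand_as(query_feat)               # tile over H x W
        # Concatenate the tiled prototype with the query features and fuse.
        return F.relu(self.fuse(torch.cat([query_feat, proto], dim=1)))


def attention_fuse(comparison_maps, attn_logits):
    """Softmax-weighted fusion of k comparison results (k-shot setting).

    comparison_maps: (B, k, C, H, W) per-support-example comparison features
    attn_logits:     (B, k) one scalar score per support example
    """
    w = F.softmax(attn_logits, dim=1)                     # (B, k)
    return (comparison_maps * w[:, :, None, None, None]).sum(dim=1)
```

In this reading, the iterative optimization module described in the abstract would then take the fused comparison features and refine the predicted mask over several passes; that refinement loop is omitted here for brevity.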
Appears in Collections: SCSE Conference Papers
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| gusoheng paper1 cvpr 2019.pdf | | 1.39 MB | Adobe PDF |
Scopus citations: 205 (updated Mar 19, 2023)
Web of Science citations: 140 (updated Mar 18, 2023)
Page view(s): 243 (updated Mar 24, 2023)
Download(s): 135 (updated Mar 24, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.