Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/144343
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Wu, Zhonghua (en_US)
dc.contributor.author: Tao, Qingyi (en_US)
dc.contributor.author: Lin, Guosheng (en_US)
dc.contributor.author: Cai, Jianfei (en_US)
dc.date.accessioned: 2020-10-29T05:20:10Z
dc.date.available: 2020-10-29T05:20:10Z
dc.date.issued: 2020
dc.identifier.citation: Wu, Z., Tao, Q., Lin, G., & Cai, J. (2020). Exploring bottom-up and top-down cues with attentive learning for webly supervised object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020. doi:10.1109/CVPR42600.2020.01295 (en_US)
dc.identifier.uri: https://hdl.handle.net/10356/144343
dc.description.abstract: Fully supervised object detection has achieved great success in recent years. However, abundant bounding box annotations are needed to train a detector for novel classes. To reduce the human labeling effort, we propose a novel webly supervised object detection (WebSOD) method for novel classes that requires only web images, without further annotations. Our proposed method combines bottom-up and top-down cues for novel-class detection. Within our approach, we introduce a bottom-up mechanism based on a well-trained fully supervised object detector (i.e., Faster R-CNN) as an object region estimator for web images, which recognizes the common objectness shared by base and novel classes. With the estimated regions on the web images, we then utilize top-down attention cues as guidance for region classification. Furthermore, we propose a residual feature refinement (RFR) block to tackle the domain mismatch between the web domain and the target domain. We demonstrate our proposed method on the PASCAL VOC dataset with three different novel/base splits. Without any target-domain novel-class images or annotations, our webly supervised object detection model achieves promising performance on novel classes. Moreover, we also conduct transfer-learning experiments on the large-scale ILSVRC 2013 detection dataset and achieve state-of-the-art performance. (en_US)
dc.description.sponsorship: AI Singapore (en_US)
dc.description.sponsorship: National Research Foundation (NRF) (en_US)
dc.language.iso: en (en_US)
dc.relation: AISG-RP-2018-003 (en_US)
dc.rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/CVPR42600.2020.01295 (en_US)
dc.subject: Engineering::Computer science and engineering (en_US)
dc.title: Exploring bottom-up and top-down cues with attentive learning for webly supervised object detection (en_US)
dc.type: Conference Paper (en)
dc.contributor.school: School of Computer Science and Engineering (en_US)
dc.contributor.conference: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020 (en_US)
dc.identifier.doi: 10.1109/CVPR42600.2020.01295
dc.description.version: Accepted version (en_US)
dc.subject.keywords: Object Detection (en_US)
dc.subject.keywords: Detectors (en_US)
dc.citation.conferencelocation: Seattle, WA, USA (en_US)
dc.description.acknowledgement: This research was mainly carried out at the Rapid-Rich Object Search (ROSE) Lab at the Nanyang Technological University, Singapore. The ROSE Lab is supported by the National Research Foundation, Singapore, and the Infocomm Media Development Authority, Singapore. This research is also partially supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2018-003), the MOE Tier-1 research grants: RG28/18 (S) and RG22/19 (S) and the Monash FIT Start-up Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. (en_US)
item.grantfulltext: open
item.fulltext: With Fulltext
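
Note: the abstract above mentions a residual feature refinement (RFR) block that adapts web-image features to the target domain. As a rough illustration only, the PyTorch sketch below shows one plausible residual refinement block: a small branch whose output is added back to the input features. The class name ResidualFeatureRefinement, the two-convolution branch, and the channel size are assumptions for illustration and are not taken from the paper.

import torch
import torch.nn as nn

class ResidualFeatureRefinement(nn.Module):
    """Hypothetical sketch of a residual feature refinement (RFR) block.

    The paper uses an RFR block to reduce the mismatch between the web
    domain and the target domain; the exact architecture here is an
    assumption, not the authors' design.
    """

    def __init__(self, channels: int = 1024):
        super().__init__()
        # Small refinement branch; its output is added back to the input
        # features, so the block only has to learn a residual correction.
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Residual connection: refined features = original + correction.
        return feats + self.refine(feats)


if __name__ == "__main__":
    # Example: refine a dummy backbone feature map (batch 2, 1024 channels).
    block = ResidualFeatureRefinement(channels=1024)
    features = torch.randn(2, 1024, 14, 14)
    refined = block(features)
    print(refined.shape)  # torch.Size([2, 1024, 14, 14])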
Appears in Collections:SCSE Conference Papers
Files in This Item:
File: zhonghua 06907.pdf, Size: 2.78 MB, Format: Adobe PDF

SCOPUS Citations: 7 (updated on Mar 10, 2023)
Page view(s): 202 (updated on Mar 20, 2023)
Download(s): 85 (updated on Mar 20, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.