Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/144343

Title: Exploring bottom-up and top-down cues with attentive learning for webly supervised object detection
Authors: Wu, Zhonghua
Keywords: Engineering::Computer science and engineering
Issue Date: 2020
Source: Wu, Z., Tao, Q., Lin, G., & Cai, J. (2020). Exploring bottom-up and top-down cues with attentive learning for webly supervised object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020. doi:10.1109/CVPR42600.2020.01295
Project: AISG-RP-2018-003
Abstract: Fully supervised object detection has achieved great success in recent years. However, training a detector for novel classes requires abundant bounding box annotations. To reduce the human labeling effort, we propose a novel webly supervised object detection (WebSOD) method for novel classes that requires only web images, without further annotations. Our method combines bottom-up and top-down cues for novel-class detection. We introduce a bottom-up mechanism, based on a well-trained fully supervised object detector (i.e., Faster RCNN), as an object region estimator for web images; it exploits the common objectness shared by base and novel classes. With the estimated regions on the web images, we then use top-down attention cues to guide region classification. Furthermore, we propose a residual feature refinement (RFR) block to tackle the domain mismatch between the web domain and the target domain. We evaluate the proposed method on the PASCAL VOC dataset with three different novel/base splits. Without any target-domain novel-class images or annotations, our webly supervised object detection model achieves promising performance on novel classes. We also conduct transfer-learning experiments on the large-scale ILSVRC 2013 detection dataset and achieve state-of-the-art performance.
URI: https://hdl.handle.net/10356/144343
DOI: 10.1109/CVPR42600.2020.01295
Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/CVPR42600.2020.01295
Fulltext Permission: open
Fulltext Availability: With Fulltext
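Note: the record itself contains no implementation details. As a rough illustration of the residual feature refinement (RFR) idea named in the abstract (learning an additive correction that maps web-domain region features toward the target domain), a minimal PyTorch sketch is given below. The layer shapes, the two-layer bottleneck design, and all names here are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class ResidualFeatureRefinement(nn.Module):
    """Hypothetical sketch of an RFR-style block: learn an additive
    residual that adapts web-domain region features to the target
    domain. Dimensions are illustrative assumptions."""

    def __init__(self, feat_dim: int = 1024, hidden_dim: int = 256):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity path plus a learned correction, so the block can
        # fall back to the unrefined features when no domain
        # adaptation is needed.
        return x + self.residual(x)

# Usage sketch: refine RoI-pooled web-image features before they are
# passed to the region classifier.
rfr = ResidualFeatureRefinement()
region_feats = torch.randn(128, 1024)  # stand-in for pooled region features
refined = rfr(region_feats)

The residual (identity-plus-correction) form is a common choice for this kind of adaptation because it lets the block start near the identity mapping and learn only the domain-specific offset.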
Appears in Collections: SCSE Conference Papers
Updated on Jan 28, 2023