Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/142175
Title: Efficient video object co-localization with co-saliency activated tracklets
Authors: Jerripothula, Koteswar Rao; Cai, Jianfei; Yuan, Junsong
Keywords: Engineering::Computer science and engineering
Issue Date: 2018
Source: Jerripothula, K. R., Cai, J., & Yuan, J. (2019). Efficient video object co-localization with co-saliency activated tracklets. IEEE Transactions on Circuits and Systems for Video Technology, 29(3), 744-755. doi:10.1109/tcsvt.2018.2805811
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Abstract: Video object co-localization is the task of jointly localizing common visual objects across videos. Due to the large variations both across videos and within each video, it is quite challenging to identify and track the common objects jointly. Unlike previous joint frameworks that use a large number of bounding-box proposals to attack the problem, we propose to leverage co-saliency activated tracklets to address it efficiently. To highlight the common object regions, we first explore inter-video commonness, intra-video commonness, and motion saliency to generate co-saliency maps for a small number of key frames selected at regular intervals. Object proposals with high objectness and co-saliency scores in those frames are tracked across each interval to build tracklets. Finally, the best tube for a video is obtained by selecting the optimal tracklet from each interval under confidence and smoothness constraints. Experimental results on the benchmark YouTube-Objects dataset show that the proposed method outperforms state-of-the-art methods in terms of accuracy and speed under both weakly supervised and unsupervised settings. Moreover, noticing that the existing benchmark dataset lacks sufficient annotations for object localization (only one annotated frame per video), we further annotate more than 15k frames of the YouTube videos and develop a new benchmark dataset for video co-localization.
URI: https://hdl.handle.net/10356/142175
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2018.2805811
Rights: © 2018 IEEE. All rights reserved.
Fulltext Permission: open
Fulltext Availability: With Fulltext
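The abstract describes forming the final tube by picking one tracklet per interval under confidence and smoothness constraints. Below is a minimal, hypothetical sketch of how such a selection could be posed as a dynamic program; the `Tracklet` structure, the IoU-based smoothness term, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: pick one tracklet per interval to form a tube,
# scoring per-tracklet confidence plus a smoothness term between adjacent
# tracklets, and solving the chain with dynamic programming.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

@dataclass
class Tracklet:
    boxes: List[Box]    # one bounding box per frame in the interval
    confidence: float   # assumed combination of objectness and co-saliency

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_tube(intervals: List[List[Tracklet]], lam: float = 1.0) -> List[int]:
    """Return one tracklet index per interval, maximizing total confidence
    plus lam-weighted smoothness (IoU) between adjacent tracklets."""
    n = len(intervals)
    score = [[t.confidence for t in intervals[0]]]  # best score per choice
    back = []                                       # backpointers
    for i in range(1, n):
        prev, cur = intervals[i - 1], intervals[i]
        row, brow = [], []
        for t in cur:
            # transition reward: overlap between the last box of the previous
            # tracklet and the first box of the current one
            cands = [score[i - 1][k] + lam * iou(prev[k].boxes[-1], t.boxes[0])
                     for k in range(len(prev))]
            k_best = max(range(len(prev)), key=lambda k: cands[k])
            row.append(cands[k_best] + t.confidence)
            brow.append(k_best)
        score.append(row)
        back.append(brow)
    # backtrack the best chain of tracklet indices
    j = max(range(len(score[-1])), key=lambda j: score[-1][j])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i - 1][j]
        path.append(j)
    return path[::-1]
```

In this sketch each interval contributes its candidate tracklets, and the Viterbi-style pass trades off per-tracklet confidence against spatial continuity at interval boundaries; the actual constraints used in the paper may differ.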
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
Efficient Video Object Co-Localization With Co-Saliency Activated Tracklets.pdf | | 3.84 MB | Adobe PDF
SCOPUS™ Citations: 8 (updated on Jan 15, 2021)
Publons™ Citations: 7 (updated on Jan 14, 2021)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.