Title: Video saliency detection with robust temporal alignment and local-global spatial contrast
Authors: Ren, Zhixiang
Chia, Clement Liang-Tien
Rajan, Deepu
Issue Date: 2012
Source: Ren, Z., Chia, C. L. T., & Rajan, D. (2012). Video saliency detection with robust temporal alignment and local-global spatial contrast. Proceedings of the 2nd ACM International Conference on Multimedia Retrieval - ICMR '12.
Abstract: Video saliency detection, the task of detecting attractive content in a video, has broad applications in multimedia understanding and retrieval. In this paper, we propose a new framework for spatiotemporal saliency detection. To better estimate salient motion in the temporal domain, we take advantage of robust alignment by sparse and low-rank decomposition to jointly estimate the salient foreground motion and the camera motion. Consecutive frames are transformed and aligned, then decomposed into a low-rank matrix representing the background and a sparse matrix indicating the objects with salient motion. In the spatial domain, we address several problems of the local center-surround contrast-based model, and demonstrate how to utilize global information and prior knowledge to improve spatial saliency detection. Individual component evaluation demonstrates the effectiveness of our temporal and spatial methods. Final experimental results show that the combination of our spatial and temporal saliency maps achieves the best overall performance compared to several state-of-the-art methods.
DOI: 10.1145/2324796.2324851
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Conference Papers

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.