Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/101879
Title: Spatio-temporal enhanced sparse feature selection for video saliency estimation
Authors: Luo, Ye
Tian, Qi
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2012
Source: Luo, Y., & Tian, Q. (2012). Spatio-temporal enhanced sparse feature selection for video saliency estimation. 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 33-38.
Conference: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2012 : Providence, Rhode Island, US)
Abstract: The video saliency mechanism is crucial to the human visual system and helpful for object detection and recognition. In this paper, we propose a novel video saliency model based on the observation that salient regions should be both consistently salient across consecutive frames and temporally novel due to motion or appearance changes. Based on this model, temporal coherence, in addition to spatial saliency, is fully considered by introducing temporal consistency and temporal difference into sparse feature selection. Features selected spatio-temporally are enhanced and fused to generate the proposed video saliency maps. Comparisons with several state-of-the-art methods on two public video datasets further demonstrate the effectiveness of our method.
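Note: the sketch below is a minimal illustration of the spatio-temporal fusion idea described in the abstract (spatially salient and temporally novel cues combined per frame); it is not the authors' sparse feature selection method, and the function names, cues, and the weight alpha are hypothetical placeholders.

```python
# Minimal illustrative sketch, NOT the paper's method: fuse a crude spatial
# saliency cue with a temporal-difference (novelty) cue for a video frame.
import numpy as np

def spatial_saliency(frame: np.ndarray) -> np.ndarray:
    """Crude spatial cue: per-pixel deviation from the frame's mean intensity."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    sal = np.abs(gray - gray.mean())
    return sal / (sal.max() + 1e-8)

def temporal_difference(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Temporal novelty cue: absolute intensity change between consecutive frames."""
    g_prev = prev.mean(axis=2) if prev.ndim == 3 else prev
    g_curr = curr.mean(axis=2) if curr.ndim == 3 else curr
    diff = np.abs(g_curr - g_prev)
    return diff / (diff.max() + 1e-8)

def fuse_saliency(prev: np.ndarray, curr: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Weighted fusion of spatial and temporal cues (alpha is a hypothetical weight)."""
    return alpha * spatial_saliency(curr) + (1.0 - alpha) * temporal_difference(prev, curr)

if __name__ == "__main__":
    # Two synthetic 64x64 frames: a bright patch shifts slightly between frames.
    f0 = np.zeros((64, 64)); f0[20:30, 20:30] = 1.0
    f1 = np.zeros((64, 64)); f1[22:32, 22:32] = 1.0
    sal_map = fuse_saliency(f0, f1)
    print(sal_map.shape, float(sal_map.max()))
```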
URI: https://hdl.handle.net/10356/101879
http://hdl.handle.net/10220/16359
DOI: 10.1109/CVPRW.2012.6239258
Schools: School of Electrical and Electronic Engineering 
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:EEE Conference Papers

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.