Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/98874
Title: Discovering thematic objects in image collections and videos
Authors: Katsaggelos, Aggelos K.; Yuan, Junsong; Zhao, Gangqiang; Fu, Yun; Li, Zhu; Wu, Ying
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2011
Series/Report no.: IEEE Transactions on Image Processing
Abstract: Given a collection of images or a short video sequence, we define a thematic object as the key object that frequently appears and is representative of the visual content. Successful discovery of the thematic object is helpful for object search and tagging, video summarization and understanding, etc. However, this task is challenging because 1) a priori knowledge of the thematic objects, such as their shapes, scales, locations, and times of re-occurrence, is lacking, and 2) the thematic object of interest can undergo severe appearance variations due to viewpoint and lighting changes, scale variations, etc. Instead of using a top-down generative model to discover thematic visual patterns, we propose a novel bottom-up approach that gradually prunes uncommon local visual primitives and recovers the thematic objects. A multilayer candidate pruning procedure is designed to accelerate the image data mining process. Our solution can efficiently locate thematic objects of various sizes and can tolerate large appearance variations of the same thematic object. Experiments on challenging image and video data sets and comparisons with existing methods validate the effectiveness of our method.
URI: https://hdl.handle.net/10356/98874 ; http://hdl.handle.net/10220/13463
ISSN: 1057-7149
DOI: 10.1109/TIP.2011.2181952
Schools: School of Electrical and Electronic Engineering
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: | EEE Journal Articles |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.