Title: Visual attention modeling and its applications
Authors: Fang, Yuming
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2012
Source: Fang, Y. (2012). Visual attention modeling and its applications. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: The visual environment is usually complex, and it is impossible for the human visual system (HVS) to process all signal components and work out their relationships immediately. Selective attention in the HVS allocates most processing resources to salient regions rather than spreading them equally across the entire visual field. There are two types of visual attention mechanism: bottom-up and top-down. The visual attention mechanism causes salient regions to automatically 'pop out' in visual scenes. In this thesis, we explore visual attention modeling and its applications in visual signal processing. First, we propose a saliency detection model for images based on human visual sensitivity and the amplitude spectrum. The amplitude spectrum is adopted to represent the color, intensity, and orientation distributions of image patches. The saliency value of each image patch is calculated from not only the differences between the amplitude spectrum of this patch and those of other patches in the image, but also the visual impact of these differences as determined by human visual sensitivity. Owing to the integration of HVS characteristics and a better feature representation, the proposed saliency detection model achieves better performance than existing models.
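The patch-based scheme the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual model: the image is split into patches, each patch is represented by the amplitude spectrum of its 2-D FFT, and a patch's saliency is the sum of its spectrum differences to all other patches. The inverse-distance weighting used here is a hypothetical stand-in for the human visual sensitivity weighting described in the abstract.

```python
import numpy as np

def patch_saliency(image, patch_size=8):
    """Sketch of amplitude-spectrum patch saliency (simplified assumption,
    not the exact model from the thesis)."""
    h, w = image.shape
    patches, coords = [], []
    # Split the image into non-overlapping patches and take the
    # amplitude spectrum of each patch via the 2-D FFT.
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            patches.append(np.abs(np.fft.fft2(patch)).ravel())
            coords.append((y, x))
    patches = np.stack(patches)
    ys = np.array([c[0] for c in coords], dtype=float)
    xs = np.array([c[1] for c in coords], dtype=float)

    n = len(patches)
    saliency = np.zeros(n)
    for i in range(n):
        # Amplitude-spectrum difference between patch i and every patch.
        diffs = np.linalg.norm(patches - patches[i], axis=1)
        # Inverse-distance falloff: a placeholder for the visual
        # sensitivity weighting in the abstract (assumption).
        dist = np.hypot(ys - ys[i], xs - xs[i])
        weights = 1.0 / (1.0 + dist)
        saliency[i] = np.sum(weights * diffs)

    # Normalize the saliency values to [0, 1].
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else saliency
```

On a 32x32 image split into 8x8 patches, this yields one saliency value per patch (16 values), with the patches that differ most in amplitude spectrum from their neighbors scoring highest.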
DOI: 10.32657/10356/51079
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses

Files in This Item:
TsceG0900533E.pdf (Main article, 5.24 MB, Adobe PDF)

Updated on Nov 28, 2020


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.