|Title:||Finding visual attention regions in videos|
|Authors:||Ang, Kenny Wen Bin|
|Keywords:||DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision|
|Issue Date:||2010|
|Abstract:||Whenever one looks at a video or an image, some regions are more prominent than others. The human gaze is drawn to these regions first before moving on to other parts of the video or image; such regions are called salient regions. This report describes a project on finding visual attention regions in videos. A video consists of a sequence of frames, and playing many frames per second produces moving images. According to Shannon's information theory, a rare event carries high information. Hence, if a particular region in a video frame is unique, it stands out in the video and attracts the human gaze. To apply Shannon's information theory, every frame is divided into spatiotemporal events: each frame is split into patches of equal size, and each patch carries part of the information in the video. A patch that is unique therefore carries higher information. Each video has a spatial score and a temporal score, which are added together to form the spatiotemporal saliency score. In this score, the salient regions of the video appear with brighter pixel intensity than the rest, so by thresholding the spatiotemporal saliency score the model can retain only the salient regions and discard everything else. Lastly, different video sequences are tested to check whether the results are accurate. The method used in this project may differ from that of other research papers; for example, some perceive salient regions to be the moving regions in a video, whereas the method used in this project shows the regions with the most information, not only the moving regions.||URI:||http://hdl.handle.net/10356/35693|
|Rights:||Nanyang Technological University|
|Fulltext Permission:||restricted|
|Fulltext Availability:||With Fulltext|
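The pipeline the abstract describes — split each frame into equal-size patches, score each patch by its Shannon self-information (rare patches carry more information), add a temporal score, and threshold the sum — can be sketched as follows. This is a minimal illustration, not the report's implementation: the patch descriptor (quantised mean intensity), the probability estimate (a per-frame histogram), the temporal term (mean frame difference per patch), and the threshold rule are all assumptions, since the restricted report does not specify them here.

```python
import numpy as np

def patch_self_information(frame, patch=8, bins=16):
    """Spatial score: self-information -log p of each patch's descriptor.

    Assumes a grayscale frame with values in [0, 1]. Each patch is
    described by its quantised mean intensity (a deliberately crude
    stand-in for whatever descriptor the report actually uses).
    """
    h, w = frame.shape
    gh, gw = h // patch, w // patch
    desc = np.empty((gh, gw), dtype=int)
    for i in range(gh):
        for j in range(gw):
            block = frame[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            desc[i, j] = min(int(block.mean() * bins), bins - 1)
    # Empirical probability of each descriptor over this frame.
    counts = np.bincount(desc.ravel(), minlength=bins)
    p = counts / desc.size
    # Unique patches (low probability) get the highest scores.
    return -np.log(p[desc] + 1e-12)

def spatiotemporal_saliency(prev_frame, frame, patch=8):
    """Spatial + temporal score per patch, summed as the abstract describes."""
    spatial = patch_self_information(frame, patch)
    h, w = frame.shape
    gh, gw = h // patch, w // patch
    # Temporal score: mean absolute change per patch between two frames.
    diff = np.abs(frame - prev_frame)
    temporal = diff[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch).mean(axis=(1, 3))
    return spatial + temporal

# Toy example: a flat frame in which one bright patch appears.
prev = np.zeros((64, 64))
cur = np.zeros((64, 64))
cur[16:24, 32:40] = 1.0                     # unique bright patch at grid cell (2, 4)
score = spatiotemporal_saliency(prev, cur)
mask = score > score.mean() + score.std()   # simple threshold keeps salient patches
```

Here the unique bright patch is the only one whose descriptor is rare, so it alone receives a high self-information score, and the frame difference adds a temporal boost at the same location; thresholding then isolates it, mirroring the report's idea of keeping only the bright regions of the saliency map.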
|Appears in Collections:||SCSE Student Reports (FYP/IA/PA/PI)|
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.