Please use this identifier to cite or link to this item:
Title: Object tracking in intelligent visual surveillance
Authors: Yuan, Yuan
Keywords: DRNTU::Engineering
Issue Date: 2016
Abstract:
The growing number of cameras and the explosion of video data have sharply increased the labour required for manual monitoring. Intelligent Visual Surveillance (IVS) offers a practical way to reduce this labour by introducing video analysis algorithms such as automated object detection, tracking, scene interpretation, anomaly detection, and visual event indexing/retrieval. The objective of this thesis is to study object tracking algorithms for IVS.

Appearance change is a challenging problem when tracking moving objects in surveillance video: if the appearance changes significantly, the tracker has difficulty linking the same object across frames. Unlike conventional methods, which address appearance change by designing different online appearance-model update algorithms, we handle the problem by predicting the target's appearance change during tracking. Based on an analysis of motion-related appearance change, we propose Structure Complexity Coefficients (SCC) to predict the appearance stability of moving objects. Unlike standard Hidden Markov Model (HMM) based tracking algorithms, which assume observations in different frames to be independent, we model the observation dependency between consecutive frames using the target appearance stability predicted by SCC. Experimental results show that the proposed SCC accurately predicts target appearance change and improves the tracking accuracy of moving objects under the resulting Observation Dependent Hidden Markov Model (OD-HMM) tracking framework.

Occlusion is another common challenge in visual object tracking for surveillance, especially when occlusion and large appearance change co-exist.
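The observation-dependency idea behind OD-HMM can be sketched roughly as follows. The actual SCC definition is given in the thesis; the gradient-energy stand-in for stability and the linear blending rule below are assumptions for illustration only:

```python
import numpy as np

def appearance_stability(patch_prev, patch_curr):
    """Hypothetical stand-in for the thesis's Structure Complexity
    Coefficients (SCC): compare the gradient energy of two patches,
    so a large structural change yields low predicted stability."""
    def grad_energy(p):
        p = p.astype(float)
        return np.abs(np.diff(p, axis=0)).sum() + np.abs(np.diff(p, axis=1)).sum()
    gp, gc = grad_energy(patch_prev), grad_energy(patch_curr)
    return 1.0 - abs(gp - gc) / (gp + gc + 1e-9)

def od_hmm_score(obs_likelihood, prev_obs_similarity, stability):
    """Observation-dependent score: when predicted stability is high,
    trust the dependency on the previous frame's observation; when it
    is low, fall back to the per-frame observation likelihood."""
    return (1.0 - stability) * obs_likelihood + stability * prev_obs_similarity
```

With `stability = 0` this degenerates to a standard HMM observation score, i.e., the frame-independence assumption that the thesis relaxes.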
When occlusion occurs, the appearance of the "occluder" should not be treated as the target during appearance-model update; when appearance change occurs, on the other hand, the changed target appearance should be incorporated into the model. Recent works that address occlusion leave uncertainty in distinguishing occlusion from large appearance change during the model update process. To address this problem, we propose a backward model validation based visual tracking (BVT) algorithm, which first performs the model update in frame n and then uses information from the incoming frame (frame n+1) to check backward whether the update was valid (appearance variation) or not (occlusion). In this way, the uncertainty of validating unpredictable features against existing appearance models is avoided. Moreover, an adaptive feature fusion method is designed to properly integrate a colour-based feature with a texture-based one, providing a representation of the target that is robust to both rotation and shape deformation.

Owing to the growing volume of video data, bandwidth limitations, environment changes, camera variety, etc., video quality degradation has become another challenge in IVS that cannot be neglected, since it can affect tracking accuracy and the performance of other video analysis algorithms. For most object tracking tasks in IVS, object detection results are an important supplement to tracking, since the object class (e.g., pedestrian, vehicle) is known before tracking. In this thesis, we conduct the first robustness investigation of pedestrian detectors for IVS under video quality degradation. We build a Distorted Surveillance Video Data Set (DSurVD) for robustness analysis, covering four general distortions: compression distortion, resolution reduction, white noise, and brightness variation.
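The backward-validation step of BVT can be sketched minimally as follows, assuming a simple linear blend for the model update and normalised cross-correlation as the match function; both are illustrative choices, not the thesis's actual formulation:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def backward_validated_update(model, candidate, next_patch, threshold=0.5):
    """Tentatively update the appearance model in frame n, then check
    it backward against frame n+1: if the updated model still matches,
    the change was genuine appearance variation and the update is
    kept; otherwise frame n likely contained an occluder and the
    update is rolled back."""
    tentative = 0.5 * model + 0.5 * candidate  # simple blend update
    if ncc(tentative, next_patch) >= threshold:
        return tentative, True   # appearance variation: keep update
    return model, False          # occlusion: revert to old model
```

The key point is that the decision is deferred until frame n+1 arrives, so ambiguous features in frame n are never validated against the existing model alone.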
Moreover, we propose an approach to quantify detection stability and design a robustness measure named the Robustness Quadrangle, which compares detector robustness by considering both detection accuracy on good-quality video and stability under video quality degradation. Robustness analysis of several popular pedestrian detectors shows that there is still much room to improve the robustness of current detectors.

URI: http://hdl.handle.net/10356/69045
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
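One plausible reading of the Robustness Quadrangle described in the abstract is a radar-style plot with one axis per DSurVD distortion, whose enclosed area summarises stability; the construction below is an assumption for illustration, not the thesis's exact definition:

```python
def quadrangle_area(stability_scores):
    """Area of a radar-chart quadrangle with one stability score per
    distortion axis (compression, resolution reduction, white noise,
    brightness variation), axes 90 degrees apart: the polygon splits
    into four right triangles of area s_i * s_{i+1} / 2."""
    assert len(stability_scores) == 4
    s = stability_scores
    return sum(s[i] * s[(i + 1) % 4] / 2.0 for i in range(4))

def robustness(clean_accuracy, stability_scores):
    """Combine accuracy on good-quality video with stability under
    degradation (the multiplicative combination is an assumption)."""
    return clean_accuracy * quadrangle_area(stability_scores)
```

For example, a detector with perfect stability on all four distortions has quadrangle area 2.0, so `robustness(0.9, [1, 1, 1, 1])` evaluates to 1.8.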
Appears in Collections: SCSE Theses
Updated on May 12, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.