Title: Principled asymmetric boosting approaches to rapid training and classification in face detection
Authors: Pham, Minh Tri
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2009
Source: Pham, M. T. (2009). Principled asymmetric boosting approaches to rapid training and classification in face detection. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract:
Asymmetric boosting, while acknowledged to be important for imbalanced classification problems such as face detection, often relies on trial and error to obtain the best boosted classifier rather than on principled methods. This thesis resolves a number of issues related to asymmetric boosting and its use in face detection. It shows how a proper understanding and use of asymmetric boosting improves the learning time, the learning capacity, the detection speed, and the detection accuracy of a face detector.

First, an integrated framework for both online learning and asymmetric learning of a boosted classifier is presented. The proposed method adaptively balances the skewness of the weight distribution between the two classes presented to the weak classifiers, allowing them to be trained more evenly. An additional constraint on propagating the weights of the data points is introduced, allowing the online learning to converge faster. Compared with the Online Boosting algorithm recently applied to object detection problems, a 0-10% increase in accuracy and a 5-30% gain in learning speed were observed.

Second, training a face detector using boosting and Haar-like features often requires weeks of computation on a single-CPU machine. The bottleneck is the training of a weak classifier, currently on the order of minutes. Traditional techniques train a weak classifier in time O(NT log N), with N examples (approximately 10,000) and T Haar-like features (approximately 40,000). A method that trains a weak classifier in time O(N + T) is presented, using only statistics of the weighted input data. Experimental results show the training time of a face detector reduced from weeks to just a few hours. In particular, the method trades a minimal increase in training time for a very large increase in the set of Haar-like features explored, yielding a significant gain in accuracy.

Third, a generalized framework for representing a boosted classifier with multiple exit nodes is introduced. A method for training such a classifier is also proposed, combining the recent idea of propagating scores across boosted classifiers with the use of asymmetric goals. A means of determining the ideal asymmetric goal is provided; it is theoretically justified under a conservative bound on the target operating point of the receiver operating characteristic (ROC) curve, and is empirically near-optimal under the exact bound. Moreover, the method automatically minimizes the number of weak classifiers, avoiding the need to retrain a boosted classifier multiple times as in conventional methods. Experimental results show a significant reduction in training time and in the number of weak classifiers, as well as an improvement in accuracy.

Fourth, a set of bounds on the generalization ability of a boosted classifier trained with an asymmetric goal is proposed, as existing generalization bounds are not designed for asymmetric errors. The proposed bounds show that, unlike in traditional boosting where the margin of a positive example and that of a negative example are treated identically, different penalties are applied to the margins of the two classes.

URI: https://hdl.handle.net/10356/15664
DOI: 10.32657/10356/15664
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses
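The record above only summarizes the thesis; the algorithms themselves are not given here. As a rough, generic illustration of the kind of asymmetric weighting the abstract discusses, the sketch below implements a cost-weighted AdaBoost variant with decision stumps, where a skew factor `k_pos` gives positive (face) examples extra initial weight so that missed positives cost more than false alarms. The `k_pos` factor, the exhaustive stump learner, and all function names are simplifications of my own for illustration, not the thesis' method.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the (feature, threshold, polarity) stump minimizing weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thresh in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - thresh) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thresh, polarity)
    return best

def asymmetric_adaboost(X, y, k_pos=2.0, n_rounds=10):
    """Boost decision stumps with asymmetric initial weights.

    y must be in {-1, +1}; positives start with k_pos times the mass of
    negatives, skewing the ensemble toward a low false-negative rate.
    """
    w = np.where(y == 1, k_pos, 1.0).astype(float)
    w /= w.sum()
    stumps, alphas = [], []
    for _ in range(n_rounds):
        err, j, thresh, pol = train_stump(X, y, w)
        err = min(max(err, 1e-10), 1.0 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = np.where(pol * (X[:, j] - thresh) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight mistakes
        w /= w.sum()
        stumps.append((j, thresh, pol))
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Sign of the weighted vote of all stumps."""
    score = np.zeros(len(X))
    for (j, thresh, pol), a in zip(stumps, alphas):
        score += a * np.where(pol * (X[:, j] - thresh) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```

Note that this toy stump search scans every (example, feature) pair each round, i.e. the O(NT log N)-style regime the abstract says is the bottleneck; the thesis' contribution is precisely replacing such a search with an O(N + T) procedure based on statistics of the weighted data.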
Updated on May 7, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.