Title: Graph model-based feature point matching
Authors: Wang, Chen
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2013
Source: Wang, C. (2013). Graph model-based feature point matching. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: Feature point matching aims to automatically establish point-to-point correspondences between two images acquired from the same scene but from two different viewpoints. This matching process is essential to many image processing and computer vision tasks, such as image registration, object detection, and tracking. However, the well-known SIFT feature descriptors, combined with the nearest-neighbor matching criterion, often lead to many incorrect point-to-point correspondences, especially when the two images undergo a large viewpoint variation and/or contain a severely cluttered background. To improve the performance of the feature point matching process, three contributions are made in this thesis: 1) mismatch removal; 2) common visual pattern discovery; and 3) feature histogram equalization (FHE). The first two are graph model-based approaches, and the last is a novel technique for enhancing feature descriptors. More details are provided as follows. For certain image processing applications, such as matching two images captured from disparate views (the so-called wide-baseline image matching demonstrated in this thesis), a sufficiently large number of point-to-point correspondences is needed in the first place. However, these corresponding pairs often contain mismatched ones, which reduces the accuracy of the spatial transformation estimated between the two images under matching. Therefore, how to identify and remove the mismatched corresponding pairs from the established ones is the main goal of our first contribution. For that, a bipartite graph model is applied to the segmented image regions to establish all possible region-to-region correspondences.
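The region-to-region assignment described above is, at its core, a linear assignment problem over a region similarity matrix. The following minimal sketch illustrates the idea with a brute-force search in place of the Hungarian method (which solves the same problem in O(n^3)); the toy similarity matrix and function name are illustrative assumptions, not the thesis's implementation.

```python
from itertools import permutations

def best_region_assignment(sim):
    """Exhaustive one-to-one assignment maximizing total similarity.

    sim[i][j] is an assumed region-to-region similarity score between
    region i of image A and region j of image B.  Brute force is fine
    for a sketch; the Hungarian method scales to real region counts.
    """
    n = len(sim)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(sim[i][j] for i, j in enumerate(perm))
        if score > best_score:
            best_score, best_perm = score, perm
    return list(enumerate(best_perm)), best_score

# Toy 3x3 similarity matrix between segmented regions.
sim = [
    [0.9, 0.1, 0.2],
    [0.3, 0.8, 0.1],
    [0.2, 0.2, 0.7],
]
pairs, score = best_region_assignment(sim)
# pairs -> [(0, 0), (1, 1), (2, 2)]  (the one-to-one region pairs)
```

The selected pairs play the role of candidate coherent region pairs; in the thesis, the similarity entries come from the proposed region-to-region similarity metric.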
After that, one-to-one region correspondences, called the coherent region pairs (CRPs), can be identified by using the Hungarian method together with the proposed region-to-region similarity metric. The established CRPs are then utilized as reference information to identify and remove the mismatched point-to-point correspondences. Extensive experimental results have demonstrated that our proposed mismatch removal method can effectively remove incorrect SIFT-based point-to-point correspondences for wide-baseline image matching, which normally requires a large number of matching pairs. For the second contribution, on common visual pattern discovery, the work again begins with the SIFT feature descriptors. However, the main objective here is to yield correct point-to-point matching pairs in the first place, even though the number of such established matching pairs tends to be small. The key novelty of our approach lies in the use of a directed graph (digraph) model with two link weights on each link, rather than the single link weight inherent in the conventional graph model. The principle of pairwise spatial consistency is first exploited to generate the initial link weight for each link. Based on this initial value, two relative link weights are then generated by considering, at each vertex of the link, the relative standing of the neighboring vertices. For that, the n-ranking process and a novel link weight enhancement technique are proposed. Consequently, the resulting relative link weights are more robust against various adverse scenarios, such as large viewpoint variations and indiscriminative feature descriptors. Based on the relative link weights generated at each assumed scale change factor, a strongly-associated subgraph can be extracted from the digraph by applying non-cooperative game theory, which handles the non-symmetric adjacency matrix.
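The pairwise spatial consistency principle above can be sketched as follows: two candidate correspondences support each other when the distance between their keypoints in one image, scaled by the assumed scale change factor, matches the distance between their counterparts in the other image. The exponential decay and the `sigma` parameter below are illustrative assumptions, not the thesis's exact formulation.

```python
import math

def spatial_consistency(c1, c2, scale=1.0, sigma=10.0):
    """Initial link weight between two candidate correspondences.

    Each correspondence is ((x, y) in image A, (x, y) in image B).
    Under an assumed scale change `scale`, geometrically consistent
    pairs preserve inter-point distances up to that scale; the weight
    decays exponentially with the residual (an assumed form).
    """
    (p1, q1), (p2, q2) = c1, c2
    da = math.dist(p1, p2)  # inter-keypoint distance in image A
    db = math.dist(q1, q2)  # inter-keypoint distance in image B
    return math.exp(-abs(db - scale * da) / sigma)

# Two matches consistent under a pure translation, plus one outlier.
good1 = ((0, 0), (100, 50))
good2 = ((30, 40), (130, 90))
bad   = ((60, 0), (10, 200))
print(spatial_consistency(good1, good2))  # 1.0 (perfectly consistent)
print(spatial_consistency(good1, bad))    # near 0 (inconsistent)
```

In the thesis, these initial weights are only the starting point: the n-ranking process and link weight enhancement then turn them into the two relative link weights carried by each directed link.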
All the vertices (i.e., point-to-point correspondences) belonging to the strongly-associated subgraph extracted from the digraph established at an assumed scale change factor are collectively treated as one common visual pattern; ideally, this set of vertices corresponds to one visual object. If this is not the case, our proposed topological splitting algorithm may be able to further discriminate them. Extensive experiments have been conducted on simulated SIFT feature points to highlight the technical challenges, followed by evaluations on six carefully chosen natural image pairs and the Columbia dataset, to demonstrate the efficacy and robustness of the proposed method. For the third contribution, feature contrast is first introduced as a measurement of the degree of self-similarity contained in an image pair. Note that, for an image pair with strong self-similarity, the established SIFT feature descriptors are quite similar to each other, so the feature contrast is low. The FHE is then proposed to equalize SIFT feature descriptors by independently modifying the vector-component values of the feature descriptors at each vector dimension; consequently, the feature contrast is effectively enhanced, allowing better discrimination among the feature descriptors. Extensive simulation results have clearly shown that our proposed FHE method can effectively improve the precision of SIFT-based point-to-point correspondences, especially for image pairs containing a large number of self-similar regions. As expected, if an image pair contains only a few or even no self-similar regions, the performance gain is marginal.
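The per-dimension equalization behind FHE can be sketched with a simple rank transform: at each vector dimension, the values observed across all descriptors are spread to uniform spacing, so near-identical (self-similar) descriptors become easier to tell apart. The rank-based mapping below is one plausible realization under that reading; the thesis's exact mapping may differ.

```python
def equalize_dimension(values):
    """Rank-based equalization of one descriptor dimension.

    Maps the values at a single vector dimension (across all descriptors
    in the image) to uniform spacing in [0, 1], spreading out clustered
    values.  An assumed, simple stand-in for the FHE mapping.
    """
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    denom = max(len(values) - 1, 1)
    for rank, i in enumerate(order):
        out[i] = rank / denom
    return out

def feature_histogram_equalize(descriptors):
    """Equalize each vector dimension independently (128 dims for SIFT)."""
    dims = list(zip(*descriptors))            # transpose: one list per dimension
    eq_dims = [equalize_dimension(list(d)) for d in dims]
    return [list(v) for v in zip(*eq_dims)]   # transpose back to descriptors

# Three nearly identical (self-similar) 4-D toy descriptors.
descs = [
    [0.50, 0.51, 0.49, 0.50],
    [0.51, 0.50, 0.50, 0.49],
    [0.49, 0.52, 0.48, 0.51],
]
print(feature_histogram_equalize(descs))
```

After equalization, each dimension's values span the full [0, 1] range instead of clustering near 0.5, which is exactly the feature contrast enhancement the abstract describes.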
DOI: 10.32657/10356/54912
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: thesis_final.pdf
Description: Main article
Size: 3.11 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.