Correspondence matching in urban scenes across different views
Lee, Jimmy Addison
Date of Issue: 2011
School of Computer Engineering
Centre for Multimedia and Network Technology
The high degree of symmetry and repetition in urban environments makes it difficult for computer vision algorithms to establish correspondences during wide-baseline matching. The level of difficulty increases with the magnitude of the viewpoint change. This thesis highlights these problems and investigates different approaches to tackle them. First, verification approaches are considered for enhancing correspondence matching results by eliminating false matches. They comprise geometric and appearance-based measurements, e.g., Euclidean distance and angle approximations, Gabor texture, and pattern approximation, to determine the reliability of each match. Second, we investigate new ways to increase the number of point correspondences regardless of the viewing variations. We define hypotheses of building facades as planar convex quadrilaterals in images, which we call “q-regions”. A projective transformation model can be derived from each pair of q-regions in two images. If a pair of q-regions is correctly aligned, all line segments and interest points within the pair of q-regions will fit the projective transformation model and match accordingly. Consequently, the largest pair of correctly aligned q-regions will produce the largest list of matches. Third and last, using a similar concept, we introduce a fully affine-invariant descriptor, coined PRIUS (Projective Region Invariants for Urban Scenes). The interest points in the q-regions and their neighboring points within close spatial proximity are used to describe the q-regions, and to robustly align building facades through their projectivity.
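The projective model described above can be illustrated with a minimal sketch: given the four corners of a q-region in each image, the 3×3 homography relating them can be estimated with the standard direct linear transform (DLT), and interest points inside one q-region can then be mapped into the other to check alignment. This is an illustrative reconstruction, not the thesis's implementation; the function names and the inlier threshold are assumptions.

```python
import numpy as np

def homography_from_quads(src, dst):
    """Estimate the 3x3 projective transform mapping the four corners of
    one q-region (src) onto the other (dst) via the DLT method.
    src, dst: arrays of shape (4, 2). Illustrative sketch only."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def project(H, pts):
    """Apply homography H to an (n, 2) array of 2-D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def count_inliers(H, pts_a, pts_b, tol=2.0):
    """Count interest-point pairs consistent with H (hypothetical
    alignment score; tol in pixels is an assumed threshold)."""
    err = np.linalg.norm(project(H, pts_a) - pts_b, axis=1)
    return int(np.sum(err < tol))
```

Under this sketch, each candidate pair of q-regions yields one homography hypothesis; the pair whose model gathers the most consistent line-segment and interest-point matches (e.g., via `count_inliers`) would be taken as the correctly aligned pair, in line with the "largest list of matches" criterion in the abstract.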
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision