Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/41433
Title: Learning transformation invariance for pairwise image matching
Authors: Chen, Xi
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2008
Source: Chen, X. (2008). Learning transformation invariance for pairwise image matching. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: Image matching is a fundamental problem in computer vision. In this thesis, we address image matching as learning and classifying correspondences. More precisely, we formulate the problem as follows: given a set of training image pairs that implicitly captures the transformation (with both positive and negative classes), decide whether a new pair of test images is matched via that transformation class. In this formulation, all of the training data, as well as the test data, are image pairs. The approach considers only relative visual content, rather than absolute visual content, so the learned image-matching classifier can be applied to images whose visual content is entirely different from that of the training data. This is in contrast to appearance-based object detection methods, whose classifiers, once trained, can recognize only objects of the same categories as the training images.
URI: https://hdl.handle.net/10356/41433
DOI: 10.32657/10356/41433
Schools: School of Computer Engineering
Research Centres: Centre for Multimedia and Network Technology
Fulltext Permission: open
Fulltext Availability: With Fulltext
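The abstract's formulation can be illustrated with a minimal sketch. This is not the thesis's method; it is a hypothetical toy in which "images" are random feature vectors, a pair is represented only by its relative content (elementwise absolute difference), and a simple threshold classifier learned from labeled pairs is then applied to a test pair whose absolute content lies far outside the training distribution:

```python
# Hypothetical sketch of image matching as pairwise classification.
# "Images" are synthetic feature vectors; matched pairs differ by small
# noise (a stand-in for the transformation), unmatched pairs are
# independent. The classifier sees only RELATIVE content, so it can be
# applied to pairs of totally different absolute content.
import numpy as np

rng = np.random.default_rng(0)

def pair_features(a, b):
    # Relative representation of a pair: elementwise absolute difference.
    return np.abs(a - b)

dim, n = 16, 200
base = rng.normal(size=(n, dim))
pos = pair_features(base, base + 0.05 * rng.normal(size=(n, dim)))  # matched
neg = pair_features(base, rng.normal(size=(n, dim)))                # unmatched

X = np.vstack([pos, neg])
y = np.array([1] * n + [0] * n)

# Toy classifier: threshold on the mean relative difference, placed at
# the midpoint between the two class means seen in training.
thresh = (pos.mean() + neg.mean()) / 2.0
pred = (X.mean(axis=1) < thresh).astype(int)
accuracy = (pred == y).mean()

# A test pair drawn from a shifted distribution: absolute content unlike
# anything in training, but the relative representation still transfers.
novel = rng.normal(size=dim) + 5.0
matched = pair_features(novel, novel + 0.05 * rng.normal(size=dim))
print(accuracy, matched.mean() < thresh)
```

Because the classifier operates on differences rather than raw vectors, the shifted test pair is still classified correctly; this mirrors the abstract's point that a pairwise matcher trained on relative content is not tied to the visual categories of its training images.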
Appears in Collections: | SCSE Theses |
Files in This Item:
File | Description | Size | Format
---|---|---|---
ChenXi08.pdf | | 20.78 MB | Adobe PDF
Page view(s): 581 (updated on Mar 27, 2024)
Download(s): 227 (updated on Mar 27, 2024)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.