Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/62877
Full metadata record
dc.contributor.author: Tao, Qingyi
dc.date.accessioned: 2015-04-30T06:27:40Z
dc.date.available: 2015-04-30T06:27:40Z
dc.date.copyright: 2015 [en_US]
dc.date.issued: 2015
dc.identifier.uri: http://hdl.handle.net/10356/62877
dc.description.abstract: Unlike sensor-based indoor localization approaches, the vision-based approach to mobile indoor localization does not rely on hardware infrastructure and is therefore scalable and inexpensive. Two key technical areas in implementing a visual-search-based indoor navigation system are: 1) efficient and accurate image retrieval, and 2) 3D model reconstruction from images. This report discusses the techniques available for each stage of the image retrieval process. Taking earlier research results on datasets such as paintings and landmarks as the benchmark, experiments are run on an indoor dataset to demonstrate the feasibility of visual-search-based indoor navigation. For feature extraction, the traditional Scale-Invariant Feature Transform (SIFT) [1] descriptor is found to be the most stable, but it is too slow for indoor navigation; the Block-based Frequency-domain Laplacian of Gaussian (BFLoG) [2] is therefore used to speed up the traditional detector in the SIFT algorithm. For global feature generation, the Scalable Compressed Fisher Vector (SCFV) [3] slightly outperforms Bag-of-Words (BoW) [4], the Fisher Vector (FV) [5], and the Vector of Locally Aggregated Descriptors (VLAD) [6]. However, when the retrieved image is determined by global feature matching alone, the precision falls short of what an indoor navigation system requires, so the top-ranked images are re-ranked with local feature matching to achieve good matching accuracy; the measured precision exceeds 80% on the indoor dataset. 3D model reconstruction is achieved by creating a point cloud with PhotoSynth [7]. The user position is derived by solving for the camera pose from correspondences between reference points and query points (both the re-ranking and the pose step are sketched after this record). An iOS application is developed based on these visual search methodologies, with an interactive user interface featuring voice input and augmented reality to enhance the user experience. [en_US]
dc.format.extent: 45 p. [en_US]
dc.language.iso: en [en_US]
dc.rights: Nanyang Technological University
dc.subject: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision [en_US]
dc.title: Image matching based indoor localization and navigation for mobile users [en_US]
dc.type: Final Year Project (FYP) [en_US]
dc.contributor.supervisor: Cai Jianfei [en_US]
dc.contributor.school: School of Computer Engineering [en_US]
dc.description.degree: Bachelor of Engineering (Computer Science) [en_US]
item.fulltext: With Fulltext
item.grantfulltext: restricted
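
Illustrative sketch (not from the report): the abstract above describes two computational steps that a short example can make concrete — re-ranking a global-feature shortlist with local SIFT matching, and deriving the user position by solving the camera pose from 2D-3D point correspondences. The Python/OpenCV sketch below is a minimal approximation under stated assumptions: OpenCV's stock SIFT stands in for the report's BFLoG-accelerated detector, a generic RANSAC PnP solver stands in for the PhotoSynth-based pipeline, and the inputs (shortlist_paths, object_points, image_points, and the camera matrix K) are hypothetical.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()  # stock SIFT; the report accelerates detection with BFLoG

def sift_descriptors(image_path):
    """Extract local SIFT descriptors from one grayscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc

def rerank(query_path, shortlist_paths, ratio=0.75):
    """Re-rank a global-feature shortlist (e.g. the top SCFV matches) by the
    number of local SIFT matches that survive Lowe's ratio test."""
    q = sift_descriptors(query_path)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    scored = []
    for path in shortlist_paths:
        pairs = matcher.knnMatch(q, sift_descriptors(path), k=2)
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < ratio * n.distance]
        scored.append((path, len(good)))
    return sorted(scored, key=lambda s: s[1], reverse=True)

def localize(object_points, image_points, K):
    """Estimate the user position: solve the camera pose (RANSAC PnP) from 3D
    reference points in the point cloud and their 2D matches in the query
    image, then convert the pose into a camera centre in model coordinates."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        np.asarray(K, dtype=np.float32), None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)    # rotation vector -> rotation matrix
    return (-R.T @ tvec).ravel()  # camera centre C = -R^T t = user position
```

In the report's pipeline the 2D-3D correspondences come from matching query-image features against the PhotoSynth point cloud; here they are assumed to be given.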
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File: Tao_Qingyi_Image_Matching_Based_Indoor_Localization_and_Navigation_for_Mobile_Users.pdf (Restricted Access)
Size: 15.03 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.