Title: Enhancing place recognition with deep convolutional neural network using bag-of-visual-words
Authors: Soh, Wei Xin
Keywords: Engineering::Aeronautical engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Soh, W. X. (2021). Enhancing place recognition with deep convolutional neural network using bag-of-visual-words. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/150823
Project: C102
Abstract: This Computer Vision (CV) project developed an unsupervised Convolutional Neural Network (CNN) solution for enhanced Visual Place Recognition (VPR), using Bag-of-Visual-Words (BoVW) for automatic image clustering. BoVW enables automatic generation of image clusters and automatic labelling of training data; these labelled image clusters can then be used as input data to train CNN models for VPR. Image frames were extracted from videos in the public dataset and subsequently used to generate the image clusters automatically. This proved more efficient than most well-known deep learning methods, which often require time-consuming manual labelling, especially for extremely large quantities of images. Experiments on a public dataset validated that the proposed solution achieves better recognition performance than the traditional BoVW approach. The project can potentially be applied to the Advanced Remanufacturing & Technology Centre (ARTC) production shopfloor in areas such as Automated Guided Vehicle (AGV) localization, using the proposed unsupervised deep learning solution.
URI: https://hdl.handle.net/10356/150823
Fulltext Permission: embargo_restricted_20230405
Fulltext Availability: With Fulltext
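The BoVW pipeline described in the abstract (local feature extraction per frame, a learned visual vocabulary, per-image word-frequency histograms, and histogram clustering to obtain pseudo-labels for CNN training) can be sketched roughly as follows. This is an illustrative sketch, not the report's implementation: the synthetic Gaussian descriptors stand in for real local features (e.g. SIFT/ORB from extracted video frames), the tiny Lloyd's k-means stands in for a proper clustering library, and all function names are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain Lloyd's algorithm; initialized from the first k points
    # so the sketch is deterministic.
    centers = X[:k].astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def bovw_histogram(descriptors, vocab):
    # Assign each local descriptor to its nearest visual word, then
    # build a normalized word-frequency histogram for the image.
    words = np.linalg.norm(descriptors[:, None] - vocab[None], axis=2).argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Synthetic stand-in for local descriptors extracted from 6 video frames;
# even- and odd-indexed frames come from two different "places".
frames = [rng.normal(loc=i % 2, scale=0.1, size=(50, 8)) for i in range(6)]

vocab, _ = kmeans(np.vstack(frames), k=4)    # learn the visual vocabulary
hists = np.array([bovw_histogram(d, vocab) for d in frames])
_, cluster_ids = kmeans(hists, k=2)          # pseudo-labels for CNN training
```

In the project's setting, `cluster_ids` would serve as the automatically generated labels for the extracted frames, replacing manual annotation before supervised CNN training; here the even and odd frames should end up in separate clusters.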
Appears in Collections: MAE Student Reports (FYP/IA/PA/PI)
Files in This Item:
Soh Wei Xin FYP Report.pdf (4.87 MB, Adobe PDF) — under embargo until Apr 05, 2023
Updated on Nov 29, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.