Full metadata record

dc.contributor.author: Li, Jingwen (en_US)
dc.identifier.citation: Li, J. (2021). Scene understanding for unmanned vehicle using deep learning. Master's thesis, Nanyang Technological University, Singapore.
dc.description.abstract: The continuous development of automation and artificial intelligence provides important conditions and resources for applying unmanned vehicles in many aspects of human life. In the field of unmanned vehicle delivery, achieving efficient and precise navigation is the top priority, and using scene categories to optimize navigation strategies has significant practical value. In recent years, deep learning models, especially deep convolutional neural networks, have been widely and successfully applied to natural scene image classification because of their strong semantic feature extraction capabilities. However, the classification layer that these high-level methods use for feature fusion is not very effective, a shortcoming that mid-level methods can compensate for. This dissertation therefore focuses on applying deep learning methods to the classification of natural scenes for unmanned vehicles. The main research work and results are summarized as follows. (1) The research progress of deep learning methods based on convolutional neural networks in natural scene classification is reviewed. Among existing scene classification algorithms, the core optimizations focus on multi-scale processing, multi-category handling, and combined target detection. This dissertation therefore chooses to optimize the classification layer of the scene classification algorithm. (2) Drawing on the respective advantages of mid-level visual representations and high-level visual information, a new scene classification structure is proposed. The model combines a deep learning-based convolutional neural network with a VLAD-based encoder classifier. With the help of VLAD's feature-clustering idea, this method further assists the mapping from the underlying features extracted by the convolutional neural network to the classification results, thereby improving scene classification performance.
In addition, to evaluate the performance of the algorithm efficiently and conveniently, this dissertation derives two new datasets, Places29 and Places29_v2, from the Places365-Standard dataset for the experiments. The experimental results demonstrate that the proposed method achieves better average accuracy than existing convolutional neural network models and attains good detection results at an acceptable detection speed. (3) To address the lack of effective real-world datasets for scene classification in unmanned vehicle delivery applications, this dissertation manually collected and labeled two new natural scene classification image datasets. The images come from unmanned vehicles and from manual photography on the NTU campus, and they are labeled as a single-label dataset and a multi-label dataset, respectively. After constructing the datasets, this dissertation also tested and evaluated their category rationality and data distribution. (en_US)
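The abstract's core idea is to aggregate CNN features with a VLAD (Vector of Locally Aggregated Descriptors) encoder before classification. A minimal sketch of the VLAD encoding step is shown below; the hard assignment, NumPy implementation, and the name `vlad_encode` are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def vlad_encode(descriptors, centers):
    """Aggregate local descriptors (N x D) against K cluster
    centers (K x D) into a single normalized K*D VLAD vector."""
    K, D = centers.shape
    # Hard-assign each descriptor to its nearest cluster center.
    dists = np.linalg.norm(
        descriptors[:, None, :] - centers[None, :, :], axis=2)
    assignments = np.argmin(dists, axis=1)
    vlad = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assignments == k]
        if len(members) > 0:
            # Accumulate residuals between descriptors and their center.
            vlad[k] = (members - centers[k]).sum(axis=0)
    # Intra-normalize each cluster block, then L2-normalize globally.
    norms = np.linalg.norm(vlad, axis=1, keepdims=True)
    vlad = np.where(norms > 0, vlad / norms, vlad)
    flat = vlad.ravel()
    n = np.linalg.norm(flat)
    return flat / n if n > 0 else flat
```

In the dissertation's pipeline the descriptors would come from a convolutional feature map and the centers from clustering; here both are left as plain arrays so the encoding step stands alone.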
dc.publisher: Nanyang Technological University (en_US)
dc.subject: Engineering::Electrical and electronic engineering::Computer hardware, software and systems (en_US)
dc.title: Scene understanding for unmanned vehicle using deep learning (en_US)
dc.type: Thesis-Master by Coursework (en_US)
dc.contributor.supervisor: Wang Dan Wei (en_US)
dc.contributor.school: School of Electrical and Electronic Engineering (en_US)
dc.description.degree: Master of Science (Computer Control and Automation) (en_US)
item.fulltext: With Fulltext
Appears in Collections:EEE Theses
Files in This Item:
File: Li Jingwen's dissertation final version.pdf (Restricted Access), 7.03 MB, Adobe PDF
Updated on Aug 13, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.