Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/163470
Title: Pseudo vision: end-to-end autonomous driving with 2D LiDAR
Authors: Chau, Yuan Qi
Keywords: Engineering::Mechanical engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Chau, Y. Q. (2022). Pseudo vision: end-to-end autonomous driving with 2D LiDAR. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/163470
Project: C164
Abstract: This project introduces a novel data representation called Pseudo Vision that enables end-to-end autonomous driving using a 2D LiDAR as the sole sensor for perceiving the vehicle's surroundings. Pseudo Vision is interoperable across 2D LiDARs of any Points Per Scan (PPS) and supports an explainable decision-making process by visualizing the extracted features. The Pseudo Vision data representation has two parameters: the position of the ego-vehicle in the image and the resolution of the image. Experiments were carried out to investigate the relationship between each parameter and driving performance across three models: a 3-layer Fully-Connected Neural Network (FCNN), a 3-layer Convolutional Neural Network (CNN), and a state-of-the-art Convolutional Neural Network (SOTA-CNN). The SOTA-CNN model was selected by benchmarking 135 state-of-the-art CNN models; metrics such as accuracy, inference time on both PyTorch and TensorRT, and the number of parameters of each model are made available to researchers and engineers who need an empirical basis for model selection. From the experiments, it is concluded that CNN-based models allow visualization and understanding of the mechanism by which they extract features from imagery inputs and make decisions, and should therefore be preferred over a brute-force model like the FCNN in situations where understanding the context of the problem is required. It is also shown that learning can be transferred from a different task, such as classification on ImageNet-1k, to an autonomous driving task: the pre-trained CNN and pre-trained SOTA-CNN models show a significant increase in driving performance over their non-pre-trained counterparts.
Furthermore, Pseudo Vision performs better when the ego-vehicle is positioned below the center of the image, because this reserves more image space for information in front of the vehicle than behind it, and the space ahead matters more during driving. Pseudo Vision also performs better at larger image resolutions, because higher resolution allows the models to learn to differentiate smaller distances.
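The abstract does not give implementation details, but the described representation — rasterizing a 2D LiDAR scan into an image with a configurable ego-vehicle position and resolution — can be sketched as follows. All names and parameters here (`pseudo_vision`, `max_range`, `ego_row_frac`) are illustrative assumptions, not the thesis's actual code:

```python
import numpy as np

def pseudo_vision(ranges, resolution=128, max_range=10.0, ego_row_frac=0.75):
    """Rasterize one 2D LiDAR scan into a square 'Pseudo Vision' image.

    ranges: 1-D array of beam distances (m), evenly spaced over 360 degrees;
            any length works, which makes the representation PPS-agnostic.
    resolution: side length of the output image in pixels.
    ego_row_frac: vertical position of the ego-vehicle (0 = top, 1 = bottom);
                  values > 0.5 place it below the image center, reserving
                  more pixels for the space in front of the vehicle.
    """
    ranges = np.asarray(ranges, dtype=float)
    n = ranges.size
    angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)

    # Ego-vehicle pixel position: horizontally centered, vertically offset.
    ego_col = resolution // 2
    ego_row = int(resolution * ego_row_frac)

    # Polar -> Cartesian, then metres -> pixels (x right, y forward/up).
    scale = (resolution / 2) / max_range
    px = ego_col + ranges * np.cos(angles) * scale
    py = ego_row - ranges * np.sin(angles) * scale

    img = np.zeros((resolution, resolution), dtype=np.uint8)
    valid = (ranges > 0) & (ranges <= max_range)
    rows = np.clip(py[valid].astype(int), 0, resolution - 1)
    cols = np.clip(px[valid].astype(int), 0, resolution - 1)
    img[rows, cols] = 255  # mark each LiDAR return as a white pixel
    return img
```

Because the output is an ordinary single-channel image, it can be fed directly to an FCNN (flattened) or to any CNN, and larger `resolution` values let the network distinguish smaller range differences, consistent with the findings above.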
URI: https://hdl.handle.net/10356/163470
Schools: School of Mechanical and Aerospace Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:MAE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP_FinalV2_ChauYuanQi.pdf (Restricted Access)
Size: 31.96 MB
Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.