Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/157529
Title: Robust deep learning techniques for traffic road sign recognition
Authors: Ng, Amos Kai Yung
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Ng, A. K. Y. (2022). Robust deep learning techniques for traffic road sign recognition. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157529
Abstract: In recent years, deep learning strategies have proven effective for object detection. However, relatively little research has applied newer state-of-the-art deep learning techniques to traffic road sign detection for autonomous-vehicle use cases. This project therefore explores the YOLOv4 (You Only Look Once, version 4) algorithm, a popular and relatively recent deep learning approach, as a method to develop a robust traffic road sign detection model. This report shows how the object detection model is trained and compares the detection results obtained with the different models and parameters used. The model was trained and tested on the TT100K (Tsinghua-Tencent 100K) dataset, which is widely regarded as a good benchmark for emulating real-world input images from the cameras of autonomous vehicles. The report also evaluates the limitations of the tests conducted and discusses how the project might be improved in the future.
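Note: As a rough illustration of the inference step the abstract describes (running a YOLOv4 detector trained on TT100K over a road image), the sketch below uses OpenCV's DNN module. It is not code from the report; the config, weights, class-list, and image file names are hypothetical placeholders, and the thresholds are typical starting values rather than the report's settings.

# Illustrative sketch only -- not taken from the report.
# Assumes hypothetical files: yolov4-tt100k.cfg, yolov4-tt100k.weights,
# tt100k.names (one traffic-sign class per line), and road_scene.jpg.
import cv2

# Load the traffic-sign class labels.
with open("tt100k.names") as f:
    class_names = [line.strip() for line in f if line.strip()]

# Load the Darknet-format YOLOv4 config and trained weights.
net = cv2.dnn.readNetFromDarknet("yolov4-tt100k.cfg", "yolov4-tt100k.weights")
model = cv2.dnn_DetectionModel(net)
# YOLOv4 expects a square, normalised, RGB input (e.g. 416x416 or 608x608).
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("road_scene.jpg")
# Confidence and NMS thresholds here are common defaults, not the report's values.
class_ids, scores, boxes = model.detect(image, confThreshold=0.4, nmsThreshold=0.5)

# Draw each detected traffic sign with its class name and confidence score.
for class_id, score, (x, y, w, h) in zip(class_ids, scores, boxes):
    label = f"{class_names[int(class_id)]}: {float(score):.2f}"
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

cv2.imwrite("road_scene_detections.jpg", image)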
URI: https://hdl.handle.net/10356/157529
Schools: School of Electrical and Electronic Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Report A3253-211.pdf (Restricted Access), 2.44 MB, Adobe PDF

Page view(s): 56 (updated on Nov 28, 2023)
Download(s): 4 (updated on Nov 28, 2023)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.