Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/140789
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wong, Ezekiel Ngan Seng | en_US
dc.date.accessioned | 2020-06-02T03:38:33Z | -
dc.date.available | 2020-06-02T03:38:33Z | -
dc.date.issued | 2020 | -
dc.identifier.uri | https://hdl.handle.net/10356/140789 | -
dc.description.abstract | The ability to navigate is a key capability of mobile robots, especially autonomous ones, yet glare has long been a problem for them. Because autonomous mobile robots operate without human supervision, they rely primarily on their onboard cameras for environmental perception and localisation; glare can therefore seriously hamper their ability to navigate and, in the worst case, cause accidents. This paper attempts to rectify the issue by training a machine learning model which, once trained, segments the portions of images that contain glare, preventing the robot from using those regions in its navigation. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Mobile robot navigation using deep learning | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Wang Han | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Bachelor of Engineering (Electrical and Electronic Engineering) | en_US
dc.contributor.supervisoremail | hw@ntu.edu.sg | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP_Final_Report.pdf (Restricted Access) | | 3.64 MB | Adobe PDF (View/Open)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.