Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/153501
Title: Universal adversarial example construction against autonomous vehicle
Authors: Beh, Nicholas Chee Kwang
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Beh, N. C. K. (2021). Universal adversarial example construction against autonomous vehicle. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/153501
Project: SCSE20-0841
Abstract: Autonomous Vehicles (AVs) have developed at a rapid pace and made significant strides in technological capability. While AVs do not suffer from human error, they are not immune to other types of errors and, even more worryingly, to malicious attacks. Most AVs today rely on multiple machine learning models that are not necessarily robust against adversarial attacks. A white-box attack conducted using Universal Adversarial Perturbations (Iterative-DeepFool) on the traffic light recognition component of the Baidu Apollo Autonomous Driving System (ADS) platform revealed that the model failed to hold up under conditions other than daylight. Furthermore, the perturbation is imperceptible to the human eye, posing an even greater safety risk. We also examine the safeguards currently in place in Apollo and propose potential solutions to mitigate this issue.
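
The full report is under restricted access; for readers unfamiliar with the attack named in the abstract, the following is a minimal, illustrative sketch of how a DeepFool-based universal adversarial perturbation (UAP) is typically constructed, in the spirit of Moosavi-Dezfooli et al. (2017). The PyTorch model, toy data, class count, and perturbation budget xi below are placeholder assumptions for illustration only; they are not taken from the report or from Apollo's actual traffic light recognition pipeline.

# Illustrative sketch only: a DeepFool-based universal adversarial perturbation
# (UAP) loop. The model, data, class count, and budget xi are placeholders,
# NOT the report's actual setup against Apollo's traffic light recognition model.
import torch
import torch.nn as nn


def deepfool(model, x, num_classes, overshoot=0.02, max_iter=20):
    # Smallest (approximate) perturbation that flips the prediction for one
    # input x of shape (1, C, H, W), via iterative linearisation of the model.
    x0 = x.clone().detach()
    orig = model(x0).argmax(dim=1).item()
    r_total = torch.zeros_like(x0)
    pert_x = x0.clone()
    for _ in range(max_iter):
        pert_x = pert_x.detach().requires_grad_(True)
        logits = model(pert_x)
        if logits.argmax(dim=1).item() != orig:
            break  # already crossed the decision boundary
        grads = [torch.autograd.grad(logits[0, k], pert_x, retain_graph=True)[0]
                 for k in range(num_classes)]
        best_ratio, best_dir = None, None
        for k in range(num_classes):
            if k == orig:
                continue
            w_k = grads[k] - grads[orig]                 # linearised boundary normal
            f_k = (logits[0, k] - logits[0, orig]).item()
            ratio = abs(f_k) / (w_k.norm() + 1e-8)       # distance to boundary k
            if best_ratio is None or ratio < best_ratio:
                best_ratio, best_dir = ratio, w_k
        r_i = (best_ratio + 1e-4) * best_dir / (best_dir.norm() + 1e-8)
        r_total = r_total + r_i
        pert_x = x0 + (1 + overshoot) * r_total
    return (1 + overshoot) * r_total


def universal_perturbation(model, images, num_classes, xi=10 / 255, passes=5):
    # Iterative UAP: accumulate per-sample DeepFool steps into one perturbation v,
    # projected after each update onto an L-infinity ball of radius xi.
    v = torch.zeros_like(images[0:1])
    for _ in range(passes):
        for i in range(images.shape[0]):
            x = images[i:i + 1]
            with torch.no_grad():
                still_correct = (model(x + v).argmax(dim=1)
                                 == model(x).argmax(dim=1)).item()
            if still_correct:
                dv = deepfool(model, x + v, num_classes)
                v = torch.clamp(v + dv, -xi, xi)         # projection step
    return v


if __name__ == "__main__":
    # Toy stand-in for a traffic light classifier (e.g. red/yellow/green/off).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4)).eval()
    images = torch.rand(8, 3, 32, 32)                    # placeholder data
    v = universal_perturbation(model, images, num_classes=4)
    flipped = (model(images + v).argmax(1) != model(images).argmax(1)).float().mean()
    print(f"fooling rate on toy data: {flipped.item():.2f}")

In an actual white-box attack such as the one described in the abstract, model would be the target classifier (here, Apollo's traffic light recognition model), images would be captured traffic light crops, and xi would be tuned so that the single perturbation v remains imperceptible while fooling the model across many inputs.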
URI: https://hdl.handle.net/10356/153501
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: SCSE20-0841 Nicholas Beh FYP Report.pdf
Description: Restricted Access
Size: 7.02 MB
Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.