Title: Domain adaptation for semantic segmentation
Authors: Saklani Pankaj
Keywords: DRNTU::Engineering::Computer science and engineering
Issue Date: 2019
Abstract: Semantic segmentation is regarded as one of the most challenging high-level problems in computer vision. It enables a richer sense of image understanding and has a wide variety of applications, spanning from medical image processing to autonomous driving, the latter being the focus of this project. Many applications share common features and can therefore be trained on the same datasets; however, they do not all share the same application domain and may deviate from the training data in aspects such as illumination, geographical location, and image quality. This project discusses the use of domain adaptation in the context of semantic segmentation, creating and evaluating a convolutional neural network model trained on the Berkeley DeepDrive dataset and, through domain adaptation, performing semantic segmentation on the ApolloScape dataset. The report elaborates on the Maximum Mean Discrepancy domain adaptation technique, the data pre-processing and data augmentation methods, and the architecture of the model created. The training hyper-parameters and system configuration are also presented, along with the evaluation metrics, problems encountered, and results of the project. Through this evaluation, it was found that the proposed domain adaptation model achieves greater accuracy than a standard segmentation model performing the same task.
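The abstract names Maximum Mean Discrepancy (MMD) as the domain adaptation criterion. As a hypothetical illustration only (not the report's actual implementation, whose kernel choice and architecture are not given here), a minimal NumPy sketch of the biased squared-MMD estimate with a Gaussian kernel, comparing feature batches from a source and a target domain:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix between rows of x and rows of y."""
    sq_dists = (
        np.sum(x**2, axis=1)[:, None]
        + np.sum(y**2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(source, target, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between two feature batches.

    Near zero when both batches come from the same distribution; grows as
    the two domains' feature distributions drift apart.
    """
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```

In an adaptation setting of the kind the report describes, a term like this would typically be added to the segmentation loss so that features extracted from the labelled source domain (e.g. BDD) and the unlabelled target domain (e.g. ApolloScape) are pushed toward the same distribution.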
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Saklani_Pankaj_FYP_Report (1).pdf
Description: Restricted Access
Size: 1.31 MB
Format: Adobe PDF



Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.