Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/175141
Title: Learning deep networks for image segmentation
Authors: Akash, T.
Keywords: Computer and Information Science
Issue Date: 2024
Publisher: Nanyang Technological University
Source: Akash, T. (2024). Learning deep networks for image segmentation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175141
Abstract: The field of image processing and computer vision has seen significant strides in semantic segmentation, driven largely by advances in deep convolutional neural networks (DCNNs). This paper conducts a comprehensive evaluation of traditional semantic segmentation methods, such as the lightweight FastSCNN and U-Net with its precise localization, against modern approaches such as the Segment Anything Model (SAM) and its lightweight alternative, FastSAM. By implementing these models on the widely used Cityscapes benchmark, we examine their strengths and weaknesses across several metrics. The study also tunes and optimizes the models' parameters to improve their performance. Furthermore, the research explores integrating prompt-guided methodologies into conventional segmentation frameworks to make them more adaptable and robust to unseen data. The longer-term objective is to combine the precision of traditional methods with the versatility of prompt-based techniques, yielding models that are both accurate and capable of handling unseen data.
URI: https://hdl.handle.net/10356/175141
Schools: School of Computer Science and Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
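The abstract refers to dissecting model strengths and weaknesses "through various metrics"; for Cityscapes, segmentation quality is conventionally reported as mean intersection-over-union (mIoU). The sketch below is a minimal, self-contained illustration of that metric, not code from the report: the 19-class setup, the ignore label 255, and all function names are assumptions made for this example.

```python
# Illustrative sketch only: a minimal mean-IoU computation of the kind used to
# compare segmentation models on Cityscapes-style label maps. The 19-class and
# ignore-label-255 conventions are assumed here, not taken from the report.
import numpy as np

NUM_CLASSES = 19      # Cityscapes is typically evaluated on 19 "train" classes
IGNORE_INDEX = 255    # pixels labelled 255 are excluded from scoring


def confusion_matrix(pred: np.ndarray, gt: np.ndarray,
                     num_classes: int = NUM_CLASSES) -> np.ndarray:
    """Accumulate a num_classes x num_classes confusion matrix from two label maps."""
    mask = gt != IGNORE_INDEX
    hist = np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    return hist


def mean_iou(hist: np.ndarray) -> float:
    """Per-class IoU = TP / (TP + FP + FN); mIoU averages over classes that appear."""
    tp = np.diag(hist)
    denom = hist.sum(axis=1) + hist.sum(axis=0) - tp
    iou = tp / np.maximum(denom, 1)
    return float(iou[denom > 0].mean())


if __name__ == "__main__":
    # Toy 4x4 "prediction" and "ground truth" label maps standing in for model output.
    gt = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 255, 255], [2, 2, 3, 3]])
    pred = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]])
    print(f"mIoU: {mean_iou(confusion_matrix(pred, gt)):.3f}")
```

Under these assumptions, per-class IoU is TP / (TP + FP + FN) read off an accumulated confusion matrix, and mIoU averages over the classes that actually appear; in practice the matrix would be summed over the whole Cityscapes validation split before averaging.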
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
FYP_REPORT_TAKASH.pdf (Restricted Access) | Learning Deep Networks for Image Segmentation | 27.99 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.