Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/172909
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chai, Youxiang | en_US
dc.date.accessioned | 2023-12-31T07:54:34Z | -
dc.date.available | 2023-12-31T07:54:34Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Chai, Y. (2023). Deep learning methods with less supervision. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/172909 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/172909 | -
dc.description.abstract | To tackle the immense burden of acquiring accurate, pixel-level annotations for semantic segmentation tasks, we propose a weakly-supervised deep learning framework. We incorporate state-of-the-art foundation models to propagate pseudo-labels, then explore the viability of training a fully convolutional network on these pseudo-labels. In addition, we experiment with different loss functions, evaluate their results, and attempt to refine the predicted masks using conditional random fields. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.relation | SCSE22-0688 | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US
dc.title | Deep learning methods with less supervision | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Lin Guosheng | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Bachelor of Engineering (Computer Science) | en_US
dc.contributor.supervisoremail | gslin@ntu.edu.sg | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
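The abstract above outlines a pipeline: pseudo-labels propagated by foundation models, a fully convolutional network trained on those pseudo-labels, and mask refinement with conditional random fields. The report's full text is restricted, so the following is only a minimal sketch of such a pipeline, not the author's implementation; torchvision's FCN-ResNet50, the PASCAL VOC-style class count, the ignore index of 255, the pydensecrf library, and all hyperparameters are assumptions made for illustration.

```python
# Minimal, illustrative sketch (not the report's code): train a fully
# convolutional network on pseudo-labels, then refine the predicted masks
# with a fully connected (dense) CRF. All names and values below are assumed.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

NUM_CLASSES = 21      # assumed class count (e.g. PASCAL VOC)
IGNORE_INDEX = 255    # pixels without a pseudo-label are skipped by the loss

model = fcn_resnet50(num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, pseudo_labels):
    """One optimization step on pseudo-labels produced by a foundation model.

    images: float tensor (N, 3, H, W); pseudo_labels: long tensor (N, H, W).
    """
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]             # (N, NUM_CLASSES, H, W)
    loss = criterion(logits, pseudo_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def crf_refine(image_uint8, softmax_probs, iterations=5):
    """Refine per-pixel class probabilities with a dense CRF.

    image_uint8: (H, W, 3) uint8 RGB image; softmax_probs: (C, H, W) float.
    Returns an (H, W) array of refined class labels.
    """
    c, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))
    # Smoothness kernel: discourages small isolated label regions.
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: encourages label consistency within similar colors.
    d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=image_uint8, compat=10)
    q = d.inference(iterations)
    return np.argmax(np.array(q).reshape(c, h, w), axis=0)
```

At inference time, the network's softmax output and the original image would be passed to crf_refine to produce the final mask; the kernel parameters shown are the commonly used defaults and would need tuning in practice.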
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
Deep Learning Methods with Less Supervision.pdf (Restricted Access) | Undergraduate project report | 1.88 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.