Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/148091
Title: Learning to see in the dark
Authors: Chen, Sihao
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Chen, S. (2021). Learning to see in the dark. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148091
Abstract: Low-light image enhancement aims to improve the visibility of images taken in low-light or nighttime conditions. Currently, most deep models are trained on synthetic low-light datasets or small, manually collected datasets, which limits their ability to generalize to low-light images captured in the wild. In this study, a domain adaptation framework is proposed to translate between synthetic low-light images and real low-light images. Within this framework, we embed a method that generates low-light images at different brightness levels, which supports the training of low-light enhancement networks through data augmentation. Finally, an attention-guided U-Net is trained on the augmented dataset. Qualitative and quantitative evaluations show that our method is comparable to other state-of-the-art methods.
URI: https://hdl.handle.net/10356/148091
Schools: School of Computer Science and Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
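To make the brightness-level data augmentation mentioned in the abstract concrete, the sketch below shows one simple way darker variants of an image could be generated with gamma curves. This is purely illustrative: the function name, parameters, and gamma-based approach are assumptions and are not taken from the report, whose actual generation method is embedded in its domain adaptation framework.

```python
import numpy as np

def simulate_brightness_levels(image, num_levels=5, gamma_range=(1.5, 3.5)):
    """Generate progressively darker variants of a normally exposed image.

    `image` is a float32 array in [0, 1] with shape (H, W, 3). Each variant
    applies a different gamma curve (gamma > 1 darkens), so the augmented set
    spans a range of brightness levels. Hypothetical augmentation only; not
    the method used in the report.
    """
    gammas = np.linspace(gamma_range[0], gamma_range[1], num_levels)
    return [np.clip(image, 0.0, 1.0) ** g for g in gammas]

# Usage example: one image becomes five darker training samples.
rng = np.random.default_rng(0)
img = rng.random((256, 256, 3)).astype(np.float32)
dark_variants = simulate_brightness_levels(img)
print([f"{v.mean():.3f}" for v in dark_variants])  # mean intensity drops per level
```

In practice such variants would be paired with the original well-exposed image to form input/target pairs for an enhancement network, which is the role the augmented dataset plays for the attention-guided U-Net described in the abstract.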
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:

| File | Description | Size | Format |
|---|---|---|---|
| FYP_Final_Report_Amended.pdf (Restricted Access) | | 7.95 MB | Adobe PDF |
Page view(s): 318 (updated on May 7, 2025)
Download(s): 19 (updated on May 7, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.