Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/144579
Title: Digital makeup using machine learning algorithms
Authors: Malani, Surabhi
Keywords: Engineering::Computer science and engineering::Software
Issue Date: 2020
Publisher: Nanyang Technological University
Project: SCSE19-0593

Abstract: Self-photographs, also known as selfies, have become indispensable in social media and the glamour industry. One's face can be further enhanced with modern photo-editing software such as Adobe Photoshop, whose makeup tools can digitally beautify the face with a click. Beauty industries have begun to embrace virtual makeup to support their customers' online shopping experience. This project evaluates the proof of concept behind virtual makeup: it investigates how digital makeup can be implemented and analyses how the resulting program fares across different use cases. Curious individuals can experiment with how their appearance would change according to the latest trends through a simple automated algorithm. The author implemented a state-of-the-art algorithm in Python for semantic segmentation of portrait images using fully convolutional networks (FCN) and other open-source libraries. This was followed by an example-based skin and hair colour transfer using N-dimensional Probability Density Function (PDF) statistical transfer. Ethnically diverse datasets were built from photographs offered by enthusiastic photographers on the Internet. Colour transfer was performed on a part-to-part basis between semantically similar features, and the results were merged back onto the original image for a completed look. The author delivered an application that performs a full face-to-face makeup transfer through a series of part-to-part colour transfers, one for each facial feature. The end results accurately capture the essence of the reference image.
The application obtains a reasonable segmentation of the input and reference images and performs a colour transfer that yields visually pleasing results. The program is resource-efficient, taking a total of 284 seconds to execute. The colour transfer algorithm is not optimal when applied to human faces, because the human eye easily perceives distortion; care should therefore be taken to ensure that the input images have relatively similar histogram distributions. Further research could broaden the algorithmic scope or adopt more sophisticated technology that applies makeup with content awareness.

URI: https://hdl.handle.net/10356/144579
Schools: School of Computer Science and Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
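The abstract describes an example-based colour transfer between semantically matching regions (skin to skin, hair to hair). As a minimal illustrative sketch only, and not the report's actual method (which uses an N-dimensional PDF statistical transfer rather than independent channels), the following Python applies 1-D marginal histogram matching per channel within a region mask; the function names `match_channel` and `part_to_part_transfer` are hypothetical.

```python
import numpy as np

def match_channel(src, ref):
    """Map the values of `src` so their empirical distribution
    matches that of `ref` (1-D marginal histogram matching)."""
    src_flat = src.ravel()
    order = np.argsort(src_flat)          # rank of each source value
    ref_sorted = np.sort(ref.ravel())
    # Sample the reference quantile function at the source's rank positions.
    ranks = np.linspace(0.0, 1.0, src_flat.size)
    ref_q = np.interp(ranks, np.linspace(0.0, 1.0, ref_sorted.size), ref_sorted)
    out = np.empty_like(src_flat, dtype=float)
    out[order] = ref_q                    # i-th smallest source -> i-th ref quantile
    return out.reshape(src.shape)

def part_to_part_transfer(img, ref_img, img_mask, ref_mask):
    """Transfer the reference's colour distribution onto the masked
    region of `img`, channel by channel; pixels outside the mask
    are left untouched."""
    out = img.astype(float)
    for c in range(img.shape[-1]):
        out[..., c][img_mask] = match_channel(
            img[..., c][img_mask], ref_img[..., c][ref_mask])
    return out
```

A full face-to-face transfer, as described in the abstract, would repeat `part_to_part_transfer` once per facial-feature mask produced by the segmentation stage.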
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:

| File | Description | Size | Format |
|---|---|---|---|
| Surabhi_Malani_Final_Report.pdf (Restricted Access) | FYP Report - Surabhi Malani | 135.49 MB | Adobe PDF |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.