Please use this identifier to cite or link to this item:
Title: Automated image generation
Authors: Leong, Alex Kah Wai
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Leong, A. K. W. (2022). Automated image generation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/156588
Abstract: In the movie and animation industry, concept art of character designs and scenery plays a crucial part in a production's success. Today, concept art is usually created only by professional concept artists, who use highly specialised graphic design software so that movie and animation directors can plan and coordinate ideas for specific scenes. Although concept artists are crucial, they are also costly to hire; small movie and animation studios and companies on tight budgets therefore struggle to generate ideas for a scene because they cannot afford additional concept artists. An alternative approach is therefore needed. Image generation using Generative Adversarial Networks (GANs) has become an immensely popular topic in the field of Computer Vision in recent years. Among emerging state-of-the-art GAN architectures, Pix2Pix, a type of Conditional GAN, has demonstrated the ability to generate detailed images of buildings, bags and street maps given edges, image labels or aerial maps as input.
In this project, we explore Pix2Pix's image-to-image translation capabilities to generate facial images of Japanese manga / anime characters from a user sketch, and images of Asian faces from an artist sketch (and vice versa), as alternative ways to produce character design concept art that can serve as inspirational and motivating ideas for the movie and animation directors of a small company. We also experiment with Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) to sharpen the output images of the Pix2Pix network, and we compare a traditional edge detection method (Canny edge detection) against a state-of-the-art deep method, Holistically-Nested Edge Detection (HED), for generating the edge maps of an image that serve as paired training inputs for the Pix2Pix network. Based on the results of our experiments, image-to-image translation using GANs may become a viable alternative to concept artists for small movie and animation studios in the near future, since these networks can be trained to generate specific types of images given sufficient training data of the right quality.
URI: https://hdl.handle.net/10356/156588
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
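To illustrate the edge-based pairing idea mentioned in the abstract, the sketch below computes a gradient-magnitude edge map (the first stage of the Canny pipeline, without non-maximum suppression or hysteresis) and pairs it with the source image, as one might when building Pix2Pix training pairs. This is a simplified stand-in written with NumPy only; the function name and thresholds are illustrative assumptions, not taken from the report.

```python
import numpy as np

def sobel_edges(gray, thresh=0.25):
    """Approximate edge map via Sobel gradient magnitude
    (the first stage of Canny; no NMS or hysteresis)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):          # skip the 1-pixel border
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-8            # normalise to [0, 1]
    return (mag > thresh).astype(np.uint8)

# Paired training sample: (edge map, original image)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                  # synthetic white square
pair = (sobel_edges(img), img)
```

In a real pipeline the edge map and photo would be concatenated side by side (or fed as input/target) for Pix2Pix training; the report's HED variant replaces this hand-crafted detector with a learned one.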
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Updated on May 20, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.