Title: Computational design tools for cartooning
Authors: Pradeep Kumar Jayaraman
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Computer graphics
Issue Date: 2017
Source: Pradeep Kumar Jayaraman. (2017). Computational design tools for cartooning. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: Cartoons have a long and rich history in the entertainment industry. While traditional methods for creating 2D cartoons are prohibitively tedious, the use of computers has greatly simplified and improved the efficiency of cartoon production. Still, the majority of established cartooning software uses the computer merely as a canvas. Recently, there has been increased research interest in developing computer-assisted methods that simplify the tedious manual work performed by artists, thereby allowing them to focus on creative endeavours. Such works intelligently analyze artwork with computational models, and have been applied to problems such as line drawing vectorization, inbetweening, coloring, and animation. In this research, we focus on three areas that have received little attention: drawing and digitizing freeform artwork from the paper medium, automatically generating shading on 2D line drawings, and semantically manipulating or animating existing cartoon images. These areas are unified under the theme of efficient computational cartooning techniques. We also present an application of the developed methods to the interactive 3D reconstruction of organic objects from natural images.
We first explore drawing and digitizing 2D line drawings from the paper medium. We present a novel method that directly recognizes and vectorizes strokes drawn on paper with everyday pens and pencils, in front of a commodity webcam. Compared with offline methods, where the paper is scanned and the strokes are then vectorized, our online method analyzes the entire drawing process and captures both the spatial and temporal information of the strokes, similar to a digital tablet. This allows us to reconstruct strokes as drawn by the user, without the ambiguities caused by stroke junctions, intersections, and the like. Our method may facilitate the development of various multimedia applications such as line drawing vectorization, video scribing, and pen input interfaces.
We discuss potential future directions for scaling up this work for use in demanding applications.
Second, we consider the problem of generating shading on detailed 2D line drawings with wrinkles. Shading is a tedious step in comic and cartoon production, given the volume of content that artists must prepare regularly on tight schedules. While shading can be automated when 3D geometry is available, 2D artists are unlikely to model geometry for every single drawing, which is itself a very tedious task. In this work, we aim to automate shading generation by analyzing the shapes, interactions, and spatial arrangement of wrinkle strokes in a clean line drawing. This lets artists focus on design rather than tedious manual work, and experiment with different shading under different lighting conditions. To achieve this, we propose a novel technique with three key technical contributions. First, we model five perceptual cues, drawing on relevant psychological principles, to automatically estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization that balances the different interpretations suggested by the perceptual cues and minimizes their discrepancy. Third, we develop a wrinkle-aware inflation method that generates a height-field surface capable of supporting shading. We demonstrate various shading styles, including 3D-like soft shading and 2D manga-style shading, on a number of line drawings.
Third, we observe that a huge number of 2D cartoon images are digitally available today. There is therefore significant artistic value in re-using this content and enabling end-users to manipulate or animate these images to create intriguing results with minimal effort. To this end, we address the problem of modeling the hair in a given cartoon image with consistent layering and occlusion, so that we can produce various visual effects from just a single image.
We propose a novel 2.5D modeling approach to this problem. Given an input image, we segment the hair of the cartoon character into regions of hair strands. We then apply our novel layering metric to automatically estimate the depth ordering among the hair strands, employ our hair completion method to fill the occluded regions, and create a 2.5D model of the cartoon hair. With this model, we can produce various visual effects, such as windblown hair animations and local hair editing.
Finally, we present an application of the developed methods to the problem of interactively reconstructing high-relief 3D geometry from a single photo. We begin by constructing a 2.5D model, segmenting the image into regions, followed by layering and completion steps that handle three common cases of occlusion. Next, users interactively mark up slope and curvature cues on the image to guide our constrained optimization model, which inflates and lifts the image regions into 3D. Finally, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry that plausibly supports large viewing angles, and handle complex organic objects with varying shape profiles.
DOI: 10.32657/10356/69653
Schools: School of Computer Science and Engineering 
Research Centres: Game Lab 
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: thesis_g1103344k.pdf
Size: 23.89 MB
Format: Adobe PDF

Page view(s): 50
Download(s): 20
Updated on Jul 18, 2024

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.