Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/173109
Title: Hierarchical vectorization for facial images
Authors: Fu, Qian; Liu, Linlin; Hou, Fei; He, Ying
Keywords: Engineering::Civil engineering
Issue Date: 2023
Source: Fu, Q., Liu, L., Hou, F. & He, Y. (2023). Hierarchical vectorization for facial images. Computational Visual Media, 10(1), 97-118. https://dx.doi.org/10.1007/s41095-022-0314-4
Project: RG20/20
Journal: Computational Visual Media
Abstract: The explosive growth of social media means portrait editing and retouching are in high demand. While portraits are commonly captured and stored as raster images, editing raster images is non-trivial and requires the user to be highly skilled. Aiming at developing intuitive and easy-to-use portrait editing tools, we propose a novel vectorization method that can automatically convert raster images into a 3-tier hierarchical representation. The base layer consists of a set of sparse diffusion curves (DCs), which characterize salient geometric features and low-frequency colors and provide a means for semantic color transfer and facial expression editing. The middle layer encodes specular highlights and shadows as large, editable Poisson regions (PRs) and allows the user to directly adjust illumination by tuning the strength and changing the shapes of PRs. The top layer contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation. We train a deep generative model that can produce high-frequency residuals automatically. Thanks to the inherent meaning in vector primitives, editing portraits becomes easy and intuitive. In particular, our method supports color transfer, facial expression editing, highlight and shadow editing, and automatic retouching. To quantitatively evaluate the results, we extend the commonly used FLIP metric (which measures color and feature differences between two images) to consider illumination. The new metric, illumination-sensitive FLIP, effectively captures salient changes in color transfer results and is more consistent with human perception than FLIP and other quality measures for portrait images. We evaluate our method on the FFHQR dataset and show it to be effective for common portrait editing tasks such as retouching, light editing, color transfer, and expression editing.
URI: https://hdl.handle.net/10356/173109
ISSN: 2096-0433
DOI: 10.1007/s41095-022-0314-4
Schools: School of Computer Science and Engineering
Rights: © The Author(s) 2023. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Fulltext Permission: open
Fulltext Availability: With Fulltext
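The 3-tier representation described in the abstract (sparse diffusion curves at the base, large Poisson regions in the middle, pixel-sized Poisson regions on top) maps naturally onto a small set of data types. The Python sketch below is illustrative only: every class and field name is an assumption made for exposition, not the authors' code; the paper's fulltext defines the actual primitives.

```python
# A minimal sketch (not the authors' code) of the 3-tier hierarchical
# representation described in the abstract. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]
Color = Tuple[float, float, float]

@dataclass
class DiffusionCurve:
    """Base layer: a sparse curve carrying a color on each side,
    diffused over the image to give low-frequency shading."""
    control_points: List[Point]   # curve geometry
    left_colors: List[Color]      # colors sampled along one side
    right_colors: List[Color]     # colors sampled along the other side

@dataclass
class PoissonRegion:
    """Middle layer: a large editable region reconstructed by solving a
    Poisson equation; encodes a specular highlight or a shadow."""
    boundary: List[Point]         # closed region outline
    strength: float = 1.0         # user-tunable illumination gain

@dataclass
class PixelPoissonRegion:
    """Top layer: a pixel-sized Poisson region encoding a high-frequency
    residual or a fine detail such as a pimple."""
    position: Point
    laplacian: Color              # per-channel Laplacian value

@dataclass
class HierarchicalPortrait:
    """The full vectorized portrait, one list per tier."""
    base: List[DiffusionCurve] = field(default_factory=list)
    middle: List[PoissonRegion] = field(default_factory=list)
    top: List[PixelPoissonRegion] = field(default_factory=list)
```

Keeping the tiers separate is what makes the edits in the abstract local: color transfer touches only `base`, lighting edits only `middle`, and retouching only `top`.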
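Similarly, the illumination-sensitive FLIP metric can be pictured as a standard per-pixel difference map augmented with a low-frequency luminance term, so that global lighting changes register in the score. The numpy sketch below is a hedged approximation of that idea; the weight `w_illum`, the blur width `sigma`, the stand-in `color_feature_error`, and the mean pooling are all assumptions rather than the paper's formulation (a real evaluation would use NVIDIA's FLIP implementation for the base map).

```python
# Hedged sketch of an illumination-sensitive image difference in the
# spirit of the paper's "illumination-sensitive FLIP". Weighting, blur
# size, and pooling are illustrative assumptions, not the paper's metric.
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance(img: np.ndarray) -> np.ndarray:
    """Rec. 709 luma from a float RGB image in [0, 1]."""
    return 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]

def color_feature_error(ref: np.ndarray, test: np.ndarray) -> np.ndarray:
    """Stand-in for the per-pixel FLIP error map (color and feature
    differences); a real pipeline would call NVIDIA's FLIP here."""
    return np.abs(ref - test).mean(axis=-1)

def illumination_sensitive_error(ref, test, sigma=8.0, w_illum=0.5):
    """Blend a FLIP-like map with a low-pass luminance difference so
    that smooth, global lighting edits affect the pooled score."""
    base = color_feature_error(ref, test)
    # Low-pass the luminance difference: illumination changes are smooth.
    illum = np.abs(gaussian_filter(luminance(ref), sigma)
                   - gaussian_filter(luminance(test), sigma))
    per_pixel = (1 - w_illum) * base + w_illum * illum
    return float(per_pixel.mean())  # pooled score; lower is better
```

Under this sketch, a globally brightened `test` image scores worse through `illumination_sensitive_error` than through the base map alone, which matches the abstract's motivation for extending FLIP.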
Appears in Collections: SCSE Journal Articles
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| s41095-022-0314-4.pdf | | 20.03 MB | Adobe PDF |
SCOPUSTM Citations: 2 (updated on May 2, 2025)
Page view(s): 138 (updated on May 6, 2025)
Download(s): 45 (updated on May 6, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.