Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/172575
Title: VToonify: controllable high-resolution portrait video style transfer
Authors: Yang, Shuai; Jiang, Liming; Liu, Ziwei; Loy, Chen Change
Keywords: Computer and Information Science
Issue Date: 2022
Source: Yang, S., Jiang, L., Liu, Z. & Loy, C. C. (2022). VToonify: controllable high-resolution portrait video style transfer. ACM Transactions on Graphics, 41(6), 203:1-203:15. https://dx.doi.org/10.1145/3550454.3555437
Journal: ACM Transactions on Graphics
Abstract: Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as the fixed frame size, the requirement of face alignment, missing non-facial details, and temporal inconsistency. In this work, we investigate challenging controllable high-resolution portrait video style transfer by introducing a novel VToonify framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder to better preserve the frame details. The resulting fully convolutional architecture accepts non-aligned faces in videos of variable size as input, contributing to complete face regions with natural motions in the output. Our framework is compatible with existing StyleGAN-based image toonification models, extending them to video toonification, and inherits the appealing features of these models for flexible style control over color and intensity. This work presents two instantiations of VToonify built upon Toonify and DualStyleGAN for collection-based and exemplar-based portrait video style transfer, respectively. Extensive experimental results demonstrate the effectiveness of our proposed VToonify framework over existing methods in generating high-quality and temporally coherent artistic portrait videos with flexible style controls. Code and pretrained models are available at our project page: www.mmlab-ntu.com/project/vtoonify/.
URI: https://hdl.handle.net/10356/172575
ISSN: 0730-0301
DOI: 10.1145/3550454.3555437
DOI (Related Dataset): 10.21979/N9/7PGAOA
Schools: School of Computer Science and Engineering
Research Centres: S-Lab
Rights: © 2022 Association for Computing Machinery. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1145/3550454.3555437.
Fulltext Permission: open
Fulltext Availability: With Fulltext
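The abstract describes a fully convolutional design in which an encoder extracts multi-scale content features from an arbitrary-size, non-aligned video frame and feeds them into the mid- and high-resolution layers of a StyleGAN-based generator. The following is a minimal, hypothetical PyTorch-style sketch of that encoder-to-generator data flow only; the class names (ContentEncoder, ToonifyGenerator), layer choices, and channel sizes are illustrative assumptions and not the released VToonify code, and it omits the actual StyleGAN backbone, style-code and intensity controls, and training.

```python
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Hypothetical fully convolutional encoder: returns feature maps at several scales."""

    def __init__(self, channels=(3, 32, 64, 128)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1),
                nn.LeakyReLU(0.2),
            )
            for cin, cout in zip(channels[:-1], channels[1:])
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # finest feature first, coarsest last
        return feats


class ToonifyGenerator(nn.Module):
    """Stand-in for the mid-/high-resolution layers of a StyleGAN-like generator:
    each upsampling block fuses the matching encoder feature so frame details
    are preserved in the stylized output."""

    def __init__(self, channels=(128, 64, 32), out_channels=3):
        super().__init__()
        outs = channels[1:] + (out_channels,)
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(2 * cin, cout, kernel_size=3, padding=1),  # fuse skip feature
                nn.LeakyReLU(0.2),
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            )
            for cin, cout in zip(channels, outs)
        ])

    def forward(self, feats):
        x = feats[-1]  # start from the coarsest content feature
        for block, skip in zip(self.blocks, reversed(feats)):
            x = block(torch.cat([x, skip], dim=1))
        return torch.tanh(x)


if __name__ == "__main__":
    encoder, generator = ContentEncoder(), ToonifyGenerator()
    # A non-square, unaligned frame works because everything is convolutional;
    # in this toy setup height and width only need to be divisible by 8.
    frame = torch.randn(1, 3, 256, 320)
    stylized = generator(encoder(frame))
    print(stylized.shape)  # torch.Size([1, 3, 256, 320])
```

Because no layer depends on a fixed spatial size or on face alignment, the same sketch processes frames of variable resolution, which is the property the abstract highlights for extending image toonification models to video.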
Appears in Collections: SCSE Journal Articles
Files in This Item:
| File | Description | Size | Format |
| --- | --- | --- | --- |
| VToonify_cameral_ready.pdf | Preprint | 4.74 MB | Adobe PDF |