Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/163319
Title: Text2Human: text-driven controllable human image generation
Authors: Jiang, Yuming
Yang, Shuai
Qiu, Haonan
Wu, Wayne
Loy, Chen Change
Liu, Ziwei
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Jiang, Y., Yang, S., Qiu, H., Wu, W., Loy, C. C. & Liu, Z. (2022). Text2Human: text-driven controllable human image generation. ACM Transactions On Graphics, 41(4), 162-. https://dx.doi.org/10.1145/3528223.3530104
Project: 2021-T1-001-088
IAF-ICP
Journal: ACM Transactions on Graphics
Abstract: Generating high-quality and diverse human images is an important yet challenging task in vision and graphics. Existing generative models often fall short under the high diversity of clothing shapes and textures; moreover, the generation process should ideally be intuitively controllable for lay users. In this work, we present Text2Human, a text-driven controllable framework for high-quality and diverse human generation. We synthesize full-body human images starting from a given human pose in two dedicated steps. 1) Given texts describing the shapes of clothes, the human pose is first translated to a human parsing map. 2) The final human image is then generated by providing the system with further attributes describing the textures of clothes. Specifically, to model the diversity of clothing textures, we build a hierarchical texture-aware codebook that stores multi-scale neural representations for each type of texture. The codebook at the coarse level captures the structural representations of textures, while the codebook at the fine level focuses on texture details. To synthesize desired images from the learned hierarchical codebook, a diffusion-based transformer sampler with mixture-of-experts is first employed to sample indices from the coarsest level of the codebook, which are then used to predict the indices of the codebooks at finer levels. The predicted indices at different levels are translated to human images by a decoder learned jointly with the hierarchical codebooks. The mixture-of-experts allows the generated image to be conditioned on fine-grained text input, while the prediction of finer-level indices refines the quality of clothing textures. Extensive quantitative and qualitative evaluations demonstrate that our proposed Text2Human framework can generate more diverse and realistic human images than state-of-the-art methods.
Our project page is https://yumingj.github.io/projects/Text2Human.html. Code and pretrained models are available at https://github.com/yumingj/Text2Human.
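The abstract's hierarchical texture-aware codebook follows a coarse-to-fine vector-quantization pattern: features are first matched against a coarse codebook that captures texture structure, and the residual is then quantized by a fine codebook that captures detail. The sketch below illustrates only this general pattern in NumPy; all sizes, names, and the residual-quantization scheme are illustrative assumptions, not the paper's implementation (which is available at the GitHub link above).

```python
import numpy as np

def quantize(features, codebook):
    """Return the index of the nearest codebook entry for each feature vector."""
    # features: (N, D), codebook: (K, D); pairwise squared distances via broadcasting
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)

# Hypothetical codebooks: a small coarse one for texture structure and a
# larger fine one for texture detail (sizes chosen only for illustration).
coarse_codebook = rng.normal(size=(16, 8))   # 16 entries, 8-dim
fine_codebook = rng.normal(size=(64, 8))     # 64 entries, 8-dim

features = rng.normal(size=(10, 8))          # stand-in for encoder features

# Coarse level: quantize the features directly.
coarse_idx = quantize(features, coarse_codebook)

# Fine level: quantize the residual left after the coarse reconstruction,
# so fine indices refine the structure chosen at the coarse level.
residual = features - coarse_codebook[coarse_idx]
fine_idx = quantize(residual, fine_codebook)

print(coarse_idx.shape, fine_idx.shape)  # (10,) (10,)
```

In the full system, a learned sampler predicts these index maps from text conditions and a decoder maps them back to pixels; here the indices are simply computed from random stand-in features.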
URI: https://hdl.handle.net/10356/163319
ISSN: 0730-0301
DOI: 10.1145/3528223.3530104
Rights: © 2022 Association for Computing Machinery. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles

SCOPUS™ Citations: 50 (updated on Feb 1, 2023)

Web of Science™ Citations: 50 (updated on Feb 3, 2023)

Page view(s): 15 (updated on Feb 6, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.