Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/180233
Full metadata record
DC Field | Value | Language
dc.contributor.author | Hu, Tao | en_US
dc.contributor.author | Hong, Fangzhou | en_US
dc.contributor.author | Liu, Ziwei | en_US
dc.date.accessioned | 2024-09-26T01:19:29Z | -
dc.date.available | 2024-09-26T01:19:29Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Hu, T., Hong, F. & Liu, Z. (2024). StructLDM: structured latent diffusion for 3D human generation. 2024 European Conference on Computer Vision (ECCV). https://dx.doi.org/10.48550/arXiv.2404.01241 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/180233 | -
dc.description.abstract | Recent 3D human generative models have achieved remarkable progress by learning 3D-aware GANs from 2D images. However, existing 3D human generative methods model humans in a compact 1D latent space, ignoring the articulated structure and semantics of human body topology. In this paper, we explore more expressive and higher-dimensional latent space for 3D human modeling and propose StructLDM, a diffusion-based unconditional 3D human generative model, which is learned from 2D images. StructLDM solves the challenges imposed due to the high-dimensional growth of latent space with three key designs: 1) A semantic structured latent space defined on the dense surface manifold of a statistical human body template. 2) A structured 3D-aware auto-decoder that factorizes the global latent space into several semantic body parts parameterized by a set of conditional structured local NeRFs anchored to the body template, which embeds the properties learned from the 2D training data and can be decoded to render view-consistent humans under different poses and clothing styles. 3) A structured latent diffusion model for generative human appearance sampling. Extensive experiments validate StructLDM's state-of-the-art generation performance and illustrate the expressiveness of the structured latent space over the well-adopted 1D latent space. Notably, StructLDM enables different levels of controllable 3D human generation and editing, including pose/view/shape control, and high-level tasks including compositional generations, part-aware clothing editing, 3D virtual try-on, etc. Our project page is at: https://taohuumd.github.io/projects/StructLDM/. | en_US
dc.description.sponsorship | Ministry of Education (MOE) | en_US
dc.language.iso | en | en_US
dc.relation | MOET2EP20221-0012 | en_US
dc.relation | NTU-NAP | en_US
dc.relation | IAF-ICP | en_US
dc.relation | RIE2020 | en_US
dc.relation.uri | 10.21979/N9/BXUEXV | en_US
dc.rights | © 2024 ECCV. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. | en_US
dc.subject | Computer and Information Science | en_US
dc.title | StructLDM: structured latent diffusion for 3D human generation | en_US
dc.type | Conference Paper | en
dc.contributor.school | College of Computing and Data Science | en_US
dc.contributor.conference | 2024 European Conference on Computer Vision (ECCV) | en_US
dc.contributor.research | S-Lab | en_US
dc.identifier.doi | 10.48550/arXiv.2404.01241 | -
dc.description.version | Submitted/Accepted version | en_US
dc.identifier.url | http://arxiv.org/abs/2404.01241v3 | -
dc.subject.keywords | 3D human generation | en_US
dc.subject.keywords | Latent diffusion model | en_US
dc.citation.conferencelocation | Milan, Italy | en_US
dc.description.acknowledgement | This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
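
To make the structured-latent idea in the abstract above more concrete, the following is a minimal, illustrative sketch in PyTorch: a higher-dimensional latent organized as one small patch per semantic body part, with a toy per-part conditional decoder standing in for the local NeRFs anchored to the body template. This is not the authors' implementation; all names (StructuredLatent, LocalPartDecoder, StructuredAutoDecoder, BODY_PARTS) and dimensions are hypothetical, and both the volume renderer that composites per-part outputs and the structured latent diffusion sampler (design 3) are omitted.

```python
# Illustrative sketch only (assumed names and sizes), not the StructLDM codebase.
import torch
import torch.nn as nn

BODY_PARTS = ["head", "torso", "left_arm", "right_arm", "left_leg", "right_leg"]
LATENT_CHANNELS = 16   # channels per latent cell (assumed value)
PART_RESOLUTION = 8    # each part gets an 8x8 latent patch (assumed value)


class StructuredLatent(nn.Module):
    """One learnable latent patch per semantic body part (auto-decoder style)."""
    def __init__(self):
        super().__init__()
        self.patches = nn.ParameterDict({
            part: nn.Parameter(
                torch.randn(LATENT_CHANNELS, PART_RESOLUTION, PART_RESOLUTION) * 0.01)
            for part in BODY_PARTS
        })


class LocalPartDecoder(nn.Module):
    """Toy stand-in for a conditional local NeRF anchored to one body part:
    maps (latent feature, 3D point in the part's local frame) to RGB + density."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(LATENT_CHANNELS + 3, 64), nn.ReLU(),
            nn.Linear(64, 4),  # outputs (r, g, b, sigma)
        )

    def forward(self, latent_patch, points_local):
        # Pool the patch into one feature vector shared by all query points (toy choice).
        feat = latent_patch.mean(dim=(1, 2)).expand(points_local.shape[0], -1)
        return self.mlp(torch.cat([feat, points_local], dim=-1))


class StructuredAutoDecoder(nn.Module):
    """Factorizes the global latent into per-part latents with per-part decoders."""
    def __init__(self):
        super().__init__()
        self.decoders = nn.ModuleDict({part: LocalPartDecoder() for part in BODY_PARTS})

    def forward(self, latent: StructuredLatent, points_by_part: dict):
        # Decode each body part independently; a renderer would composite the results.
        return {part: self.decoders[part](latent.patches[part], pts)
                for part, pts in points_by_part.items()}


if __name__ == "__main__":
    latent = StructuredLatent()
    decoder = StructuredAutoDecoder()
    queries = {part: torch.rand(32, 3) for part in BODY_PARTS}  # random 3D query points
    out = decoder(latent, queries)
    print({k: v.shape for k, v in out.items()})  # each part -> (32, 4) radiance samples
```

Per the abstract, the full method additionally trains a structured latent diffusion model over such per-part latents, which is what enables part-aware sampling and editing (compositional generation, clothing editing, virtual try-on).
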
Appears in Collections: CCDS Conference Papers
Files in This Item:
File | Description | Size | Format
StructLDM Structured Latent Diffusion for 3D Human Generation.pdf | Preprint | 37.77 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.