Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/180256
Title: LN3Diff: scalable latent neural fields diffusion for speedy 3D generation
Authors: Lan, Yushi
Hong, Fangzhou
Yang, Shuai
Zhou, Shangchen
Meng, Xuyi
Dai, Bo
Pan, Xingang
Loy, Chen Change
Keywords: Computer and Information Science
Issue Date: 2024
Source: Lan, Y., Hong, F., Yang, S., Zhou, S., Meng, X., Dai, B., Pan, X. & Loy, C. C. (2024). LN3Diff: scalable latent neural fields diffusion for speedy 3D generation. 2024 European Conference on Computer Vision (ECCV). https://dx.doi.org/10.48550/arXiv.2403.12019
Conference: 2024 European Conference on Computer Vision (ECCV)
Abstract: The field of neural rendering has witnessed significant progress with advancements in generative models and differentiable rendering techniques. Though 2D diffusion has achieved success, a unified 3D diffusion pipeline remains unsettled. This paper introduces a novel framework called LN3Diff to address this gap and enable fast, high-quality, and generic conditional 3D generation. Our approach harnesses a 3D-aware architecture and variational autoencoder (VAE) to encode the input image into a structured, compact, and 3D latent space. The latent is decoded by a transformer-based decoder into a high-capacity 3D neural field. Through training a diffusion model on this 3D-aware latent space, our method achieves state-of-the-art performance on ShapeNet for 3D generation and demonstrates superior performance in monocular 3D reconstruction and conditional 3D generation across various datasets. Moreover, it surpasses existing 3D diffusion methods in terms of inference speed, requiring no per-instance optimization. Our proposed LN3Diff presents a significant advancement in 3D generative modeling and holds promise for various applications in 3D vision and graphics tasks.
URI: https://hdl.handle.net/10356/180256
URL: http://arxiv.org/abs/2403.12019v2
DOI: 10.48550/arXiv.2403.12019
DOI (Related Dataset): 10.21979/N9/UZ06ZG
Schools: College of Computing and Data Science 
Research Centres: S-Lab
Rights: © 2024 ECCV. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: CCDS Conference Papers
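
Note on the approach: the abstract describes a three-stage design: a 3D-aware VAE encoder that compresses the input image into a structured, compact latent; a transformer-based decoder that expands that latent into a high-capacity 3D neural field (e.g. a triplane representation); and a diffusion model trained directly in the latent space. The Python/PyTorch sketch below only illustrates that flow; every module name, dimension, and the toy triplane decoder are assumptions made for illustration, not the authors' implementation (see the linked preprint and related dataset for the actual method).

import torch
import torch.nn as nn

# Illustrative stand-ins for the three stages described in the abstract.
# Shapes and architectures are assumptions, kept tiny so the script runs quickly.

class ImageEncoder(nn.Module):
    """VAE-style encoder: image -> compact latent (mean/logvar + reparameterisation)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_mean = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)

    def forward(self, img):
        h = self.backbone(img)
        mean, logvar = self.to_mean(h), self.to_logvar(h)
        z = mean + torch.randn_like(mean) * torch.exp(0.5 * logvar)
        return z, mean, logvar

class TriplaneDecoder(nn.Module):
    """Transformer decoder: latent -> toy triplane features standing in for the neural field."""
    def __init__(self, latent_dim=256, n_tokens=48, plane_res=32, plane_ch=4):
        super().__init__()
        self.to_tokens = nn.Linear(latent_dim, n_tokens * latent_dim)
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.to_planes = nn.Linear(n_tokens * latent_dim, 3 * plane_ch * plane_res * plane_res)
        self.plane_shape = (3, plane_ch, plane_res, plane_res)

    def forward(self, z):
        toks = self.to_tokens(z).view(z.shape[0], -1, z.shape[-1])
        toks = self.transformer(toks)
        return self.to_planes(toks.flatten(1)).view(z.shape[0], *self.plane_shape)

class LatentDenoiser(nn.Module):
    """Small MLP standing in for the diffusion network that predicts noise in latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + 1, 512), nn.SiLU(),
                                 nn.Linear(512, latent_dim))

    def forward(self, z_noisy, t):
        return self.net(torch.cat([z_noisy, t[:, None].float()], dim=-1))

if __name__ == "__main__":
    enc, dec, denoiser = ImageEncoder(), TriplaneDecoder(), LatentDenoiser()
    img = torch.randn(2, 3, 64, 64)                    # batch of input images
    z, _, _ = enc(img)                                 # stage 1: encode to compact latent
    t = torch.randint(0, 1000, (2,))
    pred_noise = denoiser(z + torch.randn_like(z), t)  # stage 2: denoising objective in latent space
    planes = dec(z)                                    # stage 3: decode latent to neural-field features
    print(pred_noise.shape, planes.shape)              # (2, 256) and (2, 3, 4, 32, 32)

At inference time, a latent would be sampled by iterating the denoiser from pure noise (conditioned on the input image or another condition, per the abstract's monocular reconstruction and conditional generation claims) and then decoded once by the transformer, which is why no per-instance optimization is required.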

Files in This Item:
File: _ECCV_2024_LN3Diff.pdf (Preprint, 10 MB, Adobe PDF)

Page view(s): 74 (updated on Dec 4, 2024)
Download(s): 12 (updated on Dec 4, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.