Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/180265
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wu, Tianxing | en_US
dc.contributor.author | Si, Chenyang | en_US
dc.contributor.author | Jiang, Yuming | en_US
dc.contributor.author | Huang, Ziqi | en_US
dc.contributor.author | Liu, Ziwei | en_US
dc.date.accessioned | 2024-09-26T01:02:22Z | -
dc.date.available | 2024-09-26T01:02:22Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Wu, T., Si, C., Jiang, Y., Huang, Z. & Liu, Z. (2024). FreeInit: bridging initialization gap in video diffusion models. 2024 European Conference on Computer Vision (ECCV). https://dx.doi.org/10.48550/arXiv.2312.07537 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/180265 | -
dc.description.abstract | Though diffusion-based video generation has witnessed rapid progress, the inference results of existing models still exhibit unsatisfactory temporal consistency and unnatural dynamics. In this paper, we delve deep into the noise initialization of video diffusion models and discover an implicit training-inference gap that accounts for the unsatisfactory inference quality. Our key findings are: 1) the spatial-temporal frequency distribution of the initial noise at inference is intrinsically different from that used for training, and 2) the denoising process is significantly influenced by the low-frequency components of the initial noise. Motivated by these observations, we propose a concise yet effective inference sampling strategy, FreeInit, which significantly improves the temporal consistency of videos generated by diffusion models. By iteratively refining the spatial-temporal low-frequency components of the initial latent during inference, FreeInit compensates for the initialization gap between training and inference, thus effectively improving the subject appearance and temporal consistency of generation results. Extensive experiments demonstrate that FreeInit consistently enhances the generation quality of various text-to-video diffusion models without additional training or fine-tuning. | en_US
dc.description.sponsorship | Ministry of Education (MOE) | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.relation | MOET2EP20221-0012 | en_US
dc.relation | RIE2020 | en_US
dc.relation.uri | doi:10.21979/N9/JMCW1W | en_US
dc.rights | © 2024 ECCV. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. | en_US
dc.subject | Computer and Information Science | en_US
dc.title | FreeInit: bridging initialization gap in video diffusion models | en_US
dc.type | Conference Paper | en
dc.contributor.school | College of Computing and Data Science | en_US
dc.contributor.conference | 2024 European Conference on Computer Vision (ECCV) | en_US
dc.contributor.research | S-Lab | en_US
dc.identifier.doi | 10.48550/arXiv.2312.07537 | -
dc.description.version | Submitted/Accepted version | en_US
dc.identifier.url | http://arxiv.org/abs/2312.07537v2 | -
dc.subject.keywords | Computer vision | en_US
dc.subject.keywords | Pattern recognition | en_US
dc.citation.conferencelocation | Milan, Italy | en_US
dc.description.acknowledgement | This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
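
The abstract above describes FreeInit's core operation: iteratively refining the spatial-temporal low-frequency components of the initial latent at inference time. Below is a minimal sketch of that kind of frequency-domain noise reinitialization, assuming a PyTorch video latent of shape (B, C, T, H, W). The function names, the Gaussian filter shape, and the cutoff parameter d0 are illustrative assumptions, not the authors' released code.

```python
# Sketch of FreeInit-style noise reinitialization (illustrative, not the
# authors' implementation). Assumes a video latent of shape (B, C, T, H, W).
import torch

def gaussian_low_pass_filter(shape, d0=0.25):
    """Build a 3D Gaussian low-pass mask over the (T, H, W) frequency axes,
    centered on the zero-frequency bin (i.e., for use after fftshift)."""
    T, H, W = shape
    t = torch.linspace(-1, 1, T).view(T, 1, 1)
    h = torch.linspace(-1, 1, H).view(1, H, 1)
    w = torch.linspace(-1, 1, W).view(1, 1, W)
    d2 = t**2 + h**2 + w**2  # squared normalized distance from the center
    return torch.exp(-d2 / (2 * d0**2))

def reinitialize_noise(noised_latent, d0=0.25):
    """Keep the low-frequency components of the re-noised latent and replace
    its high-frequency components with fresh Gaussian noise."""
    B, C, T, H, W = noised_latent.shape
    lpf = gaussian_low_pass_filter((T, H, W), d0).to(noised_latent.device)

    # Centered 3D FFT over the temporal and spatial axes.
    z_freq = torch.fft.fftshift(
        torch.fft.fftn(noised_latent, dim=(-3, -2, -1)), dim=(-3, -2, -1))
    eps = torch.randn_like(noised_latent)
    e_freq = torch.fft.fftshift(
        torch.fft.fftn(eps, dim=(-3, -2, -1)), dim=(-3, -2, -1))

    # Low frequencies from the latent, high frequencies from fresh noise.
    mixed = z_freq * lpf + e_freq * (1 - lpf)
    mixed = torch.fft.ifftshift(mixed, dim=(-3, -2, -1))
    return torch.fft.ifftn(mixed, dim=(-3, -2, -1)).real
```

In the iterative scheme the abstract outlines, each refinement pass would denoise the latent with the underlying video diffusion model, diffuse the result back to the terminal timestep, apply a reinitialization step like the one sketched here, and denoise again, with no additional training or fine-tuning of the model.
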
Appears in Collections: CCDS Conference Papers
Files in This Item:
File | Description | Size | Format
FreeInit.pdf | Preprint | 44.89 MB | Adobe PDF

Page view(s): 101 (updated on Jan 22, 2025)
Download(s): 20 (updated on Jan 22, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.