Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/180265
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wu, Tianxing | en_US |
dc.contributor.author | Si, Chenyang | en_US |
dc.contributor.author | Jiang, Yuming | en_US |
dc.contributor.author | Huang, Ziqi | en_US |
dc.contributor.author | Liu, Ziwei | en_US |
dc.date.accessioned | 2024-09-26T01:02:22Z | - |
dc.date.available | 2024-09-26T01:02:22Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Wu, T., Si, C., Jiang, Y., Huang, Z. & Liu, Z. (2024). FreeInit: bridging initialization gap in video diffusion models. 2024 European Conference on Computer Vision (ECCV). https://dx.doi.org/10.48550/arXiv.2312.07537 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/180265 | - |
dc.description.abstract | Though diffusion-based video generation has witnessed rapid progress, the inference results of existing models still exhibit unsatisfactory temporal consistency and unnatural dynamics. In this paper, we delve deep into the noise initialization of video diffusion models, and discover an implicit training-inference gap that accounts for the unsatisfactory inference quality. Our key findings are: 1) the spatial-temporal frequency distribution of the initial noise at inference is intrinsically different from that for training, and 2) the denoising process is significantly influenced by the low-frequency components of the initial noise. Motivated by these observations, we propose a concise yet effective inference sampling strategy, FreeInit, which significantly improves the temporal consistency of videos generated by diffusion models. By iteratively refining the spatial-temporal low-frequency components of the initial latent during inference, FreeInit is able to compensate for the initialization gap between training and inference, thus effectively improving the subject appearance and temporal consistency of generation results. Extensive experiments demonstrate that FreeInit consistently enhances the generation quality of various text-to-video diffusion models without additional training or fine-tuning. | en_US |
dc.description.sponsorship | Ministry of Education (MOE) | en_US |
dc.description.sponsorship | Nanyang Technological University | en_US |
dc.language.iso | en | en_US |
dc.relation | MOET2EP20221-0012 | en_US |
dc.relation | RIE2020 | en_US |
dc.relation.uri | doi:10.21979/N9/JMCW1W | en_US |
dc.rights | © 2024 ECCV. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. | en_US |
dc.subject | Computer and Information Science | en_US |
dc.title | FreeInit: bridging initialization gap in video diffusion models | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | College of Computing and Data Science | en_US |
dc.contributor.conference | 2024 European Conference on Computer Vision (ECCV) | en_US |
dc.contributor.research | S-Lab | en_US |
dc.identifier.doi | 10.48550/arXiv.2312.07537 | - |
dc.description.version | Submitted/Accepted version | en_US |
dc.identifier.url | http://arxiv.org/abs/2312.07537v2 | - |
dc.subject.keywords | Computer vision | en_US |
dc.subject.keywords | Pattern recognition | en_US |
dc.citation.conferencelocation | Milan, Italy | en_US |
dc.description.acknowledgement | This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). | en_US |
item.fulltext | With Fulltext | - |
item.grantfulltext | open | - |
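
The abstract above describes FreeInit's core step: at each inference iteration, keep the spatial-temporal low-frequency components of the previous latent and refill the high-frequency band with fresh Gaussian noise before sampling again. The Python sketch below is a minimal illustration of that frequency-domain mixing, not the authors' released implementation; the tensor layout, the Gaussian cutoff `d0`, and the `denoise` / `q_sample` routines in the commented loop are assumptions introduced here for illustration only.

```python
# Illustrative sketch of FreeInit-style noise reinitialization (not the authors' code).
# Assumption: latents are 5-D tensors shaped (batch, channels, frames, height, width);
# `denoise` and `q_sample` below are hypothetical stand-ins for a video diffusion
# model's reverse and forward diffusion processes.

import torch


def gaussian_low_pass_filter(shape, d0=0.25, device="cpu"):
    """Spatial-temporal Gaussian low-pass mask over the (frames, height, width) axes."""
    t, h, w = shape[-3:]
    # Normalized frequency coordinates in [-1, 1] along each axis (after fftshift).
    ft = torch.linspace(-1, 1, t, device=device).view(t, 1, 1)
    fh = torch.linspace(-1, 1, h, device=device).view(1, h, 1)
    fw = torch.linspace(-1, 1, w, device=device).view(1, 1, w)
    d2 = ft**2 + fh**2 + fw**2
    return torch.exp(-d2 / (2 * d0**2))  # ~1 near zero frequency, -> 0 at high frequency


def reinitialize_noise(prev_latent, d0=0.25):
    """Keep the low-frequency band of `prev_latent` and refill the high-frequency
    band with fresh Gaussian noise, mixing the two in the 3-D frequency domain."""
    fresh_noise = torch.randn_like(prev_latent)
    lpf = gaussian_low_pass_filter(prev_latent.shape, d0, prev_latent.device)

    prev_freq = torch.fft.fftshift(
        torch.fft.fftn(prev_latent, dim=(-3, -2, -1)), dim=(-3, -2, -1))
    noise_freq = torch.fft.fftshift(
        torch.fft.fftn(fresh_noise, dim=(-3, -2, -1)), dim=(-3, -2, -1))

    mixed_freq = prev_freq * lpf + noise_freq * (1 - lpf)
    mixed = torch.fft.ifftn(
        torch.fft.ifftshift(mixed_freq, dim=(-3, -2, -1)), dim=(-3, -2, -1)).real
    return mixed


# Hypothetical outer loop: each iteration re-samples from a better-initialized latent.
# latent = torch.randn(1, 4, 16, 64, 64)
# for _ in range(num_freeinit_iters):
#     clean_latent = denoise(latent)             # full reverse-diffusion sampling pass
#     noisy_latent = q_sample(clean_latent)      # diffuse back to the initial timestep
#     latent = reinitialize_noise(noisy_latent)  # refine low-frequency components
```

The design point the paper emphasizes is that only the low-frequency band of the initial noise is carried forward between iterations; the high-frequency band is always resampled, which is why the strategy requires no additional training or fine-tuning.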
Appears in Collections: CCDS Conference Papers
Files in This Item:
File | Description | Size | Format
---|---|---|---
FreeInit.pdf | Preprint | 44.89 MB | Adobe PDF
Page view(s): 101 (updated on Jan 22, 2025)
Download(s): 20 (updated on Jan 22, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.