Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/178453
Title: FAST-VQA: efficient end-to-end video quality assessment with fragment sampling
Authors: Wu, Haoning
Chen, Chaofeng
Hou, Jingwen
Liao, Liang
Wang, Annan
Sun, Wenxiu
Yan, Qiong
Lin, Weisi
Keywords: Computer and Information Science
Issue Date: 2022
Source: Wu, H., Chen, C., Hou, J., Liao, L., Wang, A., Sun, W., Yan, Q. & Lin, W. (2022). FAST-VQA: efficient end-to-end video quality assessment with fragment sampling. 17th European Conference on Computer Vision (ECCV 2022), LNCS 13666, 538-554. https://dx.doi.org/10.1007/978-3-031-20068-7_31
Conference: 17th European Conference on Computer Vision (ECCV 2022)
Abstract: Current deep video quality assessment (VQA) methods usually incur high computational costs when evaluating high-resolution videos. This cost hinders them from learning better video-quality-related representations via end-to-end training. Existing approaches typically rely on naive sampling, such as resizing and cropping, to reduce the computational cost. However, these operations corrupt quality-related information in videos and are thus suboptimal for learning good representations for VQA. There is therefore an urgent need for a quality-retaining sampling scheme for VQA. In this paper, we propose Grid Mini-patch Sampling (GMS), which accounts for local quality by sampling patches at their raw resolution and covers global quality with contextual relations via mini-patches sampled on uniform grids. These mini-patches are spliced and temporally aligned into what we call fragments. We further build the Fragment Attention Network (FANet), specially designed to accommodate fragments as inputs. Composed of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations. It improves state-of-the-art accuracy by around 10% while reducing FLOPs by 99.5% on 1080P high-resolution videos. The learned video-quality-related representations can also be transferred to smaller VQA datasets, boosting performance in those scenarios. Extensive experiments show that FAST-VQA performs well on inputs of various resolutions while retaining high efficiency. We publish our code at https://github.com/timothyhtimothy/FAST-VQA.
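As a rough illustration of the fragment sampling described in the abstract, the Python sketch below (a hypothetical grid_minipatch_sampling function; the grid size of 7 and mini-patch size of 32 are assumptions for illustration, not taken from the paper) splits each frame into a uniform grid, crops one raw-resolution mini-patch per cell at a frame-consistent random offset, and splices the crops into a temporally aligned fragment. The authors' actual implementation is available at the GitHub link above.

    import torch

    def grid_minipatch_sampling(video, grid=7, patch=32, seed=None):
        # Hypothetical sketch of Grid Mini-patch Sampling (GMS).
        # video: (T, C, H, W) tensor of decoded frames.
        # Returns a fragment of shape (T, C, grid*patch, grid*patch).
        t, c, h, w = video.shape
        gen = torch.Generator().manual_seed(seed) if seed is not None else None
        cell_h, cell_w = h // grid, w // grid
        rows = []
        for i in range(grid):
            cols = []
            for j in range(grid):
                # One random offset per grid cell, shared by all frames,
                # so the mini-patch stays temporally aligned across the clip.
                y0 = i * cell_h + int(torch.randint(0, max(cell_h - patch, 1), (1,), generator=gen))
                x0 = j * cell_w + int(torch.randint(0, max(cell_w - patch, 1), (1,), generator=gen))
                cols.append(video[:, :, y0:y0 + patch, x0:x0 + patch])
            rows.append(torch.cat(cols, dim=-1))  # splice mini-patches along width
        return torch.cat(rows, dim=-2)            # splice rows along height

    # Example: a 4-frame 1080p clip collapses to a 224x224 fragment,
    # which is why the computational cost drops so sharply versus
    # running a network on the full 1920x1080 frames.
    fragment = grid_minipatch_sampling(torch.rand(4, 3, 1080, 1920))
    assert fragment.shape == (4, 3, 224, 224)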
URI: https://hdl.handle.net/10356/178453
URL: https://link.springer.com/chapter/10.1007/978-3-031-20068-7_31
ISBN: 9783031200670
DOI: 10.1007/978-3-031-20068-7_31
Schools: College of Computing and Data Science 
School of Computer Science and Engineering 
Research Centres: S-Lab
Rights: © 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: CCDS Conference Papers
