Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/178454
Title: DisCoVQA: temporal distortion-content transformers for video quality assessment
Authors: Wu, Haoning
Chen, Chaofeng
Liao, Liang
Hou, Jingwen
Sun, Wenxiu
Yan, Qiong
Lin, Weisi
Keywords: Computer and Information Science
Issue Date: 2023
Source: Wu, H., Chen, C., Liao, L., Hou, J., Sun, W., Yan, Q. & Lin, W. (2023). DisCoVQA: temporal distortion-content transformers for video quality assessment. IEEE Transactions on Circuits and Systems for Video Technology, 33(9), 4840-4854. https://dx.doi.org/10.1109/TCSVT.2023.3249741
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Abstract: Compared with their spatial counterparts, temporal relationships between frames and their influence on video quality assessment (VQA) remain relatively under-studied in existing works. These relationships lead to two important effects on video quality. First, meaningless temporal variations (such as shaking, flicker, and unsmooth scene transitions) cause temporal distortions that degrade video quality. Second, the human visual system attends differently to frames with different contents, so frames contribute unequally to the overall video quality. Building on the strong time-series modeling ability of transformers, we propose a novel and effective transformer-based VQA method to tackle these two issues. To better differentiate temporal variations and thereby capture temporal distortions, we design the Spatial-Temporal Distortion Extraction (STDE) module, which extracts multi-level spatial-temporal features with a video Swin Transformer tiny (Swin-T) backbone and applies a temporal difference layer to further capture these distortions. To model temporal quality attention, we propose the encoder-decoder-like Temporal Content Transformer (TCT). We also introduce temporal sampling on features to reduce the input length for the TCT, improving the learning effectiveness and efficiency of this module. Combining the STDE and the TCT, the proposed Temporal Distortion-Content Transformers for Video Quality Assessment (DisCoVQA) reaches state-of-the-art performance on several VQA benchmarks without any extra pre-training datasets, and up to 10% better generalization ability than existing methods. We also conduct extensive ablation experiments to demonstrate the effectiveness of each part of the proposed model, and provide visualizations showing that the proposed modules achieve our intention of modeling these temporal issues. Our code is published at https://github.com/QualityAssessment/DisCoVQA.
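The temporal difference layer and the temporal feature sampling mentioned in the abstract can be sketched as follows. This is a minimal illustration of the frame-differencing and uniform-subsampling ideas only, not the authors' exact implementation (see the linked repository for that); the function names, feature shapes, and the choice of NumPy are assumptions made here for illustration.

```python
import numpy as np

def temporal_difference(features: np.ndarray) -> np.ndarray:
    """Frame-wise feature differences along the time axis.

    features: array of shape (T, C), one C-dim feature vector per frame
    (e.g. pooled multi-level features from a Swin-T backbone).
    Returns an array of shape (T, C) whose row t is features[t] - features[t-1];
    row 0 is zero, since the first frame has no predecessor. Large differences
    flag abrupt temporal variations (shaking, flicker, scene cuts).
    """
    diffs = np.zeros_like(features)
    diffs[1:] = features[1:] - features[:-1]
    return diffs

def temporal_sample(features: np.ndarray, num_samples: int) -> np.ndarray:
    """Uniformly subsample the feature sequence along time to shorten the
    input for a downstream transformer (here, the TCT-like module)."""
    T = features.shape[0]
    idx = np.linspace(0, T - 1, num=num_samples).round().astype(int)
    return features[idx]

# Toy usage: 16 frames of 8-dim features.
feats = np.random.randn(16, 8)
diffs = temporal_difference(feats)      # shape (16, 8)
short = temporal_sample(diffs, 4)       # shape (4, 8)
```

Uniform index sampling keeps the temporal order of features intact while bounding the sequence length, which is what makes the attention stage over frames cheaper to train.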
URI: https://hdl.handle.net/10356/178454
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2023.3249741
Schools: College of Computing and Data Science
School of Computer Science and Engineering
Research Centres: S-Lab
Rights: © 2023 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:CCDS Journal Articles


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.