Title: Cross-modal graph with meta concepts for video captioning
Authors: Wang, Hao
Lin, Guosheng
Hoi, Steven C. H.
Miao, Chunyan
Keywords: Computer Science - Computer Vision and Pattern Recognition
Issue Date: 2022
Source: Wang, H., Lin, G., Hoi, S. C. H. & Miao, C. (2022). Cross-modal graph with meta concepts for video captioning. IEEE Transactions On Image Processing, 31, 5150-5162.
Project: AISG-GC-2019-003
Journal: IEEE Transactions on Image Processing 
Abstract: Video captioning aims to interpret complex visual content as text descriptions, which requires the model to fully understand video scenes, including objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to generate object proposals and use attention mechanisms to model the relations between objects. However, they often miss semantic concepts that are undefined in the pretrained model, and they fail to identify the exact predicate relationships between objects. In this paper, we investigate the open research task of generating text descriptions for given videos and propose a Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the corresponding visual regions for text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We further build meta concept graphs dynamically from the learned cross-modal meta concepts. We also construct holistic video-level and local frame-level video graphs with the predicted predicates to model video sequence structures. We validate the efficacy of the proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.
ISSN: 1057-7149
DOI: 10.1109/TIP.2022.3192709
Rights: © 2022 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.