Title: More is better : precise and detailed image captioning using online positive recall and missing concepts mining
Authors: Zhang, Mingxing
Yang, Yang
Zhang, Hanwang
Ji, Yanli
Shen, Heng Tao
Chua, Tat-Seng
Keywords: Engineering::Computer science and engineering
Issue Date: 2018
Source: Zhang, M., Yang, Y., Zhang, H., Ji, Y., Shen, H. T., & Chua, T.-S. (2019). More is better : precise and detailed image captioning using online positive recall and missing concepts mining. IEEE Transactions on Image Processing, 28(1), 32-44. doi:10.1109/TIP.2018.2855415
Journal: IEEE Transactions on Image Processing
Abstract: Recently, great progress in automatic image captioning has been achieved by using semantic concepts detected from the image. However, we argue that the existing concepts-to-caption framework, in which the concept detector is trained on image-caption pairs to minimize the vocabulary discrepancy, suffers from insufficient concepts. The reasons are two-fold: 1) the extreme imbalance between the numbers of positive and negative samples for each concept and 2) incomplete labeling in the training captions caused by biased annotation and the use of synonyms. In this paper, we propose a method, termed online positive recall and missing concepts mining, to overcome these problems. Our method adaptively re-weights the loss of different samples according to their predictions for online positive recall, and uses a two-stage optimization strategy for missing concepts mining. In this way, more semantic concepts can be detected and higher accuracy can be expected. At the caption generation stage, we explore an element-wise selection process to automatically choose the most suitable concepts at each time step. Thus, our method can generate more precise and detailed captions to describe the image. We conduct extensive experiments on the MSCOCO image captioning data set and the MSCOCO online test server, which show that our method achieves superior image captioning performance compared with other competitive methods.
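The abstract describes adaptively re-weighting the loss of each sample according to its current prediction so that rare positive concepts are recalled. The paper's exact weighting scheme is not given in this record; as a hedged illustration only, a prediction-dependent (focal-style) re-weighting of a multi-label binary cross-entropy loss could be sketched as follows, where the function name, `gamma` parameter, and weighting form are assumptions, not the authors' method:

```python
import numpy as np

def reweighted_bce_loss(probs, labels, gamma=2.0):
    """Illustrative sketch: binary cross-entropy whose per-sample weights
    depend on the current predictions. Poorly-predicted positives get
    larger weights, counteracting the positive/negative imbalance the
    abstract describes.

    probs  : (N, C) predicted concept probabilities in (0, 1)
    labels : (N, C) binary ground-truth concept labels
    """
    eps = 1e-7
    p = np.clip(probs, eps, 1 - eps)
    # Prediction-dependent weight: (1 - p)^gamma for positive labels,
    # p^gamma for negative labels, so confident, correct samples
    # contribute little and hard samples dominate the loss.
    w = np.where(labels == 1, (1 - p) ** gamma, p ** gamma)
    bce = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    return float(np.mean(w * bce))
```

With this weighting, a batch of hard samples (positives predicted near 0) yields a much larger loss than a batch of easy samples, which is the behavior the re-weighting is meant to produce.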
URI: https://hdl.handle.net/10356/142314
ISSN: 1057-7149
DOI: 10.1109/TIP.2018.2855415
Rights: © 2018 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles

SCOPUS™ Citations: 51 (updated on Nov 28, 2022)
Web of Science™ Citations: 45 (updated on Dec 3, 2022)
Page view(s): 120 (updated on Dec 4, 2022)
