Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/142314
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Mingxing | en_US |
dc.contributor.author | Yang, Yang | en_US |
dc.contributor.author | Zhang, Hanwang | en_US |
dc.contributor.author | Ji, Yanli | en_US |
dc.contributor.author | Shen, Heng Tao | en_US |
dc.contributor.author | Chua, Tat-Seng | en_US |
dc.date.accessioned | 2020-06-19T02:54:56Z | - |
dc.date.available | 2020-06-19T02:54:56Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Zhang, M., Yang, Y., Zhang, H., Ji, Y., Shen, H. T., & Chua, T.-S. (2019). More is better : precise and detailed image captioning using online positive recall and missing concepts mining. IEEE Transactions on Image Processing, 28(1), 32-44. doi:10.1109/TIP.2018.2855415 | en_US |
dc.identifier.issn | 1057-7149 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/142314 | - |
dc.description.abstract | Recently, great progress in automatic image captioning has been achieved by using semantic concepts detected from the image. However, we argue that the existing concepts-to-caption framework, in which the concept detector is trained on image-caption pairs to minimize the vocabulary discrepancy, suffers from insufficient concepts. The reasons are two-fold: 1) the extreme imbalance between the numbers of positive and negative samples of a concept and 2) the incomplete labeling in training captions caused by biased annotation and the usage of synonyms. In this paper, we propose a method, termed online positive recall and missing concepts mining, to overcome these problems. Our method adaptively re-weights the loss of different samples according to their predictions for online positive recall, and uses a two-stage optimization strategy for missing concepts mining. In this way, more semantic concepts can be detected and higher accuracy can be expected. In the caption generation stage, we explore an element-wise selection process to automatically choose the most suitable concepts at each time step. Thus, our method can generate more precise and detailed captions to describe the image. We conduct extensive experiments on the MSCOCO image captioning data set and the MSCOCO online test server, which show that our method achieves superior image captioning performance compared with other competitive methods. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | IEEE Transactions on Image Processing | en_US |
dc.rights | © 2018 IEEE. All rights reserved. | en_US |
dc.subject | Engineering::Computer science and engineering | en_US |
dc.title | More is better : precise and detailed image captioning using online positive recall and missing concepts mining | en_US |
dc.type | Journal Article | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.identifier.doi | 10.1109/TIP.2018.2855415 | - |
dc.identifier.pmid | 30010565 | - |
dc.identifier.scopus | 2-s2.0-85049964023 | - |
dc.identifier.issue | 1 | en_US |
dc.identifier.volume | 28 | en_US |
dc.identifier.spage | 32 | en_US |
dc.identifier.epage | 44 | en_US |
dc.subject.keywords | Precise and Detailed Image Captioning | en_US |
dc.subject.keywords | Semantic Concepts | en_US |
item.fulltext | No Fulltext | - |
item.grantfulltext | none | - |
Appears in Collections: | SCSE Journal Articles |
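The abstract's "online positive recall" idea, adaptively re-weighting each sample's loss according to the model's current prediction so that poorly recalled positives are emphasized under class imbalance, can be sketched as below. This is a loose, focal-loss-style illustration, not the paper's exact formulation; the function name and the `gamma` hyper-parameter are hypothetical.

```python
import math

def reweighted_bce(preds, labels, gamma=2.0):
    """Binary cross-entropy where each sample's weight grows with its
    prediction error, so hard (e.g. under-recalled positive) samples
    dominate the loss. Illustrative stand-in for the paper's scheme."""
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, 1e-7), 1 - 1e-7)   # clamp for numerical safety
        pt = p if y == 1 else 1 - p       # probability of the true class
        weight = (1 - pt) ** gamma        # up-weight poorly predicted samples
        total += -weight * math.log(pt)
    return total / len(preds)

# A confidently detected concept contributes far less loss
# than a missed positive, which gets pulled back ("recalled"):
easy = reweighted_bce([0.9], [1])
hard = reweighted_bce([0.1], [1])
```

Here a missed positive (`hard`) incurs a much larger weighted loss than a well-predicted one (`easy`), which is the qualitative behavior the re-weighting is meant to achieve.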
SCOPUS Citations: 63 (updated on Mar 23, 2024)
Web of Science Citations: 53 (updated on Oct 26, 2023)
Page view(s): 187 (updated on Mar 28, 2024)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.