Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/164145
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Hao | en_US
dc.contributor.author | Lin, Guosheng | en_US
dc.contributor.author | Hoi, Steven C. H. | en_US
dc.contributor.author | Miao, Chunyan | en_US
dc.date.accessioned | 2023-01-06T06:05:12Z | -
dc.date.available | 2023-01-06T06:05:12Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Wang, H., Lin, G., Hoi, S. C. H. & Miao, C. (2022). Paired cross-modal data augmentation for fine-grained image-to-text retrieval. 30th ACM International Conference on Multimedia (MM 2022), 5517-5526. https://dx.doi.org/10.1145/3503161.3547809 | en_US
dc.identifier.isbn | 9781450392037 | -
dc.identifier.uri | https://hdl.handle.net/10356/164145 | -
dc.description.abstract | This paper investigates an open research problem of generating text-image pairs to improve the training of fine-grained image-to-text cross-modal retrieval, and proposes a novel framework for paired data augmentation by uncovering the hidden semantic information of the StyleGAN2 model. Specifically, we first train a StyleGAN2 model on the given dataset. We then project the real images back to the latent space of StyleGAN2 to obtain the latent codes. To make the generated images manipulable, we further introduce a latent space alignment module to learn the alignment between StyleGAN2 latent codes and the corresponding textual caption features. During online paired data augmentation, we first generate augmented text through random token replacement, then pass the augmented text into the latent space alignment module to output the latent codes, which are finally fed to StyleGAN2 to generate the augmented images. We evaluate the efficacy of our data augmentation approach on two public cross-modal retrieval datasets; the promising experimental results demonstrate that the augmented text-image pairs can be trained together with the original data to boost image-to-text cross-modal retrieval performance. | en_US
dc.description.sponsorship | AI Singapore | en_US
dc.description.sponsorship | Ministry of Education (MOE) | en_US
dc.description.sponsorship | Ministry of Health (MOH) | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.relation | AISG-GC-2019-003 | en_US
dc.relation | NRF-NRFI05-2019-0002 | en_US
dc.relation | MOH/NIC/HAIG03/2017 | en_US
dc.relation | AISG-RP-2018-003 | en_US
dc.relation | RG95/20 | en_US
dc.rights | © 2022 The owner/author(s). Publication rights licensed to ACM. All rights reserved. This paper was published in the Proceedings of the 30th ACM International Conference on Multimedia (MM 2022) and is made available with permission of the owner/author(s). | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Paired cross-modal data augmentation for fine-grained image-to-text retrieval | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | 30th ACM International Conference on Multimedia (MM 2022) | en_US
dc.identifier.doi | 10.1145/3503161.3547809 | -
dc.description.version | Submitted/Accepted version | en_US
dc.identifier.spage | 5517 | en_US
dc.identifier.epage | 5526 | en_US
dc.subject.keywords | Image-to-Text Retrieval | en_US
dc.subject.keywords | Computing Methodologies | en_US
dc.citation.conferencelocation | Lisbon, Portugal | en_US
dc.description.acknowledgement | This research is supported, in part, by the National Research Foundation (NRF), Singapore under its AI Singapore Programme (AISG Award No: AISG-GC-2019-003) and under its NRF Investigatorship Programme (NRFI Award No. NRF-NRFI05-2019-0002). This research is supported, in part, by the Singapore Ministry of Health under its National Innovation Challenge on Active and Confident Ageing (NIC Project No. MOH/NIC/HAIG03/2017). This research is also supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-003), and the MOE AcRF Tier-1 research grant RG95/20. | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
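
Note on the method described in the abstract above: the online augmentation loop perturbs a caption by random token replacement, maps the augmented caption's features to StyleGAN2 latent codes through the latent space alignment module, and decodes those codes into an augmented image that pairs with the new caption. The snippet below is a minimal PyTorch-style sketch of that loop; the module names, tensor shapes, text encoder interface, and the MLP form of the alignment module are illustrative assumptions, not the authors' released implementation.

# Illustrative sketch only: TextEncoder interface, shapes, and the MLP alignment
# module are assumptions; only the overall loop follows the paper's abstract.
import random
import torch
import torch.nn as nn

class LatentAlignmentModule(nn.Module):
    """Maps caption features to StyleGAN2 latent codes (assumed W+ space, MLP)."""
    def __init__(self, text_dim=512, num_ws=14, w_dim=512):
        super().__init__()
        self.num_ws, self.w_dim = num_ws, w_dim
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_ws * w_dim),
        )

    def forward(self, text_feat):                     # (B, text_dim)
        w = self.mlp(text_feat)                       # (B, num_ws * w_dim)
        return w.view(-1, self.num_ws, self.w_dim)    # (B, num_ws, w_dim)

def augment_caption(tokens, vocab, replace_prob=0.15):
    """Random token replacement: each token may be swapped for a random vocab word."""
    return [random.choice(vocab) if random.random() < replace_prob else t
            for t in tokens]

def generate_augmented_pair(tokens, vocab, text_encoder, align_module, generator):
    """Produce one augmented (image, caption) pair for joint training."""
    aug_tokens = augment_caption(tokens, vocab)
    with torch.no_grad():
        text_feat = text_encoder(aug_tokens)          # (1, text_dim), assumed encoder
        latents = align_module(text_feat)             # (1, num_ws, w_dim)
        aug_image = generator(latents)                # (1, 3, H, W), trained StyleGAN2
    return aug_image, aug_tokens

As the abstract states, each augmented pair would then simply be mixed with the original pairs when training the image-to-text retrieval model.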
Appears in Collections: SCSE Conference Papers
Files in This Item:
File | Description | Size | Format
Full text.pdf | - | 1.51 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.