Title: Graph edit distance reward: learning to edit scene graph
Authors: Chen, Lichang
Lin, Guosheng
Wang, Shijie
Wu, Qingyao
Keywords: Engineering::Computer science and engineering
Issue Date: 2020
Source: Chen, L., Lin, G., Wang, S., & Wu, Q. (2020). Graph edit distance reward: learning to edit scene graph. Proceedings of the European Conference on Computer Vision (ECCV) 2020.
Project: AISG-RP-2018-003
RG28/18 (S)
RG22/19 (S)
Abstract: Scene graphs, a vital tool for bridging the gap between the language and image domains, have been widely adopted in cross-modality tasks such as VQA. In this paper, we propose a new method for editing a scene graph according to user instructions, a task that has not been explored before. Specifically, to learn to edit scene graphs according to the semantics given by texts, we propose a Graph Edit Distance Reward, based on the Policy Gradient and Graph Matching algorithms, to optimize a neural symbolic model. In the context of text-editing image retrieval, we validate the effectiveness of our method on the CSS and CRIR datasets. CRIR is a new synthetic dataset generated by us, which we will publish soon for future use.
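The abstract's core idea is to use graph edit distance (GED) as a reward signal: the closer the edited scene graph is to the target graph, the higher the reward. As a rough illustrative sketch only (not the paper's implementation, which uses a graph matching algorithm and policy-gradient optimization), a brute-force GED over tiny labeled graphs, with the negated distance as the reward, could look like this; the graph representation (node-id-to-label dicts plus edge lists) and the equal-node-count restriction are simplifying assumptions for the example:

```python
from itertools import permutations

def graph_edit_distance(nodes1, edges1, nodes2, edges2):
    """Brute-force edit distance between two small labeled graphs.

    nodes*: dict mapping node id -> label; edges*: iterable of id pairs.
    This sketch assumes equal node counts, so the cost reduces to
    node-label substitutions plus edge insertions/deletions.
    """
    assert len(nodes1) == len(nodes2), "sketch assumes equal node counts"
    ids1, ids2 = list(nodes1), list(nodes2)
    best = float("inf")
    for perm in permutations(ids2):
        mapping = dict(zip(ids1, perm))
        # Cost of relabeling nodes under this node correspondence
        node_cost = sum(nodes1[u] != nodes2[mapping[u]] for u in ids1)
        # Symmetric difference of edge sets = edge insertions + deletions
        mapped = {frozenset((mapping[u], mapping[v])) for u, v in edges1}
        target = {frozenset(e) for e in edges2}
        best = min(best, node_cost + len(mapped ^ target))
    return best

def ged_reward(pred_nodes, pred_edges, tgt_nodes, tgt_edges):
    # Negative GED as the reward: 0 when the edited graph matches the target,
    # increasingly negative as more edit operations separate the two graphs.
    return -graph_edit_distance(pred_nodes, pred_edges, tgt_nodes, tgt_edges)
```

Exact GED is NP-hard, so brute force is only feasible on toy graphs; this is why practical systems (including, per the abstract, this paper) rely on graph matching algorithms to approximate the distance.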
Rights: © 2020 Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of an article published in European Conference on Computer Vision (ECCV) 2020.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Conference Papers

Files in This Item:
File: ECCV_Final_version.pdf
Size: 880.08 kB
Format: Adobe PDF

Updated on Jan 19, 2022

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.