Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/161913
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tajrobehkar, Mitra | en_US
dc.date.accessioned | 2022-09-26T06:40:02Z | -
dc.date.available | 2022-09-26T06:40:02Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Tajrobehkar, M. (2022). Semi-supervised learning for visual relation annotation. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/161913 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/161913 | -
dc.description.abstract | Due to their powerful ability to learn both low-level and high-level general visual features, deep neural networks (DNNs) have been used as the basic structure in many CV applications such as object detection, semantic segmentation, and relation detection and annotation. While most research focuses on maximizing overall performance when training a machine learning model, little attention has been paid, until very recent years, to evaluating robustness, e.g. against visual content manipulation. A lack of model robustness, especially with respect to consistency and discrimination, can be due to various reasons, e.g. data distributions, inadequacy of the learning process, or model sensitivity to different regions of the feature space. Discrimination refers to a model's capability to distinguish between individual class samples in its predictions, while consistency refers to its capability to keep predictions stable despite input variations. To address these challenges, the central focus of this thesis is representation learning for two of the most challenging Computer Vision (CV) tasks: Scene Graph Generation (SGG) and Visual Question Answering (VQA). In the first research direction, we propose a novel head network that tackles the problem of non-discrimination by learning semantic pairwise feature representations. The second research direction addresses model instability by generating better representations: through a consensus model, common feature representations reasoned from various samples are learned to increase the robustness of the consensus. In summary, the major contributions of this thesis are as follows. First, we propose a learning-to-align meta-architecture, called Align R-CNN, for dynamic object feature concatenation in visual reasoning tasks. Taking scene graph generation as an example, humans learn to describe visual relationships between objects semantically (e.g., riding behind, sitting on). We propose a semantic transformation that parses an input image into <subject-relation-object> triplets and then extracts visually grounded semantic features by re-aligning the features of the subject with respect to its relative object and relation. We argue that previous works are highly limited by naive concatenation and, as a result, fail to discriminate between riding and feeding for the object pair person and horse. Moreover, naively concatenated pairwise features may collapse a less frequent but meaningful predicate, e.g. sitting on, into a more frequent but less meaningful one, e.g. on. Compared with existing relation representations that utilize scene graphs to connect the objects, the proposed Align R-CNN has two key advantages: 1) it maintains good representations during training while removing irrelevant features from the objects; and 2) its dynamic learning enables the model to deal with different object pairs. These advantages prevent Align R-CNN from over-fitting to the biased dataset. Note that the proposed framework can also be utilized by the community that seeks zero-shot predictions. Second, we propose a framework that enhances model consistency by generating representations with the desired feature consistency. This line of research addresses the lack of VQA models that measure the robustness of consensus against linguistic variations in questions. For instance, while the reference question "How many cars in picture?" and its syntactic variant "How many automobiles are there?" are semantically identical, a model may predict an incorrect answer for the variant. Besides, the model should be powerful enough to predict the right answer for "How many red cars seen in picture?" Inspired by unsupervised feature representation learning, we use contrastive learning, which learns better representations from both vision and language inputs. However, we argue that training the model with a naive contrastive learning framework, which uses random intra-class sequences as positive examples and random non-target sequences as negative examples, is sub-optimal, and it may not boost model performance on robust VQA benchmarks. The proposed method dedicates a principal head network to generating positive and negative samples for contrastive learning by adding adversarial perturbations. Specifically, it generates hard positive samples by adding large perturbations to both the input images and the questions so as to maximize the conditional likelihood. The proposed framework has two key advantages: 1) the generative model over the embedding representation offers rich information that increases model stability; and 2) by exploring the effects of single-modality and multi-modal attacks, the model mitigates the correlation between the bias and the learned features. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US
dc.title | Semi-supervised learning for visual relation annotation | en_US
dc.type | Thesis-Doctor of Philosophy | en_US
dc.contributor.supervisor | Zhang Hanwang | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Doctor of Philosophy | en_US
dc.contributor.organization | Agency for Science, Technology and Research (A*STAR) | en_US
dc.contributor.supervisor2 | Lim Joo-Hwee | en_US
dc.identifier.doi | 10.32657/10356/161913 | -
dc.contributor.supervisoremail | hanwangzhang@ntu.edu.sg, joohwee@i2r.a-star.edu.sg | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
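
The abstract's first contribution describes re-aligning the subject's features against its paired object and relation instead of naively concatenating RoI features. The following is a minimal, hypothetical PyTorch sketch of that idea, not the thesis implementation: the class name PairwiseAlignHead, the feature dimension, the gating mechanism, and the number of predicate classes are all assumptions made for illustration.

    import torch
    import torch.nn as nn

    class PairwiseAlignHead(nn.Module):
        """Illustrative pairwise re-alignment head (hypothetical).

        Instead of concatenating raw subject/object RoI features, the subject
        feature is gated by a query built from its paired object and relation
        context, so each pair keeps only the channels relevant to it."""

        def __init__(self, dim=1024, num_predicates=51):
            super().__init__()
            self.query = nn.Linear(dim, dim)   # object + relation context -> query
            self.key = nn.Linear(dim, dim)     # subject feature -> key
            self.value = nn.Linear(dim, dim)   # subject feature -> value
            self.cls = nn.Linear(2 * dim, num_predicates)

        def forward(self, subj, obj, rel_ctx):
            # subj, obj, rel_ctx: (N, dim) features for N candidate object pairs
            q = self.query(obj + rel_ctx)                  # pair-specific query
            gate = torch.sigmoid(q * self.key(subj))       # per-channel relevance gate
            aligned_subj = gate * self.value(subj)         # re-aligned subject feature
            pair = torch.cat([aligned_subj, obj], dim=-1)  # dynamic, pair-aware concatenation
            return self.cls(pair)                          # predicate logits, e.g. riding vs. feeding

    # Hypothetical usage: logits = PairwiseAlignHead()(subj_feats, obj_feats, rel_feats)

Because the gate is recomputed for every pair, the same subject feature is projected differently for a person-horse pair than for another pairing, which is the dynamic, pair-dependent behaviour the abstract contrasts with naive concatenation.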
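
The second contribution trains with contrastive learning on adversarially generated hard samples rather than random intra-class positives and random non-target negatives. Below is an illustrative sketch only, under stated assumptions: encoder is a hypothetical joint image-question encoder returning one embedding per pair, and a single FGSM-style gradient step that pushes the positive away from the anchor stands in for the adversarial sample generation summarized in the abstract; the actual perturbation scheme in the thesis may differ.

    import torch
    import torch.nn.functional as F

    def contrastive_step(encoder, images, questions, neg_images, neg_questions,
                         eps=8 / 255, tau=0.07):
        # Anchor embeddings for the clean (image, question) pairs: (B, d)
        z = F.normalize(encoder(images, questions), dim=-1)

        # Hard positives: perturb the images so their embeddings move away from
        # the anchors (single FGSM-style step), keeping the questions fixed.
        adv = images.clone().detach().requires_grad_(True)
        agreement = (z.detach() * F.normalize(encoder(adv, questions), dim=-1)).sum(-1).mean()
        grad, = torch.autograd.grad(agreement, adv)
        adv = (adv - eps * grad.sign()).detach()

        z_pos = F.normalize(encoder(adv, questions), dim=-1)             # hard positives
        z_neg = F.normalize(encoder(neg_images, neg_questions), dim=-1)  # non-target pairs

        # InfoNCE: pull each anchor toward its hard positive, push it from negatives.
        pos_logits = (z * z_pos).sum(-1, keepdim=True) / tau   # (B, 1)
        neg_logits = (z @ z_neg.t()) / tau                     # (B, B)
        logits = torch.cat([pos_logits, neg_logits], dim=1)
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, labels)

Perturbing the image branch, the question branch, or both would correspond to the single-modality and multi-modal attacks mentioned in the abstract; only the image branch is perturbed here to keep the sketch short.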
Appears in Collections: SCSE Theses

Files in This Item:
File | Description | Size | Format
Mitra_Tajrobehkar_PhD_Thesis.pdf |  | 11 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.