Please use this identifier to cite or link to this item:
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jia, Mengxi | en_US
dc.contributor.author | Cheng, Xinhua | en_US
dc.contributor.author | Lu, Shijian | en_US
dc.contributor.author | Zhang, Jian | en_US
dc.identifier.citation | Jia, M., Cheng, X., Lu, S. & Zhang, J. (2022). Learning disentangled representation implicitly via transformer for occluded person re-identification. IEEE Transactions on Multimedia, 3141267-. | en_US
dc.description.abstract | Person re-IDentification (re-ID) under various occlusions has been a long-standing challenge, as person images with different types of occlusions often suffer from misalignment in image matching and ranking. Most existing methods tackle this challenge by aligning spatial features of body parts according to external semantic cues or feature similarities, but this alignment approach is complicated and sensitive to noise. We design DRL-Net, a disentangled representation learning network that handles occluded re-ID without requiring strict person image alignment or any additional supervision. Leveraging transformer architectures, DRL-Net achieves alignment-free re-ID via global reasoning over local features of occluded person images. It measures image similarity by automatically disentangling the representation of undefined semantic components, e.g., human body parts or obstacles, under the guidance of semantic-preference object queries in the transformer. In addition, we design a decorrelation constraint in the transformer decoder and impose it on the object queries so that they better focus on different semantic components. To further eliminate interference from occlusions, we design a contrast feature learning technique (CFL) for better separation of occlusion features and discriminative ID features. Extensive experiments on occluded and holistic re-ID benchmarks show that DRL-Net achieves superior re-ID performance consistently and outperforms the state of the art by large margins on occluded re-ID datasets. | en_US
dc.relation.ispartof | IEEE Transactions on Multimedia | en_US
dc.rights | © 2021 IEEE. All rights reserved. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Learning disentangled representation implicitly via transformer for occluded person re-identification | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.subject.keywords | Person Re-Identification | en_US
dc.subject.keywords | Representation Learning | en_US
dc.description.acknowledgement | This work was supported in part by the Shenzhen Fundamental Research Program (No. GXWD20201231165807007-20200807164903001). | en_US
item.fulltext | No Fulltext | -
Appears in Collections:SCSE Journal Articles

Citations: 10 (updated on Jul 12, 2024)

Web of Science™ Citations: 20 (updated on Oct 27, 2023)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.