Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/180836
Title: A unified deep semantic expansion framework for domain-generalized person re-identification
Authors: Ang, Eugene P. W.; Lin, Shan; Kot, Alex Chichung
Keywords: Engineering
Issue Date: 2024
Source: Ang, E. P. W., Lin, S. & Kot, A. C. (2024). A unified deep semantic expansion framework for domain-generalized person re-identification. Neurocomputing, 600, 128120. https://dx.doi.org/10.1016/j.neucom.2024.128120
Journal: Neurocomputing
Abstract: Supervised Person Re-identification (Person ReID) methods achieve excellent performance when training and testing within one camera network, but they usually suffer considerable performance degradation when applied to different camera systems. In recent years, many Domain Adaptation Person ReID methods have been proposed, achieving impressive performance without requiring labeled data from the target domain. However, these approaches still need unlabeled data from the target domain during training, making them impractical in many real-world scenarios. Our work focuses on the more practical Domain Generalized Person Re-identification (DG-ReID) problem: given one or more source domains, the goal is to learn a generalized model that can be applied to unseen target domains. One promising research direction in DG-ReID is implicit deep semantic feature expansion, and our previous method, Domain Embedding Expansion (DEX), is one such example that achieves strong results in DG-ReID. However, in this work we show that DEX and similar implicit deep semantic feature expansion methods, due to limitations in their loss functions, fail to reach their full potential on large evaluation benchmarks because they tend to saturate too early. Leveraging this analysis, we propose Unified Deep Semantic Expansion, a novel framework that unifies implicit and explicit semantic feature expansion techniques to mitigate this early over-fitting and achieve a new state of the art (SOTA) on all DG-ReID benchmarks. Furthermore, we apply our method to more general image retrieval tasks, surpassing the current SOTA on all of these benchmarks by wide margins.
URI: https://hdl.handle.net/10356/180836
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2024.128120
Schools: School of Electrical and Electronic Engineering
Research Centres: Rapid-Rich Object Search (ROSE) Lab
Rights: © 2024 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: EEE Journal Articles
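Note: for readers unfamiliar with the "implicit deep semantic feature expansion" mentioned in the abstract, the sketch below illustrates the general idea behind such techniques: instead of synthesizing augmented images, features are implicitly perturbed with class-conditional Gaussian noise and trained through a closed-form surrogate of the classification loss. This is a minimal, assumption-laden PyTorch illustration; the class name, the diagonal-covariance estimate, and the strength parameter lam are hypothetical, and it is not the DEX or Unified Deep Semantic Expansion implementation described in the paper.

```python
# Minimal sketch of implicit feature-space semantic expansion (ISDA-style).
# All names/hyper-parameters are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitSemanticExpansionLoss(nn.Module):
    """Cross-entropy surrogate over features implicitly augmented with
    class-conditional Gaussian noise (per-class diagonal covariance)."""

    def __init__(self, feat_dim: int, num_classes: int, momentum: float = 0.9):
        super().__init__()
        self.momentum = momentum
        # Running diagonal covariance estimate for each identity class.
        self.register_buffer("cov", torch.ones(num_classes, feat_dim))

    @torch.no_grad()
    def _update_cov(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # Update the per-class feature variance with an exponential moving average.
        for c in labels.unique():
            fc = feats[labels == c]
            if fc.size(0) > 1:
                self.cov[c] = self.momentum * self.cov[c] + (1 - self.momentum) * fc.var(dim=0)

    def forward(self, feats, labels, classifier: nn.Linear, lam: float = 0.5):
        # feats: (B, D) backbone embeddings; classifier: linear ID head (D -> C).
        self._update_cov(feats.detach(), labels)
        logits = classifier(feats)                      # (B, C)
        W = classifier.weight                           # (C, D)
        w_y = W[labels]                                 # (B, D) weight of the true class
        # Quadratic term (w_c - w_y)^T Sigma_y (w_c - w_y) with diagonal Sigma_y:
        # the expected logit shift under Gaussian feature perturbation.
        diff = W.unsqueeze(0) - w_y.unsqueeze(1)        # (B, C, D)
        quad = (diff.pow(2) * self.cov[labels].unsqueeze(1)).sum(-1)  # (B, C)
        aug_logits = logits + 0.5 * lam * quad
        return F.cross_entropy(aug_logits, labels)
```

In a typical training loop, feats would come from the backbone and the loss would be computed as criterion(feats, pids, id_classifier), with lam ramped up over epochs; the abstract's observation is that purely implicit schemes of this kind tend to saturate early on large benchmarks, which motivates combining them with explicit expansion.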
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.