Cross‐modality person re‐identification using hybrid mutual learning

Zhang, Zhong and Dong, Qing and Wang, Sen and Liu, Shuang and Xiao, Baihua and Durrani, Tariq S. (2022) Cross‐modality person re‐identification using hybrid mutual learning. IET Computer Vision, 17 (1). pp. 1-12. ISSN 1751-9640 (https://doi.org/10.1049/cvi2.12123)

Abstract

Cross-modality person re-identification (Re-ID) aims to retrieve a query identity from red-green-blue (RGB) images or infrared (IR) images. Many approaches have been proposed to reduce the distribution gap between the RGB and IR modalities. However, they ignore the valuable collaborative relationship between the two modalities. Hybrid Mutual Learning (HML) for cross-modality person Re-ID is proposed, which builds this collaborative relationship through mutual learning at the levels of local features and triplet relations. Specifically, HML contains local-mean mutual learning and triplet mutual learning, which transfer local representational knowledge and structural geometry knowledge, respectively, so as to reduce the gap between the RGB and IR modalities. Furthermore, Hierarchical Attention Aggregation is proposed to fuse local feature maps and local feature vectors, enriching the information fed to the classifier. Extensive experiments on two commonly used data sets, namely SYSU-MM01 and RegDB, verify the effectiveness of the proposed method.
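
To illustrate the general idea of mutual learning between an RGB branch and an IR branch, the sketch below shows a symmetric KL-divergence loss in which each branch learns from the other's soft predictions. This is only a minimal illustration of the mutual-learning concept described in the abstract, not the paper's actual HML losses; the function name, the temperature parameter, and the use of PyTorch are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def mutual_learning_kl(logits_rgb: torch.Tensor,
                       logits_ir: torch.Tensor,
                       temperature: float = 1.0) -> torch.Tensor:
    """Symmetric KL-based mutual learning between two modality branches.

    Hypothetical sketch: each branch's softened prediction acts as a soft
    target for the other branch, encouraging the RGB and IR branches to
    agree and thereby narrowing the cross-modality gap.
    """
    # Soft targets are detached so each branch only mimics the other,
    # rather than back-propagating through it.
    log_p_rgb = F.log_softmax(logits_rgb / temperature, dim=1)
    log_p_ir = F.log_softmax(logits_ir / temperature, dim=1)
    p_rgb = F.softmax(logits_rgb / temperature, dim=1).detach()
    p_ir = F.softmax(logits_ir / temperature, dim=1).detach()

    # RGB branch learns from IR predictions, and vice versa.
    loss_rgb = F.kl_div(log_p_rgb, p_ir, reduction="batchmean")
    loss_ir = F.kl_div(log_p_ir, p_rgb, reduction="batchmean")
    return loss_rgb + loss_ir

# Example usage with dummy identity logits for a batch of 8 samples
# and 100 identity classes (shapes chosen only for illustration).
if __name__ == "__main__":
    logits_rgb = torch.randn(8, 100)
    logits_ir = torch.randn(8, 100)
    print(mutual_learning_kl(logits_rgb, logits_ir).item())
```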