Cross-modality person re-identification via local paired graph attention network

Zhou, Jianglin and Dong, Qing and Zhang, Zhong and Liu, Shuang and Durrani, Tariq S. (2023) Cross-modality person re-identification via local paired graph attention network. Sensors, 23 (8). 4011. ISSN 1424-8220 (https://doi.org/10.3390/s23084011)

Abstract

Cross-modality person re-identification (ReID) aims to retrieve the RGB image of a pedestrian given an infrared (IR) query image, and vice versa. Recently, some approaches have constructed a graph to learn the relevance between pedestrian images of different modalities and thereby narrow the gap between the IR and RGB modalities, but they overlook the correlation between IR and RGB image pairs. In this paper, we propose a novel graph model called the Local Paired Graph Attention Network (LPGAT), which uses paired local features of pedestrian images from the two modalities as the nodes of the graph. To propagate information accurately among the graph nodes, we propose a contextual attention coefficient that leverages distance information to regulate the node-update process. Furthermore, we put forward Cross-Center Contrastive Learning (C3L) to constrain the distance between local features and their heterogeneous centers, which is beneficial for learning a complete distance metric. We conduct experiments on the RegDB and SYSU-MM01 datasets to validate the effectiveness of the proposed approach.
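As a rough illustration of the two mechanisms the abstract describes, the PyTorch sketch below builds graph nodes from paired RGB/IR local features, modulates additive attention logits with pairwise node distances (one plausible reading of the "contextual attention coefficient"), and implements a margin-based cross-center contrastive loss. All names (contextual_graph_attention, cross_center_contrastive), tensor shapes, the subtractive form of the distance term, and the margin value are assumptions made for illustration, not the authors' released implementation.

# Minimal sketch of the ideas in the abstract; shapes, names, and the exact
# form of the distance modulation and loss are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def contextual_graph_attention(rgb_feats, ir_feats, W, a):
    """Update paired local-feature nodes with distance-aware attention.

    rgb_feats, ir_feats: (N, P, D) local features for N paired images,
                         P local parts, D channels.
    W: (D, D) shared projection; a: (2*D,) attention vector.
    """
    # Build graph nodes by pairing local features across the two modalities.
    nodes = torch.cat([rgb_feats, ir_feats], dim=1)          # (N, 2P, D)
    h = nodes @ W                                            # project nodes
    n = h.size(1)
    # Additive attention logits between every pair of nodes, as in a GAT.
    pair = torch.cat([h.unsqueeze(2).expand(-1, -1, n, -1),
                      h.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
    logits = F.leaky_relu(pair @ a)                          # (N, 2P, 2P)
    # Contextual term (assumed): down-weight edges between distant nodes.
    dist = torch.cdist(h, h)                                 # (N, 2P, 2P)
    attn = F.softmax(logits - dist, dim=-1)
    return attn @ h                                          # updated nodes

def cross_center_contrastive(feats, labels, centers_other, margin=0.3):
    """Pull each local feature toward the same-identity center in the other
    modality and push it away from the nearest other-identity center."""
    d = torch.cdist(feats, centers_other)                    # (B, C)
    pos = d.gather(1, labels.view(-1, 1)).squeeze(1)         # same-ID center
    mask = F.one_hot(labels, centers_other.size(0)).bool()
    neg = d.masked_fill(mask, float('inf')).min(dim=1).values
    return F.relu(pos - neg + margin).mean()

# Toy usage with random tensors (N=2 image pairs, P=6 parts, D=64 channels).
rgb, ir = torch.randn(2, 6, 64), torch.randn(2, 6, 64)
W, a = torch.randn(64, 64), torch.randn(128)
updated = contextual_graph_attention(rgb, ir, W, a)          # (2, 12, 64)

Note that subtracting the pairwise distance before the softmax is equivalent to scaling each attention weight by exp(-dist), so nearer nodes contribute more to each update; the paper's actual coefficient may combine distance and similarity differently.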