Part-guided graph convolution networks for person re-identification

Zhang, Zhong and Zhang, Haijia and Liu, Shuang and Xie, Yuan and Durrani, Tariq S. (2021) Part-guided graph convolution networks for person re-identification. Pattern Recognition, 120. 108155. ISSN 0031-3203 (https://doi.org/10.1016/j.patcog.2021.108155)

Text: Zhang_etal_PR_2021_Part_guided_graph_convolution_networks_for_person_re_identification.pdf
Accepted Author Manuscript
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0

Abstract

Recently, part-based deep models have achieved promising performance in person re-identification (Re-ID), yet these models ignore the inter-local relationship among the corresponding parts of different pedestrian images and the intra-local relationship between adjacent parts within one pedestrian image. As a result, the feature representations can hardly learn information from the same parts of other pedestrian images and lack the contextual information of the pedestrian. In this paper, we propose a novel deep graph model named Part-Guided Graph Convolution Network (PGCN) for person Re-ID, which simultaneously learns the inter-local relationship and the intra-local relationship for feature representations. Specifically, we construct the inter-local graph using the local features extracted from the same parts of different pedestrian images, and build its adjacency matrix from feature similarity so as to mine the inter-local relationship. Meanwhile, we construct the intra-local graph using the local features extracted from different body parts of one pedestrian image, and propose the fractional dynamic mechanism (FDM) to accurately describe the correlations between adjacent parts during optimization. Finally, after the graph convolution operation, the inter-local relationship and the intra-local relationship are injected into the feature representations of pedestrian images. Extensive experiments are conducted on Market-1501, CUHK03, DukeMTMC-reID and MSMT17, and the results show that the proposed PGCN outperforms state-of-the-art methods by a clear margin.
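For illustration, the sketch below shows how similarity-based adjacency matrices and a graph convolution layer of the kind described in the abstract might be assembled: the inter-local graph connects the same body part across a batch of images, while the intra-local graph connects the parts within a single image. This is a minimal PyTorch sketch under assumed shapes and names (similarity_adjacency, GraphConvLayer, the ReLU edge thresholding, and the tensor sizes are all hypothetical choices, and the fractional dynamic mechanism is omitted because the abstract does not specify its form); it is not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def similarity_adjacency(parts: torch.Tensor) -> torch.Tensor:
        """Build a row-normalized adjacency matrix from pairwise cosine
        similarity of part features (parts: [num_nodes, dim])."""
        feats = F.normalize(parts, dim=1)
        adj = feats @ feats.t()                       # cosine similarity
        adj = F.relu(adj)                             # keep non-negative edges (assumed)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return adj / deg                              # row-normalize

    class GraphConvLayer(nn.Module):
        """One graph convolution: X' = ReLU(A X W)."""
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.weight = nn.Linear(in_dim, out_dim, bias=False)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            return F.relu(self.weight(adj @ x))

    # Hypothetical sizes: B images, P body parts, D-dim part features
    B, P, D = 8, 6, 256
    local_feats = torch.randn(B, P, D)                # part features from a CNN backbone

    gcn = GraphConvLayer(D, D)

    # Inter-local graph: nodes are the same part (here part 0) across B images.
    inter_adj = similarity_adjacency(local_feats[:, 0])
    inter_out = gcn(local_feats[:, 0], inter_adj)     # [B, D]

    # Intra-local graph: nodes are the P parts of a single image (here image 0).
    intra_adj = similarity_adjacency(local_feats[0])
    intra_out = gcn(local_feats[0], intra_adj)        # [P, D]

After the convolution, each node's output mixes information from its neighbours, which is how the inter-local and intra-local relationships would be injected into the part representations before re-identification matching.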