Cross-scale Vision Transformer for crowd localization

Liu, Shuang and Lian, Yu and Zhang, Zhong and Xiao, Baihua and Durrani, Tariq S. (2024) Cross-scale Vision Transformer for crowd localization. Journal of King Saud University - Computer and Information Sciences, 36 (2). 101972. ISSN 1319-1578 (https://doi.org/10.1016/j.jksuci.2024.101972)

Final Published Version (PDF, 2MB). License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0.

Abstract

Crowd localization provides the positions of individuals as well as the total number of people, which is of great value for security monitoring and public management, yet it faces the challenges of lighting variation, occlusion, and perspective effects. Recently, Transformers have been applied to crowd localization to overcome these challenges. However, such methods integrate multi-scale information only once, which results in incomplete multi-scale fusion. In this paper, we propose a novel Transformer network named Cross-scale Vision Transformer (CsViT) for crowd localization, which fuses multi-scale information in both the encoder and decoder stages while building long-range context dependencies on the combined feature maps. To this end, we design a multi-scale encoder that fuses the feature maps of multiple scales at corresponding positions to obtain the combined feature maps, and a multi-scale decoder that integrates tokens at multiple scales when modeling long-range context dependencies. Furthermore, we propose a Multi-scale SSIM (MsSSIM) loss that adaptively determines head regions and optimizes similarity at multiple scales. Specifically, we set adaptive windows of different scales for each head and compute the loss within these windows, which improves the accuracy of the predicted distance transform map. Comprehensive experiments on five public datasets validate the effectiveness of our method.
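
To make the idea of position-wise cross-scale fusion concrete, the following is a minimal PyTorch sketch of merging multi-scale encoder feature maps at a common resolution. It is an illustrative assumption rather than the authors' CsViT code: the channel widths, the strides implied by the dummy inputs, and the choice of element-wise summation after 1x1 projection are all hypothetical.

    # Hypothetical sketch of cross-scale feature fusion (not the paper's exact CsViT module).
    # Assumes three encoder feature maps at decreasing resolutions are projected to a common
    # channel width, upsampled to the finest resolution, and merged position-wise.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossScaleFusion(nn.Module):
        def __init__(self, channels=(256, 512, 1024), out_channels=256):
            super().__init__()
            # 1x1 projections bring every scale to a common channel width.
            self.projs = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in channels)

        def forward(self, feats):
            # feats: list of (B, C_i, H_i, W_i) maps from fine-to-coarse encoder stages.
            target_size = feats[0].shape[-2:]   # fuse at the finest resolution
            fused = 0
            for f, proj in zip(feats, self.projs):
                f = proj(f)
                f = F.interpolate(f, size=target_size, mode='bilinear', align_corners=False)
                fused = fused + f               # position-wise (element-wise) fusion
            return fused

    # Usage with dummy multi-scale maps:
    f8  = torch.randn(2, 256, 64, 64)
    f16 = torch.randn(2, 512, 32, 32)
    f32 = torch.randn(2, 1024, 16, 16)
    combined = CrossScaleFusion()([f8, f16, f32])
    print(combined.shape)  # torch.Size([2, 256, 64, 64])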
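Likewise, the MsSSIM loss can be sketched as an SSIM-style dissimilarity evaluated with windows of several sizes on the predicted distance transform map. The sketch below is a simplification: the paper's adaptive, per-head window placement is replaced by full-image SSIM at multiple fixed window sizes, so it approximates the spirit of the loss rather than its exact definition.

    # Hedged sketch of a multi-scale SSIM-style loss on a distance-transform map.
    # NOT the paper's exact MsSSIM: adaptive per-head windows are simplified to
    # full-image SSIM computed at several window sizes.
    import torch
    import torch.nn.functional as F

    def ssim(pred, target, window_size, C1=0.01**2, C2=0.03**2):
        # Local means, variances, and covariance via average pooling over a square window.
        pad = window_size // 2
        mu_p = F.avg_pool2d(pred, window_size, stride=1, padding=pad)
        mu_t = F.avg_pool2d(target, window_size, stride=1, padding=pad)
        var_p = F.avg_pool2d(pred * pred, window_size, stride=1, padding=pad) - mu_p ** 2
        var_t = F.avg_pool2d(target * target, window_size, stride=1, padding=pad) - mu_t ** 2
        cov = F.avg_pool2d(pred * target, window_size, stride=1, padding=pad) - mu_p * mu_t
        ssim_map = ((2 * mu_p * mu_t + C1) * (2 * cov + C2)) / \
                   ((mu_p ** 2 + mu_t ** 2 + C1) * (var_p + var_t + C2))
        return ssim_map.mean()

    def multi_scale_ssim_loss(pred_dt, gt_dt, window_sizes=(3, 7, 11)):
        # Average the SSIM dissimilarity over several window scales.
        return sum(1.0 - ssim(pred_dt, gt_dt, w) for w in window_sizes) / len(window_sizes)

    # Usage with dummy distance-transform maps:
    pred = torch.rand(2, 1, 64, 64)
    gt = torch.rand(2, 1, 64, 64)
    print(multi_scale_ssim_loss(pred, gt).item())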