Multi-scale pedestrian intent prediction using 3D joint information as spatio-temporal representation

Ahmed, Sarfraz and Al Bazi, Ammar and Saha, Chitta and Rajbhandari, Sujan and Huda, M. Nazmul (2023) Multi-scale pedestrian intent prediction using 3D joint information as spatio-temporal representation. Expert Systems with Applications, 225. 120077. ISSN 0957-4174 (https://doi.org/10.1016/j.eswa.2023.120077)

Text. Filename: Ahmed_etal_ESA_2023_Multi_scale_pedestrian_intent_prediction_using_3D_joint_information_as_spatio_temporal_representation.pdf
Final Published Version
License: Creative Commons Attribution 4.0


Abstract

There has been a rise in the use of autonomous vehicles on public roads. With the predicted rise in road traffic accidents over the coming years, these vehicles must be capable of operating safely in the public domain. The field of pedestrian detection has advanced significantly in the last decade, with some techniques reaching near-human-level accuracy. However, further work is required for pedestrian intent prediction to reach human-level performance. One of the challenges facing current pedestrian intent predictors is the varying scale of pedestrians, particularly smaller pedestrians, who can blend into the background and are therefore difficult to detect, track, or apply pose estimation techniques to. In this work, we present a novel intent prediction approach for multi-scale pedestrians using 2D pose estimation and a Long Short-Term Memory (LSTM) architecture. The pose estimator predicts keypoints for the pedestrian across the video frames, and the accumulation of these keypoints over the frames generates spatio-temporal data. This spatio-temporal data is fed to the LSTM to classify the crossing behaviour of the pedestrians. We evaluate the performance of the proposed technique on the popular Joint Attention in Autonomous Driving (JAAD) dataset and the newer, larger-scale Pedestrian Intention Estimation (PIE) dataset. Using data generalisation techniques, we show that the proposed technique outperforms the state-of-the-art techniques by up to 7%, reaching up to 94% accuracy while maintaining a comparable run-time of 6.1 ms.
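
To illustrate the pipeline described in the abstract, the sketch below shows how per-frame pose keypoints might be accumulated into a spatio-temporal sequence and classified with an LSTM. This is a minimal sketch under stated assumptions, not the authors' implementation: the keypoint count (18), observation window (30 frames), hidden size, binary crossing label, and the choice of PyTorch are all assumptions for illustration only.

```python
# Minimal sketch (not the authors' implementation): classify pedestrian
# crossing intent from a sequence of 2D pose keypoints with an LSTM.
# Assumed: 18 keypoints per frame, 30-frame observation window,
# binary crossing / not-crossing label, PyTorch as the framework.
import torch
import torch.nn as nn


class IntentLSTM(nn.Module):
    def __init__(self, num_keypoints=18, hidden_size=128, num_classes=2):
        super().__init__()
        # Each frame is flattened to the (x, y) coordinates of every keypoint.
        self.lstm = nn.LSTM(input_size=num_keypoints * 2,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, keypoint_seq):
        # keypoint_seq: (batch, frames, num_keypoints * 2)
        _, (h_n, _) = self.lstm(keypoint_seq)
        # The final hidden state summarises the spatio-temporal sequence.
        return self.classifier(h_n[-1])


if __name__ == "__main__":
    model = IntentLSTM()
    # Dummy batch: 4 pedestrians, 30 frames, 18 keypoints with (x, y) each.
    poses = torch.randn(4, 30, 18 * 2)
    logits = model(poses)  # (4, 2) crossing / not-crossing scores
    print(logits.argmax(dim=1))
```

In practice the dummy tensor would be replaced by keypoint sequences produced by a pose estimator over consecutive frames, normalised to the pedestrian's bounding box so that the classifier is less sensitive to pedestrian scale.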

ORCID iDs

Ahmed, Sarfraz; Al Bazi, Ammar; Saha, Chitta; Rajbhandari, Sujan (ORCID: https://orcid.org/0000-0001-8742-118X); and Huda, M. Nazmul.