GPU acceleration of an iterative scheme for gas-kinetic model equations with memory reduction techniques

Zhu, Lianhua and Wang, Peng and Chen, Songze and Guo, Zhaoli and Zhang, Yonghao (2019) GPU acceleration of an iterative scheme for gas-kinetic model equations with memory reduction techniques. Computer Physics Communications, 245. 106861. ISSN 0010-4655

Accepted Author Manuscript: Zhu_etal_CPC_2019_GPU_acceleration_of_an_iterative_scheme_for_gas_kinetic_model_equations.pdf
Restricted to Repository staff only until 14 August 2020.
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0


    Abstract

    This paper presents a Graphics Processing Unit (GPU) acceleration of an iteration-based discrete velocity method (DVM) for gas-kinetic model equations. Unlike previous GPU parallelizations of explicit kinetic schemes, this work is based on a fast-converging iterative scheme. The memory reduction techniques previously proposed for the DVM are applied to GPU computing, enabling full three-dimensional (3D) solutions of kinetic model equations on contemporary GPUs, whose limited memory capacity would otherwise require terabytes of storage. The GPU algorithm is validated against direct simulation Monte Carlo (DSMC) simulations of the 3D lid-driven cavity flow and the supersonic rarefied gas flow past a cube, with up to 0.7 trillion phase-space grid points. Performance profiling on three GPU models shows that the two main kernel functions utilize 56% to 79% of the GPU computing and memory resources. The performance of the GPU algorithm is also compared with a typical parallel CPU implementation of the same algorithm using the Message Passing Interface (MPI): for the 3D lid-driven cavity flow, the GPU program achieves speedups of 1.2 to 2.8 on a K40 and 1.2 to 2.4 on a K80, relative to the MPI-parallelized CPU program running on 96 CPU cores.
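    To see why memory reduction is essential here, a back-of-the-envelope estimate is instructive. The only figure taken from the abstract is the ~0.7 trillion phase-space grid points of the largest case; the assumption that a naive solver stores one double-precision distribution-function value per point is mine, for illustration:

```python
# Rough storage estimate for a naive DVM solver that keeps the full
# distribution function in memory, at the largest case reported in the
# abstract (~0.7 trillion phase-space points). The 8 bytes/value
# (one float64 per point) is an illustrative assumption.

phase_space_points = 0.7e12      # physical-space grid x velocity-space grid
bytes_per_value = 8              # one double-precision value per point

naive_bytes = phase_space_points * bytes_per_value
naive_tib = naive_bytes / 1024**4    # convert bytes to tebibytes

print(f"Naive storage: {naive_tib:.1f} TiB")  # ≈ 5.1 TiB
```

    An estimate of this order is consistent with the abstract's claim that a full 3D solution "would need terabytes of memory" without memory reduction, far beyond the 12–24 GB available on the Tesla-class GPUs mentioned (K40, K80).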