
A multi-level parallel solver for rarefied gas flows in porous media

Ho, Minh Tuan and Zhu, Lianhua and Wu, Lei and Wang, Peng and Guo, Zhaoli and Li, Zhi-Hui and Zhang, Yonghao (2019) A multi-level parallel solver for rarefied gas flows in porous media. Computer Physics Communications, 234. pp. 14-25. ISSN 0010-4655

Text: Ho_etal_CPC_2018_A_multi_level_parallel_solver_for_rarefied_gas_flows_in_porous_media.pdf (Final Published Version, 1MB). License: Creative Commons Attribution 4.0.

Abstract

A high-performance gas kinetic solver using multi-level parallelization is developed to enable pore-scale simulations of rarefied flows in porous media. The Bhatnagar–Gross–Krook (BGK) model equation is solved by the discrete velocity method with an iterative scheme. The multi-level MPI/OpenMP parallelization is implemented to use computational resources efficiently, allowing, for the first time, direct simulation of rarefied gas flows in porous media based on digital rock images. A detailed analysis of the multi-level parallel approach confirms that it outperforms the commonly used pure-MPI approach for an iterative scheme. With high communication efficiency and appropriate load balancing among CPU processes, a parallel efficiency of 94% is achieved on 1536 cores in the 2D simulations, and 81% on 12288 cores in the 3D simulations. Although spatial domain decomposition does not affect the simulation results, an additional benefit of the hybrid approach is that the number of subdomains can be kept small, avoiding deterioration of the convergence rate of the iterative scheme. This multi-level parallel approach can be readily extended to solve other Boltzmann model equations.
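
To make the two-level parallel pattern concrete, here is a minimal C sketch of hybrid MPI/OpenMP as the abstract describes it: MPI decomposes the spatial domain into a small number of subdomains, while OpenMP threads parallelize the work within each subdomain (here, the loop over discrete velocities). All array sizes, variable names, and the simplified 1D BGK relaxation step are illustrative assumptions, not the authors' implementation.

```c
/* Hybrid MPI/OpenMP sketch: MPI ranks own spatial subdomains,
 * OpenMP threads share the per-subdomain velocity-space work.
 * Compile with e.g.: mpicc -fopenmp hybrid_bgk.c -o hybrid_bgk */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NX 256   /* spatial cells per MPI subdomain (assumed) */
#define NV 64    /* discrete velocity points (assumed) */

static double f[NX][NV];    /* distribution function */
static double feq[NX][NV];  /* equilibrium distribution */

int main(int argc, char **argv)
{
    int provided, rank, nprocs;

    /* Request threaded MPI: each rank hosts an OpenMP team. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const double tau = 0.1;  /* relaxation time (assumed value) */

    for (int iter = 0; iter < 100; ++iter) {
        /* Second level of parallelism: OpenMP over cells and
         * discrete velocities, so the number of MPI subdomains
         * can stay small, which helps preserve the convergence
         * rate of the iterative scheme. */
        #pragma omp parallel for collapse(2)
        for (int i = 0; i < NX; ++i)
            for (int v = 0; v < NV; ++v)
                f[i][v] += (feq[i][v] - f[i][v]) / tau;  /* BGK relaxation */

        /* In a full solver, a halo exchange between neighbouring
         * subdomains (e.g. MPI_Sendrecv on boundary cells) and a
         * global convergence check (MPI_Allreduce) would go here. */
    }

    if (rank == 0)
        printf("ran with %d MPI ranks x %d OpenMP threads\n",
               nprocs, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

The design choice this sketch illustrates is the one the abstract highlights: because threads, not extra MPI ranks, absorb most of the parallelism, the spatial domain is split into fewer subdomains than cores, reducing communication and avoiding the convergence-rate penalty of a finely decomposed iterative scheme.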