MVMS-RCN: a dual-domain unified CT reconstruction with multi-sparse-view and multi-scale refinement-correction

Fan, Xiaohong and Chen, Ke and Yi, Huaming and Yang, Yin and Zhang, Jianping (2024) MVMS-RCN: a dual-domain unified CT reconstruction with multi-sparse-view and multi-scale refinement-correction. IEEE Transactions on Computational Imaging. ISSN 2333-9403 (In Press)

Accepted Author Manuscript
Restricted to Repository staff only until 1 January 2099.


Abstract

Computed Tomography (CT) is one of the most important diagnostic imaging techniques in clinical applications. Sparse-view CT imaging reduces the number of projection views to lower the radiation dose and alleviate the potential risk of radiation exposure. Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods: 1) do not fully use the projection data; 2) do not always link their architecture designs to a mathematical theory; 3) do not flexibly handle multi-sparse-view reconstruction tasks. This paper aims to use mathematical ideas to design optimal DL imaging algorithms for sparse-view CT reconstruction. We propose a novel dual-domain unified framework that offers a great deal of flexibility for multi-sparse-view CT reconstruction through a single model. This framework combines the theoretical advantages of model-based methods with the superior reconstruction performance of DL-based methods, resulting in the expected generalizability of DL. We propose a refinement module that utilizes unfolding in the projection domain to refine full-sparse-view projection errors, as well as an image-domain correction module that distills multi-scale geometric error corrections to reconstruct sparse-view CT. This provides us with a new way to explore the potential of projection information and a new perspective on designing network architectures. The multi-scale geometric correction module is end-to-end learnable, and our method can function as a plug-and-play reconstruction technique, adaptable to various applications. Extensive experiments demonstrate that our framework is superior to other existing state-of-the-art methods. Our source codes are available at https://github.com/fanxiaohong/MVMS-RCN.
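The dual-domain idea summarized in the abstract — alternating a projection-domain refinement step with an image-domain correction step inside an unrolled iteration — can be illustrated with a minimal sketch. This is not the MVMS-RCN implementation (which learns both modules end-to-end with CNNs; see the linked GitHub repository): here a random linear operator `A` stands in for the sparse-view Radon transform, the refinement module is replaced by a plain gradient step on the data-fidelity term, and the learned multi-scale correction is replaced by a fixed soft-threshold. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                                   # image unknowns, sparse-view measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in for the sparse-view projection operator
x_true = rng.standard_normal(n)
y = A @ x_true                                  # simulated sparse-view sinogram

def soft_threshold(v, t):
    # placeholder image-domain correction (a learned multi-scale module in the paper)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_recon(y, A, n_iters=50, thresh=1e-4):
    x = A.T @ y                                 # simple back-projection initialisation
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the operator norm
    for _ in range(n_iters):
        # projection-domain refinement: reduce the mismatch between A @ x and y
        x = x - step * A.T @ (A @ x - y)
        # image-domain correction on the refined estimate
        x = soft_threshold(x, thresh)
    return x

x_hat = unrolled_recon(y, A)
print(np.linalg.norm(A @ x_hat - y))            # data residual after unrolled iterations
```

Replacing the two hand-crafted steps with trainable networks, and sharing them across different view counts, is what gives the paper's framework its multi-sparse-view flexibility.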

ORCID iDs

Fan, Xiaohong; Chen, Ke (ORCID: https://orcid.org/0000-0002-6093-6623); Yi, Huaming; Yang, Yin; Zhang, Jianping