Diffeomorphic unsupervised deep learning model for mono- and multi-modality registration

Theljani, Anis and Chen, Ke (2020) Diffeomorphic unsupervised deep learning model for mono- and multi-modality registration. Journal of Algorithms and Computational Technology, 14. pp. 1-10. ISSN 1748-3026 (https://doi.org/10.1177/1748302620973528)

Text. Filename: Theljani-Chen-JACT-2020-Diffeomorphic-unsupervised-deep-learning-model.pdf
Final Published Version
License: Creative Commons Attribution 4.0
Download (1MB)

Abstract

Different from image segmentation, developing a deep learning network for image registration is less straightforward because training data cannot be prepared or supervised by humans unless they are trivial (e.g. pre-designed affine transforms). One approach for an unsupervised deep learning model is to self-train the deformation fields by a network based on a loss function with an image similarity metric and a regularisation term, just as with traditional variational methods. Such a function consists of a smoothing constraint on the derivatives and a constraint on the determinant of the transformation, in order to obtain a spatially smooth and plausible solution. Although any variational model may be used to work with a deep learning algorithm, the challenge lies in achieving robustness. The proposed algorithm is first trained based on a new and robust variational model and tested on synthetic and real mono-modal images. The results show how it deals with large-deformation registration problems and leads to a real-time solution with no folding. It is then generalised to multi-modal images. Experiments and comparisons with learning and non-learning models demonstrate that this approach can deliver good performance and simultaneously generate an accurate diffeomorphic transformation.
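As an illustration only, and not the paper's exact formulation, the sketch below shows what a loss of this general form might look like in PyTorch for the 2D mono-modal case: a sum-of-squared-differences similarity term, a first-derivative smoothness penalty on the displacement field, and a penalty on negative Jacobian determinants of the transformation x + u(x) to discourage folding. All names and weights (registration_loss, lambda_smooth, lambda_det) are hypothetical.

```python
import torch

def registration_loss(fixed, moving_warped, displacement,
                      lambda_smooth=1.0, lambda_det=1.0):
    """Sketch of an unsupervised registration loss (assumed form).

    fixed, moving_warped: (B, 1, H, W) image tensors
    displacement:         (B, 2, H, W) displacement field u = (u_x, u_y)
    """
    # Image similarity (mono-modal case): sum of squared differences.
    similarity = ((fixed - moving_warped) ** 2).mean()

    # Smoothness: penalise first derivatives of the displacement field
    # (simple forward finite differences).
    du_dx = displacement[:, :, :, 1:] - displacement[:, :, :, :-1]  # (B, 2, H, W-1)
    du_dy = displacement[:, :, 1:, :] - displacement[:, :, :-1, :]  # (B, 2, H-1, W)
    smoothness = (du_dx ** 2).mean() + (du_dy ** 2).mean()

    # Determinant constraint: the Jacobian of phi(x) = x + u(x) should stay
    # positive so the transformation has no folding.
    # J = [[1 + d(u_x)/dx, d(u_x)/dy], [d(u_y)/dx, 1 + d(u_y)/dy]]
    ux_x = du_dx[:, 0, :-1, :]   # crop so all partials share shape (B, H-1, W-1)
    ux_y = du_dy[:, 0, :, :-1]
    uy_x = du_dx[:, 1, :-1, :]
    uy_y = du_dy[:, 1, :, :-1]
    det_J = (1.0 + ux_x) * (1.0 + uy_y) - ux_y * uy_x
    # Penalise only negative determinants (folded regions).
    det_penalty = torch.relu(-det_J).mean()

    return similarity + lambda_smooth * smoothness + lambda_det * det_penalty
```

For multi-modal registration, the sum-of-squared-differences term would be replaced by a modality-robust similarity metric; the smoothness and determinant terms are unchanged.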

ORCID iDs

Theljani, Anis and Chen, Ke (ORCID: https://orcid.org/0000-0002-6093-6623)