Content-seam-preserving multi-alignment network for visual-sensor-based image stitching

Fan, Xiaoting and Sun, Long and Zhang, Zhong and Liu, Shuang and Durrani, Tariq S. (2023) Content-seam-preserving multi-alignment network for visual-sensor-based image stitching. Sensors, 23 (17). 7488. ISSN 1424-8220 (https://doi.org/10.3390/s23177488)

Final Published Version (PDF, 1MB). License: Creative Commons Attribution 4.0.

Abstract

As an important representation of scenes in virtual reality and augmented reality, image stitching aims to generate a panoramic image with a natural field of view by stitching together multiple images captured by different visual sensors. Existing deep-learning-based image-stitching methods rely on a single deep homography for image alignment, which can introduce unavoidable alignment distortions. To address this issue, we propose a content-seam-preserving multi-alignment network (CSPM-Net) for visual-sensor-based image stitching, which preserves image content consistency and avoids seam distortions simultaneously. First, a content-preserving deep homography estimation is designed to pre-align the input image pairs and reduce content inconsistency. Second, an edge-assisted mesh warping is conducted to further align the image pairs, where edge information is introduced to eliminate seam artifacts. Finally, to predict the final stitched image accurately, a content consistency loss is designed to preserve the geometric structure of overlapping regions between image pairs, and a seam smoothness loss is proposed to eliminate edge distortions at image boundaries. Experimental results demonstrate that the proposed image-stitching method provides favorable stitching results for visual-sensor-based images and outperforms other state-of-the-art methods.
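
The exact loss formulations are not reproduced in this record, so the following is only a minimal PyTorch sketch of the two loss ideas the abstract describes. All names here (content_consistency_loss, seam_smoothness_loss, overlap_mask, seam_mask) are hypothetical, and a Sobel filter stands in for the paper's edge information.

```python
import torch
import torch.nn.functional as F

def content_consistency_loss(warped_ref, warped_tgt, overlap_mask):
    # Penalize photometric differences inside the overlapping region so
    # the geometric structure of both warped views stays consistent.
    diff = torch.abs(warped_ref - warped_tgt) * overlap_mask
    return diff.sum() / overlap_mask.sum().clamp(min=1.0)

def sobel_edges(img):
    # Simple Sobel edge extractor, used here as a stand-in for the
    # edge information mentioned in the abstract.
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3).to(img.device)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def seam_smoothness_loss(stitched, seam_mask):
    # Penalize strong image gradients along the seam region to suppress
    # visible edge distortions at image boundaries.
    edges = sobel_edges(stitched)
    return (edges * seam_mask).sum() / seam_mask.sum().clamp(min=1.0)
```

In a training loop, the two terms would typically be combined as a weighted sum, e.g. loss = content_consistency_loss(ref, tgt, ov) + lam * seam_smoothness_loss(out, seam), with the weight lam tuned on a validation set; the paper's actual weighting is not stated in this record.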