High-performance reconstruction method combining total variation with a video denoiser for compressed ultrafast imaging

Pei, Chengquan and Li, David and Shen, Qian and Zhang, Shian and Qi, Dalong and Jin, Chengzhi and Dong, Le (2024) High-performance reconstruction method combining total variation with a video denoiser for compressed ultrafast imaging. Applied Optics, 63 (8). C32-C40. ISSN 1559-128X (https://doi.org/10.1364/AO.506058)

Text. Filename: Pei-etal-AO-2024-A-high-performance-reconstruction-method.pdf
Accepted Author Manuscript
Restricted to Repository staff only until 25 January 2025.
License: Strathprints license 1.0


Abstract

Compressed ultrafast photography (CUP) is a novel two-dimensional (2D) imaging technique for capturing ultrafast dynamic scenes. Effective image reconstruction is essential in CUP systems. However, existing reconstruction algorithms mostly rely on image priors and complex parameter spaces, so in general they are time-consuming and yield poor imaging quality, which limits their practical applications. In this paper, we propose a novel reconstruction algorithm, to the best of our knowledge, named plug-and-play fast deep video denoising net-total variation (PnP-TV-FastDVDnet), which exploits an image's spatial features and its correlation features in the temporal dimension. It therefore offers higher-quality images than previously reported methods. First, we built a forward mathematical model of the CUP system, and the closed-form solutions of the three suboptimization problems were derived within the plug-and-play framework. Secondly, we used an advanced neural-network-based video denoising algorithm named FastDVDnet to solve the denoising subproblem. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are improved on actual CUP data compared with traditional algorithms. On benchmark and real CUP datasets, the proposed method shows comparable visual results while reducing the running time by 96% over state-of-the-art algorithms.
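The plug-and-play scheme the abstract describes alternates a closed-form data-fidelity step against the CUP forward model with a learned video-denoising step. The sketch below is a minimal, generic illustration of that idea for a snapshot compressive imaging model y = Σ_t M_t ⊙ x_t; the `box_denoise` function is a crude stand-in for FastDVDnet, and all function names, the rank-1 (Sherman-Morrison) data step, and the parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_denoise(v):
    # Crude stand-in for a learned video denoiser (FastDVDnet in the paper):
    # average each frame with its temporal neighbours.
    out = v.copy()
    out[1:-1] = (v[:-2] + v[1:-1] + v[2:]) / 3.0
    return out

def pnp_reconstruct(y, masks, denoise=box_denoise, n_iters=20, rho=1.0):
    """Plug-and-play ADMM sketch for a snapshot compressive model
    y = sum_t masks[t] * x[t], where y is a single 2D measurement
    of a (T, H, W) video. Illustrative only."""
    T, H, W = masks.shape
    x = np.broadcast_to(y / max(T, 1), (T, H, W)).copy()  # crude init
    v = x.copy()
    u = np.zeros_like(x)
    mask_norm = (masks ** 2).sum(axis=0)  # per-pixel ||M||^2, shape (H, W)
    for _ in range(n_iters):
        # Data-fidelity step: closed form via Sherman-Morrison, since each
        # measurement pixel couples the T frames through a rank-1 operator.
        z = v - u
        residual = y - (masks * z).sum(axis=0)
        x = z + masks * (residual / (rho + mask_norm))[None]
        # Denoising step: plug in any video denoiser as the prior.
        v = denoise(x + u)
        # Dual update.
        u = u + x - v
    return v

# Usage: simulate a tiny coded measurement and reconstruct.
rng = np.random.default_rng(0)
video = rng.random((4, 16, 16))
masks = (rng.random((4, 16, 16)) > 0.5).astype(float)
y = (masks * video).sum(axis=0)
xhat = pnp_reconstruct(y, masks)
```

Swapping `box_denoise` for a stronger denoiser is the whole point of the plug-and-play design: the data step and the prior step are decoupled, so the denoiser can be replaced without re-deriving the optimization.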