Deep learning and bidirectional optical flow based viewport predictions for 360° video coding
Adhuran, Jayasingam and Kulupana, Gosala and Fernando, Anil (2022) Deep learning and bidirectional optical flow based viewport predictions for 360° video coding. IEEE Access, 10. pp. 118380-118396. ISSN 2169-3536 (https://doi.org/10.1109/access.2022.3219861)
Abstract
The rapid development of virtual reality applications continues to demand better compression of 360° videos owing to the large volume of content. These videos are typically converted to 2-D formats using various projection techniques in order to benefit from coding tools designed for conventional 2-D video compression. Although the recently emerged video coding standard, Versatile Video Coding (VVC), introduces 360°-video-specific coding tools, it fails to prioritize the user-observed regions in 360° videos, represented by rectilinear images called viewports. This leads to the encoding of redundant regions in the video frames, escalating the bitrate cost of the videos. In response to this issue, this paper proposes a novel 360° video coding framework for VVC that exploits user-observed viewport information to alleviate pixel redundancy in 360° videos. To this end, bidirectional optical flow, Gaussian filtering and Spherical Convolutional Neural Networks (Spherical CNNs) are deployed to extract perceptual features and predict user-observed viewports. By fusing the predicted viewports onto the 2-D projected 360° video frames, a novel Regions-of-Interest (ROI) aware weightmap is developed, which is used to mask the source video and to introduce adaptive changes to the Lagrange and quantization parameters in VVC. Comprehensive experiments conducted in the context of VVC Test Model (VTM) 7.0 show that the proposed framework achieves an average bitrate saving of 5.85%, and up to 17.15%, at the same perceptual quality, measured using the Viewport Peak Signal-to-Noise Ratio (VPSNR).
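To make the weightmap idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: it fuses predicted viewport masks into a smooth ROI weightmap and maps the per-block weight to QP offsets and Lagrange multipliers. The function names, the Gaussian sigma, the linear weight-to-QP mapping and the offset range are assumptions introduced here for illustration only; the actual framework performs its adaptation inside the VTM 7.0 encoder.

```python
# Illustrative sketch only (hypothetical helper names); the paper's adaptation
# of Lagrange and quantization parameters is implemented inside VTM 7.0.
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_viewports(viewport_masks, sigma=16.0):
    """Fuse binary viewport masks (each H x W) into a smooth [0, 1] ROI weightmap."""
    fused = np.clip(np.sum(viewport_masks, axis=0), 0.0, 1.0)
    fused = gaussian_filter(fused, sigma=sigma)   # soften ROI boundaries
    return fused / max(fused.max(), 1e-6)

def ctu_qp_offsets(weightmap, ctu_size=128, base_qp=32, max_offset=4):
    """Map the mean ROI weight of each CTU to a QP: higher weight -> lower QP.
    The linear mapping and +/- max_offset range are assumptions for illustration."""
    h, w = weightmap.shape
    rows, cols = h // ctu_size, w // ctu_size
    qp_map = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = weightmap[r * ctu_size:(r + 1) * ctu_size,
                              c * ctu_size:(c + 1) * ctu_size]
            qp_map[r, c] = base_qp + int(round(max_offset * (1.0 - 2.0 * block.mean())))
    return qp_map

def qp_to_lambda(qp):
    """Conventional HEVC/VVC-style QP-to-Lagrange-multiplier relation (constant assumed)."""
    return 0.57 * 2.0 ** ((qp - 12) / 3.0)
```

As a usage example, one could build the weightmap from the Spherical-CNN viewport predictions for a frame, compute the per-CTU QP map, and feed the corresponding lambda values into the rate-distortion optimization; the exact fusion and masking steps used by the authors differ and are described in the paper itself.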
Item type: Article
ID code: 83354
Dates: Published 4 November 2022; Accepted 2 November 2022
Subjects: Science > Mathematics > Electronic computers. Computer science
Department: Faculty of Science > Computer and Information Sciences
Depositing user: Pure Administrator
Date deposited: 29 Nov 2022 13:54
Last modified: 11 Nov 2024 13:42
URI: https://strathprints.strath.ac.uk/id/eprint/83354