Automatic annotation of subsea pipelines using deep learning

Stamoulakatos, Anastasios and Cardona, Javier and McCaig, Chris and Murray, David and Filius, Hein and Atkinson, Robert and Bellekens, Xavier and Michie, Craig and Andonovic, Ivan and Lazaridis, Pavlos and Hamilton, Andrew and Hossain, Md. Moinul and Di Caterina, Gaetano and Tachtatzis, Christos (2020) Automatic annotation of subsea pipelines using deep learning. Sensors, 20 (3). 674. ISSN 1424-8220 (https://doi.org/10.3390/s20030674)

Final Published Version (PDF, 7MB). License: Creative Commons Attribution 4.0.

Abstract

Regulatory requirements for sub-sea oil and gas operators mandate the frequent inspection of pipeline assets to ensure that their degradation and damage are maintained at acceptable levels. The inspection process is usually sub-contracted to surveyors who utilise sub-sea Remotely Operated Vehicles (ROVs), launched from a surface vessel and piloted over the pipeline. ROVs capture data from various sensors/instruments, which are subsequently reviewed and interpreted by human operators to create a log of event annotations; a slow, labour-intensive and costly process. The paper presents an automatic image annotation framework that identifies/classifies key events of interest in the video footage, viz. exposure, burial, field joints, anodes and free spans. The reported methodology utilises transfer learning with a Deep Convolutional Neural Network (ResNet-50), fine-tuned on real-life, representative data from challenging sub-sea environments with low lighting conditions, sand agitation, sea-life and vegetation. The network outputs are configured to perform multi-label image classification for the critical events. The annotation performance varies between 95.1% and 99.7% in terms of accuracy and between 90.4% and 99.4% in terms of F1-score, depending on event type. The performance results are on a per-frame basis and corroborate the potential of the algorithm to serve as the foundation for an intelligent decision support framework that automates the annotation process. The solution can execute annotations in real-time and is significantly more cost-effective than human-only approaches.
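To illustrate the multi-label configuration described in the abstract, the sketch below builds an ImageNet-pretrained ResNet-50 in PyTorch and replaces its 1000-way classification head with one logit per event class, trained with an independent sigmoid per label. This is a minimal sketch under stated assumptions: the class names are taken from the abstract, while the input size, loss choice and everything else (data handling, fine-tuning schedule, hyperparameters) are illustrative and not the authors' exact implementation.

import torch
import torch.nn as nn
from torchvision import models

# Event classes listed in the abstract; ordering here is an assumption.
EVENT_CLASSES = ["exposure", "burial", "field_joint", "anode", "free_span"]

def build_multilabel_resnet50(num_labels: int = len(EVENT_CLASSES)) -> nn.Module:
    # Transfer learning: start from ImageNet-pretrained ResNet-50 weights.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    # Replace the 1000-way head with one logit per event, so several events
    # (e.g. exposure and a field joint) can be flagged in the same frame.
    model.fc = nn.Linear(model.fc.in_features, num_labels)
    return model

model = build_multilabel_resnet50()

# BCEWithLogitsLoss applies an independent sigmoid per output, the standard
# choice for multi-label (as opposed to mutually exclusive multi-class) tasks.
criterion = nn.BCEWithLogitsLoss()

# Dummy forward pass on a batch of 4 RGB frames (224x224 assumed).
frames = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 2, (4, len(EVENT_CLASSES))).float()
loss = criterion(model(frames), targets)

At inference time, each per-class sigmoid output would be thresholded independently, which is what allows the annotator to emit several event labels for a single video frame.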