Automatic annotation of subsea pipelines using deep learning

Stamoulakatos, Anastasios and Cardona, Javier and McCaig, Chris and Murray, David and Filius, Hein and Atkinson, Robert and Bellekens, Xavier and Michie, Craig and Andonovic, Ivan and Lazaridis, Pavlos and Hamilton, Andrew and Hossain, Md. Moinul and Di Caterina, Gaetano and Tachtatzis, Christos (2020) Automatic annotation of subsea pipelines using deep learning. Sensors, 20 (3). 674. ISSN 1424-8220 (https://doi.org/10.3390/s20030674)

Text: Stamoulakatos_etal_Sensors_2020_Automatic_annotation_of_subsea_pipelines_using_deep_learning.pdf (Final Published Version, 7MB)
License: Creative Commons Attribution 4.0

Abstract

Regulatory requirements for subsea oil and gas operators mandate the frequent inspection of pipeline assets to ensure that their degradation and damage are maintained at acceptable levels. The inspection process is usually sub-contracted to surveyors who utilise subsea Remotely Operated Vehicles (ROVs), launched from a surface vessel and piloted over the pipeline. ROVs capture data from various sensors/instruments, which are subsequently reviewed and interpreted by human operators to create a log of event annotations; a slow, labour-intensive and costly process. The paper presents an automatic image annotation framework that identifies/classifies key events of interest in the video footage, viz. exposure, burial, field joints, anodes and free spans. The reported methodology utilises transfer learning with a Deep Convolutional Neural Network (ResNet-50), fine-tuned on real-life, representative data from challenging subsea environments with low lighting conditions, sand agitation, sea-life and vegetation. The network outputs are configured to perform multi-label image classification for the critical events. The annotation performance varies between 95.1% and 99.7% in terms of accuracy and between 90.4% and 99.4% in terms of F1-score, depending on event type. The performance results are on a per-frame basis and corroborate the potential of the algorithm to be the foundation for an intelligent decision support framework that automates the annotation process. The solution can execute annotations in real time and is significantly more cost-effective than human-only approaches.
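For readers wanting to reproduce the general setup, the following is a minimal PyTorch sketch of the technique the abstract names: an ImageNet-pretrained ResNet-50 backbone with its classifier head replaced by a multi-label output over the five event classes. The backbone, head configuration and class list follow the abstract; the optimiser, learning rate, input size and decision threshold are illustrative assumptions, not the paper's reported configuration.

import torch
import torch.nn as nn
from torchvision import models

# The five event classes named in the abstract.
EVENTS = ["exposure", "burial", "field_joint", "anode", "free_span"]

# Transfer learning: start from an ImageNet-pretrained ResNet-50 backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the 1000-way ImageNet head with a 5-unit multi-label head.
model.fc = nn.Linear(model.fc.in_features, len(EVENTS))

# Multi-label classification: each class gets an independent sigmoid,
# so binary cross-entropy is used instead of softmax cross-entropy.
# Optimiser and learning rate here are assumptions for illustration.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative fine-tuning step on a dummy batch of video frames.
images = torch.randn(8, 3, 224, 224)                      # frames (assumed input size)
targets = torch.randint(0, 2, (8, len(EVENTS))).float()   # multi-hot event labels

logits = model(images)
loss = criterion(logits, targets)
loss.backward()
optimizer.step()

# At inference, threshold the per-class sigmoid scores to produce
# per-frame annotations; 0.5 is an assumed, not reported, threshold.
probs = torch.sigmoid(logits)
predicted = probs > 0.5

Because the sigmoid outputs are independent, a single frame can carry several annotations at once (e.g. an exposed span with a visible field joint), which matches the multi-label formulation described in the abstract.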

ORCID iDs

Stamoulakatos, Anastasios ORCID: https://orcid.org/0000-0002-8279-9973, Cardona, Javier ORCID: https://orcid.org/0000-0002-9284-1899, McCaig, Chris, Murray, David, Filius, Hein, Atkinson, Robert ORCID: https://orcid.org/0000-0002-6206-2229, Bellekens, Xavier ORCID: https://orcid.org/0000-0003-1849-5788, Michie, Craig ORCID: https://orcid.org/0000-0001-5132-4572, Andonovic, Ivan ORCID: https://orcid.org/0000-0001-9093-5245, Lazaridis, Pavlos, Hamilton, Andrew ORCID: https://orcid.org/0000-0002-8436-8325, Hossain, Md. Moinul, Di Caterina, Gaetano ORCID: https://orcid.org/0000-0002-7256-0897 and Tachtatzis, Christos ORCID: https://orcid.org/0000-0001-9150-6805