A framework for breast cancer classification using Multi-DCNNs

Ragab, Dina A. and Attallah, Omneya and Sharkas, Maha and Ren, Jinchang and Marshall, Stephen (2021) A framework for breast cancer classification using Multi-DCNNs. Computers in Biology and Medicine, 131. 104245. ISSN 1879-0534

Ragab_etal_CBM_2021_A_framework_for_breast_cancer_classification.pdf
Accepted Author Manuscript
Restricted to Repository staff only until 29 January 2022.
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0


    Abstract

    Background: Deep learning (DL) is the fastest-growing field of machine learning (ML), and deep convolutional neural networks (DCNNs) are currently the main tools used for image analysis and classification. Several DCNN architectures exist, among them AlexNet, GoogleNet, and residual networks (ResNet).

    Method: This paper presents a new computer-aided diagnosis (CAD) system, based on feature extraction and classification using DL techniques, to help radiologists classify breast cancer lesions in mammograms. Four experiments determine the optimum approach. The first uses end-to-end fine-tuned pre-trained DCNNs. In the second, deep features are extracted from the DCNNs and fed to a support vector machine (SVM) classifier with different kernel functions. The third fuses the deep features to demonstrate that combining them enhances the accuracy of the SVM classifiers. Finally, the fourth introduces principal component analysis (PCA) to reduce the large feature vector produced by fusion and thereby decrease the computational cost. The experiments are performed on two datasets: (1) the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM) and (2) the mammographic image analysis society digital mammogram database (MIAS).

    Results and Conclusions: The accuracy achieved using deep feature fusion on both datasets proved the highest compared with state-of-the-art CAD systems. Conversely, applying PCA to the fused feature sets did not improve accuracy, but it did decrease the computational cost by reducing execution time.
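    The fusion-then-reduction pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrices here are random stand-ins for deep activations extracted from two pretrained DCNNs (e.g. AlexNet's ~4096-D fully connected layer and GoogleNet's ~1024-D pooling layer), the labels and all dimensions are invented for the example, and scikit-learn's PCA and SVC stand in for whatever tooling the paper used.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 200  # number of mammogram ROIs (illustrative)

    # Stand-ins for deep features from two pretrained DCNNs
    # (dimensions loosely modelled on AlexNet fc and GoogleNet pool layers).
    feat_net_a = rng.normal(size=(n, 4096))
    feat_net_b = rng.normal(size=(n, 1024))
    labels = rng.integers(0, 2, size=n)  # 0 = benign, 1 = malignant (synthetic)

    # Feature fusion: concatenate the deep feature vectors per image.
    fused = np.concatenate([feat_net_a, feat_net_b], axis=1)

    X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)

    # PCA shrinks the large fused vector to cut computational cost.
    pca = PCA(n_components=50).fit(X_tr)
    X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

    # SVM classifier; the paper compares several kernel functions.
    clf = SVC(kernel="rbf").fit(X_tr_p, y_tr)
    acc = clf.score(X_te_p, y_te)
    print(f"reduced dims: {X_tr_p.shape[1]}, accuracy: {acc:.2f}")
    ```

    With random features the accuracy is meaningless; the point is the data flow: two feature extractors, concatenation, optional PCA, then an SVM.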

    ORCID iDs

    Ragab, Dina A. (ORCID: https://orcid.org/0000-0001-6107-9099), Attallah, Omneya, Sharkas, Maha, Ren, Jinchang (ORCID: https://orcid.org/0000-0001-6116-3194) and Marshall, Stephen (ORCID: https://orcid.org/0000-0001-7079-5628)