Weakly supervised deep semantic segmentation using CNN and ELM with semantic candidate regions

Xu, Xinying and Li, Guiqing and Xie, Gang and Ren, Jinchang and Xie, Xinlin (2019) Weakly supervised deep semantic segmentation using CNN and ELM with semantic candidate regions. Complexity, 2019. 9180391. ISSN 1076-2787

Final Published Version (PDF, 3MB). License: Creative Commons Attribution 4.0.


    The task of semantic segmentation is to assign a semantic label to every pixel in an image. In fully supervised semantic segmentation, this is achieved by a segmentation model trained on pixel-level annotations; however, producing such annotations is expensive and time-consuming. To reduce this cost, the paper proposes an extreme learning machine (ELM) method trained on semantic candidate regions, which uses only image-level labels to recover pixel-level labels. The pixel mapping problem is cast as a candidate-region semantic inference problem. Specifically, each image is first segmented into a set of superpixels, which are then automatically merged into candidate regions according to the number of image-level labels. Semantic inference of the candidate regions is performed using a neighborhood rough set that models the relationships among semantic labels. Finally, an ELM is trained on the candidate regions with inferred labels and used to classify the test candidate regions. The method is evaluated on the MSRC and PASCAL VOC 2012 datasets, both widely used in semantic segmentation. Experimental results show that the proposed method outperforms several state-of-the-art approaches for deep semantic segmentation.
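    The final stage of the pipeline trains an extreme learning machine on the candidate regions with inferred labels. The abstract gives no implementation details, but the standard ELM recipe is a single hidden layer with random, untrained input weights and output weights solved in closed form via the Moore-Penrose pseudo-inverse. A minimal sketch of that recipe (feature extraction and region inference are assumed to happen upstream; `train_elm`/`predict_elm` are illustrative names, not from the paper):

    ```python
    import numpy as np

    def train_elm(X, Y, n_hidden=200, seed=0):
        """Fit an ELM: X is (n_samples, n_features), Y is one-hot (n_samples, n_classes)."""
        rng = np.random.default_rng(seed)
        # Hidden layer weights and biases are random and never trained.
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                 # hidden-layer activations
        beta = np.linalg.pinv(H) @ Y           # closed-form output weights
        return W, b, beta

    def predict_elm(X, W, b, beta):
        """Return class scores; argmax over axis 1 gives the predicted label."""
        return np.tanh(X @ W + b) @ beta
    ```

    Because the output weights come from a single least-squares solve rather than iterative back-propagation, training is fast, which is one commonly cited motivation for using an ELM as the region classifier.
    
    
    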

    ORCID iDs

    Xu, Xinying; Li, Guiqing; Xie, Gang; Ren, Jinchang (ORCID: https://orcid.org/0000-0001-6116-3194); Xie, Xinlin