
Transductive meta-learning with enhanced feature ensemble for few-shot semantic segmentation

Authors: Karimi A, Poullis C


Affiliations

1 Immersive and Creative Technologies Lab, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada.
2 Immersive and Creative Technologies Lab, Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada. charalambos@poullis.org.

Description

This paper addresses few-shot semantic segmentation and proposes a novel transductive end-to-end method that overcomes three key problems affecting performance. First, we present a novel ensemble of visual features learned from pretrained classification and semantic segmentation networks that share the same architecture. Our approach leverages the differing discriminative power of these networks to produce rich and diverse visual features that are more informative than those of a pretrained classification backbone alone, which is not optimized for dense pixel-wise prediction yet is used in most state-of-the-art methods. Second, the pretrained semantic segmentation network serves as a base-class extractor, which effectively mitigates the false positives that occur at inference time when base-class objects other than the object of interest appear in the query. Third, a two-step segmentation approach using transductive meta-learning is presented to handle episodes with poor similarity between the support and query images. The proposed transductive meta-learning method first learns the relationship between labeled and unlabeled data points by matching support foreground features to query features (intra-class similarity), and then applies this knowledge to predict on the unlabeled query image (intra-object similarity), simultaneously learning propagation and false-positive suppression. To evaluate our method, we performed experiments on benchmark datasets, and the results demonstrate significant improvement with only 2.98M trainable parameters. Specifically, using ResNet-101, we achieve state-of-the-art performance for both 1-shot and 5-shot Pascal-5^i, as well as for 1-shot and 5-shot COCO-20^i.
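
The sketch below illustrates the feature-ensemble idea described in the abstract: features from a classification-pretrained backbone and a segmentation-pretrained backbone of the same architecture are extracted and concatenated. It is a minimal illustration only, assuming torchvision's ImageNet ResNet-101 and DeepLabV3-ResNet-101 weights as stand-ins; the paper's actual networks, layers, fusion, and training setup may differ.

```python
# Hypothetical sketch of an ensemble of visual features from two pretrained
# ResNet-101 backbones (classification + segmentation). Not the authors' code.
import torch
import torch.nn as nn
from torchvision import models


class FeatureEnsemble(nn.Module):
    def __init__(self):
        super().__init__()
        # Classification-pretrained ResNet-101, truncated before pooling/classifier.
        cls_net = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
        self.cls_features = nn.Sequential(*list(cls_net.children())[:-2])
        # Segmentation-pretrained network with the same ResNet-101 architecture
        # (DeepLabV3 here is an assumption, used only as an example backbone).
        seg_net = models.segmentation.deeplabv3_resnet101(
            weights=models.segmentation.DeepLabV3_ResNet101_Weights.DEFAULT
        )
        self.seg_backbone = seg_net.backbone  # returns {'out': layer4 features}
        # Keep both backbones frozen; only a lightweight head would be trained.
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):
        f_cls = self.cls_features(x)         # [B, 2048, H/32, W/32]
        f_seg = self.seg_backbone(x)["out"]  # [B, 2048, H/8, W/8] (dilated)
        # Upsample the classification features to the segmentation resolution,
        # then concatenate along channels to form the ensemble representation.
        f_cls = nn.functional.interpolate(
            f_cls, size=f_seg.shape[-2:], mode="bilinear", align_corners=False
        )
        return torch.cat([f_cls, f_seg], dim=1)  # [B, 4096, H/8, W/8]


# Usage example:
# feats = FeatureEnsemble().eval()(torch.randn(1, 3, 473, 473))
```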


Links

PubMed: https://pubmed.ncbi.nlm.nih.gov/38369571/

DOI: 10.1038/s41598-024-54640-6