ePrints@IISc

Selective mixing and voting network for semi-supervised domain generalization

Arfeen, A and Dutta, T and Biswas, S (2021) Selective mixing and voting network for semi-supervised domain generalization. In: 12th Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP 2021, 20-22 Dec 2021, Virtual, Online.

PDF: ICVGIP_2021.pdf - Published Version (683kB, restricted to registered users)
Official URL: https://doi.org/10.1145/3490035.3490303


Domain generalization (DG) addresses the problem of generalizing classification performance to any unseen domain by leveraging training samples from multiple source domains. Currently, the training process of state-of-the-art DG methods depends on a large amount of labeled data. This restricts the application of these models in many real-world scenarios, where collecting and annotating a large dataset is an expensive and difficult task. Thus, in this paper, we address the problem of Semi-supervised Domain Generalization (SSDG), where the training set contains only a small amount of labeled data alongside a large amount of unlabeled data from multiple domains. This setting is relatively unexplored in the literature and poses a considerable challenge to state-of-the-art DG models, since their performance degrades under such conditions. To address this scenario, we propose a novel Selective Mixing and Voting Network (SMV-Net), which effectively extracts useful knowledge from the unlabeled training data available to the model. First, we propose a mixing strategy on selected unlabeled samples, namely those whose predicted class labels the model is confident about, to achieve a domain-invariant representation of the data that generalizes effectively to any unseen domain. Second, we propose a voting module that not only improves the generalization capability of the classifier but can also comment on the prediction of test samples using references from a few labeled training examples, despite their domain gap. Finally, we introduce a test-time mixing strategy that re-examines the top class predictions and re-orders them if required, to further boost classification performance. Extensive experiments on two popular DG datasets demonstrate the usefulness of the proposed framework. © 2021 ACM.
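The selection-then-mix step described in the abstract can be illustrated with a minimal sketch: keep only unlabeled samples whose maximum predicted class probability clears a confidence threshold, then mix them with other samples via a convex (mixup-style) combination of inputs and soft labels. The function names, the threshold value, and the Beta-distributed mixing coefficient are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def select_confident(probs, threshold=0.9):
    """Return indices of samples whose maximum predicted class
    probability meets the threshold (hypothetical selection rule)."""
    return np.where(probs.max(axis=1) >= threshold)[0]

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup-style combination: a convex blend of two inputs and
    their (soft or one-hot) label vectors with coefficient lam."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # lam lies in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Usage: select confident unlabeled samples, then mix one with another.
probs = np.array([[0.95, 0.05], [0.60, 0.40], [0.92, 0.08]])
idx = select_confident(probs)          # indices of confident samples
x, y = mixup(np.zeros(3), np.array([1.0, 0.0]),
             np.ones(3), np.array([0.0, 1.0]))
```

Because the mixing coefficient sums to one across the two terms, the blended label vector remains a valid probability distribution, which is what lets the mixed samples act as soft training targets.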

Item Type: Conference Paper
Publication: ACM International Conference Proceeding Series
Publisher: Association for Computing Machinery
Additional Information: The copyright for this article belongs to Association for Computing Machinery
Keywords: Large dataset; Supervised learning; Classification performance; Domain generalization; Generalisation; Labeled data; Mix-up strategy; Multiple source; Selective mixing; Semi-supervised; State of the art; Training sample; Mixing
Department/Centre: Division of Electrical Sciences > Electrical Communication Engineering
Date Deposited: 20 Jan 2022 06:55
Last Modified: 20 Jan 2022 06:55
URI: http://eprints.iisc.ac.in/id/eprint/70998
