
Teaching a GAN what not to learn

Asokan, S and Seelamantula, CS (2020) Teaching a GAN what not to learn. In: 34th Conference on Neural Information Processing Systems, NeurIPS 2020, 6-12 December 2020, Virtual, Online.

PDF: NeurIPS_2020.pdf - Published Version (restricted to registered users)
Official URL: https://doi.org/10.48550/arXiv.2010.15639

Abstract

Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution. Variants such as conditional GANs and auxiliary-classifier GANs (ACGANs) project GANs onto supervised and semi-supervised learning frameworks by providing labelled data and using multi-class discriminators. In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi, who said, "The art of knowing is knowing what to ignore." In the GAN framework, we not only provide the GAN with positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid; we call this the Rumi framework. This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize undesirable generated samples, and we show that this capability accelerates the learning process of the generator. We present a reformulation of the standard GAN (SGAN) and the least-squares GAN (LSGAN) within the Rumi setting. The advantage of the reformulation is demonstrated by means of experiments conducted on the MNIST, Fashion-MNIST, CelebA, and CIFAR-10 datasets. Finally, we consider an application of the proposed formulation to the important problem of learning an under-represented class in an unbalanced dataset. The Rumi approach results in substantially lower FID scores than the standard GAN frameworks while possessing better generalization capability.
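The abstract describes training the discriminator on both positive samples (to model) and negative samples (to avoid). As a rough illustration only, the following is a minimal PyTorch sketch of what such a Rumi-style SGAN objective could look like; the weighting coefficients (alpha_pos, alpha_neg), function names, and label assignments are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def rumi_d_loss(d_pos, d_neg, d_fake, alpha_pos=1.0, alpha_neg=1.0):
    # d_pos, d_neg, d_fake: discriminator outputs in (0, 1) for positive,
    # negative, and generated batches. Positive samples are pushed toward
    # the "real" label (1); negative and generated samples toward "fake" (0).
    return (
        alpha_pos * F.binary_cross_entropy(d_pos, torch.ones_like(d_pos))
        + alpha_neg * F.binary_cross_entropy(d_neg, torch.zeros_like(d_neg))
        + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    )

def rumi_g_loss(d_fake):
    # Non-saturating generator loss: move generated samples toward "real".
    return F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))

if __name__ == "__main__":
    # Toy check with random stand-ins for discriminator outputs in (0, 1).
    d_pos, d_neg, d_fake = (torch.rand(8, 1) for _ in range(3))
    print(rumi_d_loss(d_pos, d_neg, d_fake).item(), rumi_g_loss(d_fake).item())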

Item Type: Conference Paper
Publication: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Additional Information: The copyright for this article belongs to the Neural Information Processing Systems Foundation.
Keywords: Adversarial networks; Generalization capability; Generative models; Learning process; Least squares; Negative samples; Positive data; Under-represented classes; Semi-supervised learning
Department/Centre: Division of Electrical Sciences > Electrical Engineering
Division of Interdisciplinary Sciences > Robert Bosch Centre for Cyber Physical Systems
Date Deposited: 16 Feb 2023 10:51
Last Modified: 16 Feb 2023 10:51
URI: https://eprints.iisc.ac.in/id/eprint/79976
