
Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations

Mopuri, Konda Reddy and Ganeshan, Aditya and Babu, R Venkatesh (2019) Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 41 (10). pp. 2452-2465.

PDF: Iee_Tra_Pat_41-10_2452.pdf - Published Version (restricted to registered users)
Official URL: http://doi.org/10.1109/TPAMI.2018.2861800


Machine learning models are susceptible to adversarial perturbations: small changes to the input that can cause large changes in the output. It has also been demonstrated that there exist input-agnostic perturbations, called universal adversarial perturbations, which can change the inference of a target model on most data samples. However, existing methods for crafting universal perturbations are (i) task specific, (ii) require samples from the training data distribution, and (iii) perform complex optimizations. Additionally, because of this data dependence, the fooling ability of the crafted perturbations is proportional to the amount of available training data. In this paper, we present a novel, generalizable, and data-free approach for crafting universal adversarial perturbations. Independent of the underlying task, our objective achieves fooling by corrupting the extracted features at multiple layers. The proposed objective therefore generalizes to crafting image-agnostic perturbations across multiple vision tasks such as object recognition, semantic segmentation, and depth estimation. In the practical black-box attack scenario (when the attacker has access to neither the target model nor its training data), we show that our objective outperforms data-dependent objectives in fooling the learned models. Further, by exploiting simple priors related to the data distribution, our objective remarkably boosts the fooling ability of the crafted perturbations. The significant fooling rates achieved by our objective emphasize that current deep learning models are at increased risk, since our objective generalizes across multiple tasks without requiring training data for crafting the perturbations. To encourage reproducible research, we have released the code for our proposed algorithm.
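The feature-corruption idea in the abstract can be sketched as follows: with no data samples at all, one optimizes a perturbation alone so that it produces large activation magnitudes at several layers of the network. The sketch below is illustrative only, not the authors' released code; a toy two-layer ReLU network with random weights stands in for the target model, and all names and constants (`W1`, `W2`, `eps`, the step size) are assumptions.

```python
import numpy as np

# Toy stand-in for a trained model: two ReLU layers with fixed random
# weights (the real attack would use the target CNN's layers instead).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 16)) * 0.3
W2 = rng.standard_normal((16, 8)) * 0.3

def forward(delta):
    a1 = np.maximum(0.0, W1.T @ delta)  # layer-1 activations
    a2 = np.maximum(0.0, W2.T @ a1)     # layer-2 activations
    return a1, a2

def loss_and_grad(delta):
    """Data-free objective: maximize the product of per-layer activation
    norms, i.e. minimize -sum_i log ||a_i(delta)||_2. No data samples
    are involved; only the perturbation itself is fed to the network."""
    a1, a2 = forward(delta)
    loss = -(np.log(np.linalg.norm(a1) + 1e-8)
             + np.log(np.linalg.norm(a2) + 1e-8))
    # Backpropagate by hand through the two ReLU layers.
    g2 = -a2 / (np.linalg.norm(a2) ** 2 + 1e-8)  # dloss/da2
    g1 = -a1 / (np.linalg.norm(a1) ** 2 + 1e-8)  # direct layer-1 term
    g1 = g1 + W2 @ (g2 * (a2 > 0))               # term flowing back from layer 2
    return loss, W1 @ (g1 * (a1 > 0))            # loss and dloss/ddelta

eps = 0.5                                        # L_inf budget on the perturbation
delta = rng.uniform(-eps, eps, size=32)
losses = []
for _ in range(200):
    loss, grad = loss_and_grad(delta)
    losses.append(loss)
    delta = np.clip(delta - 0.05 * grad, -eps, eps)  # step + project onto the box
```

Because the objective never touches a training sample, the same loop applies unchanged whether the layers come from a classifier, a segmentation network, or a depth estimator, which is the source of the task-generalizability the abstract claims.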

Item Type: Journal Article
Additional Information: Copyright of this article belongs to the IEEE Computer Society
Keywords: Adversarial perturbations; fooling CNNs; stability of neural networks; perturbations; universal; generalizable attacks; attacks on ML systems; data-free objectives; adversarial noise
Department/Centre: Division of Interdisciplinary Sciences > Computational and Data Sciences
Date Deposited: 16 Jan 2020 06:27
Last Modified: 16 Jan 2020 06:27
URI: http://eprints.iisc.ac.in/id/eprint/63847
