
Scaling Adversarial Training to Large Perturbation Bounds

Addepalli, S and Jain, S and Sriramanan, G and Venkatesh Babu, R (2022) Scaling Adversarial Training to Large Perturbation Bounds. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 23-27 October 2022, Tel Aviv, Israel, pp. 301-316.

Full text not available from this repository.
Official URL: https://doi.org/10.1007/978-3-031-20065-6_18

Abstract

The vulnerability of Deep Neural Networks to Adversarial Attacks has fuelled research towards building robust models. While most Adversarial Training algorithms aim at defending against attacks constrained within low-magnitude Lp-norm bounds, real-world adversaries are not limited by such constraints. In this work, we aim to achieve adversarial robustness within larger bounds, against perturbations that may be perceptible but do not change human (or Oracle) prediction. The presence of images that flip Oracle predictions alongside those that do not makes this a challenging setting for adversarial robustness. We discuss the ideal goals of an adversarial defense algorithm beyond perceptual limits, and further highlight the shortcomings of naively extending existing training algorithms to higher perturbation bounds. To overcome these shortcomings, we propose a novel defense, Oracle-Aligned Adversarial Training (OA-AT), which aligns the predictions of the network with those of an Oracle during adversarial training. The proposed approach achieves state-of-the-art performance at large epsilon bounds (such as an L-inf bound of 16/255 on CIFAR-10) while also outperforming existing defenses (AWP, TRADES, PGD-AT) at standard bounds (8/255).
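For illustration, the sketch below shows standard PGD-based adversarial training (PGD-AT, one of the baselines named above) run at a large L-inf bound such as eps = 16/255 in PyTorch. This is a minimal sketch of the baseline setting the abstract extends, not the authors' OA-AT algorithm; the model, loader, and optimizer names are assumed placeholders.

    # Minimal PGD adversarial training sketch (PyTorch).
    # Illustrates the large-epsilon setting (eps = 16/255); this is plain
    # PGD-AT, NOT the OA-AT algorithm proposed in the paper.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=16/255, alpha=2/255, steps=10):
        """Craft L-inf bounded adversarial examples via projected gradient descent."""
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()   # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project back into eps-ball
            x_adv = x_adv.clamp(0, 1)                      # keep valid image range
        return x_adv.detach()

    def train_epoch(model, loader, optimizer, eps=16/255):
        """One epoch of adversarial training on PGD examples."""
        model.train()
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, eps=eps)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()

At eps = 16/255 some adversarial examples may legitimately flip the Oracle (human) label, which is why, per the abstract, naive PGD-AT at this bound degrades and an Oracle-aligned training objective is needed.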

Item Type: Conference Paper
Publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Science and Business Media Deutschland GmbH
Additional Information: The copyright for this article belongs to the Authors.
Keywords: Deep neural networks; Network security; Lp-norm; Perturbation bounds; Real-world; Robust modeling; Scalings; State-of-the-art performance; Training algorithms; Forecasting
Department/Centre: Division of Interdisciplinary Sciences > Computational and Data Sciences
Date Deposited: 31 Jan 2023 06:35
Last Modified: 31 Jan 2023 06:35
URI: https://eprints.iisc.ac.in/id/eprint/79595
