ePrints@IISc

DAD: Data-free Adversarial Defense at Test Time

Nayak, GK and Rawal, R and Chakraborty, A (2022) DAD: Data-free Adversarial Defense at Test Time. In: 22nd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 4 - 8 January 2022, Waikoloa, pp. 3788-3797.

IEEE-CVF_WACV 2022_3788-3797_2022.pdf - Published Version

Official URL: https://doi.org/10.1109/WACV51458.2022.00384


Deep models are highly susceptible to adversarial attacks: carefully crafted, imperceptible perturbations that can fool the network and cause severe consequences when deployed. To counter them, the model requires training data for adversarial training or explicit regularization-based techniques. However, privacy has become an important concern, restricting access to only trained models and not the training data (e.g. biometric data). Moreover, data curation is expensive and companies may hold proprietary rights over it. To handle such situations, we propose the completely novel problem of 'test-time adversarial defense in absence of training data and even their statistics'. We solve it in two stages: a) detection and b) correction of adversarial samples. Our adversarial sample detection framework is initially trained on arbitrary data and is subsequently adapted to the unlabelled test data through unsupervised domain adaptation. We further correct the predictions on detected adversarial samples by transforming them in the Fourier domain and obtaining their low-frequency component at our proposed suitable radius for model prediction. We demonstrate the efficacy of our proposed technique via extensive experiments against several adversarial attacks and for different model architectures and datasets. For a non-robust ResNet-18 model pretrained on CIFAR-10, our detection method correctly identifies 91.42% of adversaries. We also significantly improve the adversarial accuracy from 0% to 37.37%, with a minimal drop of 0.02% in clean accuracy, against the state-of-the-art 'Auto Attack' without having to retrain the model.
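The correction stage described above keeps only the low-frequency part of an input's Fourier spectrum within a chosen radius. The snippet below is a minimal NumPy sketch of that generic low-pass filtering idea; the function name, the radius value, and the masking details are illustrative assumptions, not the paper's actual implementation or its proposed radius-selection procedure.

```python
import numpy as np

def low_frequency_component(image, radius):
    """Zero out Fourier coefficients farther than `radius` from the
    spectrum centre, then invert back to the spatial domain.

    Hypothetical sketch of low-frequency extraction; the paper's
    pipeline and radius choice are not reproduced here.
    """
    h, w = image.shape
    # Shift the zero-frequency component to the centre so a radial
    # mask around the midpoint selects low frequencies.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius  # circular low-pass mask
    filtered = spectrum * mask
    # Undo the shift, invert the FFT, and drop tiny imaginary residue.
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

For a multi-channel image, the same filter would be applied per channel; a larger radius retains more detail, while a smaller one suppresses more of the high-frequency content where adversarial noise tends to concentrate.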

Item Type: Conference Paper
Publication: Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Additional Information: The copyright for this article belongs to the Authors.
Keywords: Computer vision; Deep learning; Adversarial attack and defense methods; Adversarial learning; Biometric data; Data curation; Detection framework; Proprietary rights; Regularisation; Test time; Training data; Network security
Department/Centre: Division of Interdisciplinary Sciences > Computational and Data Sciences
Date Deposited: 14 Jul 2022 04:32
Last Modified: 14 Jul 2022 04:32
URI: https://eprints.iisc.ac.in/id/eprint/74307
