Raj, R and Devi, VS (2023) Towards Robustness of Few-Shot Text Classifiers. In: 2023 International Joint Conference on Neural Networks, IJCNN 2023, 18-23 June 2023, Gold Coast, Australia.
PDF: 2023-IJCNN_2023_2023.pdf (Published Version, 781kB) — restricted to registered users only.
Abstract
Few-shot learning algorithms are designed to perform well when annotated data is scarce. However, recent research shows that these algorithms are highly vulnerable to adversarial examples. Recent works have studied and improved the robustness of few-shot image classifiers, but little to no attention has been paid to the robustness of few-shot text classifiers. In this work, we highlight the vulnerability of few-shot text classifiers. We also propose an adversarial training algorithm for few-shot classifiers that yields models which perform well in the presence of adversarial examples. We apply this algorithm to existing state-of-the-art few-shot classifiers. Experimental results demonstrate that our algorithm resists adversarial attacks and performs better in the presence of adversarial examples. © 2023 IEEE.
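The paper's own algorithm is not reproduced on this page. As a rough illustration of the general idea of adversarially training a few-shot classifier, the following is a minimal, hypothetical Python/PyTorch sketch: an episodic, prototypical-network-style loop where query embeddings are additionally perturbed with an FGSM-style step in embedding space (a common surrogate for discrete text attacks). All class names, functions, and hyperparameters (`TextEncoder`, `proto_loss`, `eps`, `alpha`, etc.) are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: embedding-space adversarial training in an episodic
# few-shot setting. NOT the paper's algorithm; everything here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    """Toy bag-of-embeddings encoder standing in for a real text encoder."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        return self.proj(self.emb(token_ids).mean(dim=1))  # (batch, dim)

def proto_loss(support_emb, support_y, query_emb, query_y, n_way):
    """Prototypical-network loss: classify queries by distance to class prototypes."""
    protos = torch.stack([support_emb[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(query_emb, protos)  # (n_query, n_way)
    return F.cross_entropy(logits, query_y)

def adversarial_episode_loss(encoder, support_x, support_y, query_x, query_y,
                             n_way, eps=0.1, alpha=0.5):
    """Clean episode loss plus loss on FGSM-perturbed query embeddings."""
    support_emb = encoder(support_x)
    query_emb = encoder(query_x)
    clean = proto_loss(support_emb, support_y, query_emb, query_y, n_way)

    # FGSM-style perturbation of the (continuous) query embeddings.
    grad = torch.autograd.grad(clean, query_emb, retain_graph=True)[0]
    adv_query_emb = query_emb + eps * grad.sign()
    adv = proto_loss(support_emb.detach(), support_y, adv_query_emb, query_y, n_way)

    return (1 - alpha) * clean + alpha * adv

# Usage on a random 5-way 1-shot episode with 15 queries per class.
if __name__ == "__main__":
    torch.manual_seed(0)
    n_way, k_shot, n_query, seq_len = 5, 1, 15, 20
    encoder = TextEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    support_x = torch.randint(0, 1000, (n_way * k_shot, seq_len))
    support_y = torch.arange(n_way).repeat_interleave(k_shot)
    query_x = torch.randint(0, 1000, (n_way * n_query, seq_len))
    query_y = torch.arange(n_way).repeat_interleave(n_query)

    loss = adversarial_episode_loss(encoder, support_x, support_y,
                                    query_x, query_y, n_way)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"episode loss: {loss.item():.4f}")
```

Real text attacks operate on discrete tokens (e.g., word substitutions), so perturbing the embedding space during training is only one possible design choice for approximating them; the paper's actual training procedure may differ.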
| Item Type: | Conference Paper |
|---|---|
| Publication: | Proceedings of the International Joint Conference on Neural Networks |
| Publisher: | Institute of Electrical and Electronics Engineers Inc. |
| Additional Information: | The copyright for this article belongs to the Institute of Electrical and Electronics Engineers Inc. |
| Keywords: | Classification (of information); Computer vision; Learning systems; Text processing; Adversarial attack; Adversarial training; Few-shot learning; Image classifiers; Robust modeling; State of the art; Text classification; Text classifiers; Learning algorithms |
| Department/Centre: | Division of Electrical Sciences > Computer Science & Automation |
| Date Deposited: | 04 Nov 2023 04:05 |
| Last Modified: | 04 Nov 2023 04:05 |
| URI: | https://eprints.iisc.ac.in/id/eprint/83165 |