Mimic and fool: A task-agnostic adversarial attack
Article Type
Research Article
Publication Title
IEEE Transactions on Neural Networks and Learning Systems
Abstract
At present, adversarial attacks are designed in a task-specific fashion. However, for downstream computer vision tasks such as image captioning and image segmentation, current deep-learning systems use an image classifier such as VGG16, ResNet50, or Inception-v3 as a feature extractor. Keeping this in mind, we propose Mimic and Fool (MaF), a task-agnostic adversarial attack. Given a feature extractor, the proposed attack finds an adversarial image that mimics the image feature of the original image. This ensures that the two images give the same (or similar) output regardless of the task. We randomly select 1000 MSCOCO validation images for experimentation. We perform experiments on two image captioning models (Show and Tell; Show, Attend and Tell) and one visual question answering (VQA) model, namely, the end-to-end neural module network (N2NMN). The proposed attack achieves a success rate of 74.0%, 81.0%, and 87.1% for Show and Tell; Show, Attend and Tell; and N2NMN, respectively. We also propose a slight modification to our attack to generate natural-looking adversarial images. In addition, we show the applicability of the proposed attack to an invertible architecture. Since MaF only requires information about the feature extractor of the model, it can be considered a gray-box attack.
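The core idea of the abstract, finding an adversarial image whose extracted features mimic those of the original, can be sketched as feature-space matching by gradient descent. The sketch below is illustrative only: it uses a toy random linear map as a stand-in for a real feature extractor (VGG16 etc.), a hypothetical `mimic_and_fool` helper, and hand-derived gradients; it is not the paper's actual optimization setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a CNN feature extractor: a fixed random linear map
# from "image" space to a lower-dimensional feature space. This is an
# assumption for illustration, not the architecture used in the paper.
D_IMG, D_FEAT = 64, 16
W = rng.normal(size=(D_FEAT, D_IMG))

def features(x):
    """Extract features of an 'image' x (here, just a linear map)."""
    return W @ x

def mimic_and_fool(x_orig, steps=1000, lr=1e-3):
    """Starting from an unrelated random image, descend on the
    feature mismatch ||f(x_adv) - f(x_orig)||^2 (MaF core idea)."""
    target = features(x_orig)
    x_adv = rng.normal(size=D_IMG)       # start far from x_orig
    for _ in range(steps):
        diff = features(x_adv) - target  # feature-space mismatch
        grad = 2.0 * (W.T @ diff)        # d/dx of ||W x - target||^2
        x_adv -= lr * grad
    return x_adv

x_orig = rng.normal(size=D_IMG)
x_adv = mimic_and_fool(x_orig)
feat_gap = np.linalg.norm(features(x_adv) - features(x_orig))
img_gap = np.linalg.norm(x_adv - x_orig)
print(f"feature gap: {feat_gap:.2e}, image gap: {img_gap:.2f}")
```

Because the feature map loses information (here, a 64-to-16 projection), the optimized image can match the original's features almost exactly while remaining far from it in pixel space, which is why any downstream task built on those features treats the two images alike.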
First Page
1801
Last Page
1808
DOI
10.1109/TNNLS.2020.2984972
Publication Date
4-1-2021
Recommended Citation
Chaturvedi, Akshay and Garain, Utpal, "Mimic and fool: A task-agnostic adversarial attack" (2021). Journal Articles. 2006.
https://digitalcommons.isical.ac.in/journal-articles/2006
Comments
Open Access, Green