
Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods
  • Jaydip Sen,
  • Roopkatha Dey,
  • Aivy Debnath,
  • Sayak Kumar Dutta,
  • Kaustav Ghosh,
  • Arijit Mitra,
  • Arghya Roychowdhury
Corresponding Author: Jaydip Sen ([email protected])

Abstract

NLP models play a pivotal role in real-world applications such as machine translation, sentiment analysis, and question answering, supporting efficient communication and decision-making in domains ranging from healthcare to finance. However, text adversarial attacks pose a significant challenge to the robustness of these models: they deliberately manipulate input text to mislead a model's predictions while remaining interpretable to humans. Despite the remarkable performance achieved by state-of-the-art models such as BERT across many NLP tasks, these models remain vulnerable to adversarial perturbations of the input text. To study this vulnerability, this paper explores three distinct attack mechanisms against a BERT victim model: the BERT-on-BERT attack, the Probability Weighted Word Saliency (PWWS) attack, and the Fraud Bargain's Attack (FBA). Using the IMDB, AG News, and SST2 datasets, a thorough comparative analysis assesses the effectiveness of these attacks on the BERT classifier. The experiments show that PWWS is the most potent adversary, consistently outperforming the other methods across multiple evaluation scenarios with lower runtime, higher accuracy, and favorable semantic similarity scores, underscoring its efficacy in generating adversarial examples for text classification. The key contribution of this paper is the assessment of the relative performance of three prevalent state-of-the-art attack mechanisms.
Submitted to TechRxiv: 18 Apr 2024
Published in TechRxiv: 24 Apr 2024
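As a minimal sketch of the kind of experiment the abstract describes, the snippet below runs a PWWS attack against a BERT sentiment classifier using the open-source TextAttack library, which ships a PWWS recipe (PWWSRen2019). This is not the authors' code: the model checkpoint, the sample sentence, and its label are illustrative assumptions, not artifacts from the paper.

```python
# Hedged sketch: PWWS attack on a BERT classifier via TextAttack.
# Assumptions: the checkpoint name, the input text, and its label.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.attack_recipes import PWWSRen2019

# Load a BERT model fine-tuned for IMDB sentiment classification
# (assumed checkpoint from the HuggingFace hub).
checkpoint = "textattack/bert-base-uncased-imdb"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
victim = HuggingFaceModelWrapper(model, tokenizer)

# Build the PWWS recipe (Ren et al., 2019): WordNet synonym swaps ranked
# by word saliency weighted by the change in the victim's output probability.
attack = PWWSRen2019.build(victim)

# Attack a single labeled example (label 1 = positive sentiment).
text = "A heartfelt film with wonderful performances throughout."
result = attack.attack(text, 1)
print(result)  # shows the original and perturbed text side by side
```

A comparable loop over the IMDB, AG News, and SST2 test sets, recording runtime, attack success, and a semantic similarity score per example, would approximate the comparative evaluation the abstract reports.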