Computer Science > Computation and Language
Title: Adversarial Attacks and Defense for Conversation Entailment Task
(Submitted on 1 May 2024 (v1), last revised 2 May 2024 (this version, v2))
Abstract: As the deployment of NLP systems in critical applications grows, ensuring the robustness of large language models (LLMs) against adversarial attacks becomes increasingly important. LLMs excel at a wide range of NLP tasks but remain vulnerable to low-cost adversarial attacks. Focusing on conversation entailment, where a multi-turn dialogue serves as the premise for verifying a hypothesis, we fine-tune a transformer model to accurately discern the truthfulness of these hypotheses. Adversaries manipulate hypotheses through synonym swapping, aiming to deceive the model into making incorrect predictions. To counteract these attacks, we apply fine-tuning techniques and introduce an embedding perturbation loss that significantly bolsters the model's robustness. Our findings not only emphasize the importance of defending against adversarial attacks in NLP but also highlight the real-world implications, suggesting that enhancing model robustness is critical for reliable NLP applications.
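The synonym-swapping attack described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `SYNONYMS` table and the `max_swaps` budget are hypothetical stand-ins (the authors presumably draw candidates from a lexical resource such as WordNet), but the core idea is the same — replace words in the hypothesis with synonyms so the entailment label is preserved while the surface form changes, probing whether the model's prediction flips.

```python
import random

# Hypothetical synonym table for illustration only; a real attack would
# draw candidates from a lexical resource (assumption, e.g. WordNet).
SYNONYMS = {
    "movie": ["film"],
    "happy": ["glad", "pleased"],
    "buy": ["purchase"],
}

def synonym_swap(hypothesis, max_swaps=2, rng=random.Random(0)):
    """Return an adversarial variant of `hypothesis` by replacing up to
    `max_swaps` words with synonyms. Meaning (and thus the gold
    entailment label) is preserved; only the surface form changes."""
    tokens = hypothesis.split()
    # Indices of tokens that have a synonym available.
    swappable = [i for i, t in enumerate(tokens) if t.lower() in SYNONYMS]
    # Swap a random subset, up to the budget.
    for i in rng.sample(swappable, min(max_swaps, len(swappable))):
        tokens[i] = rng.choice(SYNONYMS[tokens[i].lower()])
    return " ".join(tokens)
```

An attacker would feed each perturbed hypothesis back to the entailment model and keep any variant that changes the predicted label; the paper's embedding perturbation defense then trains the model to be insensitive to exactly this kind of small, meaning-preserving change.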
Submission history
From: Zhenning Yang
[v1] Wed, 1 May 2024 02:49:18 GMT (8604kb,D)
[v2] Thu, 2 May 2024 03:37:08 GMT (8604kb,D)