Improving Intrusion Detection Robustness Through Adversarial Training Methods

Authors

  • Kenji Sato, Department of Computer Science, City University of Hong Kong, Hong Kong, China
  • Priya Nair, Department of Computer Science, City University of Hong Kong, Hong Kong, China

DOI:

https://doi.org/10.71465/fair423

Keywords:

Intrusion Detection Systems, Adversarial Training, Deep Neural Networks, Cybersecurity, Network Security, Robust Machine Learning, Recurrent Neural Networks

Abstract

Network Intrusion Detection Systems (NIDS) leveraging deep learning architectures have demonstrated exceptional performance in identifying cyber threats through automated feature learning and pattern recognition. However, recent investigations reveal critical vulnerabilities when these systems encounter adversarial attacks, in which malicious actors introduce carefully crafted perturbations to evade detection mechanisms. This paper presents a comprehensive study of adversarial training methodologies specifically designed to enhance the robustness of deep neural network-based NIDS against sophisticated evasion techniques. We systematically investigate multiple adversarial training approaches, integrating both Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attack generation with deep learning architectures, including fully connected Deep Neural Networks (DNN) and Recurrent Neural Networks (RNN). Through extensive experimentation on benchmark intrusion detection datasets, our adversarially trained models achieve detection accuracy exceeding 94 percent even under strong adversarial perturbations, while maintaining competitive performance on clean network traffic. The research demonstrates that incorporating adversarial examples during training fundamentally reshapes decision boundaries, enabling intrusion detection systems to maintain operational effectiveness in adversarial environments.
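The abstract names the core technique but the page carries no code, so the following is a minimal, illustrative PyTorch sketch of FGSM- and PGD-based adversarial training for a fully connected NIDS classifier. All identifiers here (NIDSClassifier, fgsm_perturb, pgd_perturb, epsilon, adv_weight, the layer sizes, and the assumption that features are scaled to [0, 1]) are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch only; names and hyperparameters are assumptions,
# not the authors' actual configuration.
import torch
import torch.nn as nn

class NIDSClassifier(nn.Module):
    """Small fully connected network over preprocessed flow features."""
    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model, x, y, loss_fn, epsilon):
    """FGSM: x_adv = x + epsilon * sign(grad_x L(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Assumes features were min-max scaled to [0, 1]; clamping keeps
    # perturbed inputs inside the valid feature range.
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def pgd_perturb(model, x, y, loss_fn, epsilon, alpha=0.01, steps=10):
    """PGD: iterated sign-gradient steps, projected back into the
    L-infinity ball of radius epsilon around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def train_step(model, optimizer, x, y, epsilon=0.05, adv_weight=0.5):
    """One adversarial-training step mixing clean and adversarial losses."""
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    x_adv = pgd_perturb(model, x, y, loss_fn, epsilon)
    optimizer.zero_grad()
    loss = (1 - adv_weight) * loss_fn(model(x), y) \
         + adv_weight * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

PGD is essentially iterated FGSM with a projection back into the epsilon-ball after each step, which is why the two generators above share the same sign-gradient core; training against both kinds of examples is what reshapes the decision boundary as the abstract describes.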


Published

2025-10-30