Improving Intrusion Detection Robustness Through Adversarial Training Methods
DOI:
https://doi.org/10.71465/fair423

Keywords:
Intrusion Detection Systems, Adversarial Training, Deep Neural Networks, Cybersecurity, Network Security, Robust Machine Learning, Recurrent Neural Networks

Abstract
Network Intrusion Detection Systems (NIDS) leveraging deep learning architectures have demonstrated exceptional performance in identifying cyber threats through automated feature learning and pattern recognition. However, recent investigations reveal critical vulnerabilities when these systems encounter adversarial attacks, where malicious actors introduce carefully crafted perturbations to evade detection mechanisms. This paper presents a comprehensive study of adversarial training methodologies specifically designed to enhance the robustness of deep neural network-based NIDS against sophisticated evasion techniques. We systematically investigate multiple adversarial training approaches, integrating both Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attack generation with deep learning architectures including fully-connected Deep Neural Networks (DNN) and Recurrent Neural Networks (RNN). Through extensive experimentation on benchmark intrusion detection datasets, our adversarially-trained models achieve detection accuracy exceeding 94 percent even under strong adversarial perturbations, while maintaining competitive performance on clean network traffic. The research demonstrates that incorporating adversarial examples during training fundamentally reshapes decision boundaries, enabling intrusion detection systems to maintain operational effectiveness in adversarial environments.
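To make the training approach described in the abstract concrete, the sketch below illustrates one way FGSM-generated adversarial examples can be mixed into a training step for a deep NIDS classifier. It is a minimal, illustrative example assuming a PyTorch model over normalized traffic feature vectors; the function names (fgsm_perturb, adversarial_training_step), the epsilon value, and the clean/adversarial loss weighting are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Assumes features were min-max normalized to [0, 1] during preprocessing.
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y,
                              epsilon=0.05, adv_weight=0.5):
    """One training step on a mix of clean and FGSM-perturbed traffic records."""
    model.train()
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
    loss = ((1.0 - adv_weight) * loss_fn(model(x), y)
            + adv_weight * loss_fn(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

A PGD variant would follow the same pattern, repeating the gradient-sign step several times with a smaller step size and projecting back into the epsilon-ball after each iteration.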
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.