Adversarially Robust Sequence Models with Frequency-Domain Consistency Regularization
DOI: https://doi.org/10.71465/fair539
Keywords: Adversarial Robustness, Sequence Modeling, Frequency Domain, Consistency Regularization
Abstract
The deployment of sequence models in safety-critical applications, ranging from automated financial trading to clinical narrative analysis, is currently hindered by their susceptibility to adversarial perturbations. While adversarial training has emerged as a rigorous defense mechanism, it predominantly operates in the time or token domain, often failing to account for the spectral characteristics of the input data where adversarial noise tends to concentrate. In this paper, we introduce Frequency-Domain Consistency Regularization (FDCR), a novel architectural constraint that enforces semantic invariance across the spectral decomposition of latent representations. By leveraging the Discrete Fourier Transform (DFT) within the training loop, FDCR penalizes discrepancies between the high-frequency components of clean and perturbed sequences, effectively filtering out non-robust features that contribute to model fragility. We provide a theoretical analysis demonstrating that spectral regularization tightens the generalization bound for recurrent architectures under min-max perturbation constraints. Extensive experiments on standard benchmarks verify that FDCR significantly outperforms state-of-the-art adversarial defense methods in maintaining robust accuracy while mitigating the trade-off with clean performance.
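The core mechanism the abstract describes, penalizing discrepancies between the high-frequency spectral components of clean and perturbed latent sequences, can be illustrated with a minimal sketch. The function below is an assumption-laden illustration, not the paper's implementation: the function name `fdcr_penalty`, the `cutoff_ratio` parameter, and the choice of a mean squared spectral magnitude are all hypothetical stand-ins for whatever the authors actually use.

```python
import numpy as np

def fdcr_penalty(h_clean, h_adv, cutoff_ratio=0.5):
    """Hypothetical sketch of a frequency-domain consistency penalty.

    h_clean, h_adv: (seq_len, dim) latent representations of the clean
    and adversarially perturbed sequence. cutoff_ratio selects which
    upper fraction of the rFFT bins is treated as "high frequency".
    """
    # Real-valued DFT along the time axis of the latent sequence.
    F_clean = np.fft.rfft(h_clean, axis=0)
    F_adv = np.fft.rfft(h_adv, axis=0)

    # Restrict the penalty to the high-frequency bins, where the
    # paper argues adversarial noise tends to concentrate.
    n_bins = F_clean.shape[0]
    start = int(cutoff_ratio * n_bins)

    diff = F_clean[start:] - F_adv[start:]
    # Mean squared magnitude of the spectral discrepancy; this term
    # would be added to the standard adversarial training loss.
    return float(np.mean(np.abs(diff) ** 2))
```

The penalty is zero when the two latent sequences agree and grows with any high-frequency mismatch, which is the invariance the regularizer is meant to enforce.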
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.