Adversarially Robust Sequence Models with Frequency-Domain Consistency Regularization

Authors

  • Tao Mao, Department of Computer Science, Cornell University, Ithaca, NY 14853, USA
  • Emily Wilson, Department of Computer Science, Cornell University, Ithaca, NY 14853, USA

DOI:

https://doi.org/10.71465/fair539

Keywords:

Adversarial Robustness, Sequence Modeling, Frequency Domain, Consistency Regularization

Abstract

The deployment of sequence models in safety-critical applications, ranging from automated financial trading to clinical narrative analysis, is currently hindered by their susceptibility to adversarial perturbations. While adversarial training has emerged as a principled defense, it predominantly operates in the time or token domain and often fails to account for the spectral characteristics of the input data, where adversarial noise tends to concentrate. In this paper, we introduce Frequency-Domain Consistency Regularization (FDCR), a novel regularization scheme that enforces semantic invariance across the spectral decomposition of latent representations. By applying the Discrete Fourier Transform (DFT) within the training loop, FDCR penalizes discrepancies between the high-frequency components of clean and perturbed sequences, effectively filtering out the non-robust features that contribute to model fragility. We provide a theoretical analysis showing that spectral regularization tightens the generalization bound for recurrent architectures under min-max perturbation constraints. Extensive experiments on standard benchmarks show that FDCR significantly outperforms state-of-the-art adversarial defenses in robust accuracy while mitigating the usual trade-off with clean performance.
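To make the mechanism concrete, the spectral consistency term described above can be sketched as follows. This is a minimal illustration only, assuming PyTorch-style latent tensors of shape (batch, time, dim); the function name fdcr_penalty, the cutoff_ratio hyperparameter, and the squared-magnitude distance are assumptions, since this page does not give the paper's exact formulation.

    # Hypothetical sketch of the FDCR penalty described in the abstract.
    # Names and the distance measure are assumptions, not the paper's code.
    import torch

    def fdcr_penalty(h_clean: torch.Tensor,
                     h_adv: torch.Tensor,
                     cutoff_ratio: float = 0.5) -> torch.Tensor:
        """Penalize high-frequency discrepancies between latent sequences.

        h_clean, h_adv: latent representations, shape (batch, time, dim).
        cutoff_ratio:   fraction of the spectrum treated as high-frequency
                        (a hypothetical hyperparameter for this sketch).
        """
        # DFT along the time axis; rfft keeps the non-redundant half-spectrum.
        F_clean = torch.fft.rfft(h_clean, dim=1)
        F_adv = torch.fft.rfft(h_adv, dim=1)

        # Keep only the upper frequency bins, where the abstract argues
        # adversarial noise tends to concentrate.
        n_bins = F_clean.shape[1]
        cutoff = int(n_bins * (1.0 - cutoff_ratio))
        hf_clean = F_clean[:, cutoff:, :]
        hf_adv = F_adv[:, cutoff:, :]

        # Consistency term: mean squared magnitude of the spectral gap
        # between clean and perturbed latents.
        return (hf_clean - hf_adv).abs().pow(2).mean()

In a training loop of this kind, the term would typically be weighted and added to the task loss, e.g. loss = task_loss + lam * fdcr_penalty(h_clean, h_adv), where h_adv comes from the inner maximization step of adversarial training.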


Published

2025-12-30