Trustworthy API Traffic Management: Explainable RL for Anomaly Detection and Abuse Prevention

Authors

  • Emily Foster, Department of Computer Science, University of Saskatchewan, Saskatoon, Canada
  • Daniel Murphy, Department of Computer Science, University of Saskatchewan, Saskatoon, Canada

DOI:

https://doi.org/10.71465/fra273

Keywords:

API Traffic Management, Reinforcement Learning, Explainable AI, Anomaly Detection, Abuse Prevention, SHAP, Trustworthy AI

Abstract

As the digital economy increasingly relies on Application Programming Interfaces (APIs), ensuring the trustworthiness of API traffic management has become critical. Traditional rule-based systems often fail to adapt to complex and evolving patterns of API misuse, such as automated abuse, malicious bursts, and credential stuffing. This paper introduces an explainable reinforcement learning (RL) framework for real-time anomaly detection and proactive abuse prevention in API traffic. By integrating interpretability methods such as SHAP (SHapley Additive exPlanations) into the RL loop, the framework offers both adaptive performance and transparent decision-making. The proposed method is evaluated on simulated and real-world traffic data, demonstrating improved accuracy in detecting abnormal behaviors and reduced false positives compared to baseline models. The results suggest that explainable RL can effectively balance security and reliability while preserving developer trust and regulatory compliance.
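To make the abstract's core idea concrete, the sketch below shows one minimal way an RL policy over API traffic could be paired with Shapley-style explanations. Everything here is an assumption for illustration, not the authors' implementation: the binary feature names (`high_rate`, `high_error_ratio`, `bursty`), the reward function, and the use of a tabular contextual-bandit learner are all hypothetical, and the SHAP library is replaced by exact Shapley-value enumeration, which is tractable for this tiny feature set.

```python
import itertools
import random

random.seed(0)

ACTIONS = ["allow", "throttle", "block"]
# Hypothetical binary traffic features; the paper does not publish its feature set.
FEATURES = ["high_rate", "high_error_ratio", "bursty"]

def simulate_reward(state, action):
    """Toy reward signal: traffic is treated as abusive when >= 2 risk features fire."""
    abusive = sum(state) >= 2
    if action == "block":
        return 1.0 if abusive else -1.0
    if action == "throttle":
        return 0.5 if abusive else -0.2
    return -1.0 if abusive else 0.5  # allow

# Tabular Q-learning over the 8 binary states (contextual-bandit variant:
# each request is an independent episode, so there is no bootstrapping term).
Q = {(s, a): 0.0 for s in itertools.product((0, 1), repeat=3) for a in ACTIONS}
alpha = 0.1
for _ in range(20000):
    s = tuple(random.randint(0, 1) for _ in range(3))
    a = random.choice(ACTIONS)  # pure exploration during training
    Q[(s, a)] += alpha * (simulate_reward(s, a) - Q[(s, a)])

def policy(state):
    """Greedy action for a traffic state."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_value(subset, state, action, baseline):
    """Q-value with features outside `subset` reset to the baseline state."""
    masked = tuple(state[j] if j in subset else baseline[j] for j in range(len(state)))
    return Q[(masked, action)]

def shapley(state, action, baseline=(0, 0, 0)):
    """Exact Shapley attribution of Q(state, action): average each feature's
    marginal contribution over all feature orderings (feasible for 3 features)."""
    n = len(state)
    perms = list(itertools.permutations(range(n)))
    phi = [0.0] * n
    for perm in perms:
        present = set()
        for i in perm:
            before = q_value(present, state, action, baseline)
            present.add(i)
            phi[i] += (q_value(present, state, action, baseline) - before) / len(perms)
    return phi

state = (1, 1, 0)  # high rate and high error ratio, not bursty
action = policy(state)
print(action, dict(zip(FEATURES, shapley(state, action))))
```

By the efficiency property of Shapley values, the per-feature contributions sum to the difference between the Q-value of the observed state and that of the all-benign baseline, which is what lets an operator audit why a given request was throttled or blocked.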

Published

2025-06-13