Trustworthy API Traffic Management: Explainable RL for Anomaly Detection and Abuse Prevention
DOI: https://doi.org/10.71465/fra273

Keywords: API Traffic Management, Reinforcement Learning, Explainable AI, Anomaly Detection, Abuse Prevention, SHAP, Trustworthy AI

Abstract
As the digital economy increasingly relies on Application Programming Interfaces (APIs), ensuring the trustworthiness of API traffic management has become critical. Traditional rule-based systems often fail to adapt to complex and evolving patterns of API misuse, such as automated abuse, malicious bursts, and credential stuffing. This paper introduces an explainable reinforcement learning (RL) framework for real-time anomaly detection and proactive abuse prevention in API traffic. By integrating interpretability methods such as SHAP (SHapley Additive exPlanations) into the RL loop, the framework offers both adaptive performance and transparent decision-making. The proposed method is evaluated on simulated and real-world traffic data, demonstrating higher detection accuracy and fewer false positives than baseline models. The results suggest that explainable RL can effectively balance security and reliability while preserving developer trust and regulatory compliance.
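To make the SHAP-in-the-RL-loop idea concrete, the sketch below computes exact Shapley attributions for a toy anomaly scorer over per-client API traffic features. Everything here is illustrative and not from the paper: the feature names (`request_rate`, `error_ratio`, `ip_entropy`), the baseline values, and the linear `score` function standing in for the RL policy's value estimate are all assumptions; in practice one would use the `shap` library against the learned policy network.

```python
from itertools import combinations
from math import factorial

# Hypothetical per-client traffic features for one observation window.
FEATURES = ["request_rate", "error_ratio", "ip_entropy"]
BASELINE = {"request_rate": 10.0, "error_ratio": 0.01, "ip_entropy": 3.0}
WEIGHTS = {"request_rate": 0.08, "error_ratio": 5.0, "ip_entropy": -0.4}

def score(x):
    """Stand-in for the RL policy's abuse-risk estimate (linear for clarity)."""
    return sum(WEIGHTS[f] * x[f] for f in FEATURES)

def shapley(x):
    """Exact Shapley attributions: average each feature's marginal
    contribution over all coalitions, replacing absent features with
    their baseline values (the standard SHAP reference distribution)."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for coal in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: x[g] if (g in coal or g == f) else BASELINE[g]
                          for g in FEATURES}
                without = {g: x[g] if g in coal else BASELINE[g]
                           for g in FEATURES}
                total += weight * (score(with_f) - score(without))
        phi[f] = total
    return phi

# Example: a burst of failing requests from a low-diversity IP pool.
x = {"request_rate": 500.0, "error_ratio": 0.4, "ip_entropy": 0.5}
attributions = shapley(x)
```

The attributions sum to `score(x) - score(BASELINE)` (the SHAP efficiency property), so an operator reviewing a block decision can see exactly how much each traffic feature pushed the policy toward flagging the client.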
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.