Reinforcement Learning-Based Framework for Autonomous Optimization in Artificial Intelligence Systems
DOI: https://doi.org/10.71465/fair329

Keywords: Reinforcement Learning, Autonomous Optimization, Adaptive Control, Resource Allocation, Latency Optimization, OpenAI Gym, Hyperparameter Tuning

Abstract
Modern artificial intelligence (AI) systems often operate under dynamic conditions in which static configurations lead to suboptimal performance. This paper proposes a novel reinforcement learning (RL)-based framework for autonomous, real-time optimization of AI systems. The framework employs a deep RL agent to continuously adjust computational resource allocation, algorithm configurations, and hyperparameters in response to changing workloads and performance feedback. We implement the framework using the OpenAI Gym toolkit to simulate an AI system environment, focusing on minimizing latency and maximizing resource efficiency. The RL agent learns an adaptive policy (using Proximal Policy Optimization) that tunes system parameters to balance low processing delay against low resource cost. Experiments show that the proposed approach significantly reduces end-to-end latency while improving resource usage compared to static and heuristic baselines. The agent autonomously adapts to workload variations, achieving up to 40% latency reduction and higher resource efficiency. These results demonstrate the potential of reinforcement learning for self-optimizing AI systems, enabling real-time adaptive control and improved performance in complex, dynamic environments.
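The core loop the abstract describes, an agent that observes the current workload and resource allocation, adjusts the allocation, and is rewarded for balancing low latency against low resource cost, can be illustrated as a small Gym-style environment trained with PPO. The sketch below is not the paper's implementation: the environment dynamics, the `workload / allocation` latency proxy, the reward weights, and the use of the stable-baselines3 PPO implementation are all assumptions introduced for illustration.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # assumed available; any PPO implementation would do


class AISystemEnv(gym.Env):
    """Toy model of an AI system: latency falls as allocated resources rise,
    while the workload drifts over time to mimic dynamic conditions."""

    def __init__(self, latency_weight=1.0, cost_weight=0.5, horizon=200):
        super().__init__()
        self.latency_weight = latency_weight  # assumed reward weight on latency
        self.cost_weight = cost_weight        # assumed reward weight on resource cost
        self.horizon = horizon
        # Observation: [current workload, current resource allocation], both in [0, 1].
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)
        # Action: continuous adjustment of the allocation, bounded to small steps.
        self.action_space = spaces.Box(low=-0.1, high=0.1, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.workload = float(self.np_random.uniform(0.2, 0.8))
        self.allocation = 0.5
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        self.t += 1
        self.allocation = float(np.clip(self.allocation + action[0], 0.05, 1.0))
        # Random workload drift stands in for changing real-world demand.
        drift = self.np_random.uniform(-0.05, 0.05)
        self.workload = float(np.clip(self.workload + drift, 0.0, 1.0))
        latency = self.workload / self.allocation  # crude queueing-style proxy
        # Reward trades off processing delay against resource cost.
        reward = -(self.latency_weight * latency + self.cost_weight * self.allocation)
        truncated = self.t >= self.horizon
        return self._obs(), reward, False, truncated, {}

    def _obs(self):
        return np.array([self.workload, self.allocation], dtype=np.float32)


# Train an adaptive allocation policy with PPO on the toy environment.
env = AISystemEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)
```

In a full framework of the kind the abstract outlines, the latency proxy would be replaced by measured end-to-end latency from the running system, and the action space would be extended to cover algorithm configurations and hyperparameters in addition to resource allocation.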
License
Copyright (c) 2025 Xing Chang (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.