Autonomous CPU Resource Allocation in Cloud Environments Using Reinforcement Learning

Authors

  • Miguel Alvarez, University of Chile, Chile

DOI:

https://doi.org/10.71465/fair279

Keywords:

Cloud computing, CPU resource allocation, reinforcement learning, deep Q-learning, container orchestration, autoscaling, dynamic scheduling

Abstract

Efficient CPU resource allocation is essential for optimizing performance and cost in cloud environments, where workloads are dynamic and multi-tenant applications demand real-time adaptability. Traditional allocation strategies rely on static heuristics or rule-based scheduling, which often fail to scale or generalize under rapidly changing conditions. This paper proposes an autonomous CPU resource allocation framework based on reinforcement learning (RL), which dynamically learns optimal allocation policies by interacting with the cloud environment. We present a model-free deep reinforcement learning (DRL) agent capable of adjusting CPU shares across virtual machines (VMs) and containers based on workload patterns, performance feedback, and system constraints. Experimental results on both simulated and real cloud workloads demonstrate that the proposed method significantly outperforms baseline strategies in terms of utilization efficiency, task latency, and SLA compliance. The framework introduces a scalable, adaptive, and fully automated solution for CPU resource management in cloud computing.
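The abstract describes a model-free RL agent that adjusts CPU shares from workload and performance feedback. The paper's actual architecture is a deep Q-network; as a minimal illustrative sketch only, the following uses tabular Q-learning over discretized utilization states with a toy workload simulator. All names, state/action discretizations, and the reward shape are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

class CpuAllocAgent:
    """Illustrative tabular Q-learning agent for CPU share allocation.

    States: discretized CPU utilization buckets.
    Actions: 0 = decrease share, 1 = keep, 2 = increase share.
    (Hypothetical simplification of the paper's deep Q-learning agent.)
    """

    def __init__(self, n_states=10, n_actions=3,
                 alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = np.zeros((n_states, n_actions))   # Q-value table
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions
        self.rng = np.random.default_rng(seed)

    def act(self, s):
        # Epsilon-greedy exploration over the Q-table row for state s
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.n_actions))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning temporal-difference update
        td_target = r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.alpha * (td_target - self.q[s, a])


def reward(util):
    # Illustrative reward: peaks when utilization is near a 70% target,
    # penalizing both over-provisioning and SLA-threatening saturation
    return 1.0 - abs(util - 0.7)


def train(episodes=200, steps=50):
    """Toy training loop: share changes drive utilization toward/away
    from demand; the agent learns which adjustment pays off per state."""
    agent = CpuAllocAgent()
    rng = np.random.default_rng(1)
    for _ in range(episodes):
        share = 1.0                      # normalized CPU share
        for _ in range(steps):
            demand = rng.uniform(0.3, 0.9)       # synthetic workload demand
            util = min(demand / share, 1.0)      # utilization under current share
            s = min(int(util * 10), 9)           # discretize into 10 buckets
            a = agent.act(s)
            share = float(np.clip(share + (a - 1) * 0.1, 0.2, 2.0))
            util_next = min(demand / share, 1.0)
            s_next = min(int(util_next * 10), 9)
            agent.update(s, a, reward(util_next), s_next)
    return agent
```

Running `train()` yields a policy that tends to shrink shares when utilization is far below target and grow them as utilization approaches saturation; the deep-RL version in the paper replaces the table with a neural network to generalize across continuous, multi-VM state spaces.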

Published

2025-06-18