Reinforcement Learning for Real-Time Energy Management in Electric Vehicles

Authors

  • Xin Yuan, School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
  • Yue Chen, School of Mechanical Engineering, Southeast University, Nanjing 210094, China

DOI:

https://doi.org/10.71465/fess276

Keywords:

Electric Vehicles, Reinforcement Learning, Energy Management, Battery Optimization, Deep Q-Learning, Real-Time Control, Regenerative Braking, Adaptive Systems, SOC Management, Smart Mobility

Abstract

The increasing penetration of electric vehicles (EVs) in modern transportation requires advanced energy management strategies to optimize power distribution, enhance battery longevity, and improve driving range. Traditional rule-based or model predictive control systems often struggle with dynamic and uncertain driving conditions, limiting their adaptability. This paper explores the application of reinforcement learning (RL), particularly deep reinforcement learning (DRL), as a data-driven and adaptive solution for real-time energy management in EVs. By formulating energy control as a sequential decision-making problem, RL agents learn optimal policies through interaction with the EV environment, adjusting strategies based on speed, terrain, state-of-charge (SOC), and driver behavior. We present a hybrid RL framework that integrates battery aging models, regenerative braking, and thermal constraints. Simulation results show that our approach significantly outperforms traditional baselines in terms of energy efficiency, charge preservation, and system responsiveness. The paper also discusses challenges in real-world deployment, including safety, explainability, and transferability of learned policies.
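To make the sequential decision-making formulation concrete, the sketch below shows a minimal deep Q-learning (DQN) loop for battery power management, in the spirit of the "Deep Q-Learning" keyword. It is an illustrative assumption, not the authors' framework: the toy environment, the state variables (speed, SOC, road grade, driver power demand), the five discrete power-split actions, the reward weights, and all hyperparameters are placeholders chosen for readability.

```python
# Minimal DQN sketch for EV energy management (illustrative assumptions only).
# State: (speed, SOC, road grade, driver power demand), all normalized.
# Actions: discrete battery power-split levels. Reward penalizes energy drawn
# from the battery and deviation of SOC from a nominal target.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

STATE_DIM = 4   # speed, SOC, road grade, driver power demand
N_ACTIONS = 5   # hypothetical discrete power-split levels

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def toy_env_step(state, action):
    """Very rough stand-in for an EV powertrain simulator (assumption)."""
    speed, soc, grade, demand = state
    power = action / (N_ACTIONS - 1)  # fraction of demand met by the battery
    # SOC drops with power drawn; downhill grade crudely mimics regeneration.
    soc = np.clip(soc - 0.01 * power * demand + 0.005 * max(-grade, 0.0), 0.0, 1.0)
    reward = -power * demand - 2.0 * abs(soc - 0.5)
    next_state = np.array(
        [np.random.rand(), soc, np.random.uniform(-1, 1), np.random.rand()],
        dtype=np.float32,
    )
    return next_state, reward

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon = 0.99, 0.1

state = np.array([0.3, 0.8, 0.0, 0.2], dtype=np.float32)
for step in range(2_000):
    # Epsilon-greedy action selection over the discrete power-split levels.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    next_state, reward = toy_env_step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.from_numpy(s), torch.from_numpy(s2)
        a = torch.from_numpy(a).long().unsqueeze(1)
        r = torch.from_numpy(r.astype(np.float32))
        q = q_net(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + gamma * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Periodically sync the target network, as in standard DQN.
    if step % 200 == 0:
        target_net.load_state_dict(q_net.state_dict())
```

In the full framework described in the abstract, the toy environment would be replaced by a powertrain simulator with battery aging, regenerative braking, and thermal constraints folded into the reward and state.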

Published

2025-06-13