Dynamic Workflow Partitioning via Reinforcement Learning for Edge-Cloud Heterogeneous Systems
DOI: https://doi.org/10.71465/fair455

Keywords: Edge computing, cloud computing, workflow partitioning, reinforcement learning, deep Q-network, heterogeneous systems, task scheduling, resource optimization

Abstract
The proliferation of Internet of Things devices and edge computing infrastructure has created unprecedented opportunities for distributed workflow execution across heterogeneous edge-cloud environments. However, optimal workflow partitioning in such dynamic systems remains a significant challenge due to resource heterogeneity, network variability, and diverse application requirements. This paper proposes a novel Dynamic Workflow Partitioning framework that leverages deep reinforcement learning to intelligently distribute workflow tasks between edge nodes and cloud data centers. The framework employs a Deep Q-Network architecture enhanced with a Graph Neural Network encoder to capture workflow dependencies and system state representations. In a comprehensive evaluation on real-world workflow applications, including CyberShake, Epigenomics, Inspiral, Montage, and Sipht, the proposed approach outperforms traditional heuristic-based methods in minimizing execution time, reducing network overhead, and maintaining quality-of-service guarantees. Experimental results show that the proposed approach achieves up to a 32% reduction in average workflow completion time and a 41% improvement in resource utilization efficiency across various heterogeneous edge-cloud configurations.
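The article page does not include an implementation, so the following is a minimal, illustrative sketch (in Python/PyTorch) of the kind of DQN-based placement agent the abstract describes. All names (QNetwork, DQNPartitioner), the feature layout, and the two-action edge-vs-cloud formulation are assumptions made here for illustration; in particular, the Graph Neural Network encoder mentioned in the abstract is replaced by a plain multilayer perceptron over hand-picked task and system features.

```python
# Illustrative sketch only: a DQN agent deciding, per ready task, edge vs. cloud placement.
# Names, feature dimensions, and hyperparameters are assumptions, not the paper's implementation.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 6   # assumed features: task compute size, input/output data size, edge queue, edge CPU load, bandwidth
N_ACTIONS = 2   # 0 = execute on edge node, 1 = offload to cloud


class QNetwork(nn.Module):
    """MLP stand-in for the paper's GNN-based state encoder plus Q-value head."""
    def __init__(self, state_dim=STATE_DIM, n_actions=N_ACTIONS, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNPartitioner:
    """Epsilon-greedy DQN agent with a replay buffer and a periodically synced target network."""
    def __init__(self, gamma=0.99, lr=1e-3, eps=0.1, buffer_size=10_000, batch_size=64):
        self.q = QNetwork()
        self.target_q = QNetwork()
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)
        self.gamma, self.eps, self.batch_size = gamma, eps, batch_size

    def act(self, state):
        # state: list of STATE_DIM floats describing the ready task and current system conditions
        if random.random() < self.eps:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            q_values = self.q(torch.tensor(state, dtype=torch.float32))
        return int(q_values.argmax().item())

    def remember(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def learn(self):
        if len(self.buffer) < self.batch_size:
            return
        batch = random.sample(self.buffer, self.batch_size)
        s, a, r, s2, d = map(lambda xs: torch.tensor(xs, dtype=torch.float32), zip(*batch))
        # Q(s, a) for the actions actually taken
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        # Bootstrapped target from the frozen target network
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s2).max(dim=1).values * (1.0 - d)
        loss = F.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        self.target_q.load_state_dict(self.q.state_dict())
```

In a full training loop, each decision step would encode a ready task together with the current edge-cloud state, select a placement with act(), and store a reward such as the negative increment in workflow completion time (optionally penalized by data-transfer cost) via remember(), calling learn() each step and sync_target() periodically. This mirrors, at a sketch level, the completion-time and network-overhead objectives the abstract reports, without claiming to reproduce the paper's GNN-enhanced architecture.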
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.