TY - JOUR
T1 - Resource Offload Consolidation Based on Deep-Reinforcement Learning Approach in Cyber-Physical Systems
AU - Mekala, M. S.
AU - Jolfaei, Alireza
AU - Srivastava, Gautam
AU - Zheng, Xi
AU - Anvari-Moghaddam, Amjad
AU - Viswanathan, P.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - In cyber-physical systems, it is advantageous to leverage cloud and edge resources to distribute the workload for processing and computing user data at the point of generation. Services offered by the cloud are not flexible enough to cope with variations in the size of the underlying data, which leads to increased latency, deadline violations, and higher cost. On the other hand, resolving the above-mentioned issues with resource-constrained edge devices is also challenging. In this work, a novel reinforcement learning algorithm, Capacity-Cost Ratio-Reinforcement Learning (CCR-RL), is proposed that considers both resource utilization and cost for the target cyber-physical systems. In CCR-RL, the task offloading decision is made based on the data arrival rate, the edge device computation power, and the underlying transmission capacity. Then, a deep learning model is created to allocate resources according to the underlying communication and computation rate. Moreover, new algorithms are proposed to regulate the allocation of communication and computation resources for the workload among edge devices and edge servers. The simulation results demonstrate that the proposed method achieves lower latency and reduced processing cost compared to state-of-the-art schemes.
AB - In cyber-physical systems, it is advantageous to leverage cloud and edge resources to distribute the workload for processing and computing user data at the point of generation. Services offered by the cloud are not flexible enough to cope with variations in the size of the underlying data, which leads to increased latency, deadline violations, and higher cost. On the other hand, resolving the above-mentioned issues with resource-constrained edge devices is also challenging. In this work, a novel reinforcement learning algorithm, Capacity-Cost Ratio-Reinforcement Learning (CCR-RL), is proposed that considers both resource utilization and cost for the target cyber-physical systems. In CCR-RL, the task offloading decision is made based on the data arrival rate, the edge device computation power, and the underlying transmission capacity. Then, a deep learning model is created to allocate resources according to the underlying communication and computation rate. Moreover, new algorithms are proposed to regulate the allocation of communication and computation resources for the workload among edge devices and edge servers. The simulation results demonstrate that the proposed method achieves lower latency and reduced processing cost compared to state-of-the-art schemes.
KW - Artificial intelligence
KW - deep-reinforcement learning
KW - edge computing
KW - game theory
KW - measurement systems
KW - resource provision
UR - http://www.scopus.com/inward/record.url?scp=85099093108&partnerID=8YFLogxK
U2 - 10.1109/TETCI.2020.3044082
DO - 10.1109/TETCI.2020.3044082
M3 - Article
AN - SCOPUS:85099093108
SN - 2471-285X
VL - 6
SP - 245
EP - 254
JO - IEEE Transactions on Emerging Topics in Computational Intelligence
JF - IEEE Transactions on Emerging Topics in Computational Intelligence
IS - 2
ER -