Wireless Personal Communications | 2021

Dynamic Auto Reconfiguration of Operator Placement in Wireless Distributed Stream Processing Systems

 
 

Abstract


Devices generate data at significant speed and volume in real time. This data generation, together with the growth of fog and edge computing infrastructure, has driven notable development of the corresponding distributed stream processing systems (DSPS). A DSPS application has Quality of Service (QoS) constraints in terms of resource cost and time. The physical resources are distributed and heterogeneous, and the resulting resource-constrained scheduling problem has considerable implications for system performance and QoS violations. A static deployment of applications in fog or edge scenarios has to be monitored continuously for runtime issues, and corrective actions must be taken. In this paper, we propose adding an adaptation capability, based on reinforcement learning techniques, to the scheduler of an existing stream processing framework. This functionality enables the scheduler to make decisions on its own when the system model or knowledge of the environment is not known upfront; reinforcement learning methods adapt to the system even when a model of the system's states is unavailable. We consider applications whose workload cannot be characterized or predicted, so predictions of input load are not helpful for online scheduling. The Q-Learning based online scheduler learns to make dynamic scaling decisions at runtime when there is performance degradation. We validated the proposed approach with real-time and benchmark applications on a DSPS cluster, obtaining an average 6% reduction in response time and a 15% increase in throughput when the Q-Learning module is employed in the scheduler.
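To make the Q-Learning based scaling decision described above concrete, the following is a minimal sketch of such an agent. It is illustrative only and is not the scheduler presented in the paper: the state discretization, action set, reward shape, and all hyperparameters are assumptions made for this example.

```python
# Illustrative sketch (not the paper's implementation): a minimal Q-Learning
# agent that picks operator scaling actions from observed latency levels.
import random
from collections import defaultdict

ACTIONS = ["scale_out", "scale_in", "no_op"]   # assumed action set

class QLearningScaler:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)              # Q[(state, action)] -> value

    def state(self, latency_ms, target_ms=100):
        # Discretize observed latency relative to an assumed QoS target.
        ratio = latency_ms / target_ms
        return "low" if ratio < 0.8 else "ok" if ratio <= 1.2 else "high"

    def choose(self, s):
        # Epsilon-greedy exploration over the assumed action set.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(s, a)])

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-Learning update rule.
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])

# Toy control loop: reward penalizes QoS violation and resource cost.
if __name__ == "__main__":
    agent = QLearningScaler()
    latency, replicas = 150.0, 2
    for step in range(100):
        s = agent.state(latency)
        a = agent.choose(s)
        if a == "scale_out":
            replicas += 1
        elif a == "scale_in" and replicas > 1:
            replicas -= 1
        # Assumed toy environment: more replicas reduce latency.
        latency = max(20.0, 300.0 / replicas)
        reward = -(latency / 100.0) - 0.1 * replicas
        agent.update(s, a, reward, agent.state(latency))
```

In practice such an agent would observe real metrics (response time, throughput, queue lengths) from the running DSPS and trigger operator replica changes through the framework's scheduler; the toy environment above only stands in for that feedback loop.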

Pages 1-26
DOI 10.1007/s11277-021-08264-y
Language English
Journal Wireless Personal Communications
