Combining queueing theory with deep reinforcement learning for scheduling and power control

We propose a novel multicast queue for wireless content-centric networks serving applications such as Netflix and YouTube. We exploit the redundancy in request traffic by merging similar requests across users. We analyze this queue and show that it is always stable. It also has significantly better delay performance than the coded caching schemes in the literature. Moreover, since our scheme uses only a simple multicast queue and LRU caches at the users, it is easier to implement.
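As an illustrative sketch only (the class names and interfaces below are our own, not from the papers), the core idea of merging redundant requests into a single multicast transmission, combined with an LRU cache at each user, can be expressed as:

```python
from collections import OrderedDict


class MulticastQueue:
    """Toy multicast queue: concurrent requests for the same content
    are merged and served by one multicast transmission."""

    def __init__(self):
        # content_id -> set of users waiting for that content
        self.pending = OrderedDict()

    def request(self, user, content_id):
        if content_id in self.pending:
            # Redundant request: merge with the already-queued transmission.
            self.pending[content_id].add(user)
            return "merged"
        self.pending[content_id] = {user}
        return "enqueued"

    def serve(self):
        # One multicast transmission serves all merged requesters (FIFO order).
        if not self.pending:
            return None
        content_id, users = self.pending.popitem(last=False)
        return content_id, users


class LRUCache:
    """Least-recently-used cache kept at each user."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # mark as most recently used
            return True
        return False

    def put(self, key):
        if key in self.items:
            self.items.move_to_end(key)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        self.items[key] = True
```

A user first checks its LRU cache; only on a miss does it place a request, which the multicast queue merges with any identical outstanding request.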

Finally, we use deep reinforcement learning to obtain optimal power control and deep-learning-assisted scheduling, further improving performance. The deep learning algorithms proposed in our work scale to large networks and track performance even in nonstationary environments.
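To illustrate the reinforcement-learning control loop, here is a minimal tabular Q-learning stand-in for the deep power controller: the papers train a neural network over the true channel dynamics, whereas the channel model, reward, and power levels below are toy assumptions chosen only to show the structure of learning a power-control policy.

```python
import math
import random

POWERS = [0.0, 0.5, 1.0]   # discrete transmit power levels (assumed)
STATES = [0, 1, 2]         # quantized channel quality (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table in place of the deep network's value estimates.
Q = {(s, a): 0.0 for s in STATES for a in range(len(POWERS))}


def reward(state, action):
    # Toy trade-off: throughput grows with channel quality and power,
    # minus a linear cost for the power spent.
    return math.log1p(state * POWERS[action]) - 0.2 * POWERS[action]


def step(state):
    # Toy channel evolution: clamped random walk over quality levels.
    return max(0, min(2, state + random.choice([-1, 0, 1])))


def choose(state):
    # Epsilon-greedy action selection.
    if random.random() < EPS:
        return random.randrange(len(POWERS))
    return max(range(len(POWERS)), key=lambda a: Q[(state, a)])


random.seed(0)
s = 1
for _ in range(20000):
    a = choose(s)
    r = reward(s, a)
    s2 = step(s)
    best_next = max(Q[(s2, b)] for b in range(len(POWERS)))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
    s = s2
```

After training, the greedy policy transmits at high power when the channel is good and saves power when it is bad; the deep version replaces the table with a neural network so the same loop scales to large state spaces.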

References:

M. Panju, R. Raghu, V. Sharma, V. Aggarwal and R. Ramachandran, “Queueing Theoretic Models for Uncoded and Coded Multicast Wireless Networks with Caches,” to appear in IEEE Transactions on Wireless Communications, 2021.

R. Raghu, P. Upadhyaya, M. Panju, V. Aggarwal and V. Sharma, “Deep Reinforcement Learning Based Power Control for Wireless Multicast Systems,” 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2019, pp. 1168-1175.


Faculty: Vinod Sharma, ECE
