
Q-Learning Driven Spectrum Prediction for Energy-Efficient RF-Powered D2D Communications

Publisher: IEEE

Authors: Banerjee Avik (RV College of Engineering); Maity Santi P. (Indian Institute of Engineering Science and Technology, Shibpur); Ioannou Iacovos I. (University of Cyprus); Prabagarane N. (SSN); Vassiliou Vasos (University of Cyprus)


Abstract:

Device-to-device (D2D) communications face the dual challenges of spectrum scarcity and limited power, necessitating an energy-efficient system design that also meets target data rate requirements. To address these issues, a reinforcement learning (RL)-based Q-learning scheme is proposed within a cognitive radio (CR) framework for primary user (PU) spectrum prediction (SP). This approach enables opportunistic data transmission and radio frequency (RF) energy harvesting (EH) for sustainable device operation. The RL algorithm aims to maximize energy efficiency (EE) while satisfying constraints on the target data transmission rate, the energy harvesting requirement, and the interference threshold permissible at the PU receiver, which protects the PU in the event of an incorrect prediction. A comprehensive set of simulations evaluates the proposed method in terms of spectrum prediction accuracy, normalized energy efficiency, and residual energy. The results demonstrate a 35% gain in EE, a 25% reduction in data collisions, and a 35% improvement in residual energy over previously reported works, achieved with fewer trained parameters.
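The abstract does not specify the paper's state, action, or reward design, so the following is only a minimal illustrative sketch of tabular Q-learning in this setting, under assumed definitions: the state is the predicted PU channel occupancy (0 = idle, 1 = busy), the D2D device chooses between transmitting and RF energy harvesting, and the hypothetical reward values favor transmission on idle channels while penalizing collisions with the PU.

```python
import random

# Hypothetical setup (not from the paper): two states (PU channel idle/busy),
# two actions for the secondary D2D device.
ACTIONS = ["transmit", "harvest"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration


def reward(state, action):
    """Assumed reward shaping: reward successful transmission on an idle
    channel, penalize collision with the PU, give a small gain for harvesting."""
    if action == "transmit":
        return 1.0 if state == 0 else -1.0  # collision penalty on a busy channel
    return 0.2  # modest benefit from RF energy harvesting


def train(episodes=5000, p_busy=0.4, seed=0):
    """Train a tabular Q-function with epsilon-greedy exploration.
    PU occupancy is drawn i.i.d. with probability p_busy each slot."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
    for _ in range(episodes):
        s = 1 if rng.random() < p_busy else 0
        if rng.random() < EPSILON:
            a = rng.choice(ACTIONS)  # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])  # exploit
        s_next = 1 if rng.random() < p_busy else 0
        best_next = max(q[(s_next, x)] for x in ACTIONS)
        # Standard Q-learning update
        q[(s, a)] += ALPHA * (reward(s, a) + GAMMA * best_next - q[(s, a)])
    return q


q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in (0, 1)}
print(policy)
```

Under these assumed rewards, the learned policy transmits when the channel is predicted idle and harvests energy when it is predicted busy, which mirrors the opportunistic transmit-or-harvest behavior the abstract describes.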

Keywords: Device-to-device, reinforcement learning, CRN, energy harvesting

Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)

Date of Publication: --

DOI: -

