Q-Learning Driven Spectrum Prediction for Energy-Efficient RF-Powered D2D Communications
ID:56
Updated time:2025-12-29 12:35:48
Abstract
Device-to-device (D2D) communications face the twin challenges of spectrum scarcity and limited power, necessitating energy-efficient system designs that still meet target data rate requirements. To address these issues, a reinforcement learning (RL)-based Q-learning scheme is proposed within a cognitive radio (CR) framework for primary user (PU) spectrum prediction (SP). This approach enables opportunistic data transmission and radio frequency (RF) energy harvesting (EH) for sustainable device operation. The RL algorithm aims to maximize energy efficiency (EE) while satisfying constraints on the target data transmission rate, the energy harvesting requirement, and the interference threshold permissible at the PU receiver, protecting the PU in the event of a wrong prediction. A comprehensive set of simulations evaluates the proposed method in terms of spectrum prediction accuracy, normalized energy efficiency, and residual energy. The results demonstrate a 35% gain in EE, a 25% reduction in data collisions, and a 35% improvement in residual energy over previously reported schemes, with fewer trained parameters.
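To illustrate the core mechanism the abstract describes, the following is a minimal, self-contained sketch of tabular Q-learning applied to opportunistic channel selection. All details here are illustrative assumptions, not the paper's actual formulation: the state is taken to be the channel used in the previous slot, the reward is +1 for transmitting on an idle channel and -1 for colliding with the PU (in which case the device could instead harvest RF energy), and the channel-idle probabilities are invented for the toy PU model.

```python
import random

# Toy sketch (assumed parameters, not from the paper):
N_CHANNELS = 4                 # number of PU channels
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# State: channel used in the last slot; action: channel to try next.
Q = [[0.0] * N_CHANNELS for _ in range(N_CHANNELS)]

def channel_idle(ch, rng):
    # Stand-in for true PU activity; channel 2 is mostly idle here.
    return rng.random() < (0.8 if ch == 2 else 0.3)

def step(state, rng):
    # Epsilon-greedy action selection over the Q-table row.
    if rng.random() < EPS:
        action = rng.randrange(N_CHANNELS)
    else:
        action = max(range(N_CHANNELS), key=lambda a: Q[state][a])
    # Reward: +1 for an idle channel (successful transmission),
    # -1 for a collision with the PU (device harvests energy instead).
    reward = 1.0 if channel_idle(action, rng) else -1.0
    # Standard Q-learning update.
    Q[state][action] += ALPHA * (
        reward + GAMMA * max(Q[action]) - Q[state][action]
    )
    return action

rng = random.Random(0)
state = 0
for _ in range(5000):
    state = step(state, rng)

# Greedy channel choice after training, from the most-visited state.
best = max(range(N_CHANNELS), key=lambda a: Q[2][a])
```

After training, the greedy policy concentrates on the channel with the highest idle probability, which is the behavior the proposed scheme exploits for spectrum prediction; the paper's actual reward additionally encodes the EE objective and the rate, EH, and interference constraints.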
Keywords
Device-to-device, reinforcement learning, CRN, energy harvesting