
DQN on EE Maximization in RF-Powered D2D Communications through Spectrum Prediction

Publisher: IEEE

Authors: Avik Banerjee (RV College of Engineering); Santi P. Maity (Indian Institute of Engineering Science and Technology, Shibpur); Iacovos I. Ioannou (University of Cyprus); N. Prabagarane (SSN); Vasos Vassiliou (University of Cyprus)


Abstract:

Device-to-device (D2D) communications face spectrum scarcity and limited power, which necessitate an energy-efficient system design that also meets target data-rate requirements. To address these issues, a reinforcement learning (RL) scheme based on deep Q-networks (DQN) is proposed within a cognitive radio (CR) framework for primary user (PU) spectrum prediction (SP). This approach enables opportunistic data transmission and radio-frequency (RF) energy harvesting (EH) for self-powering the transmitting device. The RL algorithm maximizes energy efficiency (EE) subject to constraints on the target data transmission rate, the energy harvesting requirement, and the interference threshold permissible at the PU receiver, which protects the PU in the event of an erroneous prediction. A comprehensive set of simulations evaluates the proposed method in terms of spectrum prediction accuracy, normalized energy efficiency, and residual energy. The results demonstrate a 35% gain in EE, a 24% reduction in data collisions, and a 38% improvement in residual energy compared with existing techniques.
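As a rough illustration of the kind of DQN-based decision loop the abstract describes, the sketch below trains a minimal one-hidden-layer Q-network (NumPy only) whose agent chooses between transmitting, harvesting RF energy, and idling based on a quantized predicted PU state. The environment dynamics, state space, reward values, and all hyperparameters here are illustrative assumptions for the sketch, not the paper's actual system model.

```python
import numpy as np

# Toy setting (assumed): each slot, the D2D transmitter observes a quantized
# predicted PU state and picks an action; the reward is a crude proxy for
# energy efficiency, with a penalty for colliding with an active PU.
rng = np.random.default_rng(0)

N_STATES = 4      # quantized predicted-PU / battery states (even = PU idle)
N_ACTIONS = 3     # 0 = transmit, 1 = harvest RF energy, 2 = stay idle
HIDDEN = 16

# One-hidden-layer Q-network: a minimal stand-in for a full DQN.
W1 = rng.normal(0, 0.1, (N_STATES, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(state):
    """Return hidden activations, one-hot input, and Q-values for a state."""
    x = np.zeros(N_STATES)
    x[state] = 1.0
    h = np.maximum(0.0, x @ W1)   # ReLU hidden layer
    return h, x, h @ W2

def step(state, action):
    """Assumed toy dynamics: EE-proxy reward plus a PU-collision penalty."""
    pu_busy = state % 2 == 1      # odd states: PU predicted busy
    if action == 0:               # transmit: good if idle, collision if busy
        reward = -1.0 if pu_busy else 1.0
    elif action == 1:             # harvest: more RF energy when PU is active
        reward = 0.5 if pu_busy else 0.1
    else:                         # idle: no gain, no cost
        reward = 0.0
    return reward, int(rng.integers(N_STATES))

GAMMA, LR, EPS = 0.9, 0.05, 0.1
state = 0
for _ in range(5000):
    h, x, q = q_values(state)
    # Epsilon-greedy exploration over the Q-network's outputs.
    action = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(q))
    reward, nxt = step(state, action)
    target = reward + GAMMA * np.max(q_values(nxt)[2])
    td = target - q[action]
    # One gradient step on the squared TD error for the chosen action.
    w2a = W2[:, action].copy()
    W2[:, action] += LR * td * h
    W1 += LR * td * np.outer(x, w2a * (h > 0))
    state = nxt
```

After training, the greedy policy should prefer transmitting when the PU is predicted idle and harvesting when the PU is predicted busy, mirroring the opportunistic transmit-or-harvest trade-off described above.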

Keywords: Device-to-device communications, reinforcement learning, cognitive radio networks, energy harvesting

Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)

Date of Publication: --

DOI: -
