
Topology-Aware Deep Reinforcement Learning for RIS Beamforming: A GNN-PPO and Risk-Sensitive Evaluation

Speakers: Ioannou Iacovos

Track: Track 1: Mobile computing, communications, 5G and beyond


Abstract

Reconfigurable intelligent surfaces (RIS) enable control of radio propagation via large arrays of passive reflecting elements. Optimizing RIS phase profiles for spectral efficiency is challenging due to high-dimensional continuous action spaces and non-convex channel coupling. We cast RIS beamforming as a sequential decision problem and evaluate four reinforcement-learning (RL) agents: A2C, Graph-Neural-Network Proximal Policy Optimization (GNN-PPO), Soft Actor-Critic (SAC), and Quantile-Regression PPO (QR-PPO). Evaluation takes place in a realistic simulator with mobility, dual-slope log-distance path loss, shadowing, and Rician fading. Using a common protocol and PCA/GNN feature extraction, we compare agents on rate (mean and variability), tail risk via CVaR at 5%, mean SNR, and wall-clock cost. GNN-PPO attains the best mean rate, the lowest variability, the highest CVaR at 5% (strong tail performance), and the highest mean SNR. A2C is the compute-efficiency winner with the shortest total time, SAC provides a balanced compromise, while QR-PPO is cost-inefficient and underperforms in the tails under our configuration. We discuss design insights and directions for scalable, risk-aware RIS control.
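The CVaR at 5% used above summarizes the lower tail of the per-episode rate distribution: the mean of the worst 5% of observed rates. A minimal sketch of this metric (the function name and interface are illustrative, not taken from the paper):

```python
import numpy as np

def cvar_lower(samples, alpha=0.05):
    """Conditional Value-at-Risk of the lower tail:
    the mean of the worst alpha-fraction of samples.
    Higher values indicate stronger worst-case performance."""
    x = np.sort(np.asarray(samples, dtype=float))   # ascending: worst first
    k = max(1, int(np.ceil(alpha * x.size)))        # size of the tail set
    return x[:k].mean()
```

For example, over 100 episode rates, `cvar_lower(rates, 0.05)` averages the 5 lowest rates, so an agent that occasionally collapses to near-zero rate is penalized even if its mean rate is high.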

Speakers

Ioannou Iacovos
Assistant Professor
CYENS; European University of Cyprus

Details

Type
In-person
Model
OFFLINE
Language
EN
Timezone
UTC+8