
Topology-Aware Deep Reinforcement Learning for RIS Beamforming: A GNN-PPO and Risk-Sensitive Evaluation

Conference: 2025 Asian Conference on Communication and Networks

Start Time: 2025-12-29 14:45:00

Duration: 15min

Session: Track 1: Mobile computing, communications, 5G and beyond

Room: Conference Auditorium


Abstract

Reconfigurable intelligent surfaces (RIS) enable control of radio propagation via large arrays of passive reflecting elements. Optimizing RIS phase profiles for spectral efficiency is challenging due to high-dimensional continuous actions and non-convex channel coupling. We cast RIS beamforming as a sequential decision problem and evaluate four reinforcement-learning (RL) agents—A2C, Graph-Neural-Network Proximal Policy Optimization (GNN-PPO), Soft Actor–Critic (SAC), and Quantile-Regression PPO (QR-PPO)—in a realistic simulator with mobility, dual-slope log-distance path loss, shadowing, and Rician fading. Using a common protocol and PCA/GNN feature extraction, we compare agents on rate (mean and variability), tail risk via CVaR at 5%, mean SNR, and wall-clock cost. GNN-PPO attains the best mean rate, the lowest variability, the highest CVaR at 5% (strong tail performance), and the highest mean SNR. A2C is the compute-efficiency winner with the shortest total time, SAC provides a balanced compromise, while QR-PPO is cost-inefficient and underperforms in the tails under our configuration. We discuss design insights and directions for scalable, risk-aware RIS control.
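The tail-risk metric used above, CVaR at 5%, is the mean rate over the worst 5% of evaluation episodes, so higher is better for a rate. A minimal sketch of that convention (the function name and episode data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def cvar(samples, alpha=0.05):
    """Conditional Value at Risk: mean of the worst alpha-fraction
    of outcomes. For a rate metric, 'worst' means lowest, so we
    average the bottom tail; higher CVaR indicates better tails."""
    sorted_samples = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(sorted_samples))))
    return sorted_samples[:k].mean()

# Illustrative per-episode rates (bit/s/Hz), not results from the paper:
rates = np.arange(1, 101, dtype=float)
print(cvar(rates, alpha=0.05))  # mean of the 5 lowest rates → 3.0
```

A policy with a high mean rate but a low CVaR at 5% delivers good average throughput while still leaving some users with poor service, which is why the evaluation reports both.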

Speakers

Ioannou Iacovos
Assistant Professor
CYENS; European University of Cyprus

Details

Type
In-person
Model
OFFLINE
Language
EN
Timezone
UTC+8