Authors: P Dhivagar, Hindusthan College
Optimizing resource allocation in energy systems is critical for improving operational efficiency, minimizing waste, and providing reliable service. The growing complexity of these systems and rising demand require more effective and dynamic resource management methods. Existing techniques frequently depend on static optimization frameworks and fixed heuristics that cannot cope with real-time variability in supply and demand, leading to misallocated resources and inefficiency. To tackle these issues, we introduce Reinforcement Learning for Dynamic Resource Allocation (RL-DRA), an approach that adapts allocation decisions across system levels based on environment feedback. RL-DRA operates in real time, interacting with the system and its environment to learn an optimal allocation policy even in the presence of noise. The method was evaluated on smart-grid energy management and renewable energy system models in which supply and demand varied stochastically and unpredictably, approximating real-world conditions. The results suggest that RL-DRA improves efficiency, lowers operational costs, and enhances system reliability compared with other approaches. The study demonstrates how machine learning, particularly reinforcement learning, can be integrated into resource allocation problems in energy systems to build more robust infrastructure.
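The abstract does not include implementation details. Purely as an illustration of the idea it describes, the sketch below shows how a dynamic allocation policy could be learned from environment feedback with tabular Q-learning in a toy discretized supply/demand setting; the state and action design, reward weights, and all names are assumptions for illustration, not the authors' method.

# Minimal sketch, assuming a toy discretized smart-grid environment.
# Everything here (states, actions, reward shape) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

N_DEMAND_LEVELS = 5   # discretized demand level (part of the state)
N_SUPPLY_LEVELS = 5   # discretized renewable supply level (part of the state)
N_ACTIONS = 5         # fraction of reserve capacity to dispatch: 0, 0.25, ..., 1.0

# Tabular Q-function over (demand, supply) states and dispatch actions.
Q = np.zeros((N_DEMAND_LEVELS, N_SUPPLY_LEVELS, N_ACTIONS))

alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

def step(demand, supply, action):
    """Toy transition: reward penalizes unmet demand heavily and wasted dispatch lightly."""
    dispatch = action / (N_ACTIONS - 1)
    served = supply / (N_SUPPLY_LEVELS - 1) + dispatch
    need = demand / (N_DEMAND_LEVELS - 1)
    shortfall = max(need - served, 0.0)
    waste = max(served - need, 0.0)
    reward = -(10.0 * shortfall + 1.0 * waste)
    # Stochastic next state to mimic unpredictable supply/demand variability.
    return rng.integers(N_DEMAND_LEVELS), rng.integers(N_SUPPLY_LEVELS), reward

demand, supply = rng.integers(N_DEMAND_LEVELS), rng.integers(N_SUPPLY_LEVELS)
for _ in range(50_000):
    # Epsilon-greedy action selection over dispatch levels.
    if rng.random() < epsilon:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[demand, supply]))
    next_demand, next_supply, reward = step(demand, supply, action)
    # Standard Q-learning update driven by environment feedback.
    td_target = reward + gamma * np.max(Q[next_demand, next_supply])
    Q[demand, supply, action] += alpha * (td_target - Q[demand, supply, action])
    demand, supply = next_demand, next_supply

# Greedy policy: recommended dispatch fraction for each (demand, supply) state.
policy = np.argmax(Q, axis=2) / (N_ACTIONS - 1)
print(policy)

In this toy setting the learned policy dispatches more reserve capacity when demand is high relative to supply and less otherwise; a realistic deployment would replace the random transitions with an actual grid simulator or measurements.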
Keywords: Energy Systems Optimization, Resource Distribution, Machine Learning, Reinforcement Learning, Dynamic Resource Allocation.
Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)
Date of Publication: --
DOI: -
Publisher: IEEE