A Graph Reinforcement Learning model that combines RL and Graph Neural Networks to solve Dynamic Economic Power Dispatch

antonoliv/grl-ded


Graph Reinforcement Learning for Improving Smart Grid Services

Graph Reinforcement Learning (GRL) is a topic that has attracted significant attention from academics in the last few years. By enabling Reinforcement Learning techniques to learn and optimize sequential decision-making processes in graph-based environments, GRL allows systems in network-oriented application domains to leverage graph topology features. With the advances of the late 2010s in Graph Neural Networks, which learn to extract efficient graph representations from a given scenario, more sophisticated GRL methods were proposed and the topic started to attract the curiosity of scholars. Although a great deal of work has been done in this area in recent years, research on these techniques is still considered to be at an early stage.

Furthermore, considering the current global challenges associated with sustainability and energy systems, there is an increasing need for advancements in energy-focused intelligent systems to modernize the current power grids. Today, renewable energy sources play a major role in reducing the reliance on fossil fuels, which changes the topology of energy distribution systems as consumers gain the capability to generate renewable power. With the improvements in Artificial Intelligence and Machine Learning, systems can adapt to the decentralization of energy production and efficiently manage the monitoring, distribution, and transmission of energy. The primary emphasis of this work lies on improving GRL algorithms, applied to the dynamic economic power dispatch problem as the main application domain, considering renewable energy sources and energy storage systems.

In this manner, this dissertation aims to advance the existing research on Graph Reinforcement Learning techniques by: (1) conducting a thorough review of the recent literature on proposed GRL approaches and Dynamic Economic Dispatch systems to gain an overall perspective of the state-of-the-art techniques and their limitations; (2) implementing and calibrating the two most prominent approaches, resulting in the GCN-SAC and GAT-SAC models; and (3) performing a systematic comparative empirical study of both proposed GRL approaches against the SAC algorithm on the Dynamic Economic Power Dispatch problem.
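The repository's own implementation is not reproduced here, but the core idea behind a model such as GCN-SAC is that a Graph Convolutional Network extracts node embeddings from the grid topology before a policy acts on them. The following is a minimal, self-contained sketch of one GCN layer (Kipf & Welling style) applied to a toy 3-bus grid; all names, shapes, and values are illustrative assumptions, not taken from this repository:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)                      # node degrees of A + I
    d_inv_sqrt = np.diag(deg ** -0.5)            # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalisation
    return np.maximum(0.0, a_norm @ feats @ weight)  # ReLU activation

# Toy 3-bus grid: lines between buses 0-1 and 1-2.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.random.default_rng(0).normal(size=(3, 4))   # 4 features per bus
weight = np.random.default_rng(1).normal(size=(4, 8))  # 8-dim embeddings
emb = gcn_layer(adj, feats, weight)
print(emb.shape)  # (3, 8): one 8-dim embedding per bus
```

In a GRL dispatch agent, embeddings like `emb` would be flattened or pooled and fed to the SAC actor and critic networks in place of the raw bus features.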

Results ended up favouring the state-of-the-art SAC algorithm over the proposed implementations: the proposed models' feature extraction abilities failed to overcome the sample efficiency and stability of SAC, which yielded better performance in both scenarios. However, the GRL approaches, particularly GCN-SAC, showed promising results in the larger scenario, suggesting good scalability to more complex environments.

Directory Structure

  • deliverables - Dissertation Deliverables
  • sources - Project Code
  • latex - LaTeX Sources
  • resources - Dissertation Resources
