Safe and Efficient Reinforcement Learning for Energy Systems

Prof. Baosen Zhang


Abstract

Inverter-based resources such as solar and storage provide greater flexibility in the control of power systems. Through their power electronic interfaces, complex control functions can be implemented to respond quickly to changes in the system. Recently, reinforcement learning has emerged as a popular method for finding these nonlinear controllers. The key challenge with a learning-based approach is that stability and safety constraints are difficult to enforce on the learned controllers. In this talk, we show how model-based control theory can provide constraints on reinforcement learning, allowing us to explicitly engineer the structure of neural network controllers so that they guarantee system stability. The resulting controllers use only local information and outperform both conventional droop control and strategies learned purely through reinforcement learning.
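
To give a concrete flavor of what "engineering the structure of a neural network controller" can look like, the sketch below shows one common construction: a per-bus controller whose output is, by construction, a monotone function of the local frequency deviation and zero at equilibrium, a structural property that Lyapunov-style arguments can exploit to certify stability. This is a minimal illustrative example only; the class name, network shape, and choice of monotone parameterization are assumptions for exposition, not details of the method presented in the talk.

```python
# Illustrative sketch (not from the talk): a local controller u = f(dw) that is
# monotone non-decreasing in the local frequency deviation dw and satisfies
# f(0) = 0 by construction. Structural constraints of this kind are one way a
# learned controller can carry a stability guarantee regardless of how its
# weights are trained.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MonotoneLocalController(nn.Module):
    """Scalar controller u = f(dw), monotone non-decreasing with f(0) = 0."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        # Unconstrained parameters; non-negativity of the effective weights is
        # enforced through softplus so gradients stay well-behaved during RL.
        self.w1 = nn.Parameter(0.1 * torch.randn(hidden, 1))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(0.1 * torch.randn(1, hidden))

    def forward(self, dw: torch.Tensor) -> torch.Tensor:
        # dw: local frequency deviation, shape (batch, 1).
        w1 = F.softplus(self.w1)  # non-negative input weights
        w2 = F.softplus(self.w2)  # non-negative output weights keep f monotone
        h = torch.relu(dw @ w1.T + self.b1)   # monotone non-decreasing in dw
        out = h @ w2.T                        # non-negative combination
        # Subtract the value at dw = 0 so the controller is zero at equilibrium.
        h0 = torch.relu(torch.zeros_like(dw) @ w1.T + self.b1)
        return out - h0 @ w2.T


# Usage: evaluate the controller on a few frequency deviations (per unit).
if __name__ == "__main__":
    ctrl = MonotoneLocalController()
    dw = torch.tensor([[-0.02], [0.0], [0.02]])
    print(ctrl(dw))  # monotone non-decreasing in dw, exactly zero at dw = 0
```

Because the guarantee comes from the architecture rather than the training procedure, any reinforcement learning algorithm can tune the weights while the controller remains in the stabilizing class; richer structured parameterizations follow the same idea.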