Embedding Temporal Awareness in Reinforcement Learning Models for Energy System Control

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

A reinforcement learning (RL) agent can capture energy system dynamics from historical data to improve its decision-making. However, the RL agent struggles to fully understand temporal patterns in scenarios with high fluctuations. This study addresses this gap by embedding temporal awareness into RL agents for energy system control. The proposed approach integrates generation and load time series into the state representation using a temporal embedding module based on long short-term memory (LSTM). The method is evaluated on highly variable patterns from the Netherlands to demonstrate the performance of temporal RL decision-making in different conditions.
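The temporal embedding idea described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the window length, hidden size, and the particular instantaneous state features are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMTemporalEmbedding:
    """Illustrative single-layer LSTM that compresses a (T, n_features)
    window of generation/load measurements into a fixed-size embedding.
    Weights are random here; in practice they would be trained jointly
    with the RL agent."""

    def __init__(self, n_features, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        z = n_features + hidden_size
        # Stacked weights for the four gates: input, forget, cell, output.
        self.W = rng.normal(0.0, 0.1, size=(4 * hidden_size, z))
        self.b = np.zeros(4 * hidden_size)
        self.h_size = hidden_size

    def __call__(self, window):
        H = self.h_size
        h = np.zeros(H)  # hidden state
        c = np.zeros(H)  # cell state
        for x_t in window:
            g = self.W @ np.concatenate([x_t, h]) + self.b
            i = sigmoid(g[0:H])          # input gate
            f = sigmoid(g[H:2 * H])      # forget gate
            g_c = np.tanh(g[2 * H:3 * H])  # cell candidate
            o = sigmoid(g[3 * H:4 * H])  # output gate
            c = f * c + i * g_c
            h = o * np.tanh(c)
        return h  # final hidden state serves as the temporal embedding

# Augment the instantaneous RL state with the temporal embedding.
embed = LSTMTemporalEmbedding(n_features=2, hidden_size=8)
window = np.random.default_rng(1).random((24, 2))  # 24 steps of (generation, load)
instant_state = np.array([0.5, 0.3])  # e.g. storage level, price (assumed features)
state = np.concatenate([instant_state, embed(window)])
```

The agent's policy then conditions on `state`, so recent generation/load dynamics are visible to the decision-maker rather than only the current snapshot.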

Original language: English
Title of host publication: E-ENERGY 2025 - Proceedings of the 2025 16th ACM International Conference on Future and Sustainable Energy Systems
Publisher: Association for Computing Machinery
Pages: 1002-1004
Number of pages: 3
ISBN (Electronic): 9798400711251
DOIs
Publication status: Published - 16 Jun 2025
Event: 16th ACM International Conference on Future and Sustainable Energy Systems, E-Energy 2025 - Rotterdam, Netherlands
Duration: 17 Jun 2025 – 20 Jun 2025
Conference number: 16

Conference

Conference: 16th ACM International Conference on Future and Sustainable Energy Systems, E-Energy 2025
Abbreviated title: E-Energy
Country/Territory: Netherlands
City: Rotterdam
Period: 17/06/25 – 20/06/25

Keywords

  • Energy System Control
  • Long Short-Term Memory
  • Reinforcement Learning
  • Sequential Decision-making
  • Temporal Awareness
