On-line building energy optimization using deep reinforcement learning

Elena Mocanu, Decebal Constantin Mocanu, Phuong H. Nguyen, Antonio Liotta, Michael E. Webber, Madeleine Gibescu, J.G. Slootweg

Research output: Contribution to journal › Article › Academic › peer-review

20 Citations (Scopus)
1 Download (Pure)

Abstract

Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of the future power system, and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid class of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers, encouraging more efficient use of electricity.
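As a rough illustration of the multi-action idea the abstract describes, the sketch below uses tabular Q-learning (not the paper's deep networks) in which a single action is a *joint* on/off decision across several appliances under an hourly price signal. The prices, appliance demands, reward shaping, and all other specifics are invented for this toy example and are not taken from the paper.

```python
import itertools
import random

# Illustrative sketch only: tabular Q-learning over a joint action space,
# mirroring the idea of extending Q-learning to choose multiple appliance
# actions simultaneously. Prices, demands, and the comfort bonus are
# hypothetical values for this example.

PRICES = [0.10, 0.30, 0.30, 0.10]   # hypothetical price per kWh for 4 hours
DEMAND = [1.0, 0.6]                 # kW drawn by each appliance when on
BONUS = 0.15                        # comfort bonus per appliance served

# Joint action space: every on/off combination across the appliances.
ACTIONS = list(itertools.product([0, 1], repeat=len(DEMAND)))

def reward(hour, action):
    """Negative energy cost plus a comfort bonus for each running appliance."""
    cost = PRICES[hour] * sum(a * d for a, d in zip(action, DEMAND))
    return -cost + BONUS * sum(action)

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning; the state is simply the hour of day."""
    rng = random.Random(seed)
    q = {(h, a): 0.0 for h in range(len(PRICES)) for a in ACTIONS}
    for _ in range(episodes):
        for h in range(len(PRICES)):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)          # explore a joint action
            else:
                a = max(ACTIONS, key=lambda x: q[(h, x)])  # exploit
            future = 0.0 if h == len(PRICES) - 1 else max(
                q[(h + 1, x)] for x in ACTIONS)
            q[(h, a)] += alpha * (reward(h, a) + gamma * future - q[(h, a)])
    return q

if __name__ == "__main__":
    q = train()
    # Greedy schedule per hour: one on/off tuple for all appliances at once.
    print([max(ACTIONS, key=lambda a: q[(h, a)]) for h in range(len(PRICES))])
```

In this toy setting the learned greedy policy switches the appliances on in the cheap hours and off in the expensive ones; the paper replaces the Q-table with a deep network and uses real building data, but the joint-action structure is the same.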
Original language: English
Article number: 8356086
Pages (from-to): 3698-3708
Number of pages: 11
Journal: IEEE transactions on smart grid
Volume: 10
Issue number: 4
DOIs: 10.1109/TSG.2018.2834219
Publication status: E-pub ahead of print/First online - 8 May 2018
Externally published: Yes

Keywords

  • Buildings
  • Deep neural networks
  • Deep reinforcement learning
  • Demand response
  • Energy consumption
  • Learning (artificial intelligence)
  • Machine learning
  • Minimization
  • Optimization
  • Smart grid
  • Strategic optimization
  • Deep policy gradient

Cite this

Mocanu, E., Mocanu, D. C., Nguyen, P. H., Liotta, A., Webber, M. E., Gibescu, M., & Slootweg, J. G. (2018). On-line building energy optimization using deep reinforcement learning. IEEE transactions on smart grid, 10(4), 3698-3708. [8356086]. https://doi.org/10.1109/TSG.2018.2834219
Mocanu, Elena ; Mocanu, Decebal Constantin ; Nguyen, Phuong H. ; Liotta, Antonio ; Webber, Michael E. ; Gibescu, Madeleine ; Slootweg, J.G. / On-line building energy optimization using deep reinforcement learning. In: IEEE transactions on smart grid. 2018 ; Vol. 10, No. 4. pp. 3698-3708.
@article{d99d4220b23648bf9b43eccdc702af47,
title = "On-line building energy optimization using deep reinforcement learning",
abstract = "Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of the future power system, and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid class of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers, encouraging more efficient use of electricity.",
keywords = "Buildings, Deep neural networks, Deep reinforcement learning, Demand response, Energy consumption, Learning (artificial intelligence), Machine learning, Minimization, Optimization, Smart grid, Strategic optimization, Deep policy gradient",
author = "Elena Mocanu and Mocanu, {Decebal Constantin} and Nguyen, {Phuong H.} and Antonio Liotta and Webber, {Michael E.} and Madeleine Gibescu and J.G. Slootweg",
year = "2018",
month = "5",
day = "8",
doi = "10.1109/TSG.2018.2834219",
language = "English",
volume = "10",
pages = "3698--3708",
journal = "IEEE transactions on smart grid",
issn = "1949-3053",
publisher = "IEEE",
number = "4",
}

Mocanu, E, Mocanu, DC, Nguyen, PH, Liotta, A, Webber, ME, Gibescu, M & Slootweg, JG 2018, 'On-line building energy optimization using deep reinforcement learning', IEEE transactions on smart grid, vol. 10, no. 4, 8356086, pp. 3698-3708. https://doi.org/10.1109/TSG.2018.2834219

On-line building energy optimization using deep reinforcement learning. / Mocanu, Elena; Mocanu, Decebal Constantin; Nguyen, Phuong H.; Liotta, Antonio; Webber, Michael E.; Gibescu, Madeleine; Slootweg, J.G.

In: IEEE transactions on smart grid, Vol. 10, No. 4, 8356086, 08.05.2018, p. 3698-3708.

Research output: Contribution to journal › Article › Academic › peer-review

TY - JOUR

T1 - On-line building energy optimization using deep reinforcement learning

AU - Mocanu, Elena

AU - Mocanu, Decebal Constantin

AU - Nguyen, Phuong H.

AU - Liotta, Antonio

AU - Webber, Michael E.

AU - Gibescu, Madeleine

AU - Slootweg, J.G.

PY - 2018/5/8

Y1 - 2018/5/8

N2 - Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of the future power system, and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid class of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers, encouraging more efficient use of electricity.

AB - Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of the future power system, and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid class of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers, encouraging more efficient use of electricity.

KW - Buildings

KW - Deep neural networks

KW - Deep reinforcement learning

KW - Demand response

KW - Energy consumption

KW - Learning (artificial intelligence)

KW - Machine learning

KW - Minimization

KW - Optimization

KW - Smart grid

KW - Strategic optimization

KW - Deep policy gradient

UR - http://www.scopus.com/inward/record.url?scp=85046827366&partnerID=8YFLogxK

U2 - 10.1109/TSG.2018.2834219

DO - 10.1109/TSG.2018.2834219

M3 - Article

VL - 10

SP - 3698

EP - 3708

JO - IEEE transactions on smart grid

JF - IEEE transactions on smart grid

SN - 1949-3053

IS - 4

M1 - 8356086

ER -

Mocanu E, Mocanu DC, Nguyen PH, Liotta A, Webber ME, Gibescu M et al. On-line building energy optimization using deep reinforcement learning. IEEE transactions on smart grid. 2018 May 8;10(4):3698-3708. 8356086. https://doi.org/10.1109/TSG.2018.2834219