Dynamic Sparse Training for Deep Reinforcement Learning

Ghada A.Z.N. Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

12 Citations (Scopus)
106 Downloads (Pure)

Abstract

Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment, so dense neural networks need long training times to achieve good performance and consume prohibitive computation and memory resources. Learning efficient DRL agents has recently received increasing attention, yet current methods focus on accelerating inference time. In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process. The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training. Experiments on continuous control tasks show that our dynamic sparse agents achieve higher performance than their equivalent dense counterparts, reduce the parameter count and floating-point operations (FLOPs) by 50%, and learn faster, reaching the performance of dense agents with a 40-50% reduction in training steps.
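The topology adaptation the abstract refers to is typically realized as a periodic drop-and-grow cycle: the weakest connections in each sparse layer are pruned by magnitude and an equal number of new connections are grown elsewhere, so the topology can follow the agent's changing data distribution. Below is a minimal sketch of one such cycle in the style of sparse evolutionary training (SET), which this line of work builds on; the function names, drop fraction, and random-growth criterion are illustrative assumptions rather than the paper's released implementation.

```python
import torch

# Sketch of a SET-style drop-and-grow step (magnitude-based drop plus
# random regrowth). All names and hyperparameters here are illustrative.


def random_sparse_mask(shape, density=0.5):
    """Binary mask activating roughly `density` of the weights."""
    return (torch.rand(shape) < density).float()


@torch.no_grad()
def adapt_topology(weight, mask, drop_fraction=0.3):
    """Drop the weakest active weights, then regrow the same number
    of connections at random inactive positions."""
    w, m = weight.flatten(), mask.flatten()  # views into the same storage
    n_drop = int(drop_fraction * int(m.sum().item()))
    if n_drop == 0:
        return mask

    # Drop: remove the n_drop active connections with smallest magnitude.
    scores = w.abs().masked_fill(m == 0, float("inf"))
    drop_idx = torch.topk(scores, n_drop, largest=False).indices
    m[drop_idx] = 0.0
    w[drop_idx] = 0.0

    # Grow: activate n_drop inactive positions chosen uniformly at random;
    # new connections start from zero so they are learned from scratch.
    inactive = (m == 0).nonzero(as_tuple=True)[0]
    grow_idx = inactive[torch.randperm(inactive.numel())[:n_drop]]
    m[grow_idx] = 1.0
    w[grow_idx] = 0.0
    return mask


# Usage: keep one mask per layer, re-apply it after every optimizer step,
# and adapt the topology periodically during training.
layer = torch.nn.Linear(256, 256)
mask = random_sparse_mask(layer.weight.shape, density=0.5)
with torch.no_grad():
    layer.weight.mul_(mask)
# ... after each optimizer step: layer.weight.data.mul_(mask)
# ... every few thousand steps: adapt_topology(layer.weight.data, mask)
```

Because the total number of active connections is conserved across each cycle, the parameter and FLOP budget stays fixed at the chosen sparsity level throughout training, which is what yields the reported 50% reduction relative to dense agents.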
Original language: English
Title of host publication: Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022
Editors: Luc De Raedt
Pages: 3437-3443
Number of pages: 7
ISBN (Electronic): 9781956792003
DOIs:
Publication status: Published - Jul 2022
Event: 31st International Joint Conference on Artificial Intelligence, IJCAI 2022 - Messe Wien, Vienna, Austria
Duration: 23 Jul 2022 - 29 Jul 2022
Conference number: 31
https://ijcai-22.org/

Conference

Conference: 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
Abbreviated title: IJCAI 2022
Country/Territory: Austria
City: Vienna
Period: 23/07/22 - 29/07/22
Internet address: https://ijcai-22.org/

Keywords

  • Machine Learning
  • Deep reinforcement learning
  • Learning Sparse Models
  • Representation learning
