TY - GEN
T1 - Dynamic Sparse Training for Deep Reinforcement Learning
AU - Sokar, Ghada A.Z.N.
AU - Mocanu, Elena
AU - Mocanu, Decebal Constantin
AU - Pechenizkiy, Mykola
AU - Stone, Peter
N1 - Conference code: 31
PY - 2022/7
Y1 - 2022/7
N2 - Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment. This leads to a long training time for dense neural networks to achieve good performance. Hence, prohibitive computation and memory resources are consumed. Recently, learning efficient DRL agents has received increasing attention. Yet, current methods focus on accelerating inference time. In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process. The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training. Experiments on continuous control tasks show that our dynamic sparse agents achieve higher performance than the equivalent dense methods, reduce the parameter count and floating-point operations (FLOPs) by 50%, and have a faster learning speed that enables reaching the performance of dense agents with a 40-50% reduction in the training steps.
AB - Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment. This leads to a long training time for dense neural networks to achieve good performance. Hence, prohibitive computation and memory resources are consumed. Recently, learning efficient DRL agents has received increasing attention. Yet, current methods focus on accelerating inference time. In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process. The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training. Experiments on continuous control tasks show that our dynamic sparse agents achieve higher performance than the equivalent dense methods, reduce the parameter count and floating-point operations (FLOPs) by 50%, and have a faster learning speed that enables reaching the performance of dense agents with a 40-50% reduction in the training steps.
KW - Machine Learning
KW - Deep reinforcement learning
KW - Learning Sparse Models
KW - Representation learning
UR - https://arxiv.org/abs/2106.04217
UR - https://github.com/GhadaSokar/Dynamic-Sparse-Training-for-Deep-Reinforcement-Learning
U2 - 10.24963/ijcai.2022/477
DO - 10.24963/ijcai.2022/477
M3 - Conference contribution
SP - 3437
EP - 3443
BT - Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022
A2 - De Raedt, Luc
T2 - 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
Y2 - 23 July 2022 through 29 July 2022
ER -