TY - UNPB
T1 - Sparse Training Theory for Scalable and Efficient Agents
T2 - Blue Sky Ideas Track
AU - Mocanu, Decebal Constantin
AU - Mocanu, Elena
AU - Pinto, Tiago
AU - Curci, Selima
AU - Nguyen, Phuong H.
AU - Gibescu, Madeleine
AU - Ernst, Damien
AU - Vale, Zita A.
PY - 2021/3/2
Y1 - 2021/3/2
N2 - A fundamental task for artificial intelligence is learning. Deep Neural Networks have proven capable of handling all learning paradigms, i.e. supervised, unsupervised, and reinforcement learning. Nevertheless, traditional deep learning approaches rely on cloud computing facilities and do not scale well to autonomous agents with low computational resources. Even in the cloud, they suffer from computational and memory limitations, and they cannot adequately model large physical worlds for agents, which would require networks with billions of neurons. In recent years, these issues have been addressed by the emerging topic of sparse training, which trains sparse networks from scratch. This paper discusses the state of the art in sparse training, together with its challenges and limitations, while introducing new theoretical research directions that have the potential to alleviate these limitations and push deep learning scalability well beyond its current boundaries. Finally, the impact of these theoretical advancements on complex multi-agent settings is discussed from a real-world perspective, using a smart grid case study.
AB - A fundamental task for artificial intelligence is learning. Deep Neural Networks have proven capable of handling all learning paradigms, i.e. supervised, unsupervised, and reinforcement learning. Nevertheless, traditional deep learning approaches rely on cloud computing facilities and do not scale well to autonomous agents with low computational resources. Even in the cloud, they suffer from computational and memory limitations, and they cannot adequately model large physical worlds for agents, which would require networks with billions of neurons. In recent years, these issues have been addressed by the emerging topic of sparse training, which trains sparse networks from scratch. This paper discusses the state of the art in sparse training, together with its challenges and limitations, while introducing new theoretical research directions that have the potential to alleviate these limitations and push deep learning scalability well beyond its current boundaries. Finally, the impact of these theoretical advancements on complex multi-agent settings is discussed from a real-world perspective, using a smart grid case study.
KW - cs.AI
KW - cs.LG
KW - cs.MA
KW - cs.NE
U2 - 10.48550/arXiv.2103.01636
DO - 10.48550/arXiv.2103.01636
M3 - Preprint
BT - Sparse Training Theory for Scalable and Efficient Agents
PB - ArXiv.org
ER -