A fundamental task for artificial intelligence is learning. Deep neural networks have proven effective across all major learning paradigms, i.e. supervised, unsupervised, and reinforcement learning. Nevertheless, traditional deep learning approaches rely on cloud computing facilities and do not scale well to autonomous agents with low computational resources. Even in the cloud, they suffer from computational and memory limitations, and they cannot adequately model large physical worlds for agents, which would require networks with billions of neurons. In recent years, these issues have been addressed by the emerging topic of scalable deep learning, which exploits static and adaptive sparse connectivity in neural networks before and throughout training (in short, sparse training). This tutorial covers these research directions, focusing on theoretical advancements, practical applications, and hands-on experience.
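To make the idea of adaptive sparse connectivity concrete, the following is a minimal NumPy sketch of one prune-and-regrow topology update in the style of Sparse Evolutionary Training (SET): the weakest connections are dropped and the same number are regrown at random empty positions, keeping the sparsity level constant. All function names, the density, and the replacement fraction `zeta` here are illustrative assumptions, not the tutorial's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_sparse_weights(shape, density=0.1, rng=rng):
    """Create a weight matrix with a fixed fraction of nonzero connections."""
    w = rng.standard_normal(shape)
    mask = rng.random(shape) < density   # boolean connectivity mask
    return w * mask, mask

def prune_and_regrow(w, mask, zeta=0.3, rng=rng):
    """One SET-style topology update (illustrative): remove the
    smallest-magnitude connections, regrow as many at random."""
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)          # number of connections to replace
    # prune: drop the k weakest active connections
    weakest = active[np.argsort(np.abs(w.flat[active]))[:k]]
    mask.flat[weakest] = False
    w.flat[weakest] = 0.0
    # regrow: add k new connections at random inactive positions
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=k, replace=False)
    mask.flat[grown] = True
    w.flat[grown] = rng.standard_normal(k) * 0.01  # small fresh weights
    return w, mask

w, mask = init_sparse_weights((64, 32), density=0.1)
before = mask.sum()
w, mask = prune_and_regrow(w, mask)
assert mask.sum() == before   # sparsity level is preserved
```

In an actual training loop such an update would run periodically (e.g. once per epoch) between ordinary gradient steps, so the network explores different sparse topologies while its parameter count stays fixed.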
7 Jul 2020
6th International Summer School on AI and Big Data