Sparsity in Neural Networks: Advancing Understanding and Practice 2022

  • Utku Evci (Organiser)
  • Zhangyang Wang (Organiser)
  • Elena Mocanu (Organiser)
  • Jonathan Frankle (Organiser)
  • Ari Morcos (Organiser)
  • Baharan Mirzasoleiman (Organiser)
  • Siddhant M. Jayakumar (Organiser)
  • Chang Xu (Organiser)
  • Trevor Gale (Organiser)
  • Decebal Constantin Mocanu (Organiser)

Activity: Participating in or organising an event › Organising a conference, workshop, ...

Description

A neural network is sparse when a portion of its parameters or activations has been fixed to 0 (see the sketch after this list). Neural network sparsity is:
(1) A compelling practical opportunity to reduce the cost of training and inference (through applied work on algorithms, systems, and hardware);
(2) An important topic for understanding how neural networks train with/without overparameterization and the representations they learn (through theoretical and scientific work).
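For concreteness, here is a minimal sketch of parameter sparsity via unstructured magnitude pruning in NumPy; the layer shape and the 90% sparsity target are illustrative assumptions, not details from the workshop itself:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 128))  # a dense layer's weight matrix

# Unstructured magnitude pruning: zero out the 90% of weights with the
# smallest absolute value. The binary mask records which entries are
# fixed to 0 so they can stay zero during any subsequent training step.
sparsity = 0.9  # illustrative target sparsity level
threshold = np.quantile(np.abs(weights), sparsity)
mask = (np.abs(weights) > threshold).astype(weights.dtype)
sparse_weights = weights * mask

print(f"fraction of zeros: {(sparse_weights == 0).mean():.2f}")
```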

Research interest in sparsity in deep learning has exploded in recent years, in both academia and industry, and we believe the community is now large and diverse enough to come together to discuss shared research priorities and cross-cutting issues. Currently, the communities working on different aspects of sparsity and related problems are disparate, often presenting at separate venues for separate audiences.

This second edition of the workshop aims to bring together researchers working on problems related to the practical, theoretical, and scientific aspects of neural network sparsity, and members of adjacent communities, in order to build connections across different areas, create opportunities for new collaborations, and articulate shared challenges. We aspire to continue building a lasting, interdisciplinary research community among those who share an interest in neural network sparsity.
Period: 13 Jul 2022
Event type: Conference
Conference number: 2
Location: Virtual + ICML meetup
Degree of recognition: International

Keywords

  • Sparse Neural Networks