Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity

Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

The success of deep ensembles in improving predictive performance, uncertainty estimation, and out-of-distribution robustness has been extensively studied in the machine learning literature. Despite these promising results, naively training multiple deep neural networks and combining their predictions at inference leads to prohibitive computational costs and memory requirements. Recently proposed efficient ensemble approaches match the performance of traditional deep ensembles at significantly lower cost. However, the training resources these approaches require are still at least those of training a single dense model. In this work, we draw a unique connection between sparse neural network training and deep ensembles, yielding a novel efficient ensemble learning framework called FreeTickets. Instead of training multiple dense networks and averaging them, we directly train sparse subnetworks from scratch and extract diverse yet accurate subnetworks during this efficient, sparse-to-sparse training. Our framework, FreeTickets, is defined as the ensemble of these relatively cheap sparse subnetworks. Despite being an ensemble method, FreeTickets requires even fewer parameters and training FLOPs than a single dense model. This seemingly counter-intuitive outcome stems from the high training and inference efficiency of dynamic sparse training. FreeTickets surpasses the dense baseline on all of the following criteria: prediction accuracy, uncertainty estimation, out-of-distribution (OoD) robustness, and efficiency for both training and inference. Impressively, FreeTickets outperforms the naive deep ensemble with ResNet50 on ImageNet using only around 1/5 of the training FLOPs required by the latter. We have released our source code at this https URL.
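To make the mechanism in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of the idea: train a single sparse network with periodic SET-style prune-and-regrow mask updates, snapshot diverse sparse subnetworks ("free tickets") along the way, and ensemble their predictions. This is not the authors' released implementation; the toy model, data, density, and the prune/grow and snapshot schedules are illustrative assumptions.

    # Hypothetical sketch of the FreeTickets idea (not the authors' code):
    # dynamic sparse training with periodic prune-and-regrow mask updates,
    # snapshotting sparse subnetworks and averaging their predictions.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_masks(model, density=0.2):
        # Random sparse masks at the target density (real implementations
        # often use Erdos-Renyi-style layer densities; uniform here for brevity).
        masks = {}
        for name, p in model.named_parameters():
            if p.dim() > 1:  # sparsify weight matrices only, not biases
                masks[name] = (torch.rand_like(p) < density).float()
        return masks

    def apply_masks(model, masks):
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

    def prune_and_regrow(model, masks, frac=0.3):
        # One dynamic-sparsity step: drop the smallest-magnitude active
        # weights, then regrow the same number at random inactive positions
        # (SET-style; naive version may re-pick just-pruned positions).
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                mask, active = masks[name], masks[name].bool()
                n = int(frac * active.sum().item())
                if n == 0:
                    continue
                vals = p.abs().masked_fill(~active, float("inf")).flatten()
                drop = vals.topk(n, largest=False).indices
                mask.view(-1)[drop] = 0.0
                inactive = (mask.view(-1) == 0).nonzero().squeeze(1)
                grow = inactive[torch.randperm(len(inactive))[:n]]
                mask.view(-1)[grow] = 1.0
                p.view(-1)[grow] = 0.0  # new connections start at zero

    # Toy data and model; placeholders for a real dataset and architecture.
    torch.manual_seed(0)
    X, y = torch.randn(512, 20), torch.randint(0, 3, (512,))
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    masks = make_masks(model, density=0.2)
    apply_masks(model, masks)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    tickets = []
    for step in range(1, 601):
        loss = F.cross_entropy(model(X), y)
        opt.zero_grad(); loss.backward(); opt.step()
        apply_masks(model, masks)       # keep pruned weights at zero
        if step % 100 == 0:             # periodic mask update
            prune_and_regrow(model, masks)
        if step % 200 == 0:             # snapshot a "free ticket"
            tickets.append(copy.deepcopy(model))

    # Ensemble: average the softmax outputs of the snapshotted subnetworks.
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(X), dim=1) for m in tickets]).mean(0)
    print("ensemble accuracy:", (probs.argmax(1) == y).float().mean().item())

The key point the sketch illustrates is why the ensemble is "free": every member is a sparse subnetwork produced within a single sparse-to-sparse training run, so the total parameter count and training FLOPs can stay below those of one dense model. The paper's actual variants (e.g., independent dynamic-sparse-training runs versus tickets extracted from one run) differ in how the subnetworks are obtained.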
Original language: English
Title of host publication: The Tenth International Conference on Learning Representations, ICLR 2022
Publisher: OpenReview
Number of pages: 24
Publication status: Published - 7 Feb 2022
Event: 10th International Conference on Learning Representations, ICLR 2022 - Virtual
Duration: 25 Apr 2022 - 29 Apr 2022
Conference number: 10
https://iclr.cc/

Conference

Conference: 10th International Conference on Learning Representations, ICLR 2022
Abbreviated title: ICLR
Period: 25/04/22 - 29/04/22
Internet address: https://iclr.cc/
