Peeking inside Sparse Neural Networks using Multi-Partite Graph Representations

Elia Cunegatti, Doina Bucur, Giovanni Iacca

Research output: Working paper › Preprint › Academic


Abstract

Modern Deep Neural Networks (DNNs) achieve very high performance at the expense of computational resources. To decrease this computational burden, several techniques have been proposed to extract, from a given DNN, efficient subnetworks that preserve performance while reducing the number of network parameters. The literature provides a broad set of techniques to discover such subnetworks, but few works have studied the peculiar topologies of the resulting pruned architectures. In this paper, we propose a novel unrolled input-aware bipartite Graph Encoding (GE) that generates, for each layer in either a sparse or a dense neural network, its corresponding graph representation based on its relation with the input data. We also extend it into a multipartite GE, to capture the relation between layers. Then, we leverage topological properties to study the differences between the existing pruning algorithms and algorithm categories, as well as the relation between topologies and performance.
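As a rough illustration of the general idea (not the paper's exact unrolled, input-aware construction), a sparse layer can be encoded as a bipartite graph whose two node sets are the layer's inputs and outputs, with an edge for every unpruned weight, and per-layer graphs can be chained into a multipartite graph by identifying one layer's outputs with the next layer's inputs. The sketch below assumes NumPy/NetworkX and that pruned weights are stored as exact zeros; all function and node names are hypothetical.

```python
# Minimal sketch, assuming pruned connections are exactly zero. This is an
# illustration of a bipartite/multipartite layer encoding, not the authors'
# unrolled input-aware GE.
import numpy as np
import networkx as nx

def layer_to_bipartite(weights, in_prefix="in", out_prefix="out"):
    """Encode one layer (weights of shape [n_out, n_in]) as a bipartite graph:
    one node per input and output unit, one edge per non-zero weight."""
    g = nx.Graph()
    n_out, n_in = weights.shape
    g.add_nodes_from(f"{in_prefix}{i}" for i in range(n_in))
    g.add_nodes_from(f"{out_prefix}{j}" for j in range(n_out))
    for j, i in zip(*np.nonzero(weights)):
        g.add_edge(f"{in_prefix}{i}", f"{out_prefix}{j}",
                   weight=float(weights[j, i]))
    return g

def network_to_multipartite(weight_list):
    """Chain per-layer bipartite graphs into one multipartite graph by sharing
    node names: layer l's outputs are layer l+1's inputs ("h{l+1}_{unit}")."""
    g = nx.Graph()
    for l, w in enumerate(weight_list):
        g = nx.compose(g, layer_to_bipartite(w, in_prefix=f"h{l}_",
                                             out_prefix=f"h{l+1}_"))
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 2-layer MLP with roughly 80% of the weights pruned (set to zero).
    weights = [rng.normal(size=(16, 8)), rng.normal(size=(4, 16))]
    weights = [w * (rng.random(w.shape) > 0.8) for w in weights]
    mg = network_to_multipartite(weights)
    print(mg.number_of_nodes(), mg.number_of_edges())
```

Topological properties (e.g., degree distributions or connectivity) of such graphs can then be compared across pruning algorithms, which is the kind of analysis the abstract describes.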
Original language: English
Publisher: ArXiv.org
DOIs
Publication status: Published - 26 May 2023

Keywords

  • cs.LG
  • cs.AI
