Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

Decebal Constantin Mocanu*, Elena Mocanu, Peter Stone, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

21 Citations (Scopus)
4 Downloads (Pure)

Abstract

Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks' fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
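The abstract describes the training procedure only at a high level. As a rough, non-authoritative sketch of that loop, the Python/NumPy fragment below initializes a sparse bipartite layer as an Erdős-Rényi random graph, prunes the connections with the smallest weight magnitudes after each training epoch, and regrows the same number of connections at random positions. The sparsity level epsilon and the rewiring fraction zeta follow the paper's notation; every function and variable name here is an illustrative assumption, and the authors' reference implementation is available in the GitHub repository cited in the record below.

import numpy as np

def erdos_renyi_mask(n_in, n_out, epsilon, rng):
    # Sparse bipartite layer: each connection exists with probability
    # p = epsilon * (n_in + n_out) / (n_in * n_out), so the expected number
    # of weights grows linearly, not quadratically, with the layer widths.
    p = epsilon * (n_in + n_out) / (n_in * n_out)
    return rng.random((n_in, n_out)) < p

def set_rewire(weights, mask, zeta, rng):
    # One evolution step (simplified): drop the fraction `zeta` of existing
    # connections whose weights are closest to zero, then grow the same number
    # of new connections at random empty positions with small random weights.
    active = np.argwhere(mask)
    magnitudes = np.abs(weights[mask])
    n_drop = int(zeta * active.shape[0])

    # Prune the weakest connections.
    drop_idx = active[np.argsort(magnitudes)[:n_drop]]
    mask[drop_idx[:, 0], drop_idx[:, 1]] = False
    weights[drop_idx[:, 0], drop_idx[:, 1]] = 0.0

    # Regrow the same number of connections at random inactive positions.
    inactive = np.argwhere(~mask)
    grow_idx = inactive[rng.choice(inactive.shape[0], size=n_drop, replace=False)]
    mask[grow_idx[:, 0], grow_idx[:, 1]] = True
    weights[grow_idx[:, 0], grow_idx[:, 1]] = rng.normal(0.0, 0.01, size=n_drop)
    return weights, mask

# Example: a 784-to-1000 layer kept sparse across (hypothetical) training epochs.
rng = np.random.default_rng(0)
mask = erdos_renyi_mask(784, 1000, epsilon=20, rng=rng)
weights = rng.normal(0.0, 0.01, size=(784, 1000)) * mask
for epoch in range(10):
    # ... one epoch of gradient descent on the masked weights would go here ...
    weights, mask = set_rewire(weights, mask, zeta=0.3, rng=rng)

Under this initialization the expected number of weights per layer is epsilon * (n_in + n_out) rather than n_in * n_out, which is the quadratic reduction in parameter count referred to in the abstract.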
Original language: English
Article number: 2383
Number of pages: 12
Journal: Nature communications
Volume: 9
Issue number: 1
DOI: 10.1038/s41467-018-04316-3
Publication status: Published - 19 Jun 2018
Externally published: Yes

Fingerprint

Learning
Neural networks
Artificial intelligence
Neurons
Topology
Unsupervised learning
Supervised learning
Self-organizing systems
Multilayer neural networks
Datasets

Keywords

  • Complex networks
  • Evolutionary algorithms
  • Deep learning
  • Sparse artificial neural networks
  • Restricted Boltzmann machine

Cite this

Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H.; Gibescu, Madeleine; Liotta, Antonio. / Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. In: Nature communications. 2018; Vol. 9, No. 1.
@article{b0bdb41019b74652a436704b2bdbad94,
title = "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science",
abstract = "Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdos-R{\'e}nyi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.",
keywords = "Complex networks, Evolutionary algorithms, Deep learning, Sparse artificial neural networks, Restricted Boltzmann machine",
author = "Mocanu, {Decebal Constantin} and Elena Mocanu and Peter Stone and Nguyen, {Phuong H.} and Madeleine Gibescu and Antonio Liotta",
year = "2018",
month = "6",
day = "19",
doi = "10.1038/s41467-018-04316-3",
language = "English",
volume = "9",
journal = "Nature communications",
issn = "2041-1723",
publisher = "Nature Publishing Group",
number = "1",

}

Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. / Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H.; Gibescu, Madeleine; Liotta, Antonio.

In: Nature communications, Vol. 9, No. 1, 2383, 19.06.2018.

Research output: Contribution to journal › Article › Academic › peer-review

TY - JOUR

T1 - Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

AU - Mocanu, Decebal Constantin

AU - Mocanu, Elena

AU - Stone, Peter

AU - Nguyen, Phuong H.

AU - Gibescu, Madeleine

AU - Liotta, Antonio

PY - 2018/6/19

Y1 - 2018/6/19

N2 - Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdos-Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.

AB - Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdos-Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.

KW - Complex networks

KW - Evolutionary algorithms

KW - Deep learning

KW - Sparse artificial neural networks

KW - Restricted Boltzmann machine

UR - http://www.scopus.com/inward/record.url?scp=85048843263&partnerID=8YFLogxK

UR - https://github.com/dcmocanu/sparse-evolutionary-artificial-neural-networks

U2 - 10.1038/s41467-018-04316-3

DO - 10.1038/s41467-018-04316-3

M3 - Article

C2 - 29921910

VL - 9

JO - Nature communications

JF - Nature communications

SN - 2041-1723

IS - 1

M1 - 2383

ER -