Throughput optimizations for FPGA-based deep neural network inference

Thorbjörn Posewsky, Daniel Ziener (Corresponding Author)

Research output: Contribution to journal › Article › Academic › peer-review

2 Citations (Scopus)
1 Download (Pure)

Abstract

Deep neural networks are an extremely successful and widely used technique for various pattern recognition and machine learning tasks. Due to power and resource constraints, these computationally intensive networks are difficult to implement in embedded systems. Yet, the number of applications that could benefit from such networks is rapidly rising. In this paper, we propose novel architectures for the inference of previously learned, arbitrary deep neural networks on FPGA-based SoCs that are able to overcome these limitations. Our key contributions are the reuse of previously transferred weight matrices across multiple input samples, which we refer to as batch processing, and the use of compressed weight matrices, also known as pruning. An extensive evaluation of these optimizations is presented. Both techniques significantly reduce data transfers and speed up the network inference by one order of magnitude. At the same time, we surpass the data throughput of fully-featured x86-based systems while using only a fraction of their energy consumption.
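To make the two optimizations concrete, the following is a minimal NumPy/SciPy sketch of a single fully-connected layer. It is not the paper's FPGA implementation; the layer dimensions, the batch size of 8, and the roughly 80% sparsity level are illustrative assumptions. Batch processing reuses one transferred weight matrix for a whole batch of inputs (one matrix-matrix product instead of repeated matrix-vector products), while pruning drops small-magnitude weights and keeps the remainder in a compressed sparse format.

import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# One fully-connected layer with illustrative dimensions: 1024 inputs -> 512 outputs.
W = rng.standard_normal((512, 1024)).astype(np.float32)
b = np.zeros(512, dtype=np.float32)

def relu(x):
    return np.maximum(x, 0.0)

# Batch processing: instead of one matrix-vector product per sample (which would
# re-fetch W for every input), the same weight matrix is applied to a whole batch
# of samples at once, i.e. one matrix-matrix product per layer.
X = rng.standard_normal((1024, 8)).astype(np.float32)  # batch of 8 input vectors
Y_batch = relu(W @ X + b[:, None])                     # W is transferred and reused once

# Pruning: small-magnitude weights are removed and the surviving non-zeros are kept
# in a compressed (sparse) representation, shrinking the weight data to be moved.
threshold = np.quantile(np.abs(W), 0.80)               # assumed ~80% sparsity, illustrative
W_sparse = csr_matrix(np.where(np.abs(W) >= threshold, W, 0.0))
Y_pruned = relu(W_sparse @ X + b[:, None])             # same batch, ~20% of the weights

print(Y_batch.shape, Y_pruned.shape, W_sparse.nnz / W.size)

In both cases the effect mirrored on the FPGA-based SoC is that weight data crosses the memory interface far less often per processed sample, which is where the reported order-of-magnitude speed-up of the inference comes from.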

Original language: English
Pages (from-to): 151-161
Number of pages: 11
Journal: Microprocessors and Microsystems
Volume: 60
DOI: 10.1016/j.micpro.2018.04.004
ISSN: 0141-9331
Publisher: Elsevier
Publication status: Published - 1 Jul 2018

Fingerprint

  • Field programmable gate arrays (FPGA)
  • Throughput
  • Data transfer
  • Embedded systems
  • Pattern recognition
  • Learning systems
  • Energy utilization
  • Deep neural networks

Keywords

  • Batch processing
  • Compression
  • Deep neural networks
  • FPGA
  • Fully-connected
  • Inference
  • Pruning
  • Throughput optimizations

Cite this


Posewsky, T., & Ziener, D. (2018). Throughput optimizations for FPGA-based deep neural network inference. Microprocessors and Microsystems, 60, 151-161. https://doi.org/10.1016/j.micpro.2018.04.004

