Efficient deep neural network acceleration through FPGA-based batch processing

Thorbjörn Posewsky, Daniel Ziener

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

4 Citations (Scopus)

Abstract

Deep neural networks are an extremely successful and widely used technique for various pattern recognition and machine learning tasks. Due to power and resource constraints, these computationally intensive networks are difficult to implement in embedded systems, yet the number of embedded applications that could benefit from them is rising rapidly. In this paper, we propose a novel architecture for processing previously learned and arbitrary deep neural networks on FPGA-based SoCs that is able to overcome these limitations. A key contribution of our approach, which we refer to as batch processing, is the reduction of weight matrix transfers from external memory: weights are reused across multiple input samples. Combined with sophisticated pipelining and the use of high-performance interfaces, this technique accelerates data processing by one order of magnitude compared to existing approaches on the same FPGA device. Furthermore, we achieve a data throughput comparable to that of a fully featured x86-based system at only a fraction of its energy consumption.
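The batch-processing idea from the abstract can be illustrated in software: instead of streaming each layer's weight matrix from external memory once per input sample (one matrix-vector product per sample), the weights are fetched once and applied to a whole batch of samples (one matrix-matrix product per batch). The NumPy sketch below is illustrative only, not the authors' FPGA implementation; the network sizes, batch size, and ReLU activation are hypothetical assumptions.

```python
import numpy as np

def forward_batched(weights, biases, batch):
    """Run a batch of input samples through fully connected layers.

    Each weight matrix is accessed exactly once per batch instead of
    once per sample, which is the memory-transfer saving behind batch
    processing (here the "transfer" is simply a matrix access).
    """
    activations = batch  # shape: (batch_size, input_dim)
    for W, b in zip(weights, biases):
        # One matrix-matrix product applies W to every sample in the
        # batch, so W is loaded a single time for all of them.
        activations = np.maximum(activations @ W + b, 0.0)  # ReLU
    return activations

# Hypothetical 3-layer network processing a batch of 8 samples.
rng = np.random.default_rng(0)
dims = [64, 128, 128, 10]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(dims, dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]
out = forward_batched(weights, biases, rng.standard_normal((8, dims[0])))
print(out.shape)  # (8, 10)
```

With a batch size of B, each weight matrix is fetched once per B samples rather than once per sample, trading a factor-of-B reduction in weight traffic for B-fold larger activation buffers; this is the trade-off the paper's FPGA architecture exploits.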

Original language: English
Title of host publication: 2016 International Conference on Reconfigurable Computing and FPGAs, ReConFig 2016
Publisher: IEEE
ISBN (Electronic): 9781509037070
DOIs
Publication status: Published - 2016
Externally published: Yes
Event: 2016 International Conference on Reconfigurable Computing and FPGAs - Cancun, Mexico
Duration: 3 Dec 2016 – 5 Dec 2016

Conference

Conference: 2016 International Conference on Reconfigurable Computing and FPGAs
Abbreviated title: ReConFig 2016
Country: Mexico
City: Cancun
Period: 3/12/16 – 5/12/16

