TY - JOUR
T1 - Throughput optimizations for FPGA-based deep neural network inference
AU - Posewsky, Thorbjörn
AU - Ziener, Daniel
PY - 2018/7/1
Y1 - 2018/7/1
N2 - Deep neural networks are an extremely successful and widely used technique for various pattern recognition and machine learning tasks. Due to power and resource constraints, these computationally intensive networks are difficult to implement in embedded systems, yet the number of embedded applications that could benefit from them is rising rapidly. In this paper, we propose novel architectures for the inference of arbitrary, previously trained deep neural networks on FPGA-based SoCs that overcome these limitations. Our key contributions are the reuse of already transferred weight matrices across multiple input samples, which we refer to as batch processing, and the use of compressed weight matrices, also known as pruning. An extensive evaluation of both optimizations is presented. The two techniques significantly reduce data transfers and speed up network inference by one order of magnitude. At the same time, we surpass the data throughput of fully-featured x86-based systems while consuming only a fraction of their energy.
KW - Batch processing
KW - Compression
KW - Deep neural networks
KW - FPGA
KW - Fully-connected
KW - Inference
KW - Pruning
KW - Throughput optimizations
UR - http://www.scopus.com/inward/record.url?scp=85046746464&partnerID=8YFLogxK
U2 - 10.1016/j.micpro.2018.04.004
DO - 10.1016/j.micpro.2018.04.004
M3 - Article
AN - SCOPUS:85046746464
SN - 0141-9331
VL - 60
SP - 151
EP - 161
JO - Microprocessors and Microsystems
JF - Microprocessors and Microsystems
ER -