A Parametrizable High-Level Synthesis Library for Accelerating Neural Networks on FPGAs

Research output: Contribution to journal › Research article › Contributed › Peer-reviewed

Abstract

In recent years, Convolutional Neural Networks (CNNs) have been incorporated into a large number of applications, including multimedia retrieval and image classification. However, CNN-based algorithms are computationally and resource intensive and therefore difficult to deploy on embedded systems. FPGA-based accelerators are becoming increasingly popular in research and industry due to their flexibility and energy efficiency. However, the available resources and the size of the on-chip memory can limit the performance of an FPGA accelerator for CNNs. This work proposes a High-Level Synthesis (HLS) library for CNN algorithms. It contains seven different streaming-capable CNN functions (plus two conversion functions) for creating large neural networks with deep pipelines. The functions offer many parameter settings (e.g., for resolution, feature maps, data types, kernel size, parallelization, and accuracy), which also enable compile-time optimizations. Our functions are integrated into HiFlipVX, an open-source HLS FPGA library for image processing and object detection. This makes it possible to implement different types of computer vision applications with a single library. Due to the various configuration and parallelization possibilities of the library functions, a high-performance, scalable, and resource-efficient system can be implemented, as our evaluation of the MobileNets algorithm shows.

Details

Original language: English
Pages (from-to): 513-529
Number of pages: 17
Journal: Journal of Signal Processing Systems
Volume: 93
Issue number: 5
Publication status: Published - May 2021
Peer-reviewed: Yes

External IDs

Scopus 85102810198
ORCID /0000-0003-2571-8441/work/142240528

Keywords

  • Hardware acceleration, High-level synthesis, FPGA, Neural networks, Library