Increasing Throughput of In-Memory DNN Accelerators by Flexible Layerwise DNN Approximation
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
Approximate computing and mixed-signal in-memory accelerators are promising paradigms for significantly reducing the computational requirements of deep neural network (DNN) inference without accuracy loss. In this work, we present a novel in-memory design for layerwise approximate computation at different approximation levels. A sensitivity-based high-dimensional search is performed to explore the optimal approximation level for each DNN layer. Our new methodology offers high flexibility and an optimal tradeoff between accuracy and throughput, which we demonstrate through an extensive evaluation on various DNN benchmarks for medium- and large-scale image classification with CIFAR10, CIFAR100, and ImageNet. With our novel approach, we reach an average speedup of 5×, and up to 8×, without accuracy loss.
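The abstract describes a sensitivity-based search that assigns each DNN layer its own approximation level while bounding the overall accuracy loss. The sketch below illustrates one plausible form such a search could take: a greedy pass that raises the approximation level of the least sensitive layers first. The function names, the toy accuracy model, and the greedy strategy are illustrative assumptions, not the authors' actual algorithm.

```python
def evaluate_accuracy(levels, sensitivity):
    # Toy accuracy model (assumption for illustration): baseline accuracy
    # minus a per-layer penalty that grows with the approximation level,
    # weighted by that layer's sensitivity. In practice this would be a
    # real validation-set evaluation of the approximated network.
    baseline = 0.90
    penalty = sum(s * l for s, l in zip(sensitivity, levels))
    return baseline - penalty

def layerwise_search(sensitivity, max_level, tolerance):
    # Greedy layerwise search: visit layers from least to most sensitive
    # and raise each layer's approximation level as long as the accuracy
    # drop relative to the exact baseline stays within the tolerance.
    n = len(sensitivity)
    levels = [0] * n                       # 0 = exact computation
    baseline = evaluate_accuracy(levels, sensitivity)
    for i in sorted(range(n), key=lambda i: sensitivity[i]):
        while levels[i] < max_level:
            trial = levels.copy()
            trial[i] += 1
            if baseline - evaluate_accuracy(trial, sensitivity) <= tolerance:
                levels = trial             # accept the more approximate level
            else:
                break                      # this layer cannot be pushed further
    return levels

if __name__ == "__main__":
    sens = [0.002, 0.02, 0.001, 0.05]      # per-layer sensitivity (toy values)
    print(layerwise_search(sens, max_level=3, tolerance=0.01))
```

Under this toy model, the two insensitive layers are driven to the maximum approximation level while the sensitive ones stay exact, mirroring the flexibility the abstract claims for the layerwise design.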
Details
Original language | English
---|---
Pages (from-to) | 17-24
Number of pages | 8
Journal | IEEE Micro
Volume | 42
Issue number | 6
Publication status | Published - 2022
Peer-reviewed | Yes
External IDs
Scopus | 85135767385