Combining Gradients and Probabilities for Heterogeneous Approximation of Neural Networks.

Research output: Contribution to book/conference proceedings › Conference contribution › Contributed › Peer-reviewed

Contributors

  • Elias Trommer, Infineon Technologies Dresden GmbH & Co. KG (Author)
  • Bernd Waschneck, Infineon Technologies Dresden GmbH & Co. KG (Author)
  • Akash Kumar, Chair of Processor Design (cfaed) (Author)

Abstract

This work explores the search for heterogeneous approximate multiplier configurations for neural networks that produce high accuracy and low energy consumption. We discuss the validity of additive Gaussian noise (AGN) added to accurate neural network computations as a surrogate model for the behavioral simulation of approximate multipliers. The continuous and differentiable properties of the solution space spanned by the AGN model are used as a heuristic that generates meaningful estimates of layer robustness without the need for combinatorial optimization techniques. Instead, the amount of noise injected into the accurate computations is learned during network training using backpropagation. A probabilistic model of the multiplier error is presented to bridge the gap between the domains; the model estimates the standard deviation of the approximate multiplier error, connecting solutions in the AGN space to actual hardware instances. Our experiments show that the combination of heterogeneous approximation and neural network retraining reduces the energy consumption for multiplications by 70% to 79% for different ResNet variants on the CIFAR-10 dataset with a Top-1 accuracy loss below one percentage point. For the more complex Tiny ImageNet task, our VGG16 model achieves a 53% reduction in energy consumption with a drop in Top-5 accuracy of 0.5 percentage points. We further demonstrate that our error model can predict the parameters of an approximate multiplier in the context of the commonly used AGN model with high accuracy. Our software implementation is available at https://github.com/etrommer/agn-approx.
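To make the learned-noise idea concrete, the following is a minimal Python (PyTorch) sketch of a layer that perturbs its accurate output with additive Gaussian noise whose standard deviation is a trainable parameter, so backpropagation can estimate how much error the layer tolerates. This is not the authors' agn-approx implementation; the class name NoisyLinear, the log-scale parameterization of sigma, and the relative noise scaling are illustrative assumptions.

import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer whose accurate output is perturbed with additive Gaussian
    noise; the noise standard deviation is learned via backpropagation."""

    def __init__(self, in_features: int, out_features: int, init_log_sigma: float = -3.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Learn sigma on a log scale so it stays positive during training.
        self.log_sigma = nn.Parameter(torch.tensor(init_log_sigma))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.linear(x)  # accurate computation
        if self.training:
            sigma = self.log_sigma.exp()
            # Noise scaled relative to the output magnitude; detach() keeps the
            # noise scale from feeding gradients back into the layer weights.
            y = y + torch.randn_like(y) * sigma * y.detach().abs().mean()
        return y

In a training setup along these lines, a regularization term that rewards larger sigma values would keep the learned noise from collapsing to zero; the per-layer sigmas reached at convergence could then be matched, via a probabilistic error model such as the one described in the abstract, to approximate multipliers whose error standard deviation stays within the tolerated range.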

Details

Original language: English
Title of host publication: ICCAD '22: Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design
Pages: 150:1-150:8
Number of pages: 8
ISBN (electronic): 9781450392174
Publication status: Published - 30 Oct 2022
Peer-reviewed: Yes

External IDs

Scopus: 85145646827

Keywords

  • approximate computing, energy efficiency, neural networks