Efficient Accuracy Recovery in Approximate Neural Networks by Systematic Error Modelling
Research output: Conference contribution › Contributed › Peer-reviewed
Abstract
Approximate Computing is a promising paradigm for mitigating the computational demands of Deep Neural Networks (DNNs) by trading off DNN accuracy against area, throughput, or power. The DNN accuracy lost to such approximations can then be effectively recovered through retraining. In this paper, we present a novel methodology for modelling the approximation error that approximate hardware introduces in DNNs, which accelerates retraining and achieves negligible accuracy loss. To this end, we implement behavioral simulations of several approximate multipliers and model the error these approximations generate in pre-trained DNNs for image classification on CIFAR10 and ImageNet. Finally, we optimize the DNN parameters by applying our error model during retraining, recovering the accuracy lost to the approximations. Experimental results demonstrate the efficiency of the proposed method for accelerated retraining (11× faster for CIFAR10 and 8× faster for ImageNet) under full DNN approximation, which allows us to deploy approximate multipliers with energy savings of up to 36% for 8-bit precision DNNs at an accuracy loss below 1%.
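The abstract does not detail the error model itself, so the following is only a minimal PyTorch sketch of the general idea it describes: exhaustively characterize an approximate multiplier against the exact one, fit a simple statistical error model, and inject errors drawn from that model during retraining instead of behaviorally simulating every approximate multiplication. All names (`approx_mult8`, `characterize_error`, `ErrorInjection`), the truncation-based multiplier, and the Gaussian error assumption are illustrative stand-ins, not the paper's actual design.

```python
import numpy as np
import torch
import torch.nn as nn

def approx_mult8(a: int, b: int) -> int:
    """Hypothetical behavioural model of an 8-bit approximate multiplier:
    here it simply truncates the 4 least-significant bits of the exact
    product. The multipliers used in the paper are not given in the abstract."""
    return (a * b) & ~0xF

def characterize_error(levels: int = 256):
    """Compare the approximate multiplier against exact multiplication over
    all 8-bit operand pairs and fit a simple Gaussian error model (mean, std)."""
    errs = np.array([[a * b - approx_mult8(a, b) for b in range(levels)]
                     for a in range(levels)], dtype=np.float64)
    return float(errs.mean()), float(errs.std())

class ErrorInjection(nn.Module):
    """Adds noise drawn from the fitted error model to a layer's output,
    emulating approximate-hardware arithmetic during retraining.
    `scale` maps the integer-domain error back to the float domain
    (e.g. the quantization step size of an 8-bit fixed-point layer)."""
    def __init__(self, mu: float, sigma: float, scale: float):
        super().__init__()
        self.mu, self.sigma, self.scale = mu, sigma, scale

    def forward(self, x):
        if self.training:  # inject error only while retraining
            noise = torch.randn_like(x) * self.sigma + self.mu
            x = x + noise * self.scale
        return x

# Example: characterize the multiplier once, then wrap a pre-trained layer
# so that fine-tuning sees the modelled approximation error.
mu, sigma = characterize_error()
layer = nn.Sequential(nn.Linear(128, 64), ErrorInjection(mu, sigma, scale=2**-7))
```

Replacing per-multiplication behavioural simulation with a single statistical draw per activation is what makes each retraining step cheap, which is consistent with the retraining speed-ups the abstract reports.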
Details
Original language | English
---|---
Title of host publication | Proceedings of the 26th Asia and South Pacific Design Automation Conference, ASP-DAC 2021
Pages | 365-371
Number of pages | 7
Publication status | Published - 18 Jan 2021
Peer-reviewed | Yes
External IDs
Scopus | 85100567177
---|---