Efficient Accuracy Recovery in Approximate Neural Networks by Systematic Error Modelling
Contributors
Abstract
Approximate Computing is a promising paradigm for mitigating the computational demands of Deep Neural Networks (DNNs) by trading off DNN performance for gains in area, throughput, or power. The DNN accuracy lost to such approximations can then be effectively recovered through retraining. In this paper, we present a novel methodology for modelling the approximation error introduced by approximate hardware in DNNs, which accelerates retraining and achieves negligible accuracy loss. To this end, we implement the behavioral simulation of several approximate multipliers and model the error generated by such approximations on pre-trained DNNs for image classification on CIFAR10 and ImageNet. Finally, we optimize the DNN parameters by applying our error model during DNN retraining, to recover the accuracy lost due to approximations. Experimental results demonstrate the efficiency of our proposed method for accelerated retraining (11× faster for CIFAR10 and 8× faster for ImageNet) for full DNN approximation, which allows us to deploy approximate multipliers with energy savings of up to 36% for 8-bit precision DNNs with an accuracy loss of less than 1%.
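The abstract does not spell out the methodology in detail, so the sketch below is only a rough, hypothetical illustration of the idea in PyTorch, not the authors' implementation. It uses a toy bit-truncating multiplier as a stand-in for the approximate multipliers studied in the paper, runs a behavioral simulation over all 8-bit operand pairs to extract error statistics, and then injects that statistical error model into a layer's output during retraining instead of simulating the approximate hardware bit-true on every forward pass. The names `approx_mul`, `ApproxErrorInjection`, the `scale` parameter, and all numeric values are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's implementation): derive an error model
# from behavioral simulation of a toy approximate multiplier, then inject the
# modeled error into activations during retraining.
import torch
import torch.nn as nn

def approx_mul(a: torch.Tensor, b: torch.Tensor, drop_bits: int = 4) -> torch.Tensor:
    """Toy approximate multiplier: exact integer product with low bits zeroed."""
    exact = a * b
    return (exact >> drop_bits) << drop_bits

# Behavioral simulation over all unsigned 8-bit operand pairs: collect the
# distribution of (exact - approximate) products for the error model.
ops = torch.arange(256, dtype=torch.int64)
a, b = torch.meshgrid(ops, ops, indexing="ij")
err = (a * b - approx_mul(a, b)).float()
ERR_MEAN, ERR_STD = err.mean().item(), err.std().item()

class ApproxErrorInjection(nn.Module):
    """Adds the modeled multiplier error to activations at training time only."""
    def __init__(self, mean: float, std: float, scale: float = 1e-4):
        super().__init__()
        # `scale` maps integer-domain error back to the float activation
        # domain; its value depends on the quantization scheme (assumed here).
        self.mean, self.std, self.scale = mean, std, scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            noise = (torch.randn_like(x) * self.std + self.mean) * self.scale
            return x + noise
        return x  # exact path at eval time; the deployed HW supplies the error

# Usage: retrain with the error model in place of slow bit-true simulation.
layer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                      ApproxErrorInjection(ERR_MEAN, ERR_STD))
out = layer(torch.randn(8, 3, 32, 32))
```

Sampling from a fitted error distribution like this is cheap and differentiable-friendly, which is the plausible source of the reported retraining speedups over per-multiplication behavioral simulation.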
Details
Original language | English |
---|---|
Title | Proceedings of the 26th Asia and South Pacific Design Automation Conference, ASP-DAC 2021 |
Pages | 365-371 |
Number of pages | 7 |
Publication status | Published - 18 Jan 2021 |
Peer-reviewed | Yes |
External IDs
Scopus | 85100567177 |
---|---|