Knowledge Distillation and Gradient Estimation for Active Error Compensation in Approximate Neural Networks.
Publication: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › Peer-reviewed
Abstract
Approximate computing is a promising approach for optimizing the computational resources of error-resilient applications such as Convolutional Neural Networks (CNNs). However, such approximations introduce an error that needs to be compensated by optimization methods, which typically include a retraining or fine-tuning stage. To recover efficiently from the introduced error, this fine-tuning process needs to be adapted to take CNN approximations into consideration. In this work, we present a novel methodology for fine-tuning approximate CNNs with ultra-low bit-width quantization and large approximation error, which combines knowledge distillation and gradient estimation to recover the accuracy lost due to approximations. With our proposed methodology, we demonstrate energy savings of up to 38% in complex approximate CNNs with weights quantized to 4 bits and 8-bit activations, with less than 3% accuracy loss w.r.t. the full-precision model.
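The abstract names two ingredients: a knowledge-distillation loss between a full-precision teacher and a quantized, approximate student, and a gradient estimator for the non-differentiable quantization step. The sketch below illustrates both in a generic PyTorch setting; it is not the authors' implementation, and all names and hyperparameters (e.g. `num_bits`, `temperature`, `alpha`) are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch setting, not the paper's code): a straight-through
# gradient estimator for uniform weight quantization, and a standard distillation loss
# that blends soft teacher targets with hard-label cross-entropy.

import torch
import torch.nn.functional as F


class QuantizeSTE(torch.autograd.Function):
    """Uniform symmetric weight quantization with a straight-through gradient."""

    @staticmethod
    def forward(ctx, w, num_bits=4):
        qmax = 2 ** (num_bits - 1) - 1              # e.g. [-7, 7] for 4-bit weights
        scale = w.abs().max() / qmax + 1e-8         # per-tensor scale (illustrative choice)
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient through round() unchanged.
        return grad_output, None


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Knowledge-distillation loss: KL on softened logits plus hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


# Illustrative use in a training step, with the frozen full-precision model as teacher:
#   q_weight = QuantizeSTE.apply(layer.weight)                     # 4-bit weights, STE gradients
#   loss = distillation_loss(student(x), teacher(x).detach(), y)   # fine-tune the student
```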
Details
Original language | English
---|---
Title | Proceedings of the 2021 Design, Automation and Test in Europe, DATE 2021
Pages | 679-684
Number of pages | 6
ISBN (electronic) | 9783981926354
Publication status | Published - 1 Feb 2021
Peer-reviewed | Yes
External IDs

Scopus | 85111030376
Keywords
- Approximate Computing, Approximate multipliers, Neural Networks, Quantization