Special Session - Non-Volatile Memories: Challenges and Opportunities for Embedded System Architectures with Focus on Machine Learning Applications

Publication: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › Peer-reviewed

Contributors

  • Jörg Henkel, Karlsruher Institut für Technologie (Author)
  • Lokesh Siddhu, Karlsruher Institut für Technologie (Author)
  • Lars Bauer, Karlsruher Institut für Technologie (Author)
  • Jürgen Teich, Friedrich-Alexander-Universität Erlangen-Nürnberg (Author)
  • Stefan Wildermann, Friedrich-Alexander-Universität Erlangen-Nürnberg (Author)
  • Mehdi Tahoori, Karlsruher Institut für Technologie (Author)
  • Mahta Mayahinia, Karlsruher Institut für Technologie (Author)
  • Jeronimo Castrillon, Professur für Compilerbau (cfaed) (Author)
  • Asif Ali Khan, Professur für Compilerbau (cfaed) (Author)
  • Hamid Farzaneh, Professur für Compilerbau (cfaed) (Author)
  • João Paulo C. De Lima, Professur für Compilerbau (cfaed) (Author)
  • Jian-Jia Chen, Technische Universität (TU) Dortmund (Author)
  • Christian Hakert, Technische Universität (TU) Dortmund (Author)
  • Kuan-Hsun Chen, University of Twente (Author)
  • Chia-Lin Yang, National Taiwan University (Author)
  • Hsiang-Yun Cheng, Academia Sinica Taiwan (Author)

Abstract

This paper explores the challenges and opportunities of integrating non-volatile memories (NVMs) into embedded systems for machine learning. NVMs offer advantages such as increased memory density, lower power consumption, non-volatility, and compute-in-memory capabilities. The paper focuses on integrating NVMs into embedded systems, particularly for intermittent computing, where systems operate only during periods of available energy. NVM technologies bring persistence closer to the CPU core, enabling efficient designs for such energy-constrained scenarios. Next, computation in resistive NVMs is explored, highlighting its potential for accelerating machine learning algorithms; however, challenges related to reliability and device non-idealities need to be addressed. The paper also discusses memory-centric machine learning, leveraging NVMs to overcome the memory-wall challenge. By optimizing memory layouts and exploiting probabilistic decision-tree execution and neural-network sparsity, NVM-based systems can improve cache behavior and reduce unnecessary computations. In conclusion, the paper emphasizes that further research and optimization are needed for the widespread adoption of NVMs in embedded systems and outlines the relevant challenges, especially for machine learning applications.
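
To make the compute-in-memory idea from the abstract more concrete, the sketch below (Python/NumPy) simulates a matrix-vector multiplication on a resistive crossbar, modelling device non-idealities as multiplicative noise on the programmed conductances. This is a minimal, hypothetical illustration, not code from the paper; the function name crossbar_mvm and the 5% noise level are assumptions made purely for the example.

    import numpy as np

    # Hypothetical illustration (not code from the paper): a matrix-vector
    # multiplication mapped onto a resistive crossbar. Each weight is stored as
    # a device conductance, and summing the column currents implements the
    # analog dot products. Device variation is modelled as multiplicative
    # Gaussian noise on the conductances, mimicking the non-idealities the
    # abstract mentions.

    def crossbar_mvm(weights, inputs, rel_noise=0.05, seed=None):
        """Simulate y = W @ x on a noisy resistive crossbar."""
        rng = np.random.default_rng(seed)
        # Programmed conductances deviate slightly from the ideal weights.
        conductances = weights * (1.0 + rel_noise * rng.standard_normal(weights.shape))
        # Input voltages drive the rows; column currents accumulate the products.
        return conductances @ inputs

    # Compare the ideal digital result with the noisy in-memory result.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 8))
    x = rng.standard_normal(8)
    print("digital  :", W @ x)
    print("in-memory:", crossbar_mvm(W, x, rel_noise=0.05, seed=1))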

Details

Original language: English
Title: Proceedings - 2023 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, CASES 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 11-20
Number of pages: 10
ISBN (electronic): 979-8-4007-0290-7
ISBN (print): 979-8-3503-2514-0
Publication status: Published - 13 Nov 2023
Peer-review status: Yes

Publication series

Series: Proceedings of the International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES)

Conference

Title: 2023 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems
Short title: CASES 2023
Duration: 18 - 20 September 2023
City: Hamburg
Country: Germany

External IDs

ORCID /0000-0002-5007-445X/work/160049115

Keywords

  • Compute-in-Memory, Design Space Exploration, Machine Learning, Non-Volatile Memories