Special Session - Non-Volatile Memories: Challenges and Opportunities for Embedded System Architectures with Focus on Machine Learning Applications

Research output: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › Peer-reviewed

Contributors

  • Jörg Henkel, Karlsruhe Institute of Technology (Author)
  • Lokesh Siddhu, Karlsruhe Institute of Technology (Author)
  • Lars Bauer, Karlsruhe Institute of Technology (Author)
  • Jürgen Teich, Friedrich-Alexander University Erlangen-Nürnberg (Author)
  • Stefan Wildermann, Friedrich-Alexander University Erlangen-Nürnberg (Author)
  • Mehdi Tahoori, Karlsruhe Institute of Technology (Author)
  • Mahta Mayahinia, Karlsruhe Institute of Technology (Author)
  • Jeronimo Castrillon, Chair of Compiler Construction (cfaed) (Author)
  • Asif Ali Khan, Chair of Compiler Construction (cfaed) (Author)
  • Hamid Farzaneh, Chair of Compiler Construction (cfaed) (Author)
  • João Paulo C. de Lima, Chair of Compiler Construction (cfaed) (Author)
  • Jian-Jia Chen, Dortmund University of Technology (Author)
  • Christian Hakert, Dortmund University of Technology (Author)
  • Kuan-Hsun Chen, University of Twente (Author)
  • Chia-Lin Yang, National Taiwan University (Author)
  • Hsiang-Yun Cheng, Academia Sinica Taiwan (Author)

Abstract

This paper explores the challenges and opportunities of integrating non-volatile memories (NVMs) into embedded systems for machine learning. NVMs offer advantages such as higher memory density, lower power consumption, non-volatility, and compute-in-memory capabilities. The paper first considers intermittent computing, where systems operate only during periods of available energy: NVM technologies bring persistence closer to the CPU core, enabling efficient designs for such energy-constrained scenarios. Next, computation in resistive NVMs is explored, highlighting its potential for accelerating machine learning algorithms, although challenges related to reliability and device non-idealities still need to be addressed. The paper also discusses memory-centric machine learning, which leverages NVMs to overcome the memory-wall challenge: by optimizing memory layouts and exploiting probabilistic decision-tree execution and neural-network sparsity, NVM-based systems can improve cache behavior and avoid unnecessary computations. In conclusion, the paper emphasizes the need for further research and optimization to enable the widespread adoption of NVMs in embedded systems and outlines the corresponding challenges, especially for machine learning applications.
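As a concrete illustration of the last optimization mentioned in the abstract, the sketch below shows, in plain NumPy, how activation sparsity can be exploited to skip memory accesses and multiply-accumulate operations in a neural-network layer. This is a minimal, hypothetical example written for this summary; it is not the paper's implementation and not NVM-specific code, and all function and variable names are invented for illustration.

```python
# Illustrative sketch only (not the paper's implementation): shows the general
# idea of exploiting neural-network sparsity to skip unnecessary computations,
# one of the memory-centric ML optimizations mentioned in the abstract.
import numpy as np

def dense_layer(weights, activations):
    """Baseline: every weight column is fetched and multiplied."""
    return weights @ activations

def sparsity_aware_layer(weights, activations, threshold=0.0):
    """Skip columns whose input activation is (near) zero, so the
    corresponding weights never need to be fetched from memory."""
    nonzero = np.flatnonzero(np.abs(activations) > threshold)
    # Only the selected columns of the weight matrix are touched,
    # which reduces both memory traffic and multiply-accumulate work.
    return weights[:, nonzero] @ activations[nonzero]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 256))
    a = rng.standard_normal(256)
    a[rng.random(256) < 0.8] = 0.0        # roughly 80% sparse activations
    assert np.allclose(dense_layer(w, a), sparsity_aware_layer(w, a))
    print("skipped", 256 - np.count_nonzero(a), "of 256 weight columns")
```

In a compute-in-memory setting, the analogous saving would come from not driving crossbar lines whose inputs are zero, which is one way device-level designs can reduce unnecessary work.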

Details

Original language: English
Title of host publication: Proceedings - 2023 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, CASES 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 11-20
Number of pages: 10
ISBN (electronic): 9798400702907
Publication status: Published - 2023
Peer-reviewed: Yes

Publication series

Series: Proceedings - 2023 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, CASES 2023

Conference

Title: 2023 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems
Abbreviated title: CASES 2023
Duration: 18 - 20 September 2023
City: Hamburg
Country: Germany

External IDs

ORCID /0000-0002-5007-445X/work/160049115

Keywords

  • Compute-in-Memory, Design Space Exploration, Machine Learning, Non-Volatile Memories