Capturing Uncertainty over Time for Spiking Neural Networks by Exploiting Conformal Prediction Sets

Publication: Contribution to book/conference proceedings/anthology/report › Conference paper › Contributed › Peer-reviewed

Contributors

  • Daniel Scholz, Infineon Technologies Dresden GmbH & Co. KG (Author)
  • Oliver Emonds, Technische Universität München (Author)
  • Felix Kreutz, Infineon Technologies Dresden GmbH & Co. KG (Author)
  • Pascal Gerhards, Infineon Technologies Dresden GmbH & Co. KG (Author)
  • Jiaxin Huang, Infineon Technologies Dresden GmbH & Co. KG (Author)
  • Klaus Knobloch, Infineon Technologies Dresden GmbH & Co. KG (Author)
  • Alois Knoll, Technische Universität München (Author)
  • Christian Mayr, Professur für Hochparallele VLSI-Systeme und Neuromikroelektronik (Author)

Abstract

There is great interest in harnessing the advantages of spiking neural networks, and a growing share of research focuses on deploying such models. As with classical networks, safe decision making remains a challenge. We apply spiking neural networks to time-series classification tasks, where their stateful nature is beneficial. We show that the well-known method of Conformal Prediction (CP) can distinguish between wrong and correct decisions in this setting, performing comparably to, but at lower cost than, Evidential Deep Learning and Neural Network Ensembles. In this work we argue that classification uncertainty over time should additionally be considered, but that it is not captured by the length of the prediction sets output by CP. Our main contribution addresses the fact that existing CP methods for classification do not account for this. Our method takes as input the prediction sets produced by existing conformal prediction methods and extends them with a smoothed-length and a combined-set algorithm. We apply our method to spiking neural network-based classifiers trained on four different time-series datasets and show that it yields a more suitable uncertainty metric at a given point in time than the unmodified set length of CP for classification.
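A minimal sketch of the general idea, assuming split conformal prediction with softmax-based nonconformity scores and an exponential moving average over the per-time-step prediction-set length; the function names, the smoothing factor `beta`, and the threshold routine are illustrative assumptions, not the authors' exact smoothed-length or combined-set algorithms.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: nonconformity score = 1 - softmax of the true class."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    # Finite-sample-corrected quantile level, clipped to 1.0 for small calibration sets.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(probs, q_hat):
    """All classes whose nonconformity score 1 - p does not exceed the threshold."""
    return np.where(1.0 - probs <= q_hat)[0]

def smoothed_set_length(prob_sequence, q_hat, beta=0.8):
    """Exponentially smoothed prediction-set length over a (T x num_classes)
    sequence of per-time-step class probabilities, used here as a simple
    stand-in for an uncertainty-over-time signal."""
    smoothed, s = [], None
    for probs in prob_sequence:
        length = len(prediction_set(probs, q_hat))
        s = float(length) if s is None else beta * s + (1.0 - beta) * length
        smoothed.append(s)
    return np.array(smoothed)

# Example usage with random placeholder probabilities (hypothetical data).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)     # 500 calibration samples, 10 classes
cal_labels = rng.integers(0, 10, size=500)
q_hat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
test_sequence = rng.dirichlet(np.ones(10), size=50)  # 50 time steps of one test sample
print(smoothed_set_length(test_sequence, q_hat)[:5])
```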

Details

Original language: English
Title: Proceedings - 2024 International Conference on Machine Learning and Applications, ICMLA 2024
Editors: M. Arif Wani, Plamen Angelov, Feng Luo, Mitsunori Ogihara, Xintao Wu, Radu-Emil Precup, Ramin Ramezani, Xiaowei Gu
Pages: 107-114
Number of pages: 8
ISBN (electronic): 979-8-3503-7488-9
Publication status: Published - 2024
Peer-review status: Yes

External IDs

Scopus 105001001151

Keywords

  • Classification algorithms, Decision making, Deep learning, Focusing, Long short term memory, Neural networks, Prediction algorithms, Reproducibility of results, Spiking neural networks, Uncertainty, conformal prediction, safe AI, spiking neural networks, uncertainty quantification