Continuous Inference of Time Recurrent Neural Networks for Field Oriented Control
Publication: Contribution in book/conference proceedings/collected edition/expert report › Contribution in conference proceedings › Contributed › Peer-reviewed
Contributors
Abstract
Deep recurrent networks can be computed as an unrolled computation graph within a defined time window. In theory, the unrolled network and a continuous-time recurrent computation are equivalent. In practice, however, we observed a shift in accuracy for models based on LSTM, GRU, and SNN cells when switching from unrolled computation during training to continuous stateful inference without state resets. In this work, we evaluate these time-recurrent neural network approaches based on the error introduced by time-continuous inference. This error is small when the model generalizes well in the time domain, and we show that some training setups are favourable in this respect for the chosen example use case. A real-time-critical motor position prediction task serves as the reference; it can be phrased as a time series regression problem. Time-continuous stateful inference of time-recurrent neural networks benefits embedded systems by reducing the required compute resources.
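To make the evaluated discrepancy concrete, the sketch below contrasts the two inference modes on a single long input stream: windowed inference that resets the recurrent state for every window (as in unrolled training) versus continuous stateful inference that carries the state across window boundaries. This is a minimal illustrative example in PyTorch, not the authors' implementation; the layer sizes, window length, and random data are assumptions and do not reflect the models or motor data used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative LSTM regressor; sizes are placeholders, not the paper's configuration.
lstm = nn.LSTM(input_size=4, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)

stream = torch.randn(1, 256, 4)   # one long input stream: (batch, time, features)
window = 32                       # assumed window length for unrolled computation

# (a) Unrolled-style inference: process fixed windows, state reset to zeros each window
unrolled_out = []
for t0 in range(0, stream.size(1), window):
    chunk = stream[:, t0:t0 + window]
    y, _ = lstm(chunk)            # hidden state implicitly starts from zeros
    unrolled_out.append(head(y))
unrolled_out = torch.cat(unrolled_out, dim=1)

# (b) Continuous stateful inference: carry (h, c) across chunks, no resets
state = None
stateful_out = []
for t0 in range(0, stream.size(1), window):
    chunk = stream[:, t0:t0 + window]
    y, state = lstm(chunk, state) # state persists across window boundaries
    stateful_out.append(head(y))
stateful_out = torch.cat(stateful_out, dim=1)

# The paper evaluates exactly this discrepancy: the two outputs only agree
# if the model generalizes across window boundaries in the time domain.
print("max |unrolled - stateful|:", (unrolled_out - stateful_out).abs().max().item())
```

On an embedded target, variant (b) only needs to keep the current recurrent state and never recomputes a full window, which is the reduction in compute resources the abstract refers to.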
Details
| Original language | English |
|---|---|
| Title | 2023 IEEE Conference on Artificial Intelligence (CAI) |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 266-269 |
| Number of pages | 4 |
| ISBN (electronic) | 979-8-3503-3984-0 |
| ISBN (print) | 979-8-3503-3985-7 |
| Publication status | Published - 6 June 2023 |
| Peer-reviewed | Yes |
Conference
| Title | 2023 IEEE Conference on Artificial Intelligence |
|---|---|
| Short title | CAI 2023 |
| Duration | 5 - 6 June 2023 |
| Website | |
| Venue | Hyatt Regency Santa Clara |
| City | Santa Clara |
| Country | United States |
External IDs

| IEEE | 10.1109/CAI54212.2023.00119 |
|---|---|
Keywords
- Edge AI, Recurrent Neural Networks, Spiking Neural Networks