Continuous Inference of Time Recurrent Neural Networks for Field Oriented Control
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
Deep recurrent networks can be computed as an unrolled computation graph over a defined time window. In theory, the unrolled network and a continuous-time recurrent computation are equivalent. However, we encountered a shift in accuracy for models based on LSTM, GRU, and SNN cells when switching from unrolled computation during training to continuous stateful inference without state resets. In this work, we evaluate these time-recurrent neural network approaches based on the error introduced by time-continuous inference. This error is small when the model generalizes well in the time domain, and we show that some training setups are favourable in this respect for the chosen example use case. A real-time-critical motor position prediction use case serves as the reference; the task can be phrased as a time series regression problem. Time-continuous stateful inference of time-recurrent neural networks benefits embedded systems by reducing the required compute resources.
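To make the train/inference mismatch described in the abstract concrete, the following is a minimal NumPy sketch, assuming a standard GRU cell with random, untrained weights (illustrative only, not the paper's trained model or data). It contrasts windowed computation with state resets, as used during unrolled training, against continuous stateful inference where the hidden state is carried across the whole stream:

```python
import numpy as np

# Illustrative GRU with random, untrained weights (assumption: sizes and
# window length are made up for the demo, not taken from the paper).
rng = np.random.default_rng(0)
H, X = 4, 1  # hidden size, input size

Wz, Uz, bz = rng.normal(size=(H, X)), rng.normal(size=(H, H)), np.zeros(H)
Wr, Ur, br = rng.normal(size=(H, X)), rng.normal(size=(H, H)), np.zeros(H)
Wh, Uh, bh = rng.normal(size=(H, X)), rng.normal(size=(H, H)), np.zeros(H)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(h, x):
    """One standard GRU update h_t = f(h_{t-1}, x_t)."""
    z = sigmoid(Wz @ x + Uz @ h + bz)             # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)             # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1.0 - z) * h + z * h_cand

xs = rng.normal(size=(64, X))  # a synthetic input stream
W = 16                         # unrolling window length used in training

# (a) Unrolled/windowed computation: the state is reset to zero at every
#     window boundary, as in window-based training.
out_windowed = []
h = np.zeros(H)
for t, x in enumerate(xs):
    if t % W == 0:
        h = np.zeros(H)        # state reset at the window boundary
    h = gru_step(h, x)
    out_windowed.append(h.copy())

# (b) Continuous stateful inference: the hidden state is carried across
#     the whole stream and never reset.
out_cont = []
h = np.zeros(H)
for x in xs:
    h = gru_step(h, x)
    out_cont.append(h.copy())

# The two computations agree inside the first window, then diverge after
# the first reset -- this divergence is the inference error the paper
# evaluates.
drift = np.abs(np.array(out_windowed) - np.array(out_cont)).max(axis=1)
print(f"max drift in first window:  {drift[:W].max():.3f}")
print(f"max drift after first reset: {drift[W:].max():.3f}")
```

A model with good time-domain generalization keeps this drift small, which is what makes reset-free stateful inference attractive on embedded targets: each new sample requires only one cell update instead of recomputing a full unrolled window.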
Details
| Original language | English |
|---|---|
| Title of host publication | 2023 IEEE Conference on Artificial Intelligence (CAI) |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 266-269 |
| Number of pages | 4 |
| ISBN (electronic) | 979-8-3503-3984-0 |
| ISBN (print) | 979-8-3503-3985-7 |
| Publication status | Published - 6 Jun 2023 |
| Peer-reviewed | Yes |
Conference
| Title | 2023 IEEE Conference on Artificial Intelligence |
|---|---|
| Abbreviated title | CAI 2023 |
| Duration | 5 - 6 June 2023 |
| Location | Hyatt Regency Santa Clara |
| City | Santa Clara |
| Country | United States of America |
External IDs
| IEEE | 10.1109/CAI54212.2023.00119 |
|---|---|
Keywords
- Edge AI, Recurrent Neural Networks, Spiking Neural Networks