Formulation and validation of a car-following model based on deep reinforcement learning
Publication: Preprint/Documentation/Report › Preprint
Contributors
Abstract
We propose and validate a novel car-following model based on deep reinforcement learning. Our model is trained to maximize externally given reward functions for the free and car-following regimes rather than to reproduce existing follower trajectories. The parameters of these reward functions, such as desired speed, time gap, or accelerations, resemble those of traditional models such as the Intelligent Driver Model (IDM) and allow different driving styles to be implemented explicitly. Moreover, they partially lift the black-box nature of conventional neural network models. The model is trained on leading speed profiles governed by a truncated Ornstein-Uhlenbeck process reflecting a realistic leader's kinematics. This allows for arbitrary driving situations and an infinite supply of training data. For various parameterizations of the reward functions, and for a wide variety of artificial and real leader data, the model turned out to be unconditionally string stable, comfortable, and crash-free. String stability was tested with a platoon of five followers following both an artificial and a real leading trajectory. A cross-comparison with the IDM, calibrated to the goodness of fit of the relative gaps, showed that our model achieves both a higher reward and a better goodness of fit than the traditional model.
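As a rough illustration of the leader kinematics mentioned in the abstract, the sketch below simulates a truncated Ornstein-Uhlenbeck speed process with simple Euler-Maruyama steps. The parameter values (mean speed, relaxation rate, noise level, speed bounds) and the clipping-based truncation are illustrative assumptions, not the authors' actual training settings.

```python
# Minimal sketch: leader speed profile from a truncated Ornstein-Uhlenbeck process.
# All parameter values below are assumptions for illustration only.
import numpy as np

def leader_speed_profile(duration=300.0, dt=0.1, v_mean=15.0,
                         theta=0.05, sigma=1.0, v_min=0.0, v_max=35.0,
                         seed=0):
    """Simulate dv = theta*(v_mean - v)*dt + sigma*dW and clip the
    speed to [v_min, v_max] after every step (the truncation)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    v = np.empty(n)
    v[0] = v_mean
    for k in range(1, n):
        dw = rng.normal(0.0, np.sqrt(dt))                      # Wiener increment
        drift = theta * (v_mean - v[k - 1]) * dt               # mean reversion
        v[k] = np.clip(v[k - 1] + drift + sigma * dw, v_min, v_max)
    return v

speeds = leader_speed_profile()
print(speeds[:5])
```

Such a process yields an endless variety of plausible leader speed trajectories, which is what allows training on arbitrary driving situations without a finite trajectory dataset.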
Details
Original language | English |
---|---|
Publication status | Published - 29 Sept 2021 |
External IDs
ORCID | /0000-0002-1730-0750/work/145696437 |
---|---|
ORCID | /0000-0002-8909-4861/work/149081747 |
Keywords
- cs.LG, cs.RO, eess.SP