Robust path following on rivers using bootstrapped reinforcement learning
Research output: Contribution to journal › Research article › Contributed › peer-reviewed
Abstract
This paper develops a Deep Reinforcement Learning (DRL)-based approach for the navigation and control of autonomous surface vessels (ASVs) on inland waterways, where spatial constraints and environmental challenges such as high flow velocities and shallow banks require precise maneuvering. By implementing a state-of-the-art bootstrapped Deep Q-Network (DQN) algorithm alongside a novel, flexible training-environment generator, we developed a robust and accurate rudder control system capable of adapting to the dynamic conditions of inland waterways. The effectiveness of our approach is validated through comparisons with a vessel-specific Proportional-Integral-Derivative (PID) controller and a standard DQN controller, using real-world data from the Lower and Middle Rhine. The DRL algorithm demonstrates superior adaptability and generalizability across previously unseen scenarios and achieves high navigational accuracy. Our findings highlight the limitations of traditional control methods such as PID in complex river environments, as well as the importance of training in diverse and realistic environments.
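The core idea behind the bootstrapped DQN mentioned above is to maintain an ensemble of Q-value heads, each trained on a bootstrap-masked subset of transitions, and to act with a single randomly chosen head per episode, which yields temporally consistent "deep exploration". The following is a minimal sketch of that mechanism using linear Q-heads; the state/action sizes, linear parameterization, and hyperparameters are illustrative assumptions, not the paper's actual network or vessel model.

```python
import numpy as np

rng = np.random.default_rng(0)

class BootstrappedQ:
    """Minimal sketch of bootstrapped Q-learning with K heads.

    Each head has its own (linear) Q-function and only trains on
    transitions whose Bernoulli bootstrap mask selects it; at episode
    start one head is sampled and drives action selection for the whole
    episode. All sizes and constants here are illustrative assumptions.
    """

    def __init__(self, n_features, n_actions, n_heads=5,
                 lr=0.1, gamma=0.99, mask_p=0.8):
        # Small random init so heads disagree and explore differently.
        self.W = rng.normal(0.0, 0.01, size=(n_heads, n_actions, n_features))
        self.n_heads, self.lr = n_heads, lr
        self.gamma, self.mask_p = gamma, mask_p
        self.active = 0  # head used for acting in the current episode

    def start_episode(self):
        # Sample one head; it acts greedily for the entire episode.
        self.active = int(rng.integers(self.n_heads))

    def act(self, s):
        return int(np.argmax(self.W[self.active] @ s))

    def update(self, s, a, r, s_next, done):
        # Bernoulli bootstrap mask decides which heads see this transition.
        mask = rng.random(self.n_heads) < self.mask_p
        for k in np.flatnonzero(mask):
            target = r if done else r + self.gamma * np.max(self.W[k] @ s_next)
            td = target - self.W[k, a] @ s
            self.W[k, a] += self.lr * td * s  # TD step on head k only
```

In the paper's setting the actions would correspond to discrete rudder commands and the state to the vessel's pose relative to the reference path; here a toy one-feature, two-action problem suffices to show that all heads converge to the rewarded action.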
Details
| Original language | English |
| --- | --- |
| Article number | 117207 |
| Journal | Ocean Engineering |
| Volume | 298 |
| Publication status | Published - 15 Apr 2024 |
| Peer-reviewed | Yes |
External IDs
| ORCID | /0000-0002-8909-4861/work/171064873 |
| --- | --- |
Keywords
- Autonomous surface vessel, Deep reinforcement learning, Path following, Restricted waterways