Robust path following on rivers using bootstrapped reinforcement learning

Research output: Contribution to journal › Research article › Contributed › peer-reviewed

Abstract

This paper develops a Deep Reinforcement Learning (DRL)-based approach for the navigation and control of autonomous surface vessels (ASVs) on inland waterways, where spatial constraints and environmental challenges such as high flow velocities and shallow banks demand precise maneuvering. By combining a state-of-the-art bootstrapped deep Q-learning (DQN) algorithm with a novel, flexible training-environment generator, we develop a robust and accurate rudder control system capable of adapting to the dynamic conditions of inland waterways. The effectiveness of our approach is validated through comparisons with a vessel-specific Proportional-Integral-Derivative (PID) controller and a standard DQN controller, using real-world data from the lower and middle Rhine. The DRL algorithm demonstrates superior adaptability and generalizability across previously unseen scenarios and achieves high navigational accuracy. Our findings highlight the limitations of traditional control methods such as PID in complex river environments, as well as the importance of training in diverse and realistic environments.
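The core idea behind bootstrapped DQN is to maintain an ensemble of Q-value "heads", sample one head at the start of each episode, and follow it greedily, which yields temporally extended exploration. The following is a minimal tabular sketch of that mechanism; all class and parameter names are illustrative and do not reflect the paper's actual implementation, which uses neural-network heads over a vessel-state observation.

```python
import random


class BootstrappedQ:
    """Minimal tabular sketch of bootstrapped Q-learning.

    K independent Q-heads are trained on (randomly masked) transitions.
    At the start of each episode one head is sampled and followed
    greedily for the whole episode. Names and defaults are illustrative,
    not taken from the paper.
    """

    def __init__(self, n_states, n_actions, n_heads=5,
                 alpha=0.1, gamma=0.99, p_mask=0.8, seed=0):
        self.rng = random.Random(seed)
        self.n_actions = n_actions
        self.n_heads = n_heads
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.p_mask = p_mask    # probability a transition trains a given head
        # one Q-table per head: q[head][state][action]
        self.q = [[[0.0] * n_actions for _ in range(n_states)]
                  for _ in range(n_heads)]
        self.active = 0

    def begin_episode(self):
        # sample one head; its greedy policy drives the whole episode
        self.active = self.rng.randrange(self.n_heads)

    def act(self, s):
        # greedy action under the currently active head
        row = self.q[self.active][s]
        return max(range(self.n_actions), key=row.__getitem__)

    def update(self, s, a, r, s2, done):
        # bootstrap mask: each head sees the transition with prob. p_mask
        for k in range(self.n_heads):
            if self.rng.random() < self.p_mask:
                target = r if done else r + self.gamma * max(self.q[k][s2])
                self.q[k][s][a] += self.alpha * (target - self.q[k][s][a])
```

In the paper's setting the tabular heads would be replaced by neural-network Q-heads sharing a common torso, and the actions would be discrete rudder commands; the episode-wise head sampling and masked updates are the parts specific to the bootstrapped variant.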

Details

Original language: English
Article number: 117207
Journal: Ocean Engineering
Volume: 298
Publication status: Published - 15 Apr 2024
Peer-reviewed: Yes

External IDs

ORCID /0000-0002-8909-4861/work/171064873

Keywords

  • Autonomous surface vessel
  • Deep reinforcement learning
  • Path following
  • Restricted waterways