Beyond Imitation: A Life-Long Policy Learning Framework for Path Tracking Control of Autonomous Driving

Research output: Contribution to journal › Research article › Contributed › peer-reviewed

Contributors

  • Cheng Gong, Beijing Institute of Technology (Author)
  • Chao Lu, Beijing Institute of Technology (Author)
  • Zirui Li, Beijing Institute of Technology, TUD Dresden University of Technology (Author)
  • Zhe Liu, Beijing Institute of Technology (Author)
  • Jianwei Gong, Beijing Institute of Technology (Author)
  • Xuemei Chen, Beijing Institute of Technology (Author)

Abstract

Model-free learning-based control methods have recently shown significant advantages over traditional control methods in avoiding complex vehicle characteristic estimation and parameter tuning. As a primary policy learning method, imitation learning (IL) is capable of learning control policies directly from expert demonstrations. However, the performance of IL policies is highly dependent on the sufficiency and quality of the demonstration data. To alleviate these problems of IL-based policies, a lifelong policy learning (LLPL) framework is proposed in this paper, which extends the IL scheme with lifelong learning (LLL). First, a novel IL-based model-free control policy learning method for path tracking is introduced. Even with imperfect demonstrations, the optimal control policy can be learned directly from historical driving data. Second, by using the LLL method, the pre-trained IL policy can be safely updated and fine-tuned with incremental execution knowledge. Third, a knowledge evaluation method for policy learning is introduced to avoid learning redundant or inferior knowledge, thus ensuring the performance improvement of online policy learning. Experiments are conducted using a high-fidelity vehicle dynamic model in various scenarios to evaluate the performance of the proposed method. The results show that the proposed LLPL framework continuously improves policy performance with collected incremental driving data, and achieves the best accuracy and control smoothness among the compared baseline methods after evolving on a 7 km curved road. Through learning and evaluation with noisy real-life data collected in an off-road environment, the proposed LLPL framework also demonstrates its applicability to learning and evolving in real-life scenarios.
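The update loop the abstract describes — an IL policy pre-trained on imperfect demonstrations, then fine-tuned on incremental data only when a knowledge-evaluation step confirms an improvement — can be sketched in miniature. The sketch below is illustrative only, not the paper's method: it stands in a linear least-squares policy for the IL learner, an anchor term toward the previous weights for the lifelong-learning regularizer, and held-out tracking error for the knowledge evaluation; all function names are hypothetical.

```python
import numpy as np

def fit_policy(X, y, w_prev=None, lam=0.0):
    """Least-squares 'IL' policy. The lam term anchors the new weights to
    w_prev, a stand-in for lifelong-learning regularization against
    catastrophic forgetting."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + (lam * w_prev if w_prev is not None else 0.0)
    return np.linalg.solve(A, b)

def tracking_error(w, X_val, y_val):
    """Mean squared tracking error on held-out validation states."""
    return float(np.mean((X_val @ w - y_val) ** 2))

def lifelong_update(w_old, X_new, y_new, X_val, y_val, lam=1.0):
    """Knowledge evaluation gate: fine-tune on the incremental batch, but
    accept the candidate policy only if validation error improves;
    otherwise keep the old policy (reject redundant/inferior knowledge)."""
    w_cand = fit_policy(X_new, y_new, w_prev=w_old, lam=lam)
    if tracking_error(w_cand, X_val, y_val) < tracking_error(w_old, X_val, y_val):
        return w_cand, True
    return w_old, False
```

In this toy setting, a policy pre-trained on noisy demonstrations is refined by a cleaner incremental batch and the gate accepts the update; a batch that worsens validation error would be discarded, mirroring the paper's stated goal of avoiding redundant or inferior knowledge.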

Details

Original language: English
Pages (from-to): 9786-9799
Number of pages: 14
Journal: IEEE Transactions on Vehicular Technology
Volume: 73
Issue number: 7
Publication status: Published - Jul 2024
Peer-reviewed: Yes

External IDs

Scopus: 85190173760

Keywords

  • Adaptation models, Autonomous driving, Data models, Optimal control, Task analysis, computational modeling, model-free control, path tracking, vehicle dynamics