An Intersection-Based Routing Scheme Using Q-Learning in Vehicular Ad Hoc Networks for Traffic Management in the Intelligent Transportation System

Research output: Contribution to journal › Research article › Contributed › peer-review

Contributors

  • Muhammad Umair Khan, Gachon University (Author)
  • Mehdi Hosseinzadeh, Duy Tan University, University of Human Development (Author)
  • Amir Mosavi, TUD Dresden University of Technology, Óbuda University (Author)

Abstract

Vehicular ad hoc networks (VANETs) create an advanced framework to support the intelligent transportation system and increase road safety by managing traffic flow and avoiding accidents. These networks have specific characteristics, including high vehicle mobility, dynamic topology, and frequent link failures. As a result, providing an efficient and stable routing approach for VANETs is a challenging issue. Reinforcement learning (RL) can address various challenges in vehicular ad hoc networks, including routing. Most existing reinforcement learning-based routing methods are incompatible with the dynamic network environment and cannot prevent congestion in the network. Network congestion can be controlled by managing traffic flow; for this purpose, roadside units (RSUs) must monitor the road status to stay informed about traffic conditions. In this paper, an intersection-based routing method using Q-learning (IRQ) is presented for VANETs. IRQ uses both global and local views in the routing process, and a traffic-information dissemination mechanism is introduced to build these views. Based on the global view, a Q-learning-based routing technique is designed to discover the best routes between intersections. The central server continuously evaluates the created paths between intersections to penalize road segments with high congestion and improve the packet delivery rate. Finally, IRQ uses a greedy strategy based on the local view to select the best next-hop node in each road segment. The NS2 simulator is used to evaluate the performance of the proposed routing approach, and IRQ is compared with three methods: IV2XQ, QGrid, and GPSR. The simulation results demonstrate that IRQ achieves acceptable performance in terms of packet delivery rate and delay, although its communication overhead is higher than that of IV2XQ.
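For readers unfamiliar with how tabular Q-learning can be applied to intersection-level route selection, the sketch below gives a minimal illustration in Python. It is not the IRQ implementation from the paper: the state/action encoding, the learning rate `ALPHA`, discount `GAMMA`, exploration rate `EPSILON`, the congestion-penalty weight, and the toy `road_graph` are all assumptions made for this example.

```python
import random
from collections import defaultdict

# Illustrative tabular Q-learning for intersection-level route selection.
# States are intersections, actions are adjacent intersections (road segments).
# All parameter values below are assumptions for this sketch, not taken
# from the IRQ paper.

ALPHA = 0.3      # learning rate
GAMMA = 0.8      # discount factor
EPSILON = 0.1    # exploration rate for epsilon-greedy selection

# Adjacency of intersections: road_graph[i] -> neighboring intersections
road_graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

# Q[(intersection, next_intersection)] -> learned value of that road segment
Q = defaultdict(float)

def segment_reward(delivered, congestion_level):
    """Reward successful forwarding, penalize congested road segments."""
    reward = 1.0 if delivered else -1.0
    return reward - 0.5 * congestion_level  # assumed congestion penalty weight

def choose_next_intersection(current):
    """Epsilon-greedy choice of the next intersection from the current one."""
    neighbors = road_graph[current]
    if random.random() < EPSILON:
        return random.choice(neighbors)
    return max(neighbors, key=lambda n: Q[(current, n)])

def update_q(current, chosen, reward):
    """Standard one-step Q-learning update for the chosen road segment."""
    best_next = max(Q[(chosen, n)] for n in road_graph[chosen])
    Q[(current, chosen)] += ALPHA * (reward + GAMMA * best_next - Q[(current, chosen)])

# Example: one simulated forwarding step starting at intersection A
nxt = choose_next_intersection("A")
update_q("A", nxt, segment_reward(delivered=True, congestion_level=0.2))
```

In a scheme like the one described in the abstract, such updates would be driven by traffic information collected via RSUs and a central server, while a separate greedy, position-based rule picks the actual next-hop vehicle within each chosen road segment.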

Details

Original language: English
Article number: 3731
Journal: Mathematics
Volume: 10
Issue number: 20
Publication status: Published - Oct 2022
Peer-reviewed: Yes

Keywords

  • intelligent transportation system (ITS), machine learning (ML), reinforcement learning (RL), routing, vehicular ad hoc networks (VANETs)