Sustainable Mobility Through Intelligent Traffic Signals: A Reinforcement Learning Approach to Emission Reduction and Vehicle Prioritization / Idris, Hussaini Aliyu; Cabri, Giacomo. - (2025), pp. 1-6. (33rd IEEE International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE 2025, University of Catania, Benedictine Monastery of "San Nicolò", Piazza Dante 32, Catania, Italy, 2025) [10.1109/wetice67341.2025.11092093].
Sustainable Mobility Through Intelligent Traffic Signals: A Reinforcement Learning Approach to Emission Reduction and Vehicle Prioritization
Idris, Hussaini Aliyu; Cabri, Giacomo
2025
Abstract
Traffic congestion and vehicular emissions remain critical challenges in urban mobility. While reinforcement learning (RL) has shown promise in adaptive traffic signal control, conventional models may inadvertently encourage private vehicle use by merely reducing delay. In this study, we present a Q-learning-based traffic signal control framework enhanced with a vehicle prioritization mechanism for public transport and emergency vehicles. Implemented using the Simulation of Urban MObility (SUMO), our approach is evaluated on a four-arm intersection scenario. Compared to fixed-time control, the standard Q-learning model achieves an 80% reduction in average vehicle delay and over 80% decrease in CO2 emissions. The prioritized Q-learning variant further improves delay and emissions metrics while providing preferential treatment to high-impact vehicle categories. Crucially, this prioritization strategy helps incentivize public transport usage, mitigating the risk of increased private car dependence that often follows general congestion reduction efforts. Our results demonstrate that integrating vehicle prioritization into RL-based traffic control supports both sustainability and modal shift goals in intelligent transportation systems.
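The abstract describes tabular Q-learning for signal phase selection with priority-weighted treatment of public transport and emergency vehicles. A minimal sketch of how such an agent could look is shown below; the state encoding, ε-greedy policy, the `PRIORITY` weights, and the reward shaping are all illustrative assumptions, not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Minimal sketch of tabular Q-learning for traffic signal phase selection.
# The learning constants, priority weights, and reward shaping below are
# illustrative assumptions, not the values used in the paper.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
PHASES = [0, 1]  # e.g. north-south green vs east-west green

# Hypothetical priority weights: buses and emergency vehicles count more
# than private cars when queues are penalized.
PRIORITY = {"car": 1.0, "bus": 3.0, "emergency": 5.0}

Q = defaultdict(float)  # maps (state, phase) -> estimated return

def reward(queues):
    """Negative priority-weighted sum of queued vehicles (assumed shaping)."""
    return -sum(PRIORITY[vtype] * n for vtype, n in queues.items())

def choose_phase(state):
    """Epsilon-greedy phase selection over the learned Q-table."""
    if random.random() < EPSILON:
        return random.choice(PHASES)
    return max(PHASES, key=lambda a: Q[(state, a)])

def update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in PHASES)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```

In a SUMO experiment, `state` would typically be derived from per-approach queue lengths queried via TraCI at each decision step, with the chosen phase written back to the traffic light before advancing the simulation.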
| File | Type | Size | Format |
|---|---|---|---|
| WETICE25RL_preprint.pdf (Open access) | AO - Author's original version proposed for publication | 400.79 kB | Adobe PDF |

Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright violation, contact IRIS Support.