Optimization Regenerative Braking in Electric Vehicles Using Q-Learning for Improving Decision-Making in Smart Cities

dc.contributor.authorPannee Suanpang
dc.contributor.authorPitchaya Jamjuntr
dc.contributor.correspondenceP. Suanpang; Department of Information Technology, Faculty of Science & Technology, Suan Dusit University, Bangkok, Thailand; email: pannee_sua@dusit.ac.th
dc.date.accessioned2025-07-07T18:16:38Z
dc.date.available2025-07-07T18:16:38Z
dc.date.issued2025
dc.description.abstractThe growing prevalence of electric vehicles (EVs) in urban settings underscores the need for advanced decision-making frameworks designed to optimise energy efficiency and improve overall vehicle performance. Regenerative braking, a critical technology in EVs, facilitates energy recovery during deceleration, thereby enhancing efficiency and extending driving range. This study presents an innovative Q-learning-based approach to refine regenerative braking control strategies, aiming to maximise energy recovery, ensure passenger comfort through smooth braking, and maintain safe driving distances. The proposed system leverages real-time feedback on driving patterns, road conditions, and vehicle performance, enabling the Q-learning agent to autonomously adapt its braking strategy for optimal outcomes. By employing Q-learning, the system demonstrates the ability to learn and adjust to dynamic driving environments, progressively enhancing decision-making capabilities. Extensive simulations conducted within a smart city framework revealed substantial improvements in energy efficiency and notable reductions in energy consumption compared to conventional braking systems. The optimisation process incorporated a state space comprising vehicle speed, distance to the preceding vehicle, battery charge level, and road conditions, alongside an action space permitting dynamic braking adjustments. The reward function prioritised maximising energy recovery while minimising jerk and ensuring safety. Simulation outcomes indicated that the Q-learning-based system surpassed traditional control methods, achieving a 15.3% increase in total energy recovered (132.8 kWh), enhanced passenger comfort (jerk reduced to 7.6 m/s³), and a 13% reduction in braking distance. These findings underscore the system's adaptability across varied traffic scenarios.
Broader implications include integration into smart city infrastructures, where the adaptive algorithm could enhance real-time traffic management, fostering sustainable urban mobility. Furthermore, the improved energy efficiency reduces overall energy consumption, extends EV range, and decreases charging frequency, aligning with global sustainability objectives. The framework also holds potential for future EV applications, such as adaptive cruise control, autonomous driving, and vehicle-to-grid (V2G) systems. © 2025 Regional Association for Security and Crisis Management. All rights reserved.
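As an illustration of the approach the abstract describes, the sketch below shows a tabular Q-learning update over a discretized braking MDP with the stated state space (vehicle speed, gap to the preceding vehicle, battery charge, road condition), a braking-intensity action space, and a reward that trades off energy recovery, jerk, and safety. All bin edges, encodings, and reward weights here are hypothetical placeholders, not the paper's tuned values.

```python
# Illustrative sketch only: tabular Q-learning for the braking MDP described
# in the abstract. Discretizations and reward weights are assumed, not the
# paper's actual parameters.

ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]  # regenerative braking intensity (assumed levels)

def discretize_state(speed_mps, gap_m, soc, road):
    """Map continuous observations to a coarse state tuple (assumed bins)."""
    return (
        min(int(speed_mps // 10), 4),  # speed, 10 m/s bins
        min(int(gap_m // 20), 4),      # gap to preceding vehicle, 20 m bins
        min(int(soc * 4), 3),          # battery state of charge, quartiles
        road,                          # road condition: 0 = dry, 1 = wet (assumed encoding)
    )

def reward(energy_recovered_kwh, jerk_mps3, gap_m, min_gap_m=10.0):
    """Reward: maximise energy recovery, penalise jerk, enforce a safe gap.
    The weights below are placeholders for illustration."""
    safety_penalty = 100.0 if gap_m < min_gap_m else 0.0
    return 10.0 * energy_recovered_kwh - 0.5 * jerk_mps3 - safety_penalty

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in range(len(ACTIONS)))
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```

In practice the agent would pick actions epsilon-greedily from the Q-table and run this update at each braking event, which is how the adaptation to driving patterns described above would emerge over repeated episodes.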
dc.identifier.citationDecision Making: Applications in Management and Engineering
dc.identifier.doi10.31181/dmame8120251329
dc.identifier.issn25606018
dc.identifier.scopus2-s2.0-105005780784
dc.identifier.urihttps://repository.dusit.ac.th/handle/123456789/7301
dc.languageEnglish
dc.publisherRegional Association for Security and Crisis Management
dc.rights.holderScopus
dc.subjectDecision Making
dc.subjectOptimization
dc.subjectMulti-Agent Reinforcement Learning
dc.subjectSmart Grid Management
dc.subjectVehicle-to-Grid Systems
dc.titleOptimization Regenerative Braking in Electric Vehicles Using Q-Learning for Improving Decision-Making in Smart Cities
dc.typeArticle
mods.location.urlhttps://www.scopus.com/inward/record.uri?eid=2-s2.0-105005780784&doi=10.31181%2fdmame8120251329&partnerID=40&md5=ef19ca3ec9321fbf386a8232de0c6252
oaire.citation.endPage216
oaire.citation.issue1
oaire.citation.startPage182
oaire.citation.volume8