Singh Atul Sudhakar
Keywords:
Physics-Informed Neural Networks, Reinforcement Learning, Autonomous Vehicles, Sim-To-Real Transfer, Control Barrier Functions, Vehicle Dynamics, Safe Learning, Policy Optimisation, Sensor Fusion, Domain Randomisation.
Abstract:
This review analyses a proposed theoretical framework that applies physics-informed reinforcement learning (PI-RL) to the persistent sim-to-real transfer problem in autonomous vehicles. Rather than relying solely on data-driven methods, the framework combines Physics-Informed Neural Networks (PINNs) with policy gradient techniques, embedding core physical principles, such as Newton's laws, friction behavior, kinematic constraints, and energy conservation, directly into the learning process[^1]. Unlike standard domain randomization strategies, this approach encodes known physics as fixed structure rather than expecting such regularities to emerge indirectly from data[^2]. The framework introduces several elements: a taxonomy of common sim-to-real failure modes, formulations for physics-guided policy updates, a layered safety architecture, and structured experimental designs. However, the study remains purely conceptual, with no experiments conducted, and the approach demands substantial computing resources, with training reportedly slowed by 30-50%; there is also an inherent tension between modeling fidelity and practical optimization[^3]. Despite these barriers, the explicit disclosure of limitations and the thorough methodological planning provide solid starting points for subsequent empirical work, although actual implementation remains distant. This assessment examines the underlying concepts, identifies missing components, compares the proposed tools with existing state-of-the-art solutions, and outlines steps toward experimental validation across varied driving conditions.
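
To make the idea of a physics-guided policy update concrete, the minimal sketch below shows one common way such an objective can be composed: a standard policy gradient term plus a weighted penalty on violations of a simple longitudinal point-mass model. This is an illustrative assumption on the reviewer's part, not the reviewed paper's formulation; the function names, the weighting factor `lam`, and the point-mass dynamics are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the reviewed paper's method): combining a
# policy-gradient surrogate with a physics-residual penalty, the basic
# pattern of physics-informed RL. All names and constants are assumptions.

def policy_gradient_loss(log_probs, advantages):
    """REINFORCE-style surrogate: -E[log pi(a|s) * advantage]."""
    return -np.mean(log_probs * advantages)

def physics_residual_loss(states, next_states, actions, dt=0.1, mass=1500.0):
    """Penalize transitions that violate Newton's second law for a
    point-mass longitudinal model: v_{t+1} ~ v_t + (F / m) * dt."""
    v, v_next = states[:, 1], next_states[:, 1]   # longitudinal speed [m/s]
    force = actions[:, 0]                         # commanded drive force [N]
    v_pred = v + (force / mass) * dt              # physics-based prediction
    return np.mean((v_next - v_pred) ** 2)

def physics_informed_loss(log_probs, advantages, states, next_states,
                          actions, lam=0.1):
    """Total objective: data-driven RL term plus weighted physics term."""
    return (policy_gradient_loss(log_probs, advantages)
            + lam * physics_residual_loss(states, next_states, actions))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    batch = 32
    log_probs = rng.normal(size=batch)
    advantages = rng.normal(size=batch)
    states = rng.normal(size=(batch, 2))      # [position, speed]
    actions = rng.normal(size=(batch, 1))     # [drive force]
    next_states = states + 0.1 * rng.normal(size=(batch, 2))
    print("combined loss:", physics_informed_loss(
        log_probs, advantages, states, next_states, actions))
```

In a full implementation the physics residual would be differentiated through the policy or a learned dynamics model during training; the sketch only illustrates how the data-driven and physics-based terms are weighted into a single objective.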

International Journal of Recent Research and Review
ISSN: 2277-8322
Vol. XVIII, Issue 3
September 2025