Abstract
This paper introduces a learning-based solution for the integrated motion planning and control of multiple Autonomous Underwater Vehicles (AUVs). Cooperative motion planning, encompassing tasks such as waypoint tracking and self/obstacle collision avoidance, is difficult to handle in a rule-based algorithmic paradigm: the diverse and unpredictable situations encountered force a proliferation of if-then conditions in the implementation. Recognizing the limitations of traditional approaches, which depend heavily on the model and geometry of the system, our solution offers an innovative paradigm shift. This study proposes an integrated motion planning and control strategy that leverages sensor and navigation outputs to dynamically generate longitudinal and lateral control outputs. At the heart of this methodology lies a continuous-action Deep Reinforcement Learning (DRL) framework based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. It overcomes the limitations of traditional methods through a carefully designed reward function, enabling the seamless execution of the control actions needed to maneuver multiple AUVs. Simulation tests under both nominal and perturbed conditions, including obstacles and underwater current disturbances, demonstrate the feasibility and robustness of the proposed technique.
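The abstract names TD3 as the underlying DRL algorithm, mapping sensor/navigation observations to bounded longitudinal and lateral control outputs. The following is a minimal numpy sketch of two of TD3's distinguishing mechanisms, twin critics with a clipped double-Q target and target-policy smoothing; the dimensions, variable names, and linear function approximators are illustrative assumptions only, not the paper's implementation (the third mechanism, delayed actor updates, is noted in a comment):

```python
import numpy as np

rng = np.random.default_rng(0)

ACT_DIM = 2   # longitudinal and lateral control outputs (per the abstract)
OBS_DIM = 6   # hypothetical sensor/navigation feature vector
MAX_ACT = 1.0

# Linear actor and twin linear critics -- stand-ins for neural networks.
W_actor = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))
W_q1 = rng.normal(scale=0.1, size=(OBS_DIM + ACT_DIM,))
W_q2 = rng.normal(scale=0.1, size=(OBS_DIM + ACT_DIM,))

def act(obs):
    """Deterministic policy: map an observation to bounded control outputs."""
    return MAX_ACT * np.tanh(W_actor @ obs)

def target_action(obs, noise_std=0.2, noise_clip=0.5):
    """TD3 target-policy smoothing: add clipped noise to the target action."""
    noise = np.clip(rng.normal(scale=noise_std, size=ACT_DIM),
                    -noise_clip, noise_clip)
    return np.clip(act(obs) + noise, -MAX_ACT, MAX_ACT)

def q_target(obs_next, reward, gamma=0.99):
    """Clipped double-Q target: take the minimum of the twin critics.

    In full TD3 the actor is updated less frequently than the critics
    (delayed policy updates); that training loop is omitted here.
    """
    x = np.concatenate([obs_next, target_action(obs_next)])
    return reward + gamma * min(W_q1 @ x, W_q2 @ x)
```

The `min` over the twin critics counteracts the Q-value overestimation that a single critic exhibits, which is the main motivation behind TD3 over DDPG.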
Original language | English |
---|---|
Pages (from-to) | 287-292 |
Number of pages | 6 |
Journal | IFAC-PapersOnLine |
Volume | 58 |
Issue number | 20 |
DOIs | |
Publication status | Published - 30 Oct 2024 |
Externally published | Yes |
Event | 15th IFAC Conference on Control Applications in Marine Systems, Robotics and Vehicles, CAMS 2024 - Blacksburg, United States. Duration: 3 Sept 2024 → 5 Sept 2024 |
Keywords
- autonomous underwater vehicle
- cooperative motion
- DRL
- integrated motion planning and control
- machine learning