A research team has shown for the first time that reinforcement learning, a technique in which a neural network learns the best action to perform at each moment based on a series of rewards, allows autonomous vehicles and underwater robots to locate and carefully track marine objects and animals.
The details are reported in a paper published in Science Robotics.
Underwater robotics is emerging as a key tool for improving knowledge of the oceans in the face of the many difficulties of exploring them, with vehicles capable of descending to depths of up to 4,000 meters. The in-situ data they provide help to complement other data, such as that obtained from satellites. This technology makes it possible to study small-scale phenomena, such as CO2 capture by marine organisms, which helps to regulate the climate.
Specifically, the new work shows that reinforcement learning, widely used in control and robotics as well as in natural-language-processing tools such as ChatGPT, allows underwater robots to learn which actions to perform at any given moment to achieve a specific goal. The resulting action policies match, and in certain circumstances outperform, traditional methods based on analytical development.
“This type of learning allows us to train a neural network to optimize a specific task, which would be very difficult to achieve otherwise. For example, we have been able to demonstrate that it is possible to optimize the trajectory of a vehicle to locate and track objects moving underwater,” explains Ivan Masmitjà, the lead author of the study, who carried out the work between the Institut de Ciències del Mar (ICM-CSIC) and the Monterey Bay Aquarium Research Institute (MBARI).
This “will allow us to deepen the study of ecological phenomena such as the migration and small- and large-scale movement of a multitude of marine species using autonomous robots. In addition, these advances will make it possible to monitor other oceanographic instruments in real time through a network of robots, in which some remain at the surface tracking and relaying by satellite the actions performed by other robotic platforms on the seabed,” points out ICM-CSIC researcher Joan Navarro, who also participated in the study.
To carry out this work, the researchers used acoustic ranging techniques, which estimate the position of an object from distance measurements taken at different points. As a result, the accuracy of the localization depends strongly on where the acoustic range measurements are taken.
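To illustrate the principle behind acoustic ranging, the minimal sketch below estimates a target's 2D position from noisy range measurements with a least-squares (Gauss-Newton) fit. It is our illustration, not code from the study; the observer positions, noise level and solver are assumed values.

```python
import numpy as np

# Range-only localization: estimate a target's 2D position from distance
# (range) measurements taken at several observer positions.
# Positions, noise level and solver choice are illustrative assumptions,
# not the parameters used in the study.

def localize(observer_positions, ranges, iters=50):
    """Gauss-Newton least-squares fit of a target position to range data."""
    x = observer_positions.mean(axis=0)  # initial guess: centroid of observers
    for _ in range(iters):
        diffs = x - observer_positions                            # (N, 2)
        dists = np.maximum(np.linalg.norm(diffs, axis=1), 1e-9)   # predicted ranges
        residuals = dists - ranges                                # range errors
        J = diffs / dists[:, None]                                # Jacobian of range w.r.t. x
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x = x - step
    return x

# Illustrative scenario: a hidden target and four measurement points.
rng = np.random.default_rng(0)
target = np.array([120.0, -40.0])
observers = np.array([[0.0, 0.0], [200.0, 0.0], [200.0, -150.0], [0.0, -150.0]])
true_ranges = np.linalg.norm(observers - target, axis=1)
noisy_ranges = true_ranges + rng.normal(0.0, 2.0, size=4)  # ~2 m range noise

print(localize(observers, noisy_ranges))  # close to [120, -40]
```

Because the fit depends on the geometry of the measurement points relative to the target, poorly placed measurements make the problem ill conditioned, which is why the choice of where to measure matters so much.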
This is where artificial intelligence, and specifically reinforcement learning, comes in: it identifies the best measurement points and, therefore, the optimal trajectory for the robot to follow.
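To make this concrete, the toy sketch below wires a range-only localizer into a tabular Q-learning loop: at each step the agent picks a heading, takes a noisy range measurement, and is rewarded when the localization error shrinks. This is only a sketch of the general idea; the study trained neural-network policies, and the state, action and reward design here are assumptions of ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def localize(observers, ranges, iters=30):
    """Least-squares range-only localization (same idea as the sketch above)."""
    x = observers.mean(axis=0)
    for _ in range(iters):
        diffs = x - observers
        dists = np.maximum(np.linalg.norm(diffs, axis=1), 1e-6)
        step, *_ = np.linalg.lstsq(diffs / dists[:, None], dists - ranges, rcond=None)
        x = x - step
    return x

HEADINGS = np.array([[np.cos(a), np.sin(a)]
                     for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
STEP, NOISE = 15.0, 2.0  # metres per decision and range noise (toy values)

def bearing_sector(v):
    """Discretize a 2D direction into one of 8 sectors (the tabular 'state')."""
    return int(((np.arctan2(v[1], v[0]) + 2 * np.pi) % (2 * np.pi)) // (np.pi / 4))

n_states, n_actions = 8, 8
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(300):
    target = rng.uniform(-100, 100, size=2)       # ground truth, known only in simulation
    robot = rng.uniform(-100, 100, size=2)
    observers, ranges = [], []
    estimate = robot.copy()
    prev_err = np.linalg.norm(estimate - target)
    state = bearing_sector(robot - estimate)
    for t in range(40):
        # epsilon-greedy choice among 8 candidate headings
        action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
        robot = robot + STEP * HEADINGS[action]
        # take a noisy acoustic range measurement from the new position
        observers.append(robot.copy())
        ranges.append(np.linalg.norm(robot - target) + rng.normal(0, NOISE))
        estimate = localize(np.array(observers), np.array(ranges))
        err = np.linalg.norm(estimate - target)
        reward = prev_err - err                   # reward = improvement in localization error
        next_state = bearing_sector(robot - estimate)
        # tabular Q-learning update
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state, prev_err = next_state, err
```

The key point is that the reward signal, here the reduction in localization error, lets the agent discover where to take its measurements without a hand-coded trajectory.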
The neural networks were trained, in part, on the computer cluster at the Barcelona Supercomputing Center (BSC-CNS), home to the most powerful supercomputer in Spain and one of the most powerful in Europe. “This made it possible to adjust the parameters of the different algorithms much faster than with conventional computers,” says Prof. Mario Martin, of the Computer Science Department of the UPC and an author of the study.
Once trained, the algorithms were tested on different autonomous vehicles, including the AUV Sparus II developed by VICOROB, in a series of experimental missions carried out in the port of Sant Feliu de Guíxols, in the Baix Empordà, and in Monterey Bay (California), in collaboration with Kakani Katija, principal investigator of the Bioinspiration Lab at MBARI.
“Our simulation environment incorporates the control architecture of real vehicles, which allowed us to implement the algorithms efficiently before going to sea,” explains Narcís Palomeras, from the UdG.
In future research, the team will study the possibility of applying the same algorithms to more complex missions, for example, using multiple vehicles to locate objects and to detect fronts, thermoclines or algae upwelling cooperatively through multi-platform reinforcement learning techniques.
More information:
I. Masmitja et al, Dynamic robotic tracking of underwater targets using reinforcement learning, Science Robotics (2023). DOI: 10.1126/scirobotics.ade7811
Citation:
Reinforcement learning allows underwater robots to locate and track objects underwater (2023, July 28)
retrieved 28 July 2023
from https://techxplore.com/news/2023-07-underwater-robots-track.html