Continuous control of an underground loader using deep reinforcement learning
S. Backman, D. Lindmark, K. Bodin, M. Servin, J. Mörk, and H. Löfgren. Continuous control of an underground loader using deep reinforcement learning. Machines 9(10):216 (2021). doi:10.3390/machines9100216 [pdf], arXiv:2011.00459.
Reinforcement-learning control of an underground loader is investigated in a simulated environment, using a multi-agent deep neural network approach. At the start of each loading cycle, one agent selects the dig position from a depth-camera image of the pile of fragmented rock. A second agent is responsible for continuous control of the vehicle, with the goal of filling the bucket at the selected loading point while avoiding collisions, getting stuck, or losing ground traction. It relies on motion and force sensors, as well as on camera and lidar. Using the soft actor-critic algorithm, the agents learn policies for efficient bucket filling over many subsequent loading cycles, with a clear ability to adapt to the changing environment. The best results, on average 75% of maximum bucket capacity, are obtained when a penalty for energy usage is included in the reward.
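The two-agent decomposition described above can be sketched in minimal form: a high-level agent chooses a dig position from a depth image once per loading cycle, and a low-level agent then emits continuous actuator commands at every timestep. This is only an illustrative skeleton under assumed interfaces; both agents below are placeholders standing in for the paper's trained networks, and the bucket-fill dynamics are a toy stand-in for the physics simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_dig_position(depth_image):
    """Hypothetical high-level agent: pick a dig position (image column)
    from a depth-camera view of the rock pile, once per loading cycle.
    Placeholder heuristic: the column where the pile surface is nearest."""
    return int(np.argmin(depth_image.min(axis=0)))

def control_step(observation):
    """Hypothetical low-level agent: continuous control of throttle,
    steering, and bucket actuation from the vehicle's sensor readings.
    Placeholder: random actions in [-1, 1], the range a tanh-squashed
    soft actor-critic policy would emit."""
    return rng.uniform(-1.0, 1.0, size=3)  # [throttle, steer, bucket]

def loading_cycle(depth_image, n_steps=50):
    """One loading cycle: select a dig point, then run the continuous
    controller for n_steps timesteps, tracking bucket fill in [0, 1]."""
    target = select_dig_position(depth_image)
    fill = 0.0
    for _ in range(n_steps):
        obs = {"target_column": target, "bucket_fill": fill}
        throttle, steer, bucket = control_step(obs)
        # Toy dynamics: the bucket fills only while driving forward.
        fill = min(1.0, fill + max(throttle, 0.0) * 0.02)
    return target, fill

depth = rng.uniform(2.0, 10.0, size=(48, 64))  # stand-in depth frame
target, fill = loading_cycle(depth)
print(f"dig column {target}, bucket fill {fill:.2f}")
```

In the paper itself, the low-level controller is trained with soft actor-critic against rewards for bucket filling and penalties for collisions, loss of traction, and (in the best-performing variant) energy usage; the sketch only shows how the two policies interleave within a cycle.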

Supplementary video

Work in part supported by VINNOVA (grant id 2019-04832) in the project Integrated Test Environment for the Mining Industry (SMIG).

Algoryx Simulation AB