Josip Josifovski(1*), Sayantan Auddy(2*), Mohammadhossein Malmir(1),
Justus Piater(2,4), Alois Knoll(1) and Nicolás Navarro-Guerrero(3)


(1) School of Computation, Information and Technology, Technical University of Munich, Germany
(2) Department of Computer Science, University of Innsbruck, Austria
(3) L3S Research Center, Leibniz Universität Hannover, Germany
(4) Digital Science Center (DiSC), University of Innsbruck, Austria
* Equal contribution

Domain Randomization (DR) is commonly used for sim2real transfer of reinforcement learning (RL) policies in robotics. Most DR approaches require a simulator with a fixed set of tunable parameters from the start of training, all of which are randomized simultaneously to train a model robust enough for use in the real world. However, the combined randomization of many parameters increases the task difficulty and might result in sub-optimal policies. To address this problem and to provide a more flexible training process, we propose Continual Domain Randomization (CDR) for RL, which combines domain randomization with continual learning to enable sequential training in simulation on one subset of randomization parameters at a time. Starting from a model trained in a non-randomized simulation, where the task is easier to solve, the model is trained on a sequence of randomizations, and continual learning is employed to retain the effects of previous randomizations. Our experiments on robotic reaching and grasping tasks show that a model trained in this fashion learns effectively in simulation and performs robustly on the real robot, matching or outperforming baselines that employ combined randomization or sequential randomization without continual learning.
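To make the CDR schedule concrete, the toy sketch below trains a parameter vector through an initial non-randomized phase followed by a sequence of randomization phases, where an L2 penalty toward the previous phase's weights stands in for a continual-learning method (e.g., an EWC-style regularizer). The quadratic objective, all function names, and all constants are illustrative assumptions, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_phase(w, target, noise_scale, w_anchor, cl_lambda,
                    steps=200, lr=0.05):
        """One CDR phase: descend the (randomized) task loss plus an
        L2 anchor penalty that stands in for continual learning."""
        for _ in range(steps):
            # Randomize the "domain" by perturbing the task target.
            noisy_target = target + noise_scale * rng.standard_normal(target.shape)
            grad = (w - noisy_target) + cl_lambda * (w - w_anchor)
            w = w - lr * grad
        return w

    target = np.array([1.0, -2.0, 0.5])  # stands in for the task solution
    w = np.zeros_like(target)

    # Phase 0: non-randomized simulation (no noise, no anchor penalty).
    w = train_phase(w, target, noise_scale=0.0, w_anchor=w, cl_lambda=0.0)

    # Subsequent phases: one randomization at a time, anchored to the
    # weights learned so far so earlier phases are not forgotten.
    for noise_scale in (0.1, 0.3, 0.5):
        w = train_phase(w, target, noise_scale, w_anchor=w.copy(), cl_lambda=1.0)

    print("final weights:", w, "| distance to target:", np.linalg.norm(w - target))

The anchor penalty is the simplest possible stand-in: it pulls the weights toward the solution of the previous phase while the new randomization is learned, mirroring how CDR uses continual learning to retain the effects of earlier randomizations.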

Supplementary video:

… Under development …