Diversity-based Trajectory and Goal Selection with Hindsight Experience Replay

Tianhong Dai, Hengyan Liu, Kai Arulkumaran, Guangyu Ren, Anil Anthony Bharath

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution


Hindsight experience replay (HER) is a goal relabelling technique typically used with off-policy deep reinforcement learning algorithms to solve goal-oriented tasks; it is well suited to robotic manipulation tasks that deliver only sparse rewards. In HER, both trajectories and transitions are sampled uniformly for training. However, not all of the agent’s experiences contribute equally to training, and so naive uniform sampling may lead to inefficient learning. In this paper, we propose diversity-based trajectory and goal selection with HER (DTGSH). Firstly, trajectories are sampled according to the diversity of the goal states as modelled by determinantal point processes (DPPs). Secondly, transitions with diverse goal states are selected from the trajectories by using k-DPPs. We evaluate DTGSH on five challenging robotic manipulation tasks in simulated robot environments, where we show that our method can learn more quickly and reach higher performance than other state-of-the-art approaches on all tasks.
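The core idea in the abstract — scoring each trajectory by the diversity of its goal states under a determinantal point process, then sampling trajectories in proportion to that score — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the RBF similarity kernel, the bandwidth `sigma`, and the function names are all assumptions for the sketch.

```python
import numpy as np

def diversity_score(goal_states, sigma=1.0):
    """Diversity of one trajectory's goal states: the determinant of an
    RBF similarity kernel over those states (the DPP score of the set).
    Diverse goal states give a near-identity kernel and a large determinant;
    near-duplicate goal states give a near-singular kernel and a score near 0."""
    g = np.asarray(goal_states, dtype=float)
    sq = np.sum(g ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * g @ g.T   # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))             # RBF kernel (assumed choice)
    return float(np.linalg.det(K))

def sample_trajectories(buffer_goals, n_samples, rng=None):
    """Sample trajectory indices with probability proportional to each
    trajectory's goal-state diversity score."""
    rng = np.random.default_rng(rng)
    scores = np.array([diversity_score(g) for g in buffer_goals])
    probs = scores / scores.sum()
    return rng.choice(len(buffer_goals), size=n_samples, p=probs)
```

Under this sketch, a trajectory whose achieved goals spread across the workspace is sampled far more often than one that barely moves, which is the intuition behind replacing HER's uniform trajectory sampling.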
Original language: English
Title of host publication: Pacific Rim International Conference on Artificial Intelligence
Subtitle of host publication: PRICAI 2021: Trends in Artificial Intelligence
Publication status: Published - 2021

Publication series

Name: Lecture Notes in Computer Science


