Self-Supervised Adversarial Imitation Learning

Juarez Monteiro, Nathan Gavenski, Felipe Meneguzzi, Rodrigo C. Barros

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

Abstract

Behavioural cloning is an imitation learning technique that teaches an agent how to behave via expert demonstrations. Recent approaches use self-supervision of fully-observable, unlabelled snapshots of the states to decode state pairs into actions. However, the iterative learning scheme employed by these techniques is prone to getting trapped in bad local minima. Previous work uses goal-aware strategies to address this issue, but these require manual intervention to verify whether an agent has reached its goal. We address this limitation by incorporating a discriminator into the original framework, offering two key advantages while also solving a learning problem present in previous work. First, it removes the need for manual intervention. Second, it aids learning by guiding function approximation based on the state transitions of the expert's trajectories. Third, the discriminator resolves a learning issue commonly present in the policy model, namely that the agent sometimes performs a 'no action' within the environment until it finally halts.
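
As an illustration of the idea outlined in the abstract, the following is a minimal, hypothetical PyTorch sketch of a discriminator over state transitions (s, s') whose output can stand in for a manual goal check. The class name, network sizes, and training step shown here are assumptions for illustration only and do not reflect the authors' implementation.

```python
# Hypothetical sketch: a discriminator classifies state transitions (s, s')
# as expert-like (label 1) or agent-like (label 0), providing a learning
# signal in place of manual goal verification. Not the authors' code.
import torch
import torch.nn as nn

class TransitionDiscriminator(nn.Module):
    """Scores a state transition (s, s') as expert-like or agent-like."""
    def __init__(self, state_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        # Concatenate the state pair and return one logit per transition.
        return self.net(torch.cat([s, s_next], dim=-1))

if __name__ == "__main__":
    state_dim = 8
    disc = TransitionDiscriminator(state_dim)
    opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    # Placeholder batches standing in for expert and agent transitions.
    expert_s, expert_s_next = torch.randn(32, state_dim), torch.randn(32, state_dim)
    agent_s, agent_s_next = torch.randn(32, state_dim), torch.randn(32, state_dim)

    # Discriminator step: expert transitions labelled 1, agent transitions 0.
    logits_expert = disc(expert_s, expert_s_next)
    logits_agent = disc(agent_s, agent_s_next)
    loss = bce(logits_expert, torch.ones_like(logits_expert)) + \
           bce(logits_agent, torch.zeros_like(logits_agent))
    opt.zero_grad()
    loss.backward()
    opt.step()

    # The agent-side objective would then favour transitions the discriminator
    # mistakes for expert ones, replacing the manual goal-reached check.
    print(f"discriminator loss: {loss.item():.4f}")
```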

Original language: English
Title of host publication: 2023 International Joint Conference on Neural Networks (IJCNN)
Subtitle of host publication: 18-23 June 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665488679
ISBN (Print): 978-1-6654-8868-6
DOIs
Publication status: Published - 2 Aug 2023
Event: 2023 International Joint Conference on Neural Networks, IJCNN 2023 - Gold Coast, Australia
Duration: 18 Jun 2023 - 23 Jun 2023

Conference

Conference: 2023 International Joint Conference on Neural Networks, IJCNN 2023
Country/Territory: Australia
City: Gold Coast
Period: 18/06/23 - 23/06/23

Bibliographical note

This work was supported by UK Research and Innovation [grant number EP/S023356/1], in the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (www.safeandtrustedai.org) and made possible via King’s Computational Research, Engineering and Technology Environment (CREATE) [27].

Keywords

  • Adversarial Learning
  • Imitation Learning
  • Learning from Observation
  • Self-Supervised Learning
