LevDoom: A Benchmark for Generalization on Level Difficulty in Reinforcement Learning

Tristan Tomilin, Tianhong Dai, Meng Fang, Mykola Pechenizkiy

Research output: Chapter in Book/Report/Conference proceeding - Published conference contribution

2 Citations (Scopus)

Abstract

Despite the recent success of deep reinforcement learning (RL), the generalization ability of RL agents remains an open problem for real-world applicability. RL agents trained on pixels may be completely derailed from their objectives in unseen situations with different levels of visual change. However, many existing RL suites do not address this as a primary objective or lack a consistent level design of increasing complexity. In this paper, we introduce the LevDoom benchmark, a suite of semi-realistic 3D simulation environments with coherent difficulty levels in the renowned video game Doom, designed to benchmark generalization in vision-based RL. We demonstrate how our benchmark reveals weaknesses of popular deep RL algorithms, which fail to cope with modified environments. We further establish how our difficulty level design presents increasing complexity to these algorithms.
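The abstract describes difficulty levels of coherently increasing complexity. A toy sketch of one way such a design can be organized, assuming (purely for illustration; the modification names below are hypothetical, not taken from the paper) that a level's difficulty corresponds to the number of environment modifications applied simultaneously:

```python
from itertools import combinations

# Hypothetical modification types an agent might face at test time.
# These names are illustrative placeholders, not the paper's actual list.
MODIFICATIONS = ["textures", "obstacles", "entity_size", "entity_type", "agent_height"]

def environments_at_level(level: int) -> list[tuple[str, ...]]:
    """Enumerate the modified environments at a given difficulty level.

    Level 0 is the unmodified base environment; level N applies N
    modifications at once, so higher levels demand broader generalization.
    """
    if level == 0:
        return [()]
    return list(combinations(MODIFICATIONS, level))

# Level 2 combines every pair of the five modification types.
print(len(environments_at_level(2)))  # 10
```

Under this scheme the number of test environments grows combinatorially with the level, which is one simple way a benchmark can present agents with steadily harder generalization settings.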
Original language: English
Title of host publication: IEEE Conference on Games 2022
Publisher: IEEE Xplore
Pages: 72-79
Number of pages: 8
ISBN (Electronic): 978-1-6654-5989-1
Publication status: Published - 20 Sept 2022
Event: 2022 IEEE Conference on Games - Beijing, China
Duration: 21 Aug 2022 - 24 Aug 2022
Internet address: https://ieee-cog.org/2022/

Conference

Conference: 2022 IEEE Conference on Games
Abbreviated title: CoG 2022
Country/Territory: China
City: Beijing
Period: 21/08/22 - 24/08/22

Keywords

  • reinforcement learning
  • generalization
  • vizdoom
