Abstract
Despite the recent success of deep reinforcement learning (RL), the generalization ability of RL agents remains an open problem for real-world applicability. RL agents trained on pixels may be completely derailed from achieving their objectives in unseen situations with different levels of visual change. However, many existing RL suites either do not address generalization as a primary objective or lack a consistent level design of increasing complexity. In this paper, we introduce the LevDoom benchmark, a suite of semi-realistic 3D simulation environments with coherent difficulty levels in the renowned video game Doom, designed to benchmark generalization in vision-based RL. We demonstrate how our benchmark reveals weaknesses of several popular deep RL algorithms, which fail to generalize to modified environments. We further establish how our difficulty level design presents increasing complexity to these algorithms.
Original language | English |
---|---|
Title of host publication | IEEE Conference on Games 2022 |
Publisher | IEEE Xplore |
Pages | 72-79 |
Number of pages | 8 |
ISBN (Electronic) | 978-1-6654-5989-1 |
DOIs | |
Publication status | Published - 20 Sept 2022 |
Event | 2022 IEEE Conference on Games, Beijing, China, 21 Aug 2022 → 24 Aug 2022, https://ieee-cog.org/2022/ |
Conference
Conference | 2022 IEEE Conference on Games |
---|---|
Abbreviated title | CoG 2022 |
Country/Territory | China |
City | Beijing |
Period | 21/08/22 → 24/08/22 |
Internet address | https://ieee-cog.org/2022/ |
Keywords
- reinforcement learning
- generalization
- vizdoom