Q-Table compression for reinforcement learning

Leonardo Amado*, Felipe Meneguzzi*

*Corresponding authors for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Reinforcement learning (RL) algorithms are often used to compute agents capable of acting in environments without prior knowledge of the environment dynamics. However, these algorithms struggle to converge in environments with large branching factors and the correspondingly large state spaces they induce. In this work, we develop an approach to compress the number of entries in a Q-value table using a deep auto-encoder, together with a set of techniques to mitigate the large branching factor problem. We apply these techniques to a real-time strategy (RTS) game, a scenario in which both the state space and the branching factor are problematic. We empirically evaluate an implementation of the technique to control agents in an RTS game scenario where classical RL fails, and we identify a number of possible avenues for further work on this problem.
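
To make the core idea concrete, below is a minimal sketch of Q-table compression via a deep auto-encoder: train an auto-encoder on observed state vectors, then index a smaller Q-table by a discretised latent code rather than by the raw state, so that states with similar encodings share one table row. This is an illustrative assumption-laden sketch, not the paper's actual method: the PyTorch framework, the fully connected architecture, the dimensions, and the rounding-based discretisation are all hypothetical choices made here for brevity.

```python
# Sketch: compress a Q-table by keying it on an auto-encoder's latent code.
# All names, dimensions, and hyper-parameters are illustrative, not the
# architecture from the paper.
import torch
import torch.nn as nn
from collections import defaultdict


class StateAutoEncoder(nn.Module):
    """Deep auto-encoder that maps raw states to a low-dimensional code."""

    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_autoencoder(model, states, epochs=50, lr=1e-3):
    """Fit the auto-encoder to reconstruct the observed states."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states), states)
        loss.backward()
        opt.step()
    return model


def compress_key(model, state, n_bins=8):
    """Map a raw state to a hashable, discretised latent code.

    Distinct states that encode to the same code share a single Q-table
    row, which is what reduces the number of table entries.
    """
    with torch.no_grad():
        z = model.encoder(state)
    return tuple(torch.round(z * n_bins).int().tolist())


if __name__ == "__main__":
    state_dim, latent_dim, n_actions = 100, 4, 6
    states = torch.rand(512, state_dim)  # stand-in for logged RTS states
    ae = train_autoencoder(StateAutoEncoder(state_dim, latent_dim), states)

    # Compressed Q-table: one row of action-values per latent code.
    q_table = defaultdict(lambda: [0.0] * n_actions)
    for s in states:
        q_table[compress_key(ae, s)]
    print(f"{len(states)} states -> {len(q_table)} Q-table rows")
```

In this sketch, the compression ratio is governed by the latent dimensionality and the bin count: coarser codes merge more states into each row, trading table size against the risk of aliasing states that should receive distinct Q-values.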
Original language: English
Article number: e22
Pages (from-to): 1-21
Number of pages: 21
Journal: The Knowledge Engineering Review
Volume: 33
DOIs:
Publication status: Published - Dec 2018

Bibliographical note

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001.
