Evolution of Q Values for Deep Q Learning in Stable Baselines

Citation:

M. Andrews, C. Dibek, and K. Palyutina, “Evolution of Q Values for Deep Q Learning in Stable Baselines,” Submitted.

Abstract:

We investigate the evolution of the Q values for the implementation of Deep Q Learning (DQL) in the Stable Baselines library. Stable Baselines incorporates the latest Reinforcement Learning techniques and achieves superhuman performance in many game environments. However, for some simple non-game environments, the DQL implementation in Stable Baselines can struggle to find the correct actions. In this paper we aim to understand the types of environment where this suboptimal behavior can occur, and we also investigate the corresponding evolution of the Q values for individual states. We compare a smart TrafficLight environment (where performance is poor) with the OpenAI Gym FrozenLake environment (where performance is perfect). We observe that DQL struggles with TrafficLight because actions are reversible, and hence the Q values in a given state are closer together than in FrozenLake. We then investigate the evolution of the Q values using a recent decomposition technique of Achiam et al. We observe that for TrafficLight, the function approximation error and the complex relationships between the states lead to a situation where some Q values meander far from optimal.
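The intuition about reversible actions can be made concrete with a toy example (hypothetical, not the paper's code): value iteration on two tiny chain MDPs. In the "reversible" chain, a wrong action steps back one state and the mistake can be undone, so the two Q values in a state stay close; in the "irreversible" chain, a wrong action ends the episode with no reward (as when falling into a hole in FrozenLake), creating a large Q-value gap.

```python
# Illustrative sketch: exact value iteration on a 5-state chain MDP.
# Reaching state 4 yields reward 1 and ends the episode. We compare
# a reversible variant ("left" steps back) with an irreversible one
# ("left" terminates with no reward).

GAMMA = 0.9
N = 5  # states 0..4; state 4 is the goal

def q_values(reversible):
    """Return a dict mapping state -> (Q(s, left), Q(s, right))."""
    V = [0.0] * N
    Q = {}
    for _ in range(200):  # more than enough sweeps to converge
        for s in range(N - 1):
            # "right" moves one state toward the goal
            s_r = s + 1
            if s_r == N - 1:
                q_right = 1.0  # goal reached: reward 1, episode ends
            else:
                q_right = GAMMA * V[s_r]
            if reversible:
                # "left" steps back one state: a recoverable mistake
                q_left = GAMMA * V[max(s - 1, 0)]
            else:
                # "left" ends the episode with no reward: unrecoverable
                q_left = 0.0
            Q[s] = (q_left, q_right)
            V[s] = max(q_left, q_right)
    return Q

if __name__ == "__main__":
    for label, rev in [("reversible", True), ("irreversible", False)]:
        q_left, q_right = q_values(rev)[3]
        print(f"{label}: Q(3,left)={q_left:.3f}  Q(3,right)={q_right:.3f}")
```

In the reversible chain the two Q values at state 3 are 0.81 and 1.0, while in the irreversible chain they are 0.0 and 1.0, so a function-approximation error of a given size is far more likely to flip the greedy action in the reversible case.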

Notes:

This work was partially performed during Cemil Dibek’s summer internship at Nokia Bell Labs.

[arXiv]

Last updated on 05/12/2020