A Hierarchical Approach to Efficient Reinforcement Learning in Deterministic Domains

Carlos Diuk, Alexander Strehl, Michael Littman
AAMAS 2006

Abstract: Factored representations, model-based learning, and hierarchies are well-studied techniques for improving the learning efficiency of reinforcement-learning algorithms in large-scale state spaces. We bring these three ideas together in a new algorithm. Our algorithm tackles two open problems from the reinforcement-learning literature and provides a solution to those problems in deterministic domains. First, we show how models can improve learning speed in the hierarchy-based MaxQ framework without disrupting opportunities for state abstraction. Second, we show how hierarchies can augment existing factored exploration algorithms to achieve not only low sample complexity for learning, but provably efficient planning as well. We illustrate the resulting performance gains in example domains, and we prove polynomial bounds on the computational effort needed to attain near-optimal performance within the hierarchy.
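To give a rough sense of the model-based ingredient the abstract refers to, the sketch below shows a minimal model-based learner for a deterministic domain: a single observation of a (state, action) pair fixes its outcome exactly, and planning over the learned model with optimistic values for unvisited pairs drives exploration. This is an illustrative assumption, not the paper's algorithm; the class and function names (DeterministicModel, value_iteration, OPTIMISTIC_VALUE) are hypothetical.

```python
# Illustrative sketch only (not the algorithm from the paper): model-based
# learning in a deterministic domain. Observed transitions are memorized
# exactly; unknown (state, action) pairs are valued optimistically, which
# steers the greedy policy toward unexplored transitions.

from collections import defaultdict

OPTIMISTIC_VALUE = 1.0  # assumed upper bound on per-step reward


class DeterministicModel:
    def __init__(self, actions):
        self.actions = actions
        self.transition = {}  # (state, action) -> next state
        self.reward = {}      # (state, action) -> observed reward

    def update(self, state, action, reward, next_state):
        # In a deterministic domain, one observation determines the outcome.
        self.transition[(state, action)] = next_state
        self.reward[(state, action)] = reward

    def value_iteration(self, states, gamma=0.95, horizon=100):
        # Plan over the learned model; unknown pairs receive an optimistic
        # value ("optimism under uncertainty"), encouraging exploration.
        V = defaultdict(float)
        for _ in range(horizon):
            new_V = {}
            for s in states:
                best = float("-inf")
                for a in self.actions:
                    if (s, a) in self.transition:
                        q = self.reward[(s, a)] + gamma * V[self.transition[(s, a)]]
                    else:
                        q = OPTIMISTIC_VALUE / (1.0 - gamma)
                    best = max(best, q)
                new_V[s] = best
            V = defaultdict(float, new_V)
        return V
```

The paper's contribution, per the abstract, is combining this kind of model-based learning with MaxQ-style hierarchies and factored representations so that both sample complexity and planning cost remain polynomially bounded; the sketch above omits the hierarchy and factoring entirely.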