Joint workshop at ICML, UAI, and COLT 2009
June 18, 2009, Montreal, Canada
Overview

Although reinforcement learning methods have been effectively applied to a number of problems of practical importance, successful large-scale applications remain the exception rather than the norm. Problems with large state spaces still pose considerable challenges to existing algorithms.
Abstraction is the process of factoring out irrelevant details, in other words, of focusing only on the information that is relevant for a particular purpose. For a number of years, the research community has been exploring various forms of abstraction as potential mechanisms for scaling up reinforcement learning algorithms to large, complex problems. State abstraction approaches and temporal abstraction methods have become well established, while recent representation-discovery methods have shown a great deal of promise.
The goal of this workshop is to promote interaction between researchers who work on various forms of abstraction in reinforcement learning, to explore possible areas of synergy between existing approaches, and to open up discussion on novel techniques that can harness the existing strengths of different types of abstraction.
The conference proceedings are now available for download, as are the individual papers. You can also view some of the talks online at videolectures.net (scroll down) or by following the individual links in the program.