Andrew Barto

Professor Emeritus
Retired Co-Director, Autonomous Learning Laboratory

College of Information and Computer Sciences
272 Computer Science Building
University of Massachusetts Amherst
Amherst, MA 01003
email: barto@cs.umass.edu

Note to Potential Grad-Student, Internship, and Post-Doc Applicants

Note that I am a Professor Emeritus and retired as co-director of the Autonomous Learning Laboratory. I am no longer taking on new students or interns. I encourage potential applicants to explore other opportunities in the College of Information and Computer Sciences here.

Research Interests

My research interests center on learning in machines and animals. I have been working on developing learning algorithms that are useful for engineering applications but that also make contact with learning as studied by psychologists and neuroscientists. Although I make no claims to being either a psychologist or a neuroscientist, I spend a lot of my time interacting with psychologists and neuroscientists and reading the literature in those fields. I feel strongly that new developments should be integrated as closely as possible with the state-of-the-art in as many of the relevant fields as possible. It is also essential to understand how new developments relate to what others have done in the past.

In the case of reinforcement learning---whose main ideas go back a very long way---it has been immensely gratifying to participate in establishing new links between reinforcement learning and methods from the theory of stochastic optimal control. Especially exciting to me are the connections between temporal difference (TD) algorithms and the brain's dopamine system. These are partly responsible for rekindling my interest in reinforcement learning as an approach to building and understanding autonomous agents, rather than as a collection of methods for approximating solutions to large-scale engineering problems. (Though my interest in the latter is still strong!)
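The TD connection can be made concrete with a small sketch. Below is a minimal TD(0) prediction algorithm on a hypothetical five-state random walk; the environment, step size, and episode count are illustrative assumptions, not taken from any particular study. The quantity delta computed inside the loop is the temporal-difference error, the signal that has been compared to phasic dopamine responses.

```python
import random

def td0_random_walk(episodes=2000, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value prediction on a toy 5-state random walk.

    Nonterminal states 0..4; each episode starts in state 2 and moves
    left or right with equal probability.  Stepping off the right end
    yields reward +1, off the left end reward 0; all other rewards are 0.
    """
    rng = random.Random(seed)
    V = [0.0] * 5                       # value estimate for each state
    for _ in range(episodes):
        s = 2
        while 0 <= s <= 4:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            if s_next > 4:              # terminated off the right end
                r, v_next = 1.0, 0.0
            elif s_next < 0:            # terminated off the left end
                r, v_next = 0.0, 0.0
            else:
                r, v_next = 0.0, V[s_next]
            # TD error: the reward-prediction-error signal
            delta = r + gamma * v_next - V[s]
            V[s] += alpha * delta
            s = s_next
    return V
```

For this chain the true values are 1/6, 2/6, ..., 5/6, and the estimates drift toward them as episodes accumulate.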

Most of my recent work has been about extending reinforcement learning methods so that they can work in real-time with real experience, rather than solely with simulated experience as in many of the most impressive applications to date. Of particular interest to me at present is what psychologists call intrinsically motivated behavior, meaning behavior that is done for its own sake rather than as a step toward solving a specific problem of clear practical value. What we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. Recent work by my colleagues and me on what we call intrinsically motivated reinforcement learning is aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that form the building blocks for open-ended learning. Visit the Autonomous Learning Laboratory page for some more details.
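One common, simple way to formalize intrinsic motivation is a count-based novelty bonus: the agent is rewarded for reaching states it has rarely visited, with no extrinsic reward at all. The sketch below is a toy illustration of that idea using ordinary Q-learning on a hypothetical ten-state chain; the bonus form, environment, and parameters are my own illustrative assumptions, not the specific methods developed in the Autonomous Learning Laboratory.

```python
import math
import random
from collections import defaultdict

def novelty_driven_exploration(steps=5000, alpha=0.5, gamma=0.9,
                               eps=0.1, seed=0):
    """Q-learning driven purely by a count-based novelty bonus.

    Environment: a 10-state chain; actions move left or right (clipped
    at the ends).  There is no extrinsic reward; reaching state s' pays
    an intrinsic reward of 1/sqrt(N(s')), so rarely visited states stay
    attractive and the agent is pushed to explore on its own.
    """
    rng = random.Random(seed)
    n_states, actions = 10, (-1, +1)
    Q = defaultdict(float)              # Q[(state, action)]
    visits = defaultdict(int)           # N(s): visit counts
    s = 0
    for _ in range(steps):
        if rng.random() < eps:          # epsilon-greedy action choice
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        visits[s_next] += 1
        r_intrinsic = 1.0 / math.sqrt(visits[s_next])  # novelty bonus
        target = r_intrinsic + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next
    return visits
```

As a state's visit count grows, its bonus decays, so the agent's attention shifts toward states it has not yet mastered; richer versions of this idea drive the construction of reusable skills.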

Areas of interest

  • Mathematical theory of learning and planning in stochastic sequential decision problems
  • Methods for scaling up reinforcement learning
  • Psychology, neuroscience, and computational theory of motivation, reward, and addiction
  • Computational models of learning and adaptation in animal motor control systems
  • Interaction of learning and evolution
