Transfer of experience between reinforcement learning environments with progressive difficulty

Michael G. Madden, Tom Howley

Research output: Contribution to journal › Article › peer-review

49 Citations (Scopus)

Abstract

This paper describes an extension to reinforcement learning (RL) in which a standard RL algorithm is augmented with a mechanism for transferring experience gained in one problem to new but related problems. In this approach, named Progressive RL, an agent acquires experience of operating in a simple environment through experimentation, and then engages in a period of introspection, during which it rationalises the experience gained and formulates symbolic knowledge describing how to behave in that simple environment. When subsequently experimenting in a more complex but related environment, it is guided by this knowledge until it gains direct experience. A test domain with 15 maze environments, arranged in order of difficulty, is described. A range of experiments in this domain is presented, demonstrating the benefit of Progressive RL relative to a basic RL approach in which each puzzle is solved from scratch. The experiments also analyse the knowledge formed during introspection, illustrate how domain knowledge may be incorporated, and show that Progressive RL may be used to solve complex puzzles more quickly.

Original language: English
Pages (from-to): 375-398
Journal: Artificial Intelligence Review
Volume: 21
Issue number: 3-4
Publication status: Published - Jun 2004

Keywords

  • C4.5
  • Experience transfer
  • Naive Bayes
  • PART
  • Progressive RL
  • Q-learning
  • Reinforcement learning
  • Rule learning
