Inferring preferences from demonstrations in multi-objective reinforcement learning

Research output: Contribution to a Journal (Peer & Non Peer) › Article › peer-review

Abstract

Many decision-making problems feature multiple objectives, and it is not always possible to know the preferences of a human or agent decision-maker over those objectives. However, demonstrated behaviors from the decision-maker are often available. This research proposes a dynamic weight-based preference inference (DWPI) algorithm that can infer the preferences of agents acting in multi-objective decision-making problems from demonstrations. The proposed algorithm is evaluated on three multi-objective Markov decision processes: Deep Sea Treasure, Traffic, and Item Gathering, and is compared to two existing preference inference algorithms. Empirical results demonstrate significant improvements over the baseline algorithms in both time efficiency and inference accuracy. The DWPI algorithm maintains its performance when inferring preferences from sub-optimal demonstrations. Moreover, the DWPI algorithm does not require any interaction with the user during inference; only demonstrations are needed. We provide a correctness proof and complexity analysis of the algorithm and statistically evaluate its performance under different representations of demonstrations.
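As a rough illustration of the weight-based preference inference idea described in the abstract, the sketch below trains a regression model to map a demonstration's vector-valued return back to a preference weight vector on the simplex. This is a minimal, hypothetical reconstruction, not the paper's implementation: the simulated environment, the choice of an MLP regressor, and all names (e.g. simulate_return) are assumptions introduced for illustration only.

```python
# Hypothetical sketch of weight-based preference inference (not the paper's code).
# Assumptions: preferences are a weight vector w on the simplex, a demonstration
# is summarised by its vector-valued return, and a supervised model maps
# returns back to weights.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_objectives = 3

def simulate_return(w, noise=0.05):
    """Stand-in for rolling out a dynamic-weight agent under preference w:
    here the vector return is simply a noisy function of the weights."""
    base = np.array([10.0, 5.0, 2.0])
    return w * base + rng.normal(0.0, noise, size=n_objectives)

# Build a training set of (vector return -> weight vector) pairs by sampling
# random preferences and recording the returns achieved under them.
weights = rng.dirichlet(np.ones(n_objectives), size=2000)
returns = np.array([simulate_return(w) for w in weights])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(returns, weights)

# Inference: given a demonstration's vector return, predict the preference
# weights, then project the prediction back onto the simplex.
demo_return = simulate_return(np.array([0.6, 0.3, 0.1]))
w_hat = np.clip(model.predict(demo_return.reshape(1, -1))[0], 0.0, None)
w_hat /= w_hat.sum()
print("inferred preference weights:", w_hat)
```

Note that because inference here is a single forward pass through a trained model, no interaction with the demonstrator is needed at inference time, which is consistent with the property the abstract highlights.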

Original language: English
Pages (from-to): 22845-22865
Number of pages: 21
Journal: Neural Computing and Applications
Volume: 36
Issue number: 36
Publication status: Published - Dec 2024

Keywords

  • Dynamic weight multi-objective agent
  • Multi-objective reinforcement learning
  • Preference inference
