Partially Observable Markov Decision Processes with Behavioral Norms

Matthias Nickles, Achim Rettinger

Research output: Conference article in a journal (peer-reviewed)

Abstract

This extended abstract discusses various approaches to constraining Partially Observable Markov Decision Processes (POMDPs) using social norms and logical assertions in a dynamic logic framework. While the exploitation of synergies between formal logic on the one hand and stochastic approaches and machine learning on the other has been attracting increasing interest for several years, most of the respective approaches fall into the category of relational learning in the widest sense, including inductive (stochastic) logic programming. In contrast, the use of formal knowledge (including knowledge about social norms) to provide hard constraints and prior knowledge for a stochastic learning or modeling task has been approached much less frequently. Although we do not propose directly implementable technical solutions, we hope that this work is a useful contribution to the discussion about the usefulness and feasibility of approaches from norm research and formal logic in the context of stochastic behavioral models, and vice versa.
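The idea of using norms as hard constraints on a POMDP can be illustrated with a minimal sketch. Note that this is not from the paper itself: the states, actions, norm predicate, and the belief-threshold rule below are all illustrative assumptions, showing only the general pattern of pruning actions whose expected norm violation (under the agent's current belief over hidden states) is too high.

```python
# Hypothetical sketch: a behavioral norm as a hard constraint on the
# admissible actions of a POMDP agent. All names here (STATES, ACTIONS,
# forbidden, the threshold rule) are illustrative, not from the paper.

# A tiny POMDP over two hidden states; the belief is a probability
# distribution over those states.
STATES = ["safe", "risky"]
ACTIONS = ["wait", "act"]

def forbidden(action, state):
    """Norm (illustrative): it is forbidden to 'act' in the 'risky' state."""
    return action == "act" and state == "risky"

def admissible_actions(belief, threshold=0.1):
    """Hard constraint: exclude every action whose probability of
    violating the norm, under the current belief, exceeds the threshold.
    Since the true state is only partially observable, the constraint is
    evaluated in expectation over the belief."""
    result = []
    for a in ACTIONS:
        violation_prob = sum(p for s, p in belief.items() if forbidden(a, s))
        if violation_prob <= threshold:
            result.append(a)
    return result

belief = {"safe": 0.7, "risky": 0.3}
print(admissible_actions(belief))  # 'act' is pruned: P(violation) = 0.3 > 0.1
```

In this toy setting the norm acts before any reward maximization: the agent's policy may only choose among the admissible actions, which is one way to read "hard constraints" as opposed to soft penalties folded into the reward function.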

Original language: English
Journal: Dagstuhl Seminar Proceedings
Volume: 9121
Publication status: Published - 2009
Externally published: Yes
Event: Normative Multi-Agent Systems 2009 - Wadern, Germany
Duration: 15 Mar 2009 – 20 Mar 2009

Keywords

  • Deontic Logic
  • Norms
  • Partially Observable Markov Decision Processes
  • Propositional Dynamic Logic
