Answer set programming for non-stationary Markov decision processes

Leonardo A. Ferreira, Reinaldo A. C. Bianchi, Paulo E. Santos, Ramon Lopez de Mantaras

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Non-stationary domains, in which unforeseen changes occur, pose a challenge for agents seeking an optimal policy for a sequential decision-making problem. This work investigates a solution to this problem that combines Markov Decision Processes (MDP) and Reinforcement Learning (RL) with Answer Set Programming (ASP) in a method we call ASP(RL). In this method, Answer Set Programming is used to find the possible trajectories of an MDP, from which Reinforcement Learning is applied to learn the optimal policy of the problem. Results show that ASP(RL) is capable of efficiently finding the optimal solution of an MDP representing non-stationary domains.
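The two-stage idea described in the abstract can be illustrated with a minimal Python sketch. Everything below is a hypothetical toy, not the paper's implementation: a one-dimensional corridor MDP stands in for the domain, and the function `asp_trajectories` is a stand-in for the answer-set-programming step (which in ASP(RL) would be produced by an ASP solver such as clingo), returning the state-action pairs that appear on feasible trajectories. Q-learning is then restricted to that filtered set.

```python
import random

# Toy corridor MDP: states 0..4, goal at state 4, two actions.
STATES = range(5)
ACTIONS = ("left", "right")
GOAL = 4

def step(s, a):
    """Deterministic transition; reward 1.0 on reaching the goal."""
    s2 = min(s + 1, GOAL) if a == "right" else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0)

def asp_trajectories(blocked):
    """Stand-in for the ASP step: enumerate (state, action) pairs whose
    successor state is not blocked. In ASP(RL) this set of feasible
    pairs would come from the answer sets of a logic program, and it
    can be recomputed when the (non-stationary) domain changes."""
    pairs = []
    for s in STATES:
        for a in ACTIONS:
            s2, _ = step(s, a)
            if s2 not in blocked:
                pairs.append((s, a))
    return pairs

def q_learn(valid_pairs, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning restricted to the ASP-filtered pairs."""
    rng = random.Random(seed)
    Q = {p: 0.0 for p in valid_pairs}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            acts = [a for a in ACTIONS if (s, a) in Q]
            # Epsilon-greedy action selection over the allowed actions only.
            a = rng.choice(acts) if rng.random() < eps else max(acts, key=lambda x: Q[(s, x)])
            s2, r = step(s, a)
            nxt = [Q[(s2, b)] for b in ACTIONS if (s2, b) in Q]
            Q[(s, a)] += alpha * (r + gamma * max(nxt, default=0.0) - Q[(s, a)])
            s = s2
    return Q

Q = q_learn(asp_trajectories(blocked=set()))
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

After learning, the greedy policy moves right toward the goal from every non-goal state. The point of the sketch is the division of labour: the logical layer prunes the state-action space to feasible trajectories, and the learner only ever updates values inside that pruned set.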

Original language: English
Pages (from-to): 993-1007
Number of pages: 15
Journal: Applied Intelligence
Volume: 47
Issue number: 4
DOIs
Publication status: Published - Dec 2017
Externally published: Yes

Keywords

  • Action languages
  • Answer set programming
  • Markov decision processes
  • Non-determinism

