Risk Aware Belief-dependent Constrained POMDP Planning

Andrey Zhitnikov, Vadim Indelman

Research output: Working paper › Preprint


Risk awareness is fundamental to an online operating agent. However, it has received less attention in the challenging continuous domain under partial observability. Existing constrained POMDP algorithms are typically designed for discrete state and observation spaces. In addition, current solvers for constrained formulations do not support general belief-dependent constraints. Crucially, in the POMDP setting, risk awareness in the context of a constraint has been addressed only in a limited way. This paper presents a novel formulation for risk-averse belief-dependent constrained POMDPs. Our probabilistic constraint is general and belief-dependent, as is the reward function. The proposed universal framework applies to continuous domains with nonparametric beliefs represented by particles, as well as to parametric beliefs. We show that our formulation accounts for risk better than previous approaches.
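To make the abstract's terminology concrete, the following is a minimal, hypothetical sketch (not the authors' algorithm) of the two ingredients it mentions: a nonparametric belief represented by weighted state particles, and a belief-dependent probabilistic constraint, here a chance constraint bounding the estimated probability of an unsafe state by a risk budget. All function names and parameters are illustrative assumptions.

```python
import random

def make_belief(n, mu=0.0, sigma=1.0, seed=0):
    """A nonparametric belief: n weighted state particles drawn from a
    Gaussian. Hypothetical stand-in for a particle filter's posterior."""
    rng = random.Random(seed)
    states = [rng.gauss(mu, sigma) for _ in range(n)]
    weights = [1.0 / n] * n
    return states, weights

def unsafe_probability(states, weights, threshold):
    """A belief-dependent operator: estimate P(state > threshold)
    directly from the particle representation of the belief."""
    return sum(w for s, w in zip(states, weights) if s > threshold)

def chance_constraint_satisfied(states, weights, threshold, delta):
    """A probabilistic, belief-dependent constraint: the estimated
    probability of being unsafe must not exceed the risk budget delta."""
    return unsafe_probability(states, weights, threshold) <= delta

states, weights = make_belief(1000)
p = unsafe_probability(states, weights, threshold=2.0)
ok = chance_constraint_satisfied(states, weights, threshold=2.0, delta=0.05)
print(p, ok)
```

Note that both `unsafe_probability` and the constraint are functions of the belief itself, not of any single state, which is what distinguishes belief-dependent constraints from the state-wise cost constraints supported by earlier constrained POMDP solvers.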
State: Published - 6 Sep 2022


  • cs.AI
  • cs.RO
