Bayesian exploration for approximate dynamic programming

Ilya O. Ryzhov, Martijn R.K. Mes, Warren B. Powell, Gerald van den Berg

Research output: Contribution to journal › Article › Academic › peer-review



Approximate dynamic programming (ADP) is a general methodological framework for multistage stochastic optimization problems in transportation, finance, energy, and other domains. We propose a new approach to the exploration/exploitation dilemma in ADP that leverages two important concepts from the optimal learning literature: first, we show how a Bayesian belief structure can be used to express uncertainty about the value function in ADP; second, we develop a new exploration strategy based on the concept of value of information and prove that it systematically explores the state space. An important advantage of our framework is that it can be integrated into both parametric and nonparametric value function approximations, which are widely used in practical implementations of ADP. We evaluate this strategy on a variety of distinct resource allocation problems and demonstrate that, although more computationally intensive, it is highly competitive against other exploration strategies.
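The first idea in the abstract — expressing uncertainty about the value function through a Bayesian belief structure with correlated beliefs — can be illustrated with a multivariate normal belief over the values of a few states: observing a noisy value at one state updates the mean and covariance at every correlated state. This is a minimal sketch under simplifying assumptions (a finite state space, known observation noise), not the authors' implementation; the function name `bayes_update` and the three-state prior are illustrative.

```python
def bayes_update(mu, Sigma, i, y, noise_var):
    """Correlated normal belief update after observing noisy value y at state i.

    mu is the prior mean vector (list of floats), Sigma the prior covariance
    matrix (list of lists), and noise_var the known observation noise variance.
    Returns the posterior mean and covariance.
    """
    n = len(mu)
    denom = noise_var + Sigma[i][i]
    # Gain vector: how strongly each state's belief responds to the observation.
    gamma = [Sigma[k][i] / denom for k in range(n)]
    mu_new = [mu[k] + gamma[k] * (y - mu[i]) for k in range(n)]
    Sigma_new = [[Sigma[k][l] - gamma[k] * Sigma[i][l] for l in range(n)]
                 for k in range(n)]
    return mu_new, Sigma_new

# Three states with a correlated prior; observe value 1.0 at state 0.
mu0 = [0.0, 0.0, 0.0]
Sigma0 = [[1.0, 0.5, 0.0],
          [0.5, 1.0, 0.5],
          [0.0, 0.5, 1.0]]
mu1, Sigma1 = bayes_update(mu0, Sigma0, 0, 1.0, 1.0)
```

After the update, the mean at the observed state moves toward the observation, the mean at the correlated neighbor moves part of the way, and both posterior variances shrink — the mechanism that lets a single visit inform beliefs about unvisited states.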

Original language: English
Pages (from-to): 198-214
Number of pages: 17
Journal: Operations Research
Issue number: 1
Publication status: Published - 18 Jan 2019


Keywords:

  • Approximate dynamic programming
  • Bayesian learning
  • Correlated beliefs
  • Optimal learning
  • Value of information

