Stratified breast cancer follow-up using a continuous state partially observable Markov decision process

Maarten Otten*, Judith Timmer, Annemieke Witteveen

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

12 Citations (Scopus)
54 Downloads (Pure)


The frequency and duration of follow-up for breast cancer patients are still under discussion. Currently, follow-up in the Netherlands consists of annual mammography for the first five years after treatment and does not depend on the personal risk of developing a locoregional recurrence or a second primary tumor. The aim of this study is to gain insight into how to allocate resources for optimal and personalized follow-up. We formulate a discrete-time Partially Observable Markov Decision Process (POMDP) over a finite horizon with both discrete and continuous states, in which the size of the tumor is modeled as a continuous state. Transition probabilities are obtained from data of the Netherlands Cancer Registry. We show that the optimal value function of the POMDP is piecewise linear and convex and provide an alternative representation for it. Under the assumptions that tumor growth follows an exponential distribution and that the model parameters can be described by exponential functions, the optimal value function can be obtained from the parameters of the underlying probability distributions only. Finally, we present results for a stratification of the patients based on their age to show how this model can be applied in practice.
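The piecewise linear and convex (PWLC) structure mentioned in the abstract means the optimal value function over beliefs can be written as the maximum of a finite set of linear functions ("alpha vectors"). The following minimal sketch illustrates that property on a hypothetical two-state example (the states, actions, and numbers are invented for illustration and are not the paper's model, which uses a continuous tumor-size state):

```python
import numpy as np

# Hypothetical 2-state belief example: states = (no recurrence, recurrence).
# Each row is an alpha vector: the expected value of following one policy
# tree, as a linear function of the belief.
alpha_vectors = np.array([
    [10.0, 0.0],   # e.g. value of "wait" under each state
    [6.0, 6.0],    # e.g. value of "schedule mammography"
    [0.0, 9.0],    # e.g. value of "refer for treatment"
])

def value(belief):
    """Optimal value at a belief: the max over the linear pieces (PWLC)."""
    belief = np.asarray(belief, dtype=float)
    return float(np.max(alpha_vectors @ belief))

# Convexity check: the value at a midpoint of two beliefs never exceeds
# the average of the values at the endpoints.
b0, b1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid = 0.5 * (b0 + b1)
assert value(mid) <= 0.5 * (value(b0) + value(b1))
print(value(mid))  # → 6.0
```

Exact POMDP solvers exploit exactly this representation: backing up the value function produces a new finite set of alpha vectors, so convexity and piecewise linearity are preserved at every horizon.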
Original language: English
Pages (from-to): 464-474
Number of pages: 11
Journal: European Journal of Operational Research
Issue number: 2
Early online date: 12 Sept 2019
Publication status: Published - 1 Mar 2020


  • Decision processes
  • Medical decision making
  • Partially observable Markov decision process (POMDP)
  • Markov decision process (MDP)


