
Abstract

A sampling-based method is introduced to approximate the Gittins index for a general family of alternative bandit processes. The approximation consists of a truncation of the optimization horizon and of the support of the immediate rewards, an optimal stopping value approximation, and a stochastic approximation procedure. Finite-time error bounds are given for the three approximations, leading to a procedure to construct a confidence interval for the Gittins index using a finite number of Monte Carlo samples, as well as an epsilon-optimal policy for the Bayesian multi-armed bandit. Proofs are given for almost sure convergence and convergence in distribution of the sampling-based Gittins index approximation. In a numerical study, the approximation quality of the proposed method is verified for the Bernoulli bandit and the Gaussian bandit with known variance, and the method is shown to significantly outperform Thompson sampling and the Bayesian Upper Confidence Bound algorithm on a novel random effects multi-armed bandit.
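To illustrate the horizon-truncation idea underlying such approximations, the sketch below computes a truncated-horizon Gittins index for the Bernoulli bandit by bisecting on a retirement reward. This is a minimal dynamic-programming illustration, not the paper's sampling-based procedure; the discount factor, horizon, and tolerance are illustrative choices.

```python
from functools import lru_cache

def gittins_bernoulli(a0, b0, gamma=0.9, horizon=50, tol=1e-4):
    """Approximate the Gittins index of a Bernoulli arm with a Beta(a0, b0)
    posterior via the classical retirement (calibration) formulation:
    the index is the retirement reward lam at which pulling the arm and
    retiring are equally attractive. The horizon is truncated, mirroring
    the truncation step described in the abstract."""

    def root_gap(lam):
        # Gap (continue - retire) at the root state for a given lam.
        @lru_cache(maxsize=None)
        def v(a, b, t):
            rem = horizon - t
            # Retiring earns lam per remaining period, discounted.
            retire = lam * (1.0 - gamma ** rem) / (1.0 - gamma)
            if rem == 0:
                return 0.0
            p = a / (a + b)  # posterior mean success probability
            cont = (p * (1.0 + gamma * v(a + 1, b, t + 1))
                    + (1.0 - p) * gamma * v(a, b + 1, t + 1))
            return max(retire, cont)

        retire0 = lam * (1.0 - gamma ** horizon) / (1.0 - gamma)
        p0 = a0 / (a0 + b0)
        cont0 = (p0 * (1.0 + gamma * v(a0 + 1, b0, 1))
                 + (1.0 - p0) * gamma * v(a0, b0 + 1, 1))
        return cont0 - retire0

    # The gap is decreasing in lam; bisect on [0, 1] (Bernoulli rewards).
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if root_gap(mid) > 0.0:  # continuing still beats retiring
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because of the exploration bonus, the index exceeds the posterior mean whenever the posterior is not degenerate; the truncation error shrinks geometrically in the horizon at rate gamma.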
Original language: English
Publisher: ArXiv.org
Pages: 1-31
Number of pages: 31
DOIs
Publication status: Published - 24 Jul 2023

Keywords

  • Stochastic approximation
  • Multi-armed bandits
  • Optimal stopping
  • Bayesian computation
  • Markov decision processes

