Limiting dynamics for Q-learning with memory one in symmetric two-player, two-action games

Janusz M Meylahn, Lars Janssen

Research output: Working paper › Preprint › Academic


Abstract

We develop a method based on computer algebra systems to represent the mutual pure strategy best-response dynamics of symmetric two-player, two-action repeated games played by players with a one-period memory. We apply this method to the iterated prisoner's dilemma, stag hunt and hawk-dove games and identify all possible equilibrium strategy pairs and the conditions for their existence. The only equilibrium strategy pair that is possible in all three games is the win-stay, lose-shift strategy. Lastly, we show that the mutual best-response dynamics are realized by a sample batch Q-learning algorithm in the infinite batch size limit.
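The central construction in the abstract, identifying mutual pure-strategy best-response pairs among one-period-memory strategies, can be illustrated numerically. The sketch below is an assumption-laden approximation and not the paper's computer-algebra method: it enumerates the 16 deterministic memory-one strategies for an iterated prisoner's dilemma with assumed payoffs T=4, R=3, P=1, S=0, evaluates each strategy pair via the stationary distribution of the induced Markov chain under a small assumed exploration rate, and lists the pairs in which each strategy is a best response to the other. Under these assumptions the win-stay, lose-shift pair is expected to appear among the equilibria.

# Illustrative sketch only (assumed payoffs and exploration rate; not the
# paper's computer-algebra implementation).
import itertools
import numpy as np

# Actions: 0 = cooperate (C), 1 = defect (D).
# Joint states from player 1's perspective: (own last action, opponent's last action).
STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]  # CC, CD, DC, DD

# Assumed prisoner's dilemma payoffs for the row player (T > R > P > S, 2R > T + P).
R, S, T, P = 3.0, 0.0, 4.0, 1.0
PAYOFF = {(0, 0): R, (0, 1): S, (1, 0): T, (1, 1): P}

EPS = 0.01  # assumed small exploration rate; makes every strategy pair ergodic

# A deterministic memory-one strategy maps each of the 4 states to an action.
STRATEGIES = list(itertools.product([0, 1], repeat=4))
WSLS = (0, 1, 1, 0)  # win-stay, lose-shift: repeat after CC or DD, switch after CD or DC


def long_run_payoff(s1, s2):
    """Long-run average payoff to player 1 when (s1, s2) play with exploration EPS."""
    M = np.zeros((4, 4))
    for i, (a1, a2) in enumerate(STATES):
        p1 = s1[i]                       # player 1 conditions on (own, opponent)
        p2 = s2[STATES.index((a2, a1))]  # player 2 sees the mirrored state
        for b1 in (0, 1):
            for b2 in (0, 1):
                prob = ((1 - EPS) if b1 == p1 else EPS) * ((1 - EPS) if b2 == p2 else EPS)
                M[i, STATES.index((b1, b2))] += prob
    # Stationary distribution = left Perron eigenvector of the transition matrix.
    vals, vecs = np.linalg.eig(M.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi /= pi.sum()
    return sum(pi[i] * PAYOFF[st] for i, st in enumerate(STATES))


payoff = {(s1, s2): long_run_payoff(s1, s2) for s1 in STRATEGIES for s2 in STRATEGIES}
tol = 1e-9
equilibria = [
    (s1, s2)
    for s1 in STRATEGIES for s2 in STRATEGIES
    if payoff[(s1, s2)] >= max(payoff[(s, s2)] for s in STRATEGIES) - tol
    and payoff[(s2, s1)] >= max(payoff[(s, s1)] for s in STRATEGIES) - tol
]
print(len(equilibria), "mutual best-response pairs found")
print("(WSLS, WSLS) among them:", (WSLS, WSLS) in equilibria)

The stag hunt and hawk-dove cases follow by swapping in their payoff orderings. The paper itself keeps the payoffs symbolic via a computer algebra system and relates the resulting best-response dynamics to sample batch Q-learning in the infinite batch size limit, which this numerical sketch does not attempt.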
Original language: English
Publisher: ArXiv.org
DOIs
Publication status: Published - 29 Jul 2021

Keywords

  • math.DS
  • nlin.AO
