We present and analyze a stochastic game in which transition probabilities between states are not fixed, as in standard stochastic games, but depend on the history of the play, i.e., on the players’ past action choices. For the limiting average reward criterion we determine the set of jointly convergent pure-strategy rewards that can be supported by equilibria involving threats. For expository purposes we analyze a stylized fishery game. Each period, two agents choose between catching with or without restraint. The resource is in one of two states, High or Low. Restraint is harmless to the fish, but it is a dominated action at each stage. The less restraint shown during the play, the higher the probabilities that the system moves to or stays in Low. The latter state may even become “absorbing temporarily,” i.e., transition probabilities to High temporarily become zero while transition probabilities to Low remain nonzero.
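The history-dependent transition mechanism can be illustrated with a small simulation. The sketch below is not the paper’s model: the payoff numbers, the transition function `p_low`, and the 0.9 “temporary absorption” threshold are all hypothetical, chosen only to reproduce the qualitative features described above (restraint is dominated each stage, yet persistent greed drives the stock into Low and can make Low temporarily absorbing).

```python
import random

def p_low(greed):
    """Hypothetical transition rule: the probability of moving to (or
    staying in) Low increases with `greed`, the fraction of all past
    action choices made without restraint. Numbers are illustrative."""
    return min(1.0, 0.2 + 0.8 * greed)

def simulate(policy, periods=100, seed=0):
    """Run one play and return the average reward per period.
    `policy(state)` returns a pair of booleans, True meaning that
    player fishes *without* restraint this period."""
    rng = random.Random(seed)
    state, no_restraint, total, rewards = "High", 0, 0, 0.0
    for _ in range(periods):
        a1, a2 = policy(state)
        no_restraint += a1 + a2
        total += 2
        # Stage payoffs (illustrative): fishing without restraint yields
        # more each stage, but every catch is worth less in state Low.
        base = 1.0 if state == "High" else 0.3
        rewards += base * ((2.0 if a1 else 1.0) + (2.0 if a2 else 1.0))
        greed = no_restraint / total
        if state == "Low" and greed > 0.9:
            state = "Low"  # Low becomes "absorbing temporarily"
        else:
            state = "Low" if rng.random() < p_low(greed) else "High"
    return rewards / periods

greedy = lambda s: (True, True)        # never show restraint
restrained = lambda s: (False, False)  # always show restraint
```

Running both stationary policies shows the tension the abstract describes: although no-restraint dominates at each stage, full restraint keeps the system mostly in High and earns the higher limiting average reward, which is what leaves room for equilibria supported by threats.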
Title of host publication: Mathematical programming and game theory
Editors: S.K. Neogy, Ravindra B. Bapat, Dipti Dubey
Place of publication: Singapore
Number of pages: 22
Publication status: Published - 28 Nov 2018
Series: Indian Statistical Institute Series