Abstract
Previous explanations for the persistence of opinion polarization have typically relied on modelling assumptions that build in the possibility of polarization (e.g. repulsive interactions). An exception is recent research showing that polarization is stable when agents form their opinions using reinforcement learning. We show that the polarization observed in this model is not stable: the system reaches consensus asymptotically with probability one. By constructing a link between the reinforcement learning model and the voter model, we argue that the observed polarization is metastable. Finally, we show that a slight modification of the agents' learning process changes the model from non-ergodic to ergodic. Our results show that reinforcement learning may be a powerful method for modelling polarization in opinion dynamics, but that the tools appropriate for analysing such models depend crucially on the properties of the resulting systems, which are in turn determined by the details of the learning process.
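To make the setting concrete, the following is a minimal sketch of a reinforcement-learning opinion model of the kind the abstract refers to: agents keep Q-values over two opinions, interact pairwise, and receive a coordination reward for agreeing. It is an illustrative toy, not the paper's exact model; the agent count, learning rate `ALPHA`, softmax temperature `BETA`, and reward scheme are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # number of agents (illustrative value)
ALPHA = 0.1    # learning rate (assumed)
BETA = 4.0     # softmax inverse temperature (assumed)
STEPS = 50_000

# Each agent keeps a Q-value for each of the two opinions {0, 1}.
Q = rng.normal(0.0, 0.1, size=(N, 2))

def choose(q):
    """Pick an opinion via a softmax over the agent's Q-values."""
    p = np.exp(BETA * (q - q.max()))  # shift for numerical stability
    p /= p.sum()
    return rng.choice(2, p=p)

for _ in range(STEPS):
    # A random pair of agents interacts and each expresses an opinion.
    i, j = rng.choice(N, size=2, replace=False)
    a_i, a_j = choose(Q[i]), choose(Q[j])
    # Coordination reward: +1 if the pair agrees, -1 otherwise.
    r = 1.0 if a_i == a_j else -1.0
    # Standard exponential-averaging Q-update for the chosen opinion.
    Q[i, a_i] += ALPHA * (r - Q[i, a_i])
    Q[j, a_j] += ALPHA * (r - Q[j, a_j])

opinions = Q.argmax(axis=1)
print("fraction holding opinion 1:", opinions.mean())
```

In a sketch like this, two internally agreeing camps can persist for long stretches before random fluctuations tip the population into consensus, which is the kind of metastable behaviour the abstract contrasts with true stability. Modifications to the learning rule, for instance adding a small amount of persistent exploration noise, are one plausible way a model of this type could be made ergodic, though the specific modification studied in the paper is not reproduced here.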
Original language | English
---|---
Publisher | ArXiv.org
DOIs |
Publication status | Published - 12 Jun 2024
Keywords
- physics.soc-ph
- nlin.AO
- stat.ML
- 91-10, 91D15