How social reinforcement learning can lead to metastable polarisation and the voter model

Benedikt V. Meylahn, Janusz M. Meylahn

Research output: Working paper › Preprint › Academic


Abstract

Previous explanations for the persistence of opinion polarization have typically relied on modelling assumptions that predispose the system to polarization (e.g., repulsive interactions). An exception is recent research showing that polarization is stable when agents form their opinions using reinforcement learning. We show that the polarization observed in this model is not stable: the system reaches consensus asymptotically with probability one. By constructing a link between the reinforcement learning model and the voter model, we argue that the observed polarization is metastable. Finally, we show that a slight modification of the agents' learning process changes the model from non-ergodic to ergodic. Our results show that reinforcement learning may be a powerful method for modelling polarization in opinion dynamics, but the tools appropriate for analysing such models depend crucially on the properties of the resulting systems, which are in turn determined by the details of the learning process.
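
The abstract's central claim can be illustrated with the classical voter model it refers to: on a finite population, the voter model reaches consensus with probability one, so a two-bloc split is a long-lived transient rather than a stable state. The Python sketch below is a minimal illustration assuming complete-graph dynamics; the function name, parameters, and initial condition are hypothetical and do not reproduce the authors' reinforcement learning model.

    import random

    # Minimal voter-model sketch (hypothetical setup, not the authors' code).
    # Each update: a random agent copies the opinion of another random agent.
    # On a finite population this reaches consensus with probability one.
    def voter_model(n_agents=100, max_steps=1_000_000, seed=0):
        rng = random.Random(seed)
        # Polarized initial condition: half opinion 0, half opinion 1.
        opinions = [0] * (n_agents // 2) + [1] * (n_agents - n_agents // 2)
        for step in range(max_steps):
            i = rng.randrange(n_agents)  # agent being updated
            j = rng.randrange(n_agents)  # agent it imitates
            opinions[i] = opinions[j]
            total = sum(opinions)
            if total == 0 or total == n_agents:
                return step              # consensus reached
        return None                      # still mixed within the step budget

    if __name__ == "__main__":
        print("updates until consensus:", voter_model())

Running this with different seeds shows large fluctuations in the time to consensus, consistent with polarization that persists for long but almost surely finite periods.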
Original language: English
Publisher: arXiv.org
DOIs
Publication status: Published - 12 Jun 2024

Keywords

  • physics.soc-ph
  • nlin.AO
  • stat.ML
  • 91-10, 91D15
