Reinforcement learning in population games

B-Tier
Journal: Games and Economic Behavior
Year: 2013
Volume: 80
Issue: C
Pages: 10-38

Authors (2)

Lahkar, Ratul (Ashoka University)
Seymour, Robert M. (not in RePEc)

Score contribution per author:

1.005 = (α = 2.01 / 2 authors) × 1.0 (B-tier multiplier)

α: calibrated so average coauthorship-adjusted count equals average raw count
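The score arithmetic above can be sketched as a small function; this is only an illustration of the formula as stated in this entry (α divided by the author count, scaled by the tier multiplier), with the function name chosen here for clarity:

```python
# Sketch of the per-author score contribution shown above.
# Values are taken from this entry: alpha = 2.01, 2 authors, B-tier = 1.0x.
def author_score(alpha: float, n_authors: int, tier_multiplier: float) -> float:
    """Coauthorship-adjusted, tier-weighted score per author."""
    return (alpha / n_authors) * tier_multiplier

print(author_score(2.01, 2, 1.0))  # 1.005, matching the entry
```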

Abstract

We study reinforcement learning in a population game. Agents in a population game revise mixed strategies using the Cross rule of reinforcement learning. The population state—the probability distribution over the set of mixed strategies—evolves according to the replicator continuity equation which, in its simplest form, is a partial differential equation. The replicator dynamic is a special case in which the initial population state is homogeneous, i.e. when all agents use the same mixed strategy. We apply the continuity dynamic to various classes of symmetric games. Using 3×3 coordination games, we show that equilibrium selection depends on the variance of the initial strategy distribution, or initial population heterogeneity. We give an example of a 2×2 game in which heterogeneity persists even as the mean population state converges to a mixed equilibrium. Finally, we apply the dynamic to negative definite and doubly symmetric games.
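The Cross rule described in the abstract shifts an agent's mixed strategy toward the action just played, in proportion to the realized payoff. A minimal simulation sketch follows, assuming a symmetric 2×2 coordination game with payoffs normalized to [0, 1], random matching, and a heterogeneous initial strategy distribution; the payoff matrix and all parameter values are hypothetical, not taken from the paper:

```python
import numpy as np

# Sketch of Cross reinforcement learning in a population game.
# Assumptions (not from the paper): a symmetric 2x2 coordination game with
# payoffs in [0, 1], uniform random matching, and heterogeneous initial
# mixed strategies across agents.

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.0],
              [0.0, 0.5]])  # hypothetical coordination payoffs in [0, 1]

n_agents = 500
# Heterogeneous initial population state: each agent's probability of action 0.
x = rng.uniform(0.55, 0.95, size=n_agents)

def cross_update(x_i, action, payoff):
    """Cross rule: move probability mass toward the realized action,
    in proportion to the realized payoff."""
    if action == 0:
        return x_i + payoff * (1.0 - x_i)
    return x_i * (1.0 - payoff)

for _ in range(2000):
    # Random matching: draw a pair, play the game, revise both strategies.
    i, j = rng.integers(n_agents, size=2)
    a_i = 0 if rng.random() < x[i] else 1
    a_j = 0 if rng.random() < x[j] else 1
    x[i] = cross_update(x[i], a_i, A[a_i, a_j])
    x[j] = cross_update(x[j], a_j, A[a_j, a_i])

print(round(x.mean(), 3))  # mean population state after learning
```

Because payoffs stay in [0, 1], each agent's strategy remains a valid probability; the paper's replicator dynamic corresponds to the special (homogeneous) case where every agent starts at the same `x`.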

Technical Details

RePEc Handle
repec:eee:gamebe:v:80:y:2013:i:c:p:10-38
Journal Field
Theory
Author Count
2
Added to Database
2026-01-25