Learning Equilibria in Matching Games with Bandit Feedback
We investigate the problem of learning an equilibrium in a generalized two-sided matching market, where agents can adaptively choose their actions based on their assigned matches. Specifically, we consider a setting in which matched agents engage in a zero-sum game with initially unknown payoff matrices, and we explore whether a centralized procedure can learn an equilibrium from bandit feedback. We adopt the solution concept of matching equilibrium, where a pair consisting of a matching $\mathfrak{m}$ and a set of agent strategies $X$ forms an equilibrium if no agent has an incentive to deviate from $(\mathfrak{m}, X)$. To measure the deviation of a given pair $(\mathfrak{m}, X)$ from the equilibrium pair $(\mathfrak{m}^\star, X^\star)$, we introduce the notion of matching instability, which serves as a regret measure for the corresponding learning problem. We then propose a UCB algorithm in which agents form preferences and select actions based on optimistic estimates of the game payoffs, and prove that it achieves sublinear, instance-independent regret over a time horizon $T$.
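To illustrate the flavor of optimistic payoff estimation described in the abstract, the following is a minimal toy sketch, not the paper's algorithm: a single matched pair plays a zero-sum matrix game with an unknown payoff matrix, the row player acts on upper confidence bounds (optimism), and the column player on lower confidence bounds. The true matrix `A`, the noise level, the exploration constant `c`, and the restriction to pure maximin responses are all simplifying assumptions made for this example.

```python
import numpy as np

# Toy UCB sketch for a zero-sum game with unknown payoffs (illustrative only).
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # unknown true payoff matrix (assumption)

n_rows, n_cols = A.shape
means = np.zeros((n_rows, n_cols))    # empirical mean payoff per entry
counts = np.zeros((n_rows, n_cols))   # number of pulls per entry
T = 2000                              # time horizon
c = 1.0                               # exploration constant (assumption)

for t in range(1, T + 1):
    bonus = c * np.sqrt(np.log(t + 1) / np.maximum(counts, 1))
    ucb = np.where(counts == 0, np.inf, means + bonus)   # optimism for row
    lcb = np.where(counts == 0, -np.inf, means - bonus)  # pessimism for column
    i = int(np.argmax(ucb.min(axis=1)))  # row: optimistic pure maximin
    j = int(np.argmin(lcb[i]))           # column: pessimistic best response
    r = A[i, j] + 0.05 * rng.standard_normal()  # noisy bandit feedback
    counts[i, j] += 1
    means[i, j] += (r - means[i, j]) / counts[i, j]      # running mean update

print(np.round(means, 2))  # estimates concentrate on frequently played entries
```

Since feedback is only observed for the entry actually played, the estimates sharpen precisely where the optimistic play concentrates, which is the mechanism that lets confidence-bound methods trade off exploration against equilibrium play.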
Andreas Athanasopoulos, Christos Dimitrakakis
Computing Technology, Computer Technology
Andreas Athanasopoulos, Christos Dimitrakakis. Learning Equilibria in Matching Games with Bandit Feedback [EB/OL]. (2025-06-04) [2025-06-17]. https://arxiv.org/abs/2506.03802.