Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions
Gabriele Farina, Christian Kroer, Tuomas Sandholm
Abstract
We study the performance of optimistic regret-minimization algorithms for both minimizing regret in, and computing Nash equilibria of, zero-sum extensive-form games. In order to apply these algorithms to extensive-form games, a distance-generating function is needed. We study the use of the dilated entropy and dilated Euclidean distance functions. For the dilated Euclidean distance function we prove the first explicit bounds on the strong-convexity parameter for general treeplexes. Furthermore, we show that the use of dilated distance-generating functions allows us to decompose the mirror descent algorithm, and its optimistic variant, into local mirror descent algorithms at each information set. This decomposition mirrors the structure of the counterfactual regret minimization framework. We experimentally compare our algorithms to the popular algorithm CFR+. CFR+ is known to often converge at a rate of $T^{-1}$, or better, in practice. We show an example matrix game where CFR+ converges at a relatively slow rate of $T^{-0.74}$, whereas optimistic methods converge faster than $T^{-1}$. We go on to show that this fast rate also holds in the Kuhn poker game, which is an extensive-form game. For games with deeper game trees, however, we find that CFR+ is still faster. Finally, we show that when the goal is minimizing regret, rather than computing a Nash equilibrium, optimistic methods can outperform CFR+, even in deep game trees.
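As a brief, informal sketch of the ingredients the abstract refers to (the notation $\mathcal{J}$, $\beta_j$, $x_{p_j}$, $\eta$, and $m_t$ is chosen here for illustration and is not taken verbatim from the paper): a dilated distance-generating function on a treeplex $Q$ is built from a local distance-generating function $d_j$ (for example, negative entropy or the squared Euclidean norm) on the simplex at each information set $j$, scaled by the parent-sequence value $x_{p_j}$:

$$ d(x) \;=\; \sum_{j \in \mathcal{J}} \beta_j \, x_{p_j} \, d_j\!\left(\frac{x_j}{x_{p_j}}\right), \qquad \beta_j > 0. $$

Optimistic (predictive) online mirror descent with the Bregman divergence $D$ induced by $d$ then alternates a prediction step and an update step, typically using the previous loss as the prediction, $m_t = \ell_{t-1}$:

$$ w_t = \operatorname*{argmin}_{w \in Q} \; \eta \langle m_t, w \rangle + D(w \,\|\, z_{t-1}), \qquad z_t = \operatorname*{argmin}_{z \in Q} \; \eta \langle \ell_t, z \rangle + D(z \,\|\, z_{t-1}). $$

Because $d$ decomposes over information sets, these argmin steps can themselves be carried out by local mirror descent updates at each information set, which is the decomposition the abstract mentions.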
BibTeX entry
@inproceedings{Farina19:Optimistic,
    title={Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions},
    author={Farina, Gabriele and Kroer, Christian and Sandholm, Tuomas},
    booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
    year={2019}
}