Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning
Anton Bakhtin*, David J. Wu*, Adam Lerer*, Jonathan Gray*, Athul Paul Jacob*, Gabriele Farina*, Alexander H. Miller, Noam Brown
Abstract
No-press Diplomacy is a complex strategy game involving both cooperation and competition that has served as a benchmark for multi-agent AI research. While self-play reinforcement learning has resulted in numerous successes in purely adversarial games like chess, Go, and poker, self-play alone is insufficient for achieving optimal performance in domains involving cooperation with humans. We address this shortcoming by first introducing a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy. We prove that this is a no-regret learning algorithm under a modified utility function. We then show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL that provides a model of human play while simultaneously training an agent that responds well to this human model. We used RL-DiL-piKL to train an agent we name Diplodocus. In a 200-game no-press Diplomacy tournament involving 62 human participants spanning skill levels from beginner to expert, two Diplodocus agents both achieved a higher average score than all other participants who played more than two games, and ranked first and third according to an Elo ratings model.
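As a rough intuition for the human-regularization idea (the full DiL-piKL and RL-DiL-piKL algorithms are specified in the paper), one can think of the planner as maximizing expected value minus a KL penalty toward the human imitation ("anchor") policy. For a fixed opponent and per-action values Q, the policy maximizing E_pi[Q(a)] - lambda * KL(pi || tau) has the closed form pi(a) proportional to tau(a) * exp(Q(a) / lambda). The Python sketch below is a minimal, hypothetical illustration of that single-step, single-agent computation under those assumptions; the function name and toy numbers are ours, and this is not the paper's actual DiL-piKL procedure.

import numpy as np

def kl_regularized_policy(q_values, anchor_policy, lam):
    """Closed-form maximizer of E_pi[Q] - lam * KL(pi || anchor).

    q_values:      expected value of each action against fixed opponent policies
    anchor_policy: human imitation-learned ("anchor") policy over actions
    lam:           regularization strength; larger lam stays closer to the anchor
    """
    logits = np.log(anchor_policy) + q_values / lam
    logits -= logits.max()          # subtract max for numerical stability
    policy = np.exp(logits)
    return policy / policy.sum()

# Toy example: three actions, a human-like anchor, and estimated action values.
q = np.array([1.0, 0.5, 0.0])
tau = np.array([0.2, 0.7, 0.1])
print(kl_regularized_policy(q, tau, lam=0.5))   # shifts weight toward high-value actions
print(kl_regularized_policy(q, tau, lam=10.0))  # stays close to the human anchor

With a small lambda the agent behaves nearly reward-maximizing; with a large lambda it stays close to the human model, which is the trade-off DiL-piKL navigates.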
BibTeX entry
@inproceedings{Bakhtin23:Mastering,
  title={Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning},
  author={Anton Bakhtin and David J Wu and Adam Lerer and Jonathan Gray and Athul Paul Jacob and Gabriele Farina and Alexander H Miller and Noam Brown},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2023}
}