Networked Communication for Decentralised Agents in Mean-Field Games
by Patrick Benjamin and Alessandro Abate
Abstract: We introduce networked communication to the mean-field game framework, in particular to oracle-free settings where $N$ decentralised agents learn along a single, non-episodic run of the empirical system. We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases. We provide the order of the difference in these bounds in terms of network structure and number of communication rounds, and also contribute a policy-update stability guarantee. We discuss how the sample guarantees of the three theoretical algorithms do not actually result in practical convergence. We therefore show that in practical settings where the theoretical parameters are not observed (leading to poor estimation of the Q-function), our communication scheme significantly accelerates convergence over the independent case (and sometimes even the centralised case), without relying on the assumption of a centralised learner. We contribute further practical enhancements to all three theoretical algorithms, allowing us to present their first empirical demonstrations. Our experiments confirm that we can remove several of the theoretical assumptions of the algorithms, and display the empirical convergence benefits brought by our new networked communication. We additionally show that the networked approach has significant advantages, over both the centralised and independent alternatives, in terms of robustness to unexpected learning failures and to changes in population size.
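To illustrate the flavour of the networked-communication idea described above, here is a minimal sketch of decentralised agents on a communication graph averaging their local Q-function estimates over repeated communication rounds. This is a hypothetical gossip-style consensus example, not the paper's actual algorithm; all names, the ring topology, and the mixing weights are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's algorithm): decentralised agents
# on a communication graph repeatedly average their local Q-estimates.
rng = np.random.default_rng(0)

n_agents, n_states, n_actions = 4, 3, 2
# Each agent holds an independent noisy Q-function estimate.
Q = rng.normal(size=(n_agents, n_states, n_actions))
target = Q.mean(axis=0)  # consensus value under doubly-stochastic mixing

# Ring network: agent i communicates with neighbours i-1 and i+1.
adjacency = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    adjacency[i, (i - 1) % n_agents] = 1
    adjacency[i, (i + 1) % n_agents] = 1

# Doubly-stochastic mixing matrix: uniform weight over self + 2 neighbours.
W = (adjacency + np.eye(n_agents)) / 3.0

def communication_round(Q, W):
    """One gossip round: each agent replaces its estimate with a
    weighted average of its own and its neighbours' estimates."""
    return np.einsum('ij,jsa->isa', W, Q)

for _ in range(50):
    Q = communication_round(Q, W)

# After many rounds the estimates contract towards consensus (the mean).
spread = np.ptp(Q, axis=0).max()
print(spread < 1e-6)  # True: agents have effectively reached agreement
```

The number of communication rounds and the network's connectivity govern how fast the estimates contract towards consensus, which loosely mirrors the abstract's claim that the sample bounds interpolate between the independent case (no mixing) and the centralised case (full mixing).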
Submission history
From: Patrick Benjamin
[v1] Mon, 5 Jun 2023 10:45:39 UTC (809 KB)
[v2] Fri, 26 Jan 2024 14:24:32 UTC (3,921 KB)
[v3] Fri, 28 Jun 2024 11:39:10 UTC (7,510 KB)
[v4] Thu, 10 Oct 2024 09:09:43 UTC (3,714 KB)