[Submitted on 22 Oct 2024]
View a PDF of the paper titled Cooperative Multi-Agent Constrained Stochastic Linear Bandits, by Amirhossein Afsharrad and 3 other authors
Abstract: In this study, we explore a collaborative multi-agent stochastic linear bandit setting involving a network of $N$ agents that communicate locally to minimize their collective regret while keeping their expected cost under a specified threshold $\tau$. Each agent encounters a distinct linear bandit problem characterized by its own reward and cost parameters, i.e., local parameters. The goal of the agents is to determine the best overall action corresponding to the average of these parameters, the so-called global parameters. In each round, an agent is randomly chosen to select an action based on its current knowledge of the system. This chosen action is then executed by all agents, who then observe their individual rewards and costs. We propose a safe distributed upper confidence bound algorithm, called \textit{MA-OPLB}, and establish a high-probability bound on its $T$-round regret. MA-OPLB utilizes an accelerated consensus method, whereby agents compute an estimate of the average rewards and costs across the network by exchanging the appropriate information with their neighbors. We show that our regret bound is of order $\mathcal{O}\left(\frac{d}{\tau-c_0}\frac{\log(NT)^2}{\sqrt{N}}\sqrt{\frac{T}{\log(1/|\lambda_2|)}}\right)$, where $\lambda_2$ is the second largest (in absolute value) eigenvalue of the communication matrix, and $\tau-c_0$ is the known cost gap of a feasible action. We also experimentally evaluate the performance of our proposed algorithm on different network structures.
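The dependence of the regret bound on $\lambda_2$ comes from how fast consensus averaging mixes over the network. The following is a minimal illustrative sketch (not the paper's MA-OPLB implementation): plain consensus averaging on a hypothetical ring of $N$ agents with a doubly stochastic communication matrix $W$, where each agent's estimate of the network-wide average converges geometrically at a rate governed by $|\lambda_2|$.

```python
import numpy as np

def ring_matrix(n, self_weight=0.5):
    # Doubly stochastic weight matrix for a ring network: each agent keeps
    # self_weight of its own value and splits the rest equally between its
    # two neighbors. (Illustrative topology, not from the paper.)
    W = np.eye(n) * self_weight
    for i in range(n):
        W[i, (i - 1) % n] += (1 - self_weight) / 2
        W[i, (i + 1) % n] += (1 - self_weight) / 2
    return W

def consensus(values, W, rounds):
    # One communication round = each agent averaging with its neighbors,
    # i.e., one multiplication of the value vector by W.
    x = np.array(values, dtype=float)
    for _ in range(rounds):
        x = W @ x
    return x

N = 8
W = ring_matrix(N)
local_rewards = np.arange(N, dtype=float)   # each agent's local observation
true_avg = local_rewards.mean()

# |lambda_2|: second largest eigenvalue magnitude of W, which controls the
# geometric convergence rate of consensus (error ~ |lambda_2|^rounds).
lam2 = sorted(np.abs(np.linalg.eigvals(W)))[-2]

estimates = consensus(local_rewards, W, rounds=200)
max_err = np.max(np.abs(estimates - true_avg))
print(f"|lambda_2| = {lam2:.4f}, max error after 200 rounds = {max_err:.2e}")
```

Networks with smaller $|\lambda_2|$ (better connectivity) mix faster, which is why the regret bound improves as $\log(1/|\lambda_2|)$ grows; the paper's accelerated consensus variant sharpens this plain-averaging scheme.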
Submission history
From: Amirhossein Afsharrad [view email]
[v1]
Tue, 22 Oct 2024 19:34:53 UTC (2,125 KB)