In-Trajectory Inverse Reinforcement Learning: Learn Incrementally Before An Ongoing Trajectory Terminates, by Shicheng Liu and 1 other author
Abstract: Inverse reinforcement learning (IRL) aims to learn a reward function and a corresponding policy that best fit the demonstrated trajectories of an expert. However, current IRL works cannot learn incrementally from an ongoing trajectory because they have to wait to collect at least one complete trajectory to learn. To bridge the gap, this paper considers the problem of learning a reward function and a corresponding policy while observing the initial state-action pair of an ongoing trajectory, and of continually updating the learned reward and policy as new state-action pairs of the ongoing trajectory are observed. We formulate this problem as an online bi-level optimization problem where the upper level dynamically adjusts the learned reward according to the newly observed state-action pairs with the help of a meta-regularization term, and the lower level learns the corresponding policy. We propose a novel algorithm to solve this problem and guarantee that the algorithm achieves sub-linear local regret $O(\sqrt{T}+\log T+\sqrt{T}\log T)$. If the reward function is linear, we prove that the proposed algorithm achieves sub-linear regret $O(\log T)$. Experiments validate the proposed algorithm.
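The sketch below illustrates, under assumptions, the online bi-level structure the abstract describes: each time a new state-action pair of the ongoing expert trajectory arrives, a lower-level step recomputes a policy for the current reward, and an upper-level step adjusts the reward parameters with a meta-regularization pull. The toy MDP, the linear reward features `phi`, `soft_value_iteration`, `theta_meta`, `lambda_reg`, and `observe_expert_step` are all illustrative assumptions, not the paper's actual algorithm or guarantees.

```python
# A minimal sketch of the in-trajectory IRL idea from the abstract, on a toy
# tabular MDP with a linear reward r(s, a) = theta^T phi(s, a).
# Everything named here (phi, theta_meta, lambda_reg, the toy MDP) is assumed.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition kernel (S, A, S)
phi = rng.normal(size=(n_states, n_actions, 4))                   # reward features (S, A, d)

def soft_value_iteration(theta, iters=100):
    """Lower level: soft-optimal policy for the current reward theta^T phi."""
    r = phi @ theta                                   # (S, A) reward table
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r + gamma * P @ V                         # (S, A)
        V = np.log(np.exp(Q).sum(axis=1))             # soft Bellman backup
    Q = r + gamma * P @ V
    pi = np.exp(Q - V[:, None])                       # Boltzmann policy
    return pi / pi.sum(axis=1, keepdims=True)

theta = np.zeros(4)          # learned reward parameters
theta_meta = np.zeros(4)     # meta-prior used by the regularizer (assumed given)
lambda_reg, lr = 0.1, 0.05

def observe_expert_step(t):
    """Stand-in for the ongoing expert trajectory: one (state, action) per step."""
    return t % n_states, t % n_actions

# Online loop: update the reward (upper level) and the policy (lower level)
# each time a new state-action pair of the ongoing trajectory is observed.
for t in range(20):
    s, a = observe_expert_step(t)
    pi = soft_value_iteration(theta)                  # lower level
    # Upper level: push the expert's observed action above the current policy's
    # expectation at state s, with a meta-regularization pull toward theta_meta.
    grad = phi[s, a] - pi[s] @ phi[s] - lambda_reg * (theta - theta_meta)
    theta += lr * grad

print("learned reward parameters:", theta)
```

The per-step gradient here is a standard maximum-entropy-style feature-matching heuristic chosen for brevity; the paper's upper-level update and its regret analysis are not reproduced by this sketch.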
Submission history
From: Shicheng Liu
[v1] Mon, 21 Oct 2024 03:16:32 UTC (1,581 KB)
[v2] Tue, 12 Nov 2024 19:21:24 UTC (822 KB)
[v3] Thu, 2 Jan 2025 17:29:43 UTC (824 KB)
[v4] Sat, 18 Jan 2025 16:22:10 UTC (823 KB)