From Imitation to Refinement — Residual RL for Precise Assembly, by Lars Ankile and 4 other authors
Abstract: Advances in behavior cloning (BC), such as action chunking and diffusion policies, have enabled impressive capabilities. Still, imitation alone remains insufficient for learning reliable policies for tasks that require precisely aligning and inserting objects, such as assembly. Our key insight is that chunked BC policies effectively function as trajectory planners, enabling long-horizon tasks. However, because they execute action chunks open-loop, they lack the fine-grained reactivity necessary for reliable execution. Further, we find that the performance of BC policies saturates despite increasing data. Reinforcement learning (RL) is a natural way to overcome BC's limitations, but it is not straightforward to apply directly to action-chunked models like diffusion policies. We present a simple yet effective method, ResiP (Residual for Precise Manipulation), that sidesteps these challenges by augmenting a frozen, chunked BC model with a fully closed-loop residual policy trained with RL. The residual policy is trained via on-policy RL, addressing distribution shifts and introducing reactive control without altering the BC trajectory planner. Evaluation on high-precision manipulation tasks demonstrates that ResiP outperforms both BC methods and direct RL fine-tuning. Videos, code, and data are available at this https URL.
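To make the residual-policy idea concrete, below is a minimal sketch of how a frozen, chunked BC "planner" could be combined with a small closed-loop corrector at execution time. All names here (`ChunkedBCPolicy`, `ResidualPolicy`, `rollout`, the gym-style `env`) are hypothetical stand-ins for illustration, not the authors' actual code or API; the real base model would be something like a diffusion policy and the corrector would be trained with on-policy RL as described in the abstract.

```python
# Hedged sketch of residual correction on top of a chunked BC policy.
# Assumptions: a gym-style env with reset()/step(), and placeholder models.

import numpy as np


class ChunkedBCPolicy:
    """Frozen BC 'trajectory planner': predicts a chunk of H future actions."""

    def __init__(self, horizon: int = 8, action_dim: int = 7):
        self.horizon = horizon
        self.action_dim = action_dim

    def predict_chunk(self, obs: np.ndarray) -> np.ndarray:
        # Placeholder: a real chunked model (e.g., a diffusion policy) goes here.
        return np.zeros((self.horizon, self.action_dim))


class ResidualPolicy:
    """Closed-loop corrector: outputs a small per-step residual action."""

    def __init__(self, action_dim: int = 7, scale: float = 0.05):
        self.action_dim = action_dim
        self.scale = scale  # keeps corrections small relative to the planned action

    def correct(self, obs: np.ndarray, planned_action: np.ndarray) -> np.ndarray:
        # Placeholder: a real residual policy would be trained with on-policy RL.
        return self.scale * np.zeros(self.action_dim)


def rollout(env, base: ChunkedBCPolicy, residual: ResidualPolicy, steps: int = 100):
    """Plan open-loop per chunk, but apply a closed-loop residual at every step."""
    obs = env.reset()
    for _ in range(steps // base.horizon):
        chunk = base.predict_chunk(obs)  # open-loop plan for the next H steps
        for planned_action in chunk:
            # Final command = BC plan + reactive correction from current observation.
            action = planned_action + residual.correct(obs, planned_action)
            obs, reward, done, info = env.step(action)
            if done:
                return
```

The design point this illustrates is that the BC planner stays frozen and only the per-step correction is learned, which is how the method avoids fine-tuning the action-chunked model directly.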
Submission history
From: Lars Ankile M.Sc.
[v1] Tue, 23 Jul 2024 17:44:54 UTC (14,576 KB)
[v2] Mon, 4 Nov 2024 18:54:23 UTC (41,235 KB)
[v3] Thu, 14 Nov 2024 16:54:02 UTC (38,595 KB)