Efficient Diversity-based Experience Replay for Deep Reinforcement Learning



View a PDF of the paper titled Efficient Diversity-based Experience Replay for Deep Reinforcement Learning, by Kaiyan Zhao and 5 other authors


Abstract: Experience replay is widely used to improve learning efficiency in reinforcement learning by leveraging past experiences. However, existing experience replay methods, whether based on uniform or prioritized sampling, often suffer from low efficiency, particularly in real-world scenarios with high-dimensional state spaces. To address this limitation, we propose a novel approach, Efficient Diversity-based Experience Replay (EDER). EDER employs a determinantal point process to model the diversity among samples and prioritizes replay accordingly. To further enhance learning efficiency, we incorporate Cholesky decomposition to handle the large state spaces of realistic environments. Additionally, rejection sampling is applied to select samples with higher diversity, thereby improving overall learning efficacy. Extensive experiments are conducted on robotic manipulation tasks in MuJoCo, Atari games, and realistic indoor environments in Habitat. The results demonstrate that our approach not only significantly improves learning efficiency but also achieves superior performance in high-dimensional, realistic environments.
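The abstract's core pipeline (diversity modeling via a determinantal-point-process-style kernel, Cholesky decomposition for tractability, and rejection sampling of diverse batches) can be illustrated with a minimal sketch. This is not the paper's implementation: the cosine-similarity kernel, the jitter term, the function names, and the best-of-N selection loop (a simplified stand-in for rejection sampling) are all assumptions made for illustration.

```python
import numpy as np

def diversity_score(states, eps=1e-6):
    """Log-determinant of a Gram (similarity) kernel over a batch of states,
    computed via Cholesky decomposition for numerical stability.
    Higher values indicate a more diverse batch (a DPP-style measure)."""
    # Normalize rows so K is a cosine-similarity Gram matrix.
    X = states / (np.linalg.norm(states, axis=1, keepdims=True) + eps)
    K = X @ X.T + eps * np.eye(len(X))  # jitter keeps K positive definite
    L = np.linalg.cholesky(K)
    # log det(K) = 2 * sum(log(diag(L))) since K = L @ L.T
    return 2.0 * np.sum(np.log(np.diag(L)))

def sample_diverse_batch(buffer, batch_size, n_trials=8, rng=None):
    """Draw several candidate batches uniformly and keep the one with the
    highest diversity score -- a best-of-N simplification of the
    rejection-sampling step described in the abstract."""
    rng = np.random.default_rng(rng)
    best_batch, best_score = None, -np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(buffer), size=batch_size, replace=False)
        score = diversity_score(buffer[idx])
        if score > best_score:
            best_batch, best_score = buffer[idx], score
    return best_batch
```

The Cholesky route matters because `det(K)` underflows for large batches, while the sum of log-diagonal entries of the Cholesky factor stays numerically stable.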

Submission history

From: Kaiyan Zhao [view email]
[v1] Sun, 27 Oct 2024 15:51:27 UTC (12,906 KB)
[v2] Wed, 22 Jan 2025 01:24:40 UTC (6,374 KB)
[v3] Thu, 23 Jan 2025 07:39:58 UTC (6,374 KB)


