[Submitted on 29 May 2024]
Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning, by Alexander Politowicz and 2 other authors
Abstract: Designing Reinforcement Learning (RL) solutions for real-life problems remains a significant challenge. A major area of concern is safety. "Shielding" is a popular technique to enforce safety in RL by turning user-defined safety specifications into safe agent behavior. However, existing methods either suffer from extreme learning delays, demand extensive human effort in designing problem models and safe domains, or require pre-computation. In this paper, we propose a new permissibility-based framework for safety and shield construction. Permissibility was originally designed to improve RL training efficiency by eliminating (non-permissible) actions that cannot lead to an optimal solution. This paper shows that safety can be naturally incorporated into this framework, i.e., by extending permissibility to include safety, thereby achieving both safety and improved efficiency. Experimental evaluation on three standard RL applications shows the effectiveness of the approach.
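The abstract describes filtering the agent's action set down to permissible actions at each step. A minimal sketch of such a shield is below; the `is_permissible` predicate, the fallback behavior, and the epsilon-greedy wrapper are illustrative assumptions, not the paper's actual construction.

```python
import random

def permissible_actions(state, actions, is_permissible):
    """Filter the action set to actions the shield deems permissible.

    `is_permissible` is a hypothetical user-supplied predicate that could
    encode both optimality-based pruning and safety constraints."""
    allowed = [a for a in actions if is_permissible(state, a)]
    # Assumption: fall back to the full set if the shield leaves no options.
    return allowed if allowed else actions

def shielded_epsilon_greedy(state, q_values, actions, is_permissible, epsilon=0.1):
    """Epsilon-greedy action selection restricted to the permissible subset."""
    allowed = permissible_actions(state, actions, is_permissible)
    if random.random() < epsilon:
        return random.choice(allowed)
    return max(allowed, key=lambda a: q_values.get((state, a), 0.0))

# Toy example: in state 0 the shield rules out the unsafe action "jump",
# so greedy selection picks the best-valued remaining action.
if __name__ == "__main__":
    q = {(0, "left"): 1.0, (0, "right"): 2.0, (0, "jump"): 5.0}
    shield = lambda s, a: not (s == 0 and a == "jump")
    print(shielded_epsilon_greedy(0, q, ["left", "right", "jump"], shield, epsilon=0.0))
```

Because the mask is applied at selection time, learning proceeds only over permissible actions, which is the efficiency benefit the abstract attributes to permissibility.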
Submission history
From: Alexander Politowicz
[v1] Wed, 29 May 2024 18:00:21 UTC (1,550 KB)