Methodology for Interpretable Reinforcement Learning for Optimizing Mechanical Ventilation, by Joo Seung Lee and 1 other authors
Abstract: Mechanical ventilation is a critical life support intervention that delivers controlled air and oxygen to a patient’s lungs, assisting or replacing spontaneous breathing. While several data-driven approaches have been proposed to optimize ventilator control strategies, they often lack interpretability and alignment with domain knowledge, hindering clinical adoption. This paper presents a methodology for interpretable reinforcement learning (RL) aimed at improving mechanical ventilation control as part of connected health systems. Using a causal, nonparametric, model-based off-policy evaluation, we assess RL policies for their ability to improve patient-specific outcomes, specifically increasing blood oxygen levels (SpO2) while avoiding aggressive ventilator settings that may cause ventilator-induced lung injuries and other complications. Through numerical experiments on real-world ICU data from the MIMIC-III database, we demonstrate that our interpretable decision tree policy achieves performance comparable to state-of-the-art deep RL methods while outperforming standard behavior cloning approaches. The results highlight the potential of interpretable, data-driven decision support systems to improve safety and efficiency in personalized ventilation strategies, paving the way for seamless integration into connected healthcare environments.
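To make the idea of an interpretable decision tree policy concrete, the following is a minimal sketch of what such a policy looks like in code. The feature names (SpO2, PEEP, FiO2), thresholds, and adjustment rules below are illustrative assumptions, not the tree actually learned in the paper; the point is that every action can be traced to a small set of readable rules.

```python
# Hypothetical sketch of an interpretable decision-tree ventilation policy.
# All thresholds and actions here are illustrative assumptions, not the
# policy learned from MIMIC-III in the paper.

def tree_policy(state):
    """Map a patient state to ventilator settings via readable if/else rules."""
    spo2 = state["spo2"]  # blood oxygen saturation (%)
    peep = state["peep"]  # positive end-expiratory pressure (cmH2O)
    fio2 = state["fio2"]  # fraction of inspired oxygen (0.21-1.0)

    if spo2 < 90:
        # Hypoxemic branch: increase support, here preferring FiO2 over PEEP
        if fio2 < 0.6:
            return {"fio2": round(fio2 + 0.1, 2), "peep": peep}
        return {"fio2": fio2, "peep": peep + 2}
    if spo2 > 96 and fio2 > 0.4:
        # Well-oxygenated on aggressive settings: wean to limit injury risk
        return {"fio2": round(fio2 - 0.1, 2), "peep": peep}
    # Otherwise maintain current settings
    return {"fio2": fio2, "peep": peep}

# Example: a hypoxemic patient on moderate support gets an FiO2 increase
action = tree_policy({"spo2": 88, "peep": 5, "fio2": 0.5})
```

Unlike a deep RL policy, each branch of such a tree can be audited against clinical guidelines, which is the interpretability property the abstract emphasizes.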
Submission history
From: Joo Seung Lee [view email]
[v1] Wed, 3 Apr 2024 23:07:24 UTC (100 KB)
[v2] Thu, 9 Jan 2025 11:24:56 UTC (513 KB)