arXiv:2407.21051v1 Announce Type: new
Abstract: Continuing advances in Large Language Models (LLMs) offer important capabilities for intuitively accessing and applying medical knowledge in many contexts, including education and training as well as assessment and treatment. Much of the initial literature on LLMs in medicine has emphasized that they are unsuitable for medical use because they are non-deterministic, may produce incorrect or harmful responses, and cannot be regulated to assure quality control. If these issues could be addressed, optimized LLM technology could benefit patients and physicians by providing affordable, point-of-care medical knowledge. Our proposed framework refines LLM responses by restricting their primary knowledge base to domain-specific datasets containing validated medical information. Additionally, we introduce an actor-critic LLM prompting protocol based on active inference principles of human cognition, in which a Therapist agent first responds to patient queries and a Supervisor agent then evaluates and adjusts those responses to ensure accuracy and reliability. In a blinded validation study, experienced cognitive behaviour therapy for insomnia (CBT-I) therapists rated responses to 100 patient queries, comparing LLM-generated responses with appropriate and inappropriate responses crafted by experienced human CBT-I therapists. The LLM responses received high ratings, often exceeding those of the therapist-generated appropriate responses. This structured approach aims to integrate advanced LLM technology into medical applications while meeting regulatory requirements for establishing the safe and effective use of special-purpose validated LLMs in medicine.
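The sketch below illustrates one plausible shape for the Therapist/Supervisor actor-critic loop the abstract describes: the actor drafts a response grounded in the restricted knowledge base, and the critic either approves it or supplies a correction. Everything here is an assumption for illustration; the paper's actual prompts, retrieval over the validated dataset, approval criterion, and active-inference machinery are not specified in the abstract, and `llm_complete` is a hypothetical placeholder for whatever model API is used.

```python
# Illustrative sketch of a Therapist (actor) / Supervisor (critic) prompting
# loop. All names and prompts are hypothetical placeholders, not the
# authors' implementation.

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM completion API (assumed)."""
    raise NotImplementedError("wire up a model provider here")

THERAPIST_PROMPT = (
    "You are a CBT-I therapist. Answer the patient's question using only "
    "the validated reference material provided.\n\n"
    "Reference material:\n{context}\n\nPatient: {query}\nTherapist:"
)

SUPERVISOR_PROMPT = (
    "You are a supervising CBT-I expert. Review the draft response for "
    "accuracy and safety against the reference material. Reply APPROVED "
    "if it is acceptable; otherwise provide a corrected response.\n\n"
    "Reference material:\n{context}\n\nPatient: {query}\n"
    "Draft response: {draft}\nReview:"
)

def answer_query(query: str, context: str, max_rounds: int = 3) -> str:
    """Actor drafts a response; critic vets and, if needed, revises it."""
    draft = llm_complete(THERAPIST_PROMPT.format(context=context, query=query))
    for _ in range(max_rounds):
        review = llm_complete(
            SUPERVISOR_PROMPT.format(context=context, query=query, draft=draft)
        )
        if review.strip().upper().startswith("APPROVED"):
            return draft
        draft = review  # the supervisor supplied a corrected response
    return draft  # best available draft after the round limit
```

The bounded revision loop is one simple way to realize the evaluate-and-adjust step: capping `max_rounds` keeps latency predictable, while the `context` argument stands in for whatever mechanism restricts the primary knowledge base to validated domain material.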