Federated Learning and RAG Integration: A Scalable Approach for Medical Large Language Models



Authors: Jincheol Jung and 2 other authors

Abstract: This study analyzes the performance of domain-specific Large Language Models (LLMs) for the medical field by integrating Retrieval-Augmented Generation (RAG) systems within a federated learning framework. Leveraging the inherent advantages of federated learning, such as preserving data privacy and enabling distributed computation, this research explores the integration of RAG systems with models trained under varying client configurations to optimize performance. Experimental results demonstrate that the federated learning-based models integrated with RAG systems consistently outperform their non-integrated counterparts across all evaluation metrics. This study highlights the potential of combining federated learning and RAG systems for developing domain-specific LLMs in the medical field, providing a scalable and privacy-preserving solution for enhancing text generation capabilities.
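The abstract combines two techniques: federated learning (clients train locally and only share model updates, preserving data privacy) and retrieval-augmented generation (a prompt is enriched with retrieved documents before generation). The paper's actual models and retriever are not described here, so the following is only a minimal illustrative sketch under heavy simplifications: models are plain weight vectors aggregated by simple averaging (FedAvg-style), and "retrieval" is keyword overlap over a tiny local corpus. All names and data are hypothetical.

```python
def federated_average(client_weights):
    """FedAvg-style aggregation: average each parameter across clients.

    Each client trains on its own private data and sends only its weights;
    the raw medical records never leave the client (the privacy argument
    the abstract relies on). Equal client weighting is assumed here.
    """
    n = len(client_weights)
    return [sum(params) / n for params in zip(*client_weights)]


def retrieve(query, corpus, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )[:k]


# Three hypothetical clients, each holding locally trained weights.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]]
global_model = federated_average(clients)  # -> [3.0, 2.0]

# RAG step: augment the prompt with retrieved context before generation.
corpus = ["aspirin reduces fever", "insulin regulates blood glucose"]
question = "how does insulin affect glucose"
context = retrieve(question, corpus, k=1)[0]
prompt = f"Context: {context}\nQuestion: {question}"
```

In the paper's setting, `federated_average` would aggregate full LLM parameters (or adapter updates) across hospital clients, and `retrieve` would be a dense retriever over a medical corpus; the sketch only shows how the two pieces compose.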

Submission history

From: Jincheol Jung
[v1] Wed, 18 Dec 2024 11:00:58 UTC (417 KB)
[v2] Wed, 8 Jan 2025 07:03:42 UTC (544 KB)


