Pretraining and Updates of Domain-Specific LLM: A Case Study in the Japanese Business Domain

By Kosuke Takahashi and 3 other authors

Abstract: The development of Large Language Models (LLMs) in various languages has been advancing, but the combination of non-English languages with domain-specific contexts remains underexplored. This paper presents our findings from training and evaluating a Japanese business domain-specific LLM designed to better understand business-related documents such as news on current affairs, technical reports, and patents. LLMs in this domain also require regular updates to incorporate the most recent knowledge, so we additionally report the first experiments and evaluations on updating this LLM with the latest article data, an important problem setting that previous research has not addressed. From experiments on a newly created benchmark dataset for question answering in the target domain, we found that (1) our pretrained model improves QA accuracy without losing general knowledge, and (2) a proper mixture of the latest and older texts in the update's training data is necessary. Our pretrained model and business domain benchmark are publicly available to support further studies.
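The second finding suggests a simple continual-pretraining data pipeline: when updating the model, sample the training corpus from both the newest articles and the older text so that fresh knowledge is added without overwriting what the model already knows. The sketch below illustrates one way to build such a mixture; the function name, the 30% ratio, and the sampling scheme are illustrative assumptions, not the authors' actual recipe.

```python
import random

# Hypothetical mixing ratio; the paper finds that blending new and old text
# is necessary, but the exact proportion is not stated in the abstract.
LATEST_FRACTION = 0.3  # assumption for illustration

def mix_update_corpus(latest_docs, older_docs, total_docs,
                      latest_fraction=LATEST_FRACTION, seed=0):
    """Sample a continual-pretraining corpus that mixes the newest articles
    with older text, updating knowledge while limiting forgetting."""
    rng = random.Random(seed)
    n_latest = min(int(total_docs * latest_fraction), len(latest_docs))
    n_older = min(total_docs - n_latest, len(older_docs))
    mixed = rng.sample(latest_docs, n_latest) + rng.sample(older_docs, n_older)
    rng.shuffle(mixed)
    return mixed

# Example (sizes are illustrative):
# corpus = mix_update_corpus(latest_articles, archive_articles, total_docs=10_000)
```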

Submission history

From: Kosuke Takahashi
[v1] Fri, 12 Apr 2024 06:21:48 UTC (64 KB)
[v2] Tue, 16 Apr 2024 02:24:00 UTC (64 KB)
[v3] Wed, 6 Nov 2024 16:19:24 UTC (561 KB)


