Exploring Fine-tuned Generative Models for Keyphrase Selection: A Case Study for Russian
Anna Glazkova, Dmitry Morozov
Abstract: Keyphrase selection plays a pivotal role in the domain of scholarly texts, facilitating efficient information retrieval, summarization, and indexing. In this work, we explored the application of fine-tuned generative transformer-based models to keyphrase selection for Russian scientific texts. We experimented with four generative models, namely ruT5, ruGPT, mT5, and mBART, and evaluated their performance in both in-domain and cross-domain settings. The experiments were conducted on Russian scientific abstracts from four domains: mathematics & computer science, history, medicine, and linguistics. The use of generative models, in particular mBART, led to in-domain gains of up to 4.9% in BERTScore, 9.0% in ROUGE-1, and 12.2% in F1-score over three keyphrase extraction baselines for Russian. Although the cross-domain results were considerably lower, the models still surpassed the baselines in several cases, underscoring the potential for further exploration and refinement in this research direction.
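As a rough illustration of the kind of setup the abstract describes, the sketch below casts keyphrase selection as a sequence-to-sequence task with Hugging Face transformers: the abstract text is the input and a delimited list of keyphrases is the generation target. The checkpoint name, data schema, "; " delimiter, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' code): fine-tune an mBART checkpoint to
# generate keyphrases from Russian scientific abstracts.
from datasets import Dataset
from transformers import (DataCollatorForSeq2Seq, MBart50TokenizerFast,
                          MBartForConditionalGeneration, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MODEL_NAME = "facebook/mbart-large-50"  # assumed multilingual checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(
    MODEL_NAME, src_lang="ru_RU", tgt_lang="ru_RU")
model = MBartForConditionalGeneration.from_pretrained(MODEL_NAME)

# Toy records with a hypothetical schema: abstract text plus gold keyphrases.
records = [
    {"abstract": "Текст аннотации научной статьи ...",
     "keyphrases": ["ключевые слова", "извлечение информации"]},
]

def preprocess(example):
    # Cast keyphrase selection as seq2seq: abstract -> "; "-joined keyphrase list.
    inputs = tokenizer(example["abstract"], max_length=512, truncation=True)
    labels = tokenizer(text_target="; ".join(example["keyphrases"]),
                       max_length=64, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

train_ds = Dataset.from_list(records).map(
    preprocess, remove_columns=["abstract", "keyphrases"])

args = Seq2SeqTrainingArguments(
    output_dir="mbart-keyphrases",   # illustrative path and hyperparameters
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=3e-5,
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Inference: generate a keyphrase list for a new abstract.
batch = tokenizer("Новая аннотация ...", return_tensors="pt",
                  truncation=True, max_length=512)
out = model.generate(**batch, max_length=64, num_beams=4,
                     forced_bos_token_id=tokenizer.lang_code_to_id["ru_RU"])
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Generated keyphrases can then be split on the delimiter and compared against gold keyphrases with F1, ROUGE-1, and BERTScore, as in the evaluation described above.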
Submission history
From: Anna Glazkova
[v1] Mon, 16 Sep 2024 18:15:28 UTC (239 KB)
[v2] Wed, 18 Sep 2024 07:35:46 UTC (239 KB)