Unsupervised Summarization Re-ranking, by Mathieu Ravaut and 2 other authors
Abstract: With the rise of task-specific pre-training objectives, abstractive summarization models like PEGASUS offer appealing zero-shot performance on downstream summarization tasks. However, the performance of such unsupervised models still lags significantly behind their supervised counterparts. As in the supervised setup, we observe very high variance in quality among the summary candidates produced by these models, even though only one candidate is kept as the final summary. In this paper, we propose to re-rank summary candidates in an unsupervised manner, aiming to close the performance gap between unsupervised and supervised models. Our approach improves the unsupervised PEGASUS by up to 7.27% and ChatGPT by up to 6.86% relative mean ROUGE across four widely adopted summarization benchmarks, and achieves relative gains of 7.51% (up to 23.73% from XSum to WikiHow) averaged over 30 zero-shot transfer setups (fine-tuning on one dataset and evaluating on another).
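The abstract describes generating several summary candidates from an unsupervised model and selecting one by re-ranking. The following is a minimal illustrative sketch of that pipeline, assuming the google/pegasus-xsum checkpoint as the candidate generator and a simple unigram-overlap-with-source score as a stand-in re-ranking criterion; the paper's actual scoring function is not specified in this abstract.

# Sketch: generate several summary candidates with PEGASUS and re-rank them
# with an unsupervised score. The scoring criterion here (unigram overlap with
# the source document) is an illustrative placeholder, not the paper's method.
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

def rerank_candidates(document: str, num_candidates: int = 8) -> str:
    model_name = "google/pegasus-xsum"  # assumed zero-shot checkpoint
    tokenizer = PegasusTokenizer.from_pretrained(model_name)
    model = PegasusForConditionalGeneration.from_pretrained(model_name)

    inputs = tokenizer(document, truncation=True, return_tensors="pt")
    # Diverse beam search produces several distinct summary candidates.
    outputs = model.generate(
        **inputs,
        num_beams=num_candidates,
        num_beam_groups=num_candidates,
        diversity_penalty=1.0,
        num_return_sequences=num_candidates,
        max_length=64,
    )
    candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)

    # Unsupervised scoring: fraction of candidate unigrams found in the source.
    src_tokens = set(document.lower().split())
    def score(candidate: str) -> float:
        cand_tokens = candidate.lower().split()
        if not cand_tokens:
            return 0.0
        return sum(t in src_tokens for t in cand_tokens) / len(cand_tokens)

    # Keep the highest-scoring candidate as the final summary.
    return max(candidates, key=score)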
Submission history
From: Mathieu Ravaut
[v1] Mon, 19 Dec 2022 16:29:26 UTC (7,415 KB)
[v2] Sun, 14 May 2023 08:23:08 UTC (7,520 KB)
[v3] Fri, 26 May 2023 05:26:23 UTC (7,522 KB)
[v4] Thu, 14 Nov 2024 06:00:39 UTC (7,520 KB)