Enhancing Relation Extraction via Supervised Rationale Verification and Feedback, by Yongqi Li and 5 other authors
Abstract: Despite the rapid progress that existing automated feedback methods have made in correcting the output of large language models (LLMs), these methods cannot be effectively applied to the relation extraction (RE) task due to their designated feedback objectives and correction manner. To address this problem, we propose a novel automated feedback framework for RE, which presents a rationale supervisor to verify the rationale and provides re-selected demonstrations as feedback to correct the initial prediction. Specifically, we first design a causal intervention and observation method to collect biased/unbiased rationales for contrastively training the rationale supervisor. Then, we present a verification-feedback-correction procedure to iteratively enhance LLMs’ capability of handling the RE task. Extensive experiments demonstrate that our proposed framework significantly outperforms existing methods.
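The abstract outlines an iterative verification-feedback-correction loop. Below is a minimal sketch of how such a loop could be organized, assuming hypothetical helpers (llm_predict, rationale_supervisor, reselect_demonstrations, MAX_ROUNDS) that are not part of the authors' released code; it only illustrates the control flow, not the actual method.

```python
# Hypothetical sketch of the verification-feedback-correction procedure.
# All function names and the iteration budget are illustrative assumptions.

MAX_ROUNDS = 3  # assumed iteration budget, not specified in the abstract


def extract_relation(instance, demonstrations, llm_predict,
                     rationale_supervisor, reselect_demonstrations):
    """Iteratively correct an LLM's relation prediction using rationale feedback."""
    prediction, rationale = None, None
    for _ in range(MAX_ROUNDS):
        # 1. Prediction: the LLM outputs a relation label and its rationale,
        #    conditioned on the current in-context demonstrations.
        prediction, rationale = llm_predict(instance, demonstrations)

        # 2. Verification: the rationale supervisor judges whether the
        #    rationale is biased (unreliable) or unbiased (trustworthy).
        if rationale_supervisor(instance, rationale) == "unbiased":
            return prediction  # accept the current prediction

        # 3. Feedback: re-select demonstrations guided by the rejected
        #    rationale, then let the LLM try again.
        demonstrations = reselect_demonstrations(instance, rationale)

    # Fall back to the last prediction if the budget is exhausted.
    return prediction
```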
Submission history
From: Yongqi Li
[v1] Tue, 10 Dec 2024 08:18:29 UTC (2,145 KB)
[v2] Wed, 11 Dec 2024 02:31:45 UTC (2,145 KB)