Bayesian WeakS-to-Strong from Text Classification to Generation
Ziyun Cui and 4 other authors
Abstract: Advances in large language models raise the question of how alignment techniques will adapt as models become increasingly complex and humans are only able to supervise them weakly. Weak-to-Strong mimics such a scenario, where weak model supervision attempts to harness the full capabilities of a much stronger model. This work extends Weak-to-Strong to WeakS-to-Strong by exploring an ensemble of weak models that simulates the variability in human opinions. Confidence scores are estimated using a Bayesian approach to guide the WeakS-to-Strong generalization. Furthermore, we extend the application of WeakS-to-Strong from text classification tasks to text generation tasks, where more advanced strategies are investigated for supervision. Moreover, direct preference optimization is applied to advance the student model’s preference learning, beyond the basic learning framework of teacher forcing. Results demonstrate the effectiveness of the proposed approach in improving the reliability of a strong student model, showing potential for superalignment.
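To make the WeakS-to-Strong idea more concrete, the sketch below shows one toy way soft labels from an ensemble of weak models could be combined into a confidence-weighted training target for a strong student. The confidence estimate used here (agreement with the ensemble majority vote) is only an illustrative stand-in for the Bayesian approach described in the abstract, and all names such as weak_probs and student_target are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: confidence-weighted combination of weak-model soft labels.
# The confidence weights here are a simple agreement heuristic, not the paper's Bayesian estimate.
import numpy as np

rng = np.random.default_rng(0)
num_weak, num_examples, num_classes = 3, 5, 2

# Soft label distributions predicted by each weak model (simulated for the sketch).
weak_probs = rng.dirichlet(np.ones(num_classes), size=(num_weak, num_examples))

# Hypothetical per-model confidence: how often a weak model agrees with the majority vote.
votes = weak_probs.argmax(axis=-1)                                   # (num_weak, num_examples)
majority = np.apply_along_axis(
    lambda v: np.bincount(v, minlength=num_classes).argmax(), 0, votes
)                                                                    # (num_examples,)
confidence = (votes == majority).mean(axis=1)                        # (num_weak,)
confidence /= confidence.sum()

# Confidence-weighted pseudo-labels that could serve as the strong student's target.
student_target = np.einsum('k,kne->ne', confidence, weak_probs)      # (num_examples, num_classes)
print(student_target)
```

Each row of student_target is a valid probability distribution, so it can be used directly as a soft supervision signal for the student; in the paper's text generation setting, the combination and preference learning (e.g. via direct preference optimization) would operate on sequences rather than class labels.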
Submission history
From: Ziyun Cui
[v1] Fri, 24 May 2024 13:33:11 UTC (393 KB)
[v2] Wed, 2 Oct 2024 08:45:32 UTC (450 KB)