Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks, by Alfredo Reichlin and 4 other authors
Abstract: Given a finite set of sample points, meta-learning algorithms aim to learn an optimal adaptation strategy for new, unseen tasks. Often, this data can be ambiguous, as it might belong to several tasks concurrently. This is particularly the case in meta-regression tasks. In such cases, the estimated adaptation strategy is subject to high variance due to the limited amount of support data for each task, which often leads to sub-optimal generalization performance. In this work, we address the problem of variance reduction in gradient-based meta-learning and formalize the class of problems prone to it, a condition we refer to as "task overlap". Specifically, we propose a novel approach that reduces the variance of the gradient estimate by weighting each support point individually by the variance of its posterior over the parameters. To estimate the posterior, we utilize the Laplace approximation, which allows us to express the variance in terms of the curvature of the loss landscape of our meta-learner. Experimental results demonstrate the effectiveness of the proposed method and highlight the importance of variance reduction in meta-learning.
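The abstract only states the high-level idea: estimate each support point's posterior variance via the Laplace approximation (i.e., from the curvature of the loss) and weight its gradient contribution accordingly. The sketch below illustrates that idea for a 1-D linear model under squared loss; the function name, the inverse-variance weighting rule, and the epsilon are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def laplace_weighted_grad(w, xs, ys):
    """Aggregate per-support-point gradients of a 1-D linear model
    f(x) = w * x under squared loss 0.5 * (w*x - y)^2, down-weighting
    points whose Laplace posterior variance is high.

    This is an illustrative sketch: the Laplace approximation equates
    posterior variance with inverse loss curvature, and we (as an
    assumption) weight each gradient by the inverse of that variance.
    """
    grads = (w * xs - ys) * xs       # per-point gradient d/dw of the loss
    curvs = xs ** 2                  # per-point curvature d^2/dw^2 of the loss
    post_var = 1.0 / (curvs + 1e-8)  # Laplace: variance ~ inverse curvature
    weights = 1.0 / post_var         # inverse-variance weighting (assumed rule)
    weights = weights / weights.sum()
    return float(np.dot(weights, grads))
```

With all support points consistent with the model (e.g. `w=1`, `ys = xs`), every per-point gradient vanishes and so does the weighted estimate; when points disagree, high-curvature (low-variance) points dominate the aggregate gradient.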
Submission history
From: Alfredo Reichlin
[v1] Wed, 2 Oct 2024 12:30:05 UTC (9,325 KB)
[v2] Wed, 23 Oct 2024 12:53:49 UTC (9,326 KB)