[Submitted on 28 May 2024]
View a PDF of the paper titled Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes, by Jihao Andreas Lin and Shreyas Padhy and Bruno Mlodozeniec and Javier Antorán and José Miguel Hernández-Lobato
Abstract: Scaling hyperparameter optimisation to very large datasets remains an open problem in the Gaussian process community. This paper focuses on iterative methods, which use linear system solvers, like conjugate gradients, alternating projections or stochastic gradient descent, to construct an estimate of the marginal likelihood gradient. We discuss three key improvements which are applicable across solvers: (i) a pathwise gradient estimator, which reduces the required number of solver iterations and amortises the computational cost of making predictions, (ii) warm starting linear system solvers with the solution from the previous step, which leads to faster solver convergence at the cost of negligible bias, (iii) early stopping linear system solvers after a limited computational budget, which synergises with warm starting, allowing solver progress to accumulate over multiple marginal likelihood steps. These techniques provide speed-ups of up to $72\times$ when solving to tolerance, and decrease the average residual norm by up to $7\times$ when stopping early.
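The interaction between warm starting (ii) and early stopping (iii) can be illustrated with a minimal sketch: a conjugate gradient solver that accepts an initial guess and a fixed iteration budget, called repeatedly inside an outer loop standing in for marginal likelihood steps. This is an assumption-laden toy, not the paper's implementation: the matrix `A` plays the role of a kernel matrix plus noise and is held fixed across outer steps for simplicity, whereas in real hyperparameter optimisation it would change slightly each step.

```python
import numpy as np

def conjugate_gradients(A, b, x0=None, max_iters=10, tol=1e-8):
    """Solve A x = b with CG. `x0` enables warm starting; `max_iters`
    enforces early stopping under a fixed computational budget."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x          # residual of the initial guess
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iters):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy symmetric positive definite system, standing in for K + sigma^2 I.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)

# Outer loop over "marginal likelihood steps": each solve is early-stopped
# after a small budget, but warm started from the previous solution, so
# solver progress accumulates across steps.
x = None
for step in range(5):
    x = conjugate_gradients(A, b, x0=x, max_iters=5)
    print(f"step {step}: residual norm = {np.linalg.norm(b - A @ x):.3e}")
```

Because the warm start carries the previous solution forward, the residual norm printed at each step keeps shrinking even though no single solve is run to tolerance, which is the synergy the abstract describes.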
Submission history
From: Jihao Andreas Lin [view email]
[v1]
Tue, 28 May 2024 16:58:37 UTC (6,008 KB)