Testably Learning Polynomial Threshold Functions, by Lucas Slot and 2 other authors
Abstract: Rubinfeld & Vasilyan recently introduced the framework of testable learning as an extension of the classical agnostic model. It replaces distributional assumptions, which are difficult to verify, by conditions that can be checked efficiently by a tester. The tester has to accept whenever the data truly satisfies the original assumptions, and the learner has to succeed whenever the tester accepts. We focus on the setting where the tester has to accept standard Gaussian data. There, it is known that basic concept classes such as halfspaces can be learned testably with the same time complexity as in the (distribution-specific) agnostic model. In this work, we ask whether there is a price to pay for testably learning more complex concept classes. In particular, we consider polynomial threshold functions (PTFs), which naturally generalize halfspaces. We show that PTFs of arbitrary constant degree can be testably learned up to excess error $\varepsilon > 0$ in time $n^{\mathrm{poly}(1/\varepsilon)}$. This qualitatively matches the best known guarantees in the agnostic model. Our results build on a connection between testable learning and fooling. In particular, we show that distributions that approximately match at least $\mathrm{poly}(1/\varepsilon)$ moments of the standard Gaussian fool constant-degree PTFs (up to error $\varepsilon$). As a secondary result, we prove that a direct approach to show testable learning (without fooling), which was successfully used for halfspaces, cannot work for PTFs.
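The moment-matching criterion behind the fooling result lends itself to a simple illustration. The following Python sketch is not the paper's algorithm; the degree bound `k` and tolerance `tau` are hypothetical parameters chosen for exposition. It shows the basic idea of a moment tester: accept a sample only when all of its empirical moments up to a fixed degree are close to the corresponding moments of the standard Gaussian $N(0, I_n)$.

```python
# Minimal sketch of a moment-matching tester (illustrative only; `k` and
# `tau` are hypothetical parameters, not values from the paper).
import itertools
import numpy as np

def gaussian_moment(alpha):
    """E[prod_i z_i^alpha_i] for z ~ N(0, I): product of univariate moments,
    where E[z^a] = (a-1)!! for even a and 0 for odd a."""
    m = 1.0
    for a in alpha:
        if a % 2 == 1:
            return 0.0
        m *= np.prod(np.arange(a - 1, 0, -2)) if a > 0 else 1.0
    return m

def moment_tester(X, k, tau):
    """Accept iff every empirical moment of degree <= k is within tau of the
    corresponding standard-Gaussian moment."""
    n = X.shape[1]
    for deg in range(1, k + 1):
        # Enumerate multi-indices alpha with |alpha| = deg via
        # combinations with repetition of coordinate indices.
        for idx in itertools.combinations_with_replacement(range(n), deg):
            alpha = np.bincount(idx, minlength=n)
            emp = np.mean(np.prod(X ** alpha, axis=1))
            if abs(emp - gaussian_moment(alpha)) > tau:
                return False
    return True

# Genuinely Gaussian data should be accepted with high probability.
rng = np.random.default_rng(0)
X = rng.standard_normal((200_000, 3))
print(moment_tester(X, k=4, tau=0.1))
```

The paper's guarantee is that any distribution passing such a test, with degree bound $\mathrm{poly}(1/\varepsilon)$, fools every constant-degree PTF up to error $\varepsilon$, which is what makes the tester-learner pair sound.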
Submission history
From: Manuel Wiedmer
[v1] Mon, 10 Jun 2024 08:42:48 UTC (50 KB)
[v2] Wed, 6 Nov 2024 12:40:53 UTC (51 KB)