Gradient-Variation Online Learning under Generalized Smoothness

arXiv:2408.09074v1 Announce Type: new
Abstract: Gradient-variation online learning aims to achieve regret guarantees that scale with the variations in the gradients of the online functions. Such guarantees have been shown to be crucial for attaining fast convergence in games and robustness in stochastic optimization, and have therefore received increasing attention. Existing results typically rely on a smoothness condition that imposes a fixed bound on the gradient's Lipschitz constant, which may not hold in practice. Recent efforts in neural network optimization suggest a generalized smoothness condition that allows the smoothness parameter to correlate with the gradient norm. In this paper, we systematically study gradient-variation online learning under generalized smoothness. To this end, we extend the classic optimistic mirror descent algorithm to derive gradient-variation bounds by conducting a stability analysis over the optimization trajectory and exploiting smoothness locally. Furthermore, we explore universal online learning, designing a single algorithm that enjoys optimal gradient-variation regret for convex and strongly convex functions simultaneously, without knowing the curvature information. The algorithm adopts a two-layer structure with a meta-algorithm running over a group of base-learners. To ensure favorable guarantees, we design a new meta-algorithm that is Lipschitz-adaptive, so as to handle potentially unbounded gradients, and that ensures a second-order regret bound, so as to cooperate with the base-learners. Finally, we provide implications of our findings and obtain new results in fast-rate games and stochastic extended adversarial optimization.
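To give a feel for the optimistic mirror descent template the abstract builds on, here is a minimal sketch of its Euclidean special case (optimistic online gradient descent), where the hint for the next gradient is simply the previous gradient. This is an illustrative toy, not the paper's algorithm: the fixed step size, ball projection, and last-gradient hint are assumptions made for the example, and the generalized-smoothness analysis and two-layer universal structure described above are not reproduced here.

```
import numpy as np

def optimistic_ogd(loss_grads, dim, eta=0.1, radius=1.0):
    """Sketch of optimistic online gradient descent (Euclidean optimistic
    mirror descent).  The hint m_t is the previous gradient, which is the
    standard choice that makes the regret scale with the gradient variation
    sum_t ||g_t - g_{t-1}||^2 under smoothness-type assumptions.

    `loss_grads` is a list of callables, one per round, mapping the played
    decision x_t to the observed gradient g_t.  `eta` and `radius` are
    illustrative choices, not the paper's tuning.
    """
    def project(v):
        # Euclidean projection onto the ball of the given radius.
        norm = np.linalg.norm(v)
        return v if norm <= radius else v * (radius / norm)

    x_hat = np.zeros(dim)      # auxiliary iterate updated with observed gradients
    hint = np.zeros(dim)       # m_1 = 0; afterwards m_t = g_{t-1}
    decisions, gradients = [], []

    for grad_fn in loss_grads:
        # Optimistic step: play x_t using the hint of the upcoming gradient.
        x_t = project(x_hat - eta * hint)
        g_t = grad_fn(x_t)

        # Update the auxiliary iterate with the actually observed gradient.
        x_hat = project(x_hat - eta * g_t)

        decisions.append(x_t)
        gradients.append(g_t)
        hint = g_t             # reuse the last gradient as the next hint

    return decisions, gradients

# Toy usage: quadratic losses f_t(x) = 0.5 * ||x - c_t||^2 with slowly drifting centers,
# so consecutive gradients vary little and the hints are informative.
centers = [np.array([np.sin(t / 50.0), np.cos(t / 50.0)]) for t in range(200)]
grads = [lambda x, c=c: x - c for c in centers]
xs, gs = optimistic_ogd(grads, dim=2)
```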


