Extended convexity and smoothness and their applications in deep learning
Binchuan Qi and 2 other authors
Abstract: This paper introduces an optimization framework aimed at providing a theoretical foundation for a class of composite optimization problems, particularly those encountered in deep learning. Within this framework, we introduce $\mathcal{H}(\phi)$-convexity and $\mathcal{H}(\Phi)$-smoothness, which generalize the existing concepts of Lipschitz smoothness and strong convexity. Furthermore, we analyze and establish the convergence of both gradient descent and stochastic gradient descent for objective functions that are $\mathcal{H}(\Phi)$-smooth, and we prove that the optimal convergence rates of these methods depend solely on the homogeneous degree of $\Phi$. Based on these findings, we formulate two types of non-convex and non-smooth optimization problems, deterministic composite and stochastic composite optimization problems, which encompass the majority of optimization problems in deep learning. To address these problems, we develop the gradient structure control (GSC) algorithm and prove that it can locate approximate global optima. This marks a significant departure from traditional non-convex analysis frameworks, which typically settle for stationary points. Consequently, the introduction of $\mathcal{H}(\phi)$-convexity and $\mathcal{H}(\Phi)$-smoothness, together with the GSC algorithm, provides a theoretical explanation of and support for the non-convex optimization mechanisms in deep learning. Finally, the effectiveness of the proposed framework is substantiated through empirical experiments.
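For context, the classical conditions that these extended notions generalize are the textbook ones displayed below; the paper replaces the quadratic term by more general functions built from $\phi$ and $\Phi$, whose precise definitions appear in the full text, so the display is only the standard baseline, not the paper's own definitions. For a differentiable $f:\mathbb{R}^d \to \mathbb{R}$:
\[
\text{($\mu$-strong convexity)}\qquad f(y) \ge f(x) + \langle \nabla f(x),\, y - x \rangle + \tfrac{\mu}{2}\|y - x\|^2 \quad \text{for all } x, y,
\]
\[
\text{($L$-Lipschitz smoothness)}\qquad f(y) \le f(x) + \langle \nabla f(x),\, y - x \rangle + \tfrac{L}{2}\|y - x\|^2 \quad \text{for all } x, y.
\]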
Submission history
From: Binchuan Qi
[v1] Tue, 8 Oct 2024 08:40:07 UTC (148 KB)
[v2] Wed, 15 Jan 2025 09:53:49 UTC (401 KB)