[Submitted on 9 Jul 2024]

It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss

Meenatchi Sundaram Muthu Selva Annamalai

Abstract: Differentially Private Stochastic Gradient Descent (DP-SGD) is a popular iterative algorithm used to train machine learning models while formally guaranteeing the privacy of users. However, the privacy analysis of DP-SGD makes the unrealistic assumption that all intermediate iterates (aka the internal state) of the algorithm are released, whereas in practice only the final trained model, i.e., the final iterate of the algorithm, is released. In this…
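For context, here is a minimal sketch of standard DP-SGD as the abstract describes it: per-example gradient clipping plus Gaussian noise at each step. This is the textbook algorithm, not the paper's construction; the function and parameter names (`dp_sgd`, `grad_fn`, `clip_norm`, `noise_mult`) are illustrative assumptions. The sketch highlights the point the abstract raises: the standard analysis accounts for releasing every iterate, while in practice only the last one is published.

```python
import numpy as np

def dp_sgd(data, grad_fn, theta0, T, lr, clip_norm, noise_mult, rng):
    """Illustrative DP-SGD run (hypothetical sketch, not the paper's code).

    data       : iterable of training examples
    grad_fn    : grad_fn(theta, x) -> per-example gradient (assumed signature)
    clip_norm  : L2 clipping bound C on each per-example gradient
    noise_mult : Gaussian noise multiplier sigma (noise std = sigma * C)
    """
    theta = np.array(theta0, dtype=float)
    # The "internal state" the standard privacy analysis assumes is released.
    iterates = [theta.copy()]
    for _ in range(T):
        # Clip each per-example gradient to bound any one example's influence.
        grads = []
        for x in data:
            g = grad_fn(theta, x)
            g = g / max(1.0, np.linalg.norm(g) / clip_norm)
            grads.append(g)
        # Gaussian noise calibrated to the clipping norm (the sensitivity).
        noise = rng.normal(0.0, noise_mult * clip_norm, size=theta.shape)
        theta = theta - lr * (np.sum(grads, axis=0) + noise) / len(data)
        iterates.append(theta.copy())
    # Standard DP accounting covers releasing all of `iterates`;
    # the hidden-state setting releases only iterates[-1].
    return iterates[-1], iterates

# Usage (hypothetical): squared-loss gradient for 1-D linear regression.
# rng = np.random.default_rng(0)
# data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
# grad_fn = lambda th, xy: np.array([2 * (th[0] * xy[0] - xy[1]) * xy[0]])
# final, _ = dp_sgd(data, grad_fn, theta0=[0.0], T=100, lr=0.05,
#                   clip_norm=1.0, noise_mult=1.0, rng=rng)
```

The gap between these two release models is exactly what "hidden state" privacy amplification results try to exploit, and what the paper's title asserts fails for non-convex losses.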