Robustness of Selected Learning Models under Label-Flipping Attack




arXiv:2501.12516v1 Announce Type: new
Abstract: In this paper, we compare traditional machine learning and deep learning models trained on a malware dataset when subjected to an adversarial attack based on label-flipping. Specifically, we investigate the robustness of Support Vector Machines (SVM), Random Forest, Gaussian Naive Bayes (GNB), Gradient Boosting Machine (GBM), LightGBM, XGBoost, Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), MobileNet, and DenseNet models when facing varying percentages of misleading labels. We empirically assess the accuracy of each of these models under such an adversarial attack on the training data. This research aims to provide insight into which models are inherently more robust, in the sense of being better able to resist intentional disruptions to the training data. We find wide variation in the robustness of the tested models under adversarial attack, with our MLP model achieving the best combination of initial accuracy and robustness.
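To make the experimental setup concrete, the sketch below simulates a label-flipping attack: a chosen fraction of training labels is inverted, the model is retrained, and accuracy is measured on a clean test set. This is a minimal illustration, not the paper's actual pipeline; the synthetic dataset, the flip fractions, and the MLP hyperparameters are all assumptions standing in for the authors' malware data and configuration.

```python
# Minimal label-flipping experiment (illustrative only): flip a fraction
# of binary training labels, retrain, and evaluate on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a labeled malware dataset (assumption).
X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

def flip_labels(labels, fraction, rng):
    """Invert `fraction` of the binary training labels (the attack)."""
    flipped = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    flipped[idx] = 1 - flipped[idx]
    return flipped

# Train the same model on increasingly poisoned labels and record
# clean-test accuracy at each flip percentage.
for fraction in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    y_poisoned = flip_labels(y_train, fraction, rng)
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                          random_state=0)
    model.fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"flip fraction {fraction:.0%}: test accuracy {acc:.3f}")
```

Repeating this loop for each of the ten models named in the abstract would yield accuracy-versus-flip-percentage curves of the kind the paper uses to compare robustness.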


