Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks, by Marcin Podhajski and 5 other authors
Abstract: Graph Neural Networks (GNNs) are recognized as potent tools for processing real-world data organized in graph structures. Inductive GNNs in particular, which can process graph-structured data without relying on a predefined graph structure, are becoming increasingly important in a wide range of applications. As such, these networks become attractive targets for model-stealing attacks, in which an adversary seeks to replicate the functionality of the targeted network. Significant effort has been devoted to developing model-stealing attacks that extract models trained on images and text, but little attention has been given to stealing GNNs trained on graph data. This paper introduces a new method for performing unsupervised model-stealing attacks against inductive GNNs, utilizing graph contrastive learning and spectral graph augmentations to efficiently extract information from the targeted model. The attack is thoroughly evaluated on six datasets, and the results show that our approach outperforms the current state of the art by Shen et al. (2021). In particular, our attack surpasses the baseline across all benchmarks, attaining superior fidelity and downstream accuracy of the stolen model while requiring fewer queries to the target model.
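To make the core idea concrete, the following is a loose NumPy sketch of an unsupervised extraction loop of the kind the abstract describes: query the target GNN for node embeddings, perturb the graph to produce an augmented view (a random edge-drop stands in here for the paper's spectral augmentations), encode the view with a surrogate GCN-style layer, and align surrogate and target embeddings with an InfoNCE-style contrastive loss. All function names, the one-layer encoder, and the stubbed target responses are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # symmetric normalization D^{-1/2} (A + I) D^{-1/2}
    A = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def augment(A, drop_prob=0.2):
    # stand-in for a spectral graph augmentation:
    # randomly drop edges (kept symmetric) to form a perturbed view
    keep = rng.random(A.shape) > drop_prob
    keep = np.triu(keep, 1)
    return A * (keep + keep.T)

def surrogate_encode(A, X, W):
    # one GCN-style propagation layer as a minimal surrogate encoder
    return np.tanh(normalize_adj(A) @ X @ W)

def info_nce(Z, T, tau=0.5):
    # contrastive loss: node i's surrogate embedding Z[i] should match
    # the target model's response T[i] against all other nodes
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    sim = (Z @ T.T) / tau
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# toy graph: 8 nodes, 4 input features, 3-dim embeddings
n, f, h = 8, 4, 3
X = rng.standard_normal((n, f))
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T

# embeddings the target GNN would return for the queried nodes
# (stubbed with random values here; a real attack queries the victim model)
target_emb = rng.standard_normal((n, h))

W = 0.1 * rng.standard_normal((f, h))        # surrogate parameters
Z = surrogate_encode(augment(A), X, W)       # encode the augmented view
loss = info_nce(Z, target_emb)               # alignment objective to minimize
print(float(loss))
```

In a full attack, `W` (and a deeper encoder) would be trained by gradient descent on this loss over many query batches; the sketch only shows a single forward pass of the objective.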
Submission history
From: Jan Dubiński
[v1] Mon, 20 May 2024 18:01:15 UTC (1,531 KB)
[v2] Tue, 4 Jun 2024 22:08:09 UTC (798 KB)
[v3] Mon, 26 Aug 2024 17:10:41 UTC (531 KB)
[v4] Tue, 19 Nov 2024 20:37:54 UTC (531 KB)