GLBench: A Comprehensive Benchmark for Graph with Large Language Models, by Yuhan Li and 7 other authors
Abstract: The emergence of large language models (LLMs) has revolutionized the way we interact with graphs, leading to a new paradigm called GraphLLM. Despite the rapid development of GraphLLM methods in recent years, the progress and understanding of this field remain unclear due to the lack of a benchmark with consistent experimental protocols. To bridge this gap, we introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios. GLBench provides a fair and thorough evaluation of different categories of GraphLLM methods, along with traditional baselines such as graph neural networks. Through extensive experiments on a collection of real-world datasets with consistent data processing and splitting strategies, we have uncovered several key findings. First, GraphLLM methods outperform traditional baselines in supervised settings, with LLM-as-enhancers showing the most robust performance. However, using LLMs as predictors is less effective and often leads to uncontrollable output issues. We also observe that no clear scaling laws exist for current GraphLLM methods. In addition, both structures and semantics are crucial for effective zero-shot transfer, and our proposed simple baseline can even outperform several models tailored for zero-shot scenarios. The data and code of the benchmark can be found at this https URL.
Submission history
From: Yuhan Li
[v1] Wed, 10 Jul 2024 08:20:47 UTC (150 KB)
[v2] Thu, 11 Jul 2024 06:06:33 UTC (150 KB)
[v3] Tue, 22 Oct 2024 10:54:15 UTC (162 KB)
[v4] Tue, 29 Oct 2024 08:49:11 UTC (162 KB)