ExtractGPT: Exploring the Potential of Large Language Models for Product Attribute Value Extraction
Alexander Brinkmann and 2 other authors
Abstract: E-commerce platforms require structured product data in the form of attribute-value pairs to offer features such as faceted product search or attribute-based product comparison. However, vendors often provide unstructured product descriptions, necessitating the extraction of attribute-value pairs from these texts. BERT-based extraction methods require large amounts of task-specific training data and struggle with unseen attribute values. This paper explores using large language models (LLMs) as a more training-data-efficient and robust alternative. We propose prompt templates for zero-shot and few-shot scenarios, comparing textual and JSON-based target schema representations. Our experiments show that GPT-4 achieves the highest average F1-score of 85% when using detailed attribute descriptions and demonstrations. Llama-3-70B performs nearly as well, offering a competitive open-source alternative. GPT-4 surpasses the best PLM baseline by 5% in F1-score. Fine-tuning GPT-3.5 increases performance to the level of GPT-4 but reduces the model's ability to generalize to unseen attribute values.
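To make the approach described in the abstract concrete, the sketch below shows what a zero-shot extraction prompt with a JSON-based target schema might look like. It is an illustrative assumption, not the paper's actual prompt templates: the attribute names, descriptions, and the example product offer are made up, and any chat-completion model could stand in for GPT-4.

```python
# Illustrative sketch only: a zero-shot prompt with a JSON-based target schema,
# in the spirit of the templates described in the abstract. The schema entries
# and the product offer below are hypothetical examples.
import json
from openai import OpenAI  # official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical target schema: attributes to extract plus short descriptions,
# mirroring the "detailed attribute descriptions" setting from the abstract.
target_schema = {
    "Brand": "Manufacturer or brand name of the product",
    "Color": "Primary color of the product",
    "Capacity": "Storage capacity, including the unit (e.g. 64 GB)",
}

offer_title = "SanDisk Ultra 64GB microSDXC UHS-I Card, Black"

system_message = (
    "Extract the attribute values from the product offer. "
    "Return a JSON object with exactly these keys:\n"
    + json.dumps(target_schema, indent=2)
    + "\nUse null if an attribute value is not mentioned."
)

response = client.chat.completions.create(
    model="gpt-4",  # Llama-3-70B or another chat model could be substituted
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": offer_title},
    ],
    temperature=0,
)

print(response.choices[0].message.content)
# Expected shape (actual model output may vary):
# {"Brand": "SanDisk", "Color": "Black", "Capacity": "64 GB"}
```

A few-shot variant would simply prepend demonstration pairs (offer text plus the desired JSON output) as additional user/assistant messages before the target offer.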
Submission history
From: Alexander Brinkmann
[v1] Thu, 19 Oct 2023 07:39:00 UTC (4,439 KB)
[v2] Fri, 26 Jan 2024 09:07:59 UTC (4,139 KB)
[v3] Mon, 2 Sep 2024 12:36:06 UTC (4,142 KB)
[v4] Wed, 18 Sep 2024 12:28:16 UTC (1,510 KB)