LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs


By Tongshuang Wu and 23 other authors

Abstract: LLMs have shown promise in replicating human-like behavior in crowdsourcing tasks that were previously thought to be exclusive to human abilities. However, current efforts focus mainly on simple atomic tasks. We explore whether LLMs can replicate more complex crowdsourcing pipelines. We find that modern LLMs can simulate some of crowdworkers’ abilities in these “human computation algorithms,” but the level of success is variable and influenced by requesters’ understanding of LLM capabilities, the specific skills required for sub-tasks, and the optimal interaction modality for performing these sub-tasks. We reflect on humans’ and LLMs’ different sensitivities to instructions, stress the importance of enabling human-facing safeguards for LLMs, and discuss the potential of training humans and LLMs with complementary skill sets. Crucially, we show that replicating crowdsourcing pipelines offers a valuable platform to investigate 1) the relative strengths of LLMs on different tasks (by cross-comparing their performance on sub-tasks) and 2) LLMs’ potential in complex tasks, where they can complete part of the tasks while leaving others to humans.
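To make the idea of a "human computation algorithm" concrete, here is a minimal sketch of a crowdsourcing-style pipeline in which each crowdworker slot is filled by an LLM call instead of a person. It loosely follows the classic find–fix–verify pattern from the crowdsourcing literature; the stage names and the `llm_worker` stub are illustrative assumptions, not code from the paper, and `llm_worker` would be replaced with a real model API call in practice.

```python
from typing import Callable, List


def llm_worker(prompt: str) -> str:
    """Stand-in for an LLM API call; here it just echoes a truncated prompt."""
    return f"[response to: {prompt[:30]}]"


def run_pipeline(text: str, worker: Callable[[str], str]) -> List[str]:
    """Decompose a complex task into sub-tasks (the pipeline stages)
    and route each stage's prompt to a worker, the way a crowdsourcing
    platform routes micro-tasks to crowdworkers."""
    stages = [
        ("find", f"Identify problems in this passage: {text}"),
        ("fix", f"Propose fixes for the identified problems in: {text}"),
        ("verify", f"Check whether the proposed fixes are valid for: {text}"),
    ]
    return [f"{name}: {worker(prompt)}" for name, prompt in stages]


results = run_pipeline("Some draft paragraph.", llm_worker)
```

Because each stage is just a prompt routed to a worker function, the same pipeline can mix workers, e.g. sending "find" and "fix" to an LLM while leaving "verify" to humans, which is the hybrid division of labor the abstract points to.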

Submission history

From: Tongshuang Wu [view email]
[v1]
Wed, 19 Jul 2023 17:54:43 UTC (2,043 KB)
[v2]
Thu, 20 Jul 2023 02:29:25 UTC (2,043 KB)
[v3]
Thu, 9 Jan 2025 04:13:41 UTC (3,111 KB)


