[Submitted on 15 Nov 2024]
A dataset of questions on decision-theoretic reasoning in Newcomb-like problems
Caspar Oesterheld, Emery Cooper, Miles Kodama, Linh Chi Nguyen, Ethan Perez
Abstract: We introduce a dataset of natural-language questions in the decision theory of so-called Newcomb-like problems. Newcomb-like problems include, for instance, decision problems in which an agent interacts with a similar other agent, and thus has to reason about the fact that the other agent will likely reason in similar ways. Evaluating LLM reasoning about Newcomb-like problems is important because interactions between foundation-model-based agents will often be Newcomb-like. Some ways of reasoning about Newcomb-like problems may allow for greater cooperation between models.
Our dataset contains both capabilities questions (i.e., questions with a unique, uncontroversially correct answer) and attitude questions (i.e., questions about which decision theorists would disagree). We use our dataset to investigate decision-theoretic capabilities and expressed attitudes, and their interplay, in existing models (different models by OpenAI, Anthropic, Meta, GDM, Reka, etc.), as well as in models under simple prompt-based interventions. We find, among other things, that attitudes vary significantly between existing models; that high capabilities are associated with attitudes more favorable toward so-called evidential decision theory; and that attitudes are consistent across different types of questions.
Submission history
From: Caspar Oesterheld
[v1]
Fri, 15 Nov 2024 21:19:04 UTC (3,384 KB)