Deanthropomorphising NLP: Can a Language Model Be Conscious?

Matthew Shardlow and Piotr Przybyła


Abstract: This work is intended as a voice in the discussion over previous claims that a pretrained large language model (LLM) based on the Transformer architecture can be sentient. Such claims have been made concerning the LaMDA model, and also concerning the current wave of LLM-powered chatbots, such as ChatGPT. This claim, if confirmed, would have serious ramifications for the Natural Language Processing (NLP) community due to the widespread use of similar models. However, here we take the position that such a large language model cannot be sentient, or conscious, and that LaMDA in particular exhibits no advances over other similar models that would qualify it as conscious. We justify this by analysing the Transformer architecture through the lens of the Integrated Information Theory of consciousness. We see the claims of sentience as part of a wider tendency to use anthropomorphic language in NLP reporting. Regardless of the veracity of the claims, we consider this an opportune moment to take stock of progress in language modelling and consider the ethical implications of the task. To make this work helpful for readers outside the NLP community, we also present the necessary background in language modelling.
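The IIT-based argument in the abstract turns on a structural property of the Transformer: inference is a feed-forward, stateless computation, so the same input always yields the same output and no internal state persists between calls. The sketch below is not from the paper; it is a minimal NumPy illustration of that property, using hypothetical names (`attend`, `W_q`, `W_k`, `W_v`) for a single-head causal self-attention pass.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative choice)
# Hypothetical projection matrices; any fixed weights make the same point.
W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def attend(x: np.ndarray) -> np.ndarray:
    """Single-head causal self-attention: a pure function of the input x."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = (q @ k.T) / np.sqrt(d)
    # Causal mask: position i may attend only to positions j <= i.
    scores = scores + np.triu(np.full_like(scores, -np.inf), k=1)
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

x = rng.standard_normal((5, d))  # a 5-token input sequence
y1, y2 = attend(x), attend(x)
# Stateless and feed-forward: identical inputs produce identical outputs,
# since nothing persists inside the model between calls.
assert np.allclose(y1, y2)
```

The closing assertion captures the observation relevant to the IIT analysis: the forward pass carries no recurrent causal structure from one invocation to the next.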

Submission history

From: Matthew Shardlow
[v1] Mon, 21 Nov 2022 14:18:25 UTC (6,758 KB)
[v2] Thu, 18 May 2023 13:53:56 UTC (6,765 KB)
[v3] Thu, 31 Aug 2023 15:43:56 UTC (6,767 KB)
[v4] Wed, 15 Nov 2023 16:21:58 UTC (6,765 KB)
[v5] Mon, 25 Nov 2024 10:55:29 UTC (6,662 KB)


