What makes a language easy to deep-learn? Deep neural networks and humans similarly benefit from compositional structure

By Lukas Galke and 2 other authors

Abstract: Deep neural networks drive the success of natural language processing. A fundamental property of language is its compositional structure, which allows humans to systematically produce forms for new meanings. For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures. However, this learnability advantage has not yet been shown for deep neural networks, limiting their use as models for human language learning. Here, we directly test how neural networks compare to humans in learning and generalizing different languages that vary in their degree of compositional structure. We evaluate the memorization and generalization capabilities of a large language model and recurrent neural networks, and show that both deep neural networks exhibit a learnability advantage for more structured linguistic input: neural networks exposed to more compositional languages show more systematic generalization, greater agreement between different agents, and greater similarity to human learners.
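The abstract's core notion, that compositional structure lets a learner systematically produce forms for new meanings, can be illustrated with a toy sketch. The meanings, morphemes, and forms below are invented for illustration and are not the languages used in the paper:

```python
import itertools
import random

# Toy meaning space: (shape, color) pairs.
shapes = ["circle", "square", "triangle"]
colors = ["red", "blue", "green"]
meanings = list(itertools.product(shapes, colors))

# Compositional language: each form is built from one morpheme per
# meaning feature, so forms are transparent functions of their parts.
shape_morph = {"circle": "wa", "square": "ki", "triangle": "lu"}
color_morph = {"red": "po", "blue": "te", "green": "na"}
compositional = {m: shape_morph[m[0]] + color_morph[m[1]] for m in meanings}

# Holistic language: every whole meaning gets an arbitrary, unrelated
# form, so nothing learned about one form transfers to another.
rng = random.Random(0)
holistic = {m: "".join(rng.choice("abcdefgh") for _ in range(4)) for m in meanings}

# A learner who has seen "wapo" (red circle) and "kite" (blue square)
# can derive the form for the unseen meaning ("circle", "blue") in the
# compositional language by recombining parts; in the holistic language
# there is no basis for such a guess.
print(compositional[("circle", "blue")])  # "wate": predictable from parts
print(holistic[("circle", "blue")])       # arbitrary string: not predictable
```

This is the intuition behind the learnability advantage the paper tests: in the compositional language, generalization to held-out meanings is possible in principle, while in the holistic language it is not.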

Submission history

From: Lukas Galke
[v1] Thu, 23 Feb 2023 18:57:34 UTC (2,268 KB)
[v2] Fri, 22 Sep 2023 15:02:07 UTC (19,616 KB)
[v3] Thu, 4 Apr 2024 08:26:54 UTC (18,365 KB)
[v4] Thu, 10 Oct 2024 11:43:58 UTC (17,774 KB)


