Basic syntax from speech: Spontaneous concatenation in unsupervised deep neural networks
Gašper Beguš, Thomas Lu, Zili Wang
Abstract: Computational models of syntax are predominantly text-based. Here we propose that the most basic first step in the evolution of syntax can be modeled directly from raw speech in a fully unsupervised way. We focus on one of the most ubiquitous and elementary suboperations of syntax: concatenation. We introduce spontaneous concatenation, a phenomenon in which convolutional neural networks (CNNs) trained on acoustic recordings of individual words start generating outputs with two or even three words concatenated, without ever accessing data with multiple words in the input. We replicate this finding in several independently trained models with different hyperparameters and training data. Additionally, networks trained on two words learn to embed words into novel, unobserved word combinations. We also show that the concatenated outputs contain precursors to compositionality. To our knowledge, this is a previously unreported property of CNNs trained in the ciwGAN/fiwGAN setting on raw speech, and it has implications both for our understanding of how these architectures learn and for modeling syntax and its evolution in the brain from raw acoustic inputs. We also propose a potential neural mechanism called disinhibition that outlines a possible neural pathway towards concatenation and compositionality and suggests that our modeling is useful for generating testable predictions for biological and artificial neural processing of speech.
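As a rough illustration of the kind of analysis the abstract implies (not the paper's actual evaluation procedure), one might want to check whether a waveform produced by a trained generator contains one, two, or three word-like stretches. The sketch below is a minimal, assumption-laden heuristic: `count_word_segments` and all of its thresholds (frame size, energy floor, minimum word and gap durations) are hypothetical choices, and the segment count is only a crude proxy based on silence-separated high-energy regions.

```python
import numpy as np

def count_word_segments(waveform, sr, frame_ms=25, hop_ms=10,
                        energy_floor_db=-35.0, min_word_ms=80, min_gap_ms=60):
    """Crude proxy for the number of concatenated word-like segments:
    count silence-separated stretches of high short-time energy."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    if len(waveform) < frame:
        return 0

    # Short-time RMS energy in dB relative to the loudest frame.
    windows = np.lib.stride_tricks.sliding_window_view(waveform, frame)[::hop]
    rms = np.sqrt((windows ** 2).mean(axis=1) + 1e-12)
    db = 20 * np.log10(rms / (rms.max() + 1e-12))
    voiced = db > energy_floor_db

    # Contiguous voiced runs as (start, end) frame indices.
    padded = np.concatenate(([0], voiced.astype(np.int8), [0]))
    edges = np.flatnonzero(np.diff(padded))
    runs = list(zip(edges[::2], edges[1::2]))

    # Merge runs separated by gaps shorter than min_gap_ms,
    # then keep merged runs lasting at least min_word_ms.
    min_gap = min_gap_ms / hop_ms
    min_word = min_word_ms / hop_ms
    merged = []
    for start, end in runs:
        if merged and start - merged[-1][1] < min_gap:
            merged[-1][1] = end
        else:
            merged.append([start, end])
    return sum(1 for s, e in merged if e - s >= min_word)

# Toy check: two 200 ms noise bursts separated by 150 ms of silence
# should register as two word-like segments.
sr = 16000
rng = np.random.default_rng(0)
burst = rng.standard_normal(int(0.2 * sr)) * 0.5
silence = np.zeros(int(0.15 * sr))
print(count_word_segments(np.concatenate([burst, silence, burst]), sr))  # -> 2
```

In practice, such a heuristic would only be a first-pass filter; deciding whether the segments are genuine lexical items drawn from the training vocabulary requires acoustic or perceptual analysis of the kind the paper reports.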
Submission history
From: Gašper Beguš
[v1] Tue, 2 May 2023 17:38:21 UTC (3,735 KB)
[v2] Fri, 12 Jul 2024 10:30:23 UTC (7,128 KB)
[v3] Wed, 20 Nov 2024 18:30:49 UTC (12,081 KB)