What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages




Abstract: What can large language models learn? By definition, language models (LMs) are distributions over strings. Therefore, an intuitive way of addressing the above question is to formalize it as a matter of the learnability of classes of distributions over strings. While prior work in this direction focused on assessing theoretical limits, we instead seek to understand empirical learnability. Unlike prior empirical work, we evaluate neural LMs on their home turf, learning probabilistic languages, rather than as classifiers of formal languages. In particular, we investigate the learnability of regular LMs (RLMs) by RNN and Transformer LMs. We empirically test the learnability of RLMs as a function of various complexity parameters of the RLM and the hidden state size of the neural LM. We find that the RLM rank, which corresponds to the size of the linear space spanned by the logits of its conditional distributions, and the expected length of sampled strings are strong and significant predictors of learnability for both RNNs and Transformers. Several other predictors also reach significance, but with differing patterns between RNNs and Transformers.
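To make the two quantities the abstract highlights concrete, here is a minimal sketch (not the authors' code) of a toy regular LM: its next-symbol conditional distributions are parameterized by a logit matrix with one row per state, the "RLM rank" is taken as the matrix rank of those logits, and the expected string length is estimated from samples. The toy transition structure, sizes, and helper names below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

num_states, vocab_size = 5, 4          # toy sizes, chosen arbitrarily
EOS = vocab_size                       # reserve one extra index for end-of-string

# Logits of p(next symbol | state); softmax over each row gives the
# conditional distribution at that state.
logits = rng.normal(size=(num_states, vocab_size + 1))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# "RLM rank": the dimension of the linear space spanned by the logit rows.
rlm_rank = np.linalg.matrix_rank(logits)

# A random deterministic transition function delta(state, symbol) -> state.
delta = rng.integers(num_states, size=(num_states, vocab_size))

def sample_string(max_len=50, start_state=0):
    """Sample one string from the toy regular LM."""
    state, out = start_state, []
    for _ in range(max_len):
        sym = rng.choice(vocab_size + 1, p=probs[state])
        if sym == EOS:
            break
        out.append(sym)
        state = delta[state, sym]
    return out

strings = [sample_string() for _ in range(1000)]
print("RLM rank:", rlm_rank)
print("expected sampled length:", np.mean([len(s) for s in strings]))
```

In the paper's setup, strings sampled this way would serve as training data for an RNN or Transformer LM, and quantities such as the rank and expected length would be used as predictors of how well the neural LM fits the RLM.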

Submission history

From: Nadav Borenstein
[v1] Thu, 6 Jun 2024 17:34:24 UTC (8,395 KB)
[v2] Fri, 7 Jun 2024 08:30:02 UTC (8,366 KB)
[v3] Mon, 10 Jun 2024 21:53:32 UTC (8,366 KB)
[v4] Thu, 21 Nov 2024 14:27:01 UTC (8,366 KB)


