On Affine Homotopy between Language Encoders



By Robin SM Chan and 10 other authors


Abstract: Pre-trained language encoders — functions that represent text as vectors — are an integral component of many NLP tasks. We tackle a natural question in language encoder analysis: What does it mean for two encoders to be similar? We contend that a faithful measure of similarity needs to be *intrinsic*, that is, task-independent, yet still be informative of *extrinsic* similarity — the performance on downstream tasks. It is common to consider two encoders similar if they are *homotopic*, i.e., if they can be aligned through some transformation. In this spirit, we study the properties of *affine* alignment of language encoders and its implications on extrinsic similarity. We find that while affine alignment is fundamentally an asymmetric notion of similarity, it is still informative of extrinsic similarity. We confirm this on datasets of natural language representations. Beyond providing useful bounds on extrinsic similarity, affine intrinsic similarity also allows us to begin uncovering the structure of the space of pre-trained encoders by defining an order over them.
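The affine alignment the abstract refers to can be illustrated with a small least-squares sketch: fit an affine map (W, b) that carries one encoder's representations onto another's, and use the residual as an (asymmetric) alignment score. This is a toy illustration under assumed synthetic data, not the paper's exact procedure; all variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 200, 8, 6  # n texts, two encoders with dims d1 and d2

# Toy setup: encoder 2's representations are an affine image of encoder 1's.
X = rng.normal(size=(n, d1))          # representations from encoder 1
W_true = rng.normal(size=(d1, d2))
Y = X @ W_true + 0.5                  # representations from encoder 2

# Fit (W, b) minimizing ||X W + b - Y||_F via least squares;
# appending a ones column absorbs the bias b into the solve.
X_aug = np.hstack([X, np.ones((n, 1))])
sol, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
W, b = sol[:-1], sol[-1]

# Relative residual: small means encoder 1 affinely aligns onto encoder 2.
resid = np.linalg.norm(X @ W + b - Y) / np.linalg.norm(Y)
print(resid)
```

Note the asymmetry the abstract mentions: mapping X onto Y can fit almost perfectly here, while the reverse direction (Y onto X) generally cannot when d2 < d1, since the lower-dimensional representations have lost information.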

Submission history

From: Robin Chan [view email]
[v1]
Tue, 4 Jun 2024 13:58:28 UTC (7,215 KB)
[v2]
Wed, 18 Dec 2024 08:56:43 UTC (7,625 KB)


