Words in Motion: Extracting Interpretable Control Vectors for Motion Transformers



Authors: Ömer Şahin Taş, Royden Wagner


Abstract: Transformer-based models generate hidden states that are difficult to interpret. In this work, we aim to interpret these hidden states and control them at inference, with a focus on motion forecasting. We use linear probes to measure neural collapse towards interpretable motion features in hidden states. High probing accuracy implies meaningful directions and distances between hidden states of opposing features, which we use to fit interpretable control vectors for activation steering at inference. To optimize our control vectors, we use sparse autoencoders with fully-connected, convolutional, and MLPMixer layers and various activation functions. Notably, we show that enforcing sparsity in hidden states leads to a more linear relationship between control vector temperatures and forecasts. Our approach enables mechanistic interpretability and zero-shot generalization to unseen dataset characteristics with negligible computational overhead. Our implementation is available at this https URL.
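The control-vector idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes hidden states have already been collected for two opposing feature groups (say, slow- vs. fast-moving agents, a hypothetical choice), fits a steering direction as the difference of their means, and applies it at inference scaled by a temperature.

```python
import torch


def fit_control_vector(h_low: torch.Tensor, h_high: torch.Tensor) -> torch.Tensor:
    """Fit a control vector from hidden states of two opposing feature groups.

    h_low, h_high: tensors of shape (num_samples, hidden_dim), e.g. hidden
    states collected for low-speed and high-speed motion. The vector is the
    direction between the group means, normalized to unit length.
    """
    direction = h_high.mean(dim=0) - h_low.mean(dim=0)
    return direction / direction.norm()


def steer(hidden: torch.Tensor, control: torch.Tensor, temperature: float) -> torch.Tensor:
    """Shift hidden states along the control direction at inference.

    The temperature scales how far the hidden states (and hence the
    forecasts) move toward the opposing feature.
    """
    return hidden + temperature * control
```

Per the abstract, the paper additionally optimizes these vectors with sparse autoencoders, which this sketch omits; the sparsity is what makes the temperature-to-forecast relationship more linear.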

Submission history

From: Ömer Şahin Taş [view email]
[v1] Mon, 17 Jun 2024 15:07:55 UTC (938 KB)
[v2] Mon, 14 Oct 2024 22:39:55 UTC (718 KB)
[v3] Thu, 5 Dec 2024 11:47:49 UTC (2,764 KB)


