Mechanistic Permutability: Match Features Across Layers, by Nikita Balagansky and 2 other authors
Abstract: Understanding how features evolve across the layers of a deep neural network is a fundamental challenge in mechanistic interpretability, particularly because of polysemanticity and feature superposition. While Sparse Autoencoders (SAEs) have been used to extract interpretable features from individual layers, aligning these features across layers has remained an open problem. In this paper, we introduce SAE Match, a novel, data-free method for aligning SAE features across different layers of a neural network. Our approach matches features by minimizing the mean squared error between the folded parameters of SAEs, a technique that incorporates activation thresholds into the encoder and decoder weights to account for differences in feature scale. Through extensive experiments on the Gemma 2 language model, we demonstrate that our method effectively captures feature evolution across layers and improves feature-matching quality. We also show that features persist over several layers and that our approach can approximate hidden states across layers. Our work advances the understanding of feature dynamics in neural networks and provides a new tool for mechanistic interpretability studies.
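The folding-and-matching idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes JumpReLU-style SAEs with one positive threshold per feature, folds each threshold into the corresponding decoder row, and pairs features by greedy nearest-neighbor MSE rather than solving a full permutation assignment.

```python
import numpy as np

def fold_decoder(W_dec, theta):
    """Fold per-feature activation thresholds into the decoder rows.

    Assumption: W_dec has shape (n_features, d_model) and theta is a
    positive per-feature threshold vector, so scaling each row by its
    threshold removes arbitrary feature-scale differences before comparison.
    """
    return W_dec * theta[:, None]

def match_features(W_dec_a, theta_a, W_dec_b, theta_b):
    """For each folded feature in layer A, return the index of the
    closest (mean-squared-error) folded feature in layer B.

    Greedy nearest-neighbor is a simplification; a permutation solver
    (e.g. Hungarian assignment) would enforce a one-to-one matching.
    """
    A = fold_decoder(W_dec_a, theta_a)
    B = fold_decoder(W_dec_b, theta_b)
    # Pairwise MSE between every feature in A and every feature in B.
    mse = ((A[:, None, :] - B[None, :, :]) ** 2).mean(axis=-1)
    return mse.argmin(axis=1)
```

For example, if layer B's decoder is a permuted, rescaled copy of layer A's, folding the thresholds back in lets the matcher recover the inverse permutation exactly.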
Submission history
From: Nikita Balagansky
[v1]
Thu, 10 Oct 2024 06:55:38 UTC (1,733 KB)
[v2]
Tue, 15 Oct 2024 08:47:46 UTC (1,733 KB)