Topology of Out-of-Distribution Examples in Deep Neural Networks



arXiv:2501.12522v1 Announce Type: new
Abstract: As deep neural networks (DNNs) become increasingly common, concerns about their robustness do as well. A longstanding problem for deployed DNNs is their behavior in the face of unfamiliar inputs; specifically, these models tend to be overconfident and incorrect when encountering out-of-distribution (OOD) examples. In this work, we present a topological approach to characterizing OOD examples using latent layer embeddings from DNNs. Our goal is to identify topological features, referred to as landmarks, that indicate OOD examples. We conduct extensive experiments on benchmark datasets and a realistic DNN model, revealing a key insight for OOD detection. Well-trained DNNs have been shown to induce a topological simplification on training data for simple models and datasets; we show that this property holds for realistic, large-scale test and training data, but does not hold for OOD examples. More specifically, we find that the average lifetime (or persistence) of OOD examples is statistically longer than that of training or test examples. This indicates that DNNs struggle to induce topological simplification on unfamiliar inputs. Our empirical results provide novel evidence of topological simplification in realistic DNNs and lay the groundwork for topologically-informed OOD detection strategies.
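Below is a minimal, illustrative sketch (not the paper's implementation) of the kind of comparison the abstract describes: computing persistence diagrams on latent-layer embeddings and contrasting the average bar lifetime for in-distribution versus OOD inputs. It assumes the `ripser` package for Vietoris-Rips persistent homology; the embedding arrays and the landmark-based features used in the paper are placeholders here.

```python
# Illustrative sketch only: compare the average persistence lifetime of
# latent-layer embeddings for in-distribution vs. out-of-distribution inputs
# via Vietoris-Rips persistent homology. Assumes `pip install ripser numpy`.
import numpy as np
from ripser import ripser


def average_lifetime(embeddings: np.ndarray, maxdim: int = 1) -> float:
    """Mean finite bar length (death - birth) across H_0..H_maxdim."""
    diagrams = ripser(embeddings, maxdim=maxdim)["dgms"]
    lifetimes = []
    for dgm in diagrams:
        finite = dgm[np.isfinite(dgm[:, 1])]           # drop the infinite H_0 bar
        lifetimes.extend(finite[:, 1] - finite[:, 0])  # lifetime = death - birth
    return float(np.mean(lifetimes)) if lifetimes else 0.0


# Hypothetical usage: `in_dist_z` and `ood_z` stand in for latent-layer
# activations (n_samples x n_features) extracted from the same trained DNN.
in_dist_z = np.random.randn(200, 64)        # placeholder in-distribution embeddings
ood_z = np.random.randn(200, 64) * 3.0      # placeholder OOD embeddings

print("avg lifetime (in-dist):", average_lifetime(in_dist_z))
print("avg lifetime (OOD):    ", average_lifetime(ood_z))
```

Under the abstract's finding, the OOD embeddings would exhibit a statistically longer average lifetime than training or test embeddings, reflecting the lack of topological simplification on unfamiliar inputs.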


