Where We Have Arrived in Proving the Emergence of Sparse Symbolic Concepts in AI Models

By Qihan Ren and 3 other authors


Abstract: This study aims to prove the emergence of symbolic concepts (or more precisely, sparse primitive inference patterns) in well-trained deep neural networks (DNNs). Specifically, we prove that the emergence follows from three conditions: (i) the high-order derivatives of the network output with respect to the input variables are all zero; (ii) the DNN can be applied to occluded samples, and it yields higher confidence when the input sample is less occluded; and (iii) the confidence of the DNN does not degrade significantly on occluded samples. These conditions are quite common, and we prove that under them the DNN encodes only a relatively small number of sparse interactions between input variables. Moreover, such interactions can be regarded as symbolic primitive inference patterns encoded by the DNN, because we show that the inference scores of the DNN on an exponentially large number of randomly masked samples can always be well mimicked by the numerical effects of just a few interactions.
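To make the decomposition the abstract refers to concrete, the sketch below uses the standard Harsanyi-style AND-interaction, I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), where v(S) denotes the network's output on a sample with only the variables in S unmasked. This is a minimal, hypothetical illustration with a toy output function standing in for a real DNN; the variable names, values, and thresholds are assumptions for illustration, not the paper's code or experiments.

```python
import itertools

N = 5                       # number of input variables in the toy example
variables = tuple(range(N))

# Toy "network output" v(S): the model's scalar confidence on a sample in which
# only the variables in S are kept and all others are masked. Here v is built
# from a few hand-picked ground-truth effects purely for illustration; in the
# paper's setting v(S) would come from evaluating a real DNN on masked inputs.
true_effects = {
    frozenset(): 0.1,
    frozenset({0}): 0.5,
    frozenset({1, 2}): -0.3,
    frozenset({0, 3, 4}): 0.8,
}

def v(subset):
    return sum(e for s, e in true_effects.items() if s <= subset)

def interaction(S):
    """AND-interaction effect I(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T)."""
    S = frozenset(S)
    total = 0.0
    for r in range(len(S) + 1):
        for T in itertools.combinations(S, r):
            total += (-1) ** (len(S) - r) * v(frozenset(T))
    return total

# Interaction effects for all 2^N subsets; keep only the non-negligible ones.
effects = {
    frozenset(S): interaction(S)
    for r in range(N + 1)
    for S in itertools.combinations(variables, r)
}
salient = {S: e for S, e in effects.items() if abs(e) > 1e-6}
print(f"{len(salient)} salient interactions out of {2 ** N} subsets")

# Faithfulness check: on every masked sample S, the output equals the sum of
# the effects of the salient interactions contained in S.
for r in range(N + 1):
    for S in itertools.combinations(variables, r):
        S = frozenset(S)
        assert abs(v(S) - sum(e for T, e in salient.items() if T <= S)) < 1e-9
print("All masked outputs are reproduced by the few salient interactions.")
```

In this toy run only a handful of subsets carry non-zero interaction effects, and those few effects exactly reproduce the output on all 2^N masked samples, which mirrors the sparsity and universal-matching claims in the abstract on a miniature scale.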

Submission history

From: Quanshi Zhang (via Quanshi Zhang as proxy)
[v1] Wed, 3 May 2023 07:32:28 UTC (247 KB)
[v2] Fri, 13 Sep 2024 09:22:38 UTC (2,570 KB)


