Comb Tensor Networks vs. Matrix Product States: Enhanced Efficiency in High-Dimensional Spaces

arXiv:2412.06857v1 Announce Type: new
Abstract: Modern approaches to generative modeling of continuous data with tensor networks incorporate compression layers to capture the most meaningful features of high-dimensional inputs. These methods, however, rely on traditional Matrix Product State (MPS) architectures. Here, we demonstrate that beyond a certain threshold in data and bond dimensions, a comb-shaped tensor network architecture can yield more efficient contractions than a standard MPS. This finding suggests that for continuous and high-dimensional data distributions, moving from an MPS to a comb tensor network representation can substantially reduce computational overhead while maintaining accuracy.
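To make the MPS baseline concrete, here is a minimal NumPy sketch of the kind of contraction being compared: an inner product between two MPS, contracted site by site at O(N·d·χ³) cost for N sites, physical dimension d, and bond dimension χ. The function names and tensor conventions are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def random_mps(n_sites, phys_dim, bond_dim, rng):
    """Random MPS: a list of rank-3 tensors with shape (left, phys, right)."""
    tensors = []
    for i in range(n_sites):
        dl = 1 if i == 0 else bond_dim
        dr = 1 if i == n_sites - 1 else bond_dim
        tensors.append(rng.standard_normal((dl, phys_dim, dr)))
    return tensors

def mps_inner(a, b):
    """Contract <a|b> sweeping left to right; cost O(N d chi^3)."""
    env = np.ones((1, 1))  # environment carries (bond_a, bond_b)
    for ta, tb in zip(a, b):
        # absorb the tensor from |a>: (la, lb) x (la, d, ra) -> (d, ra, lb)
        tmp = np.einsum('ab,apr->prb', env, ta)
        # absorb the tensor from |b>, contracting the physical index
        env = np.einsum('prb,bps->rs', tmp, tb)
    return env[0, 0]
```

A comb network replaces this single chain with a backbone whose "teeth" hang off each backbone site; the paper's claim is that above a threshold in data and bond dimensions, contracting that shape is cheaper than sweeping the equivalent MPS.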



Source link

By stp2y
