miniCTX: Neural Theorem Proving with (Long-)Contexts
Jiewen Hu and 2 other authors
Abstract: Real-world formal theorem proving often depends on a wealth of context, including definitions, lemmas, comments, file structure, and other information. We introduce miniCTX, which tests a model’s ability to prove formal mathematical theorems that depend on new context that is not seen during training. miniCTX contains theorems sourced from real Lean projects and textbooks, each associated with a context that can span tens of thousands of tokens. Models are tasked with proving a theorem given access to code from the theorem’s repository, which contains context that is needed for the proof. As a baseline for miniCTX, we tested fine-tuning and prompting methods that condition theorem proving on preceding context. Both approaches substantially outperform traditional methods that rely solely on state information. We found that this ability to use context is not captured by previous benchmarks such as miniF2F. Alongside miniCTX, we offer ntp-toolkit for automatically extracting and annotating theorem proving data, making it easy to add new projects into miniCTX to ensure that contexts are not seen during training. miniCTX offers a challenging and realistic evaluation of neural theorem provers.
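To make the notion of context dependence concrete, here is a minimal, hypothetical Lean 4 sketch (the names double and double_eq_two_mul are invented for illustration and are not from the benchmark): the target theorem cannot be proved without the project-local definition and lemma that precede it in the file, which is the situation miniCTX evaluates.

    -- Hypothetical in-file context: a project-local definition and lemma
    -- that a model would not have seen during training.

    /-- Doubling a natural number (project-local definition). -/
    def double (n : Nat) : Nat := n + n

    /-- Project-local lemma relating `double` to multiplication. -/
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double; omega

    -- Target theorem: its proof uses the preceding lemma, so a prover
    -- conditioned only on the goal state (without file context) is
    -- unlikely to find it.
    theorem double_add (m n : Nat) : double (m + n) = 2 * m + 2 * n := by
      rw [double_eq_two_mul]; omega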
Submission history
From: Sean Welleck
[v1] Mon, 5 Aug 2024 20:19:18 UTC (2,197 KB)
[v2] Thu, 3 Oct 2024 14:20:40 UTC (1,980 KB)