# On Multi-Step Theorem Prediction via Non-Parametric Structural Priors

Junbo Zhao * 1 2 3 Ting Zhang * 1 2 3 Can Li 1 2 3 Wei He 4 Jingdong Wang 4 Hua Huang 1 2 3

1 School of Artificial Intelligence, Beijing Normal University, Beijing, China. 2 Beijing Key Laboratory of Artificial Intelligence for Education, Beijing, China. 3 Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing, China. 4 Baidu Inc., Beijing, China. Correspondence to: Hua Huang <huahuang@bnu.edu.cn>.
## Abstract
Multi-step theorem prediction is a central challenge in automated reasoning. Existing neural–symbolic approaches rely heavily on supervised parametric models, which exhibit limited generalization to evolving theorem libraries. In this work, we explore training-free theorem prediction through the lens of in-context learning (ICL). We identify a critical scalability bottleneck, termed Structural Drift: as reasoning depth increases, the performance of vanilla ICL degrades sharply, often collapsing to near zero. We attribute this failure to the LLM's inability to recover latent topological dependencies, leading to unstructured exploration. To address this issue, we propose Theorem Precedence Graphs, which encode temporal dependencies from historical solution traces as directed graphs, and impose explicit topological constraints that effectively prune the search space during inference. Coupled with retrieval-augmented graph construction and a stepwise symbolic executor, our approach enables LLMs to act as structured planners without any gradient-based optimization. Experiments on the FormalGeo7k benchmark show that our method achieves 89.29% accuracy, substantially outperforming ICL baselines and matching state-of-the-art supervised models. These results indicate that explicit structural priors offer a promising direction for scaling LLM-based symbolic reasoning.
Figure 1: Geometry problem solving accuracy (%) across reasoning depths ($L1$–$L6$), defined by the number of invoked theorems $l$. Vanilla ICL rapidly degrades with increasing depth due to structural drift, whereas our method maintains robust performance by leveraging structural priors.

## 1 Introduction
Multi-step theorem prediction is a cornerstone of automated reasoning Chou et al. (1993); Chou and Gao (2001), requiring an agent to navigate complex search spaces by selecting a sequence of valid rules to achieve a goal Irving et al. (2016). Unlike unconstrained text generation, this task is governed by strict symbolic constraints: the applicability of each theorem is explicitly conditioned on the current set of derived facts, and any invalid step immediately breaks the reasoning chain. As the reasoning depth increases, the combinatorial explosion of potential theorem sequences makes finding a valid path exceptionally difficult Silver et al. (2017); Lample et al. (2022).
Geometry Problem Solving (GPS) serves as a canonical and highly demanding testbed for this challenge Trinh et al. (2024); Chervonyi et al. (2025). In a typical neural–symbolic pipeline, a neural model acts as a policy to propose candidate theorems Wu et al. (2024); Peng et al. (2023), while a symbolic solver executes these steps and maintains the evolving state Lu et al. (2021). Recent approaches predominantly rely on supervised parametric models Zou et al. (2024); Zhang et al. (2025); Xia et al. (2024), achieving strong in-distribution accuracy but exhibiting limited flexibility. Their dependence on task-specific training over fixed theorem sets hampers generalization to unseen or expanded libraries without a costly re-training cycle.
In this work, we examine whether effective theorem prediction can be achieved without task-specific training. A natural candidate is the in-context learning (ICL) capability of LLMs Brown et al. (2020); Wei et al. (2022). However, we observe a phenomenon we term structural drift. As reasoning chains lengthen, the performance of vanilla ICL deteriorates sharply, in some cases approaching zero, as illustrated in Figure 1. Our analysis reveals a fundamental challenge: theorem prediction is governed by strong structural dependencies, as many theorems become applicable only after specific geometric properties have been established by earlier steps. Vanilla ICL tends to induce near-uniform action distributions over the theorem space Baldassini et al. (2024), leading to unstructured exploration and compounding errors over long reasoning horizons Dziri et al. (2023); Valmeekam et al. (2023); Liu et al. (2024); Berglund et al. (2023). Effective navigation of the theorem library therefore requires not only semantic understanding, but also an implicit grasp of the latent topological order underlying geometric reasoning Kipf et al. (2019).
To this end, we propose Pri-TPG (Prior-guided multi-step theorem prediction via Theorem Precedence Graphs). The core innovation of Pri-TPG is the introduction of an explicit structural guidance mechanism that augments vanilla prompts with latent topological constraints. Specifically, Pri-TPG develops Theorem Precedence Graphs to distill the latent temporal dependencies inherent in solution traces into a directed graph that explicitly encodes permissible orders of theorem application. Furthermore, we adopt a retrieval-augmented strategy Lewis et al. (2020) to construct these graphs on the fly, conditioned on the input problem. In this way, Pri-TPG provides the LLM with a topological constraint that effectively prunes the combinatorial search space while requiring no gradient-based optimization.
We formulate Pri-TPG as a step-wise symbolic execution framework. In this architecture, the LLM functions as a planner that proposes the next theorem conditioned on the dynamic TPG, while a symbolic solver serves as the executor. Experiments on the FormalGeo7k benchmark Zhang et al. (2024) show that our method substantially outperforms standard in-context learning baselines Wei et al. (2022) and achieves performance comparable to supervised neural approaches Zhang et al. (2025), while remaining entirely training-free at the theorem prediction stage.
Our contributions are summarized as follows:
- Structural Drift: We identify the phenomenon of structural drift, where ICL degrades significantly as the theorem search space grows, highlighting the need for explicit structural priors.
- Structural Prior: We propose Pri-TPG, a non-parametric approach that extracts query-specific structural priors from historical solution traces, providing training-free guidance for theorem prediction via Theorem Precedence Graphs.
- Empirical Performance: Pri-TPG achieves 89.29% accuracy on the FormalGeo7k benchmark, significantly outperforming ICL-based baselines and rivaling state-of-the-art supervised models.
## 2 Related Work
Neural-Symbolic Geometry Problem Solving. The evolution of GPS reflects the broader shift from expert-driven symbolic engines to large-scale neural architectures. Early systems, such as Wu's method Wu (1986) and the Area Method Chou et al. (1994), offered rigorous soundness but lacked the heuristic intuition required to navigate the combinatorial explosion of auxiliary constructions. More recent work has shifted toward data-driven and neural–symbolic hybrids Li et al. (2024); Alvin et al. (2017); Peng et al. (2023); Zou et al. (2024), including Inter-GPS Lu et al. (2021) and E-GPS Wu et al. (2024). In parallel, large-scale formal datasets, including GeoQA Chen et al. (2021) and FormalGeo Zhang et al. (2024), have been introduced to support supervised learning and benchmarking of formal geometric reasoning Zhu et al. (2024); Zhang et al. (2025). However, these approaches are highly parametric, with reasoning knowledge embedded in network weights, making adaptation to new theorem libraries costly. In contrast, our method introduces a non-parametric structural prior, enabling immediate adaptation to new theorem sets without gradient-based optimization.
LLMs as Planners in Formal Domains. The emergence of LLMs has shifted the focus toward agents that use formal languages (e.g., Lean, Isabelle, or domain-specific DSLs) as "tools" for reasoning Yang et al. (2023); Huang et al. (2020); Trinh et al. (2024); Chervonyi et al. (2025). While LLMs excel at generating localized reasoning steps, they struggle with long-horizon consistency and geometric grounding, often proposing theorems that are logically valid but contextually inapplicable Zhang et al. (2025). Current state-of-the-art methods typically rely on Reinforcement Learning (RL) to align LLM planners with symbolic verifiers Ping et al. (2025); Xin et al. (2024). Our work diverges from this by demonstrating that the necessary "planning intuition" can be extracted directly from historical solution traces via dynamic theorem precedence graphs, bypassing the need for RL-based alignment.
Retrieval-Augmented Reasoning. Retrieval-Augmented Generation (RAG) is traditionally used to provide LLMs with factual context to mitigate hallucinations Lewis et al. (2020); Guu et al. (2020). While early methods focused on static document retrieval, recent extensions incorporate tool-mediated outputs to support complex reasoning tasks Paranjape et al. (2023). To better capture interdependent concepts, GraphRAG frameworks Edge et al. (2024); Sarthi et al. (2024); Jimenez Gutierrez et al. (2024); Kabir et al. (2025) have transitioned from flat vector search to relational retrieval via pre-constructed knowledge graphs. However, these methods primarily structure query intent and still inject retrieved information as unstructured prompt context. In contrast, we target the downstream reasoning process by imposing structure on the retrieved content itself. By formalizing retrieved solution traces into a directed precedence graph, we introduce an explicit structural prior that constrains the LLM's action space, marking a shift from content-augmented RAG to structure-augmented reasoning.
## 3 Methodology
In this section, we formalize multi-step geometry theorem prediction as a constrained symbolic planning problem and present a training-free neural-symbolic framework. Our approach introduces a precedence-induced structural prior over theorem usage, instantiated as a Theorem Precedence Graph, and combines it with multimodal retrieval and state-aware symbolic execution to guide LLMs toward efficient and valid problem solving.
### 3.1 Problem Formulation
A geometry problem is defined as a tuple $\mathcal{P}=(\mathcal{T},\mathcal{D},\mathcal{S}_{0},g)$ , where $\mathcal{T}$ denotes the textual description, $\mathcal{D}$ the diagram, $\mathcal{S}_{0}$ the initial symbolic state representing a set of predicates, and $g$ the target goal. Consistent with prior work Zhang et al. (2025), we focus on the theorem prediction task, assuming the initial formal state $\mathcal{S}_{0}$ is provided.
Let $\mathcal{L}$ be a finite theorem library consisting of hundreds of theorems and $\text{Solver}(\cdot)$ a symbolic executor. Multi-step theorem prediction seeks a sequence of theorem applications $\mathcal{A}=\{a_{1},\dots,a_{T}\}$ , $a_{t}\in\mathcal{L}$ , that induces a valid state transition sequence,
$$
\mathcal{S}_{0}\xrightarrow{a_{1}}\mathcal{S}_{1}\xrightarrow{a_{2}}\cdots\xrightarrow{a_{T}}\mathcal{S}_{T},\quad\text{such that }g\subseteq\mathcal{S}_{T}.
$$
Each transition is governed by symbolic preconditions,
$$
\mathcal{S}_{t}=\begin{cases}\text{Solver}(\mathcal{S}_{t-1},a_{t}),&\text{if }\text{Pre}(a_{t})\subseteq\mathcal{S}_{t-1},\\
\perp,&\text{otherwise},\end{cases}
$$
where $\text{Pre}(a_{t})$ denotes the logical prerequisites of theorem $a_{t}$ and $\perp$ represents an invalid transition.
This formulation highlights the core challenge: while the symbolic executor enforces correctness, the space of theorem sequences grows combinatorially with proof length, making prediction from a flat distribution over the theorem library ineffective.
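The transition semantics above can be sketched in a few lines of Python. This is a minimal illustration of the precondition-gated update, not the paper's actual solver: the `Theorem` container, the predicate strings, and the function names are our assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Theorem:
    # Hypothetical container: Pre(a) and the conclusions a adds on success.
    name: str
    preconditions: frozenset
    conclusions: frozenset

INVALID = None  # stands in for the invalid-transition symbol ⊥

def apply_theorem(state: frozenset, theorem: Theorem):
    """Return Solver(S, a) if Pre(a) ⊆ S, else the invalid transition."""
    if theorem.preconditions <= state:
        return state | theorem.conclusions
    return INVALID

def run_sequence(state: frozenset, actions, goal: frozenset):
    """Execute a_1..a_T; the proof succeeds iff every step is valid and g ⊆ S_T."""
    for a in actions:
        state = apply_theorem(state, a)
        if state is INVALID:
            return False, None  # the chain breaks at the first invalid step
    return goal <= state, state
```

For example, a theorem whose prerequisite has not yet been derived yields `INVALID`, which is exactly the failure mode that makes depth-first exploration of the full library so costly.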
### 3.2 Precedence-Induced Structural Prior
Geometric proofs exhibit strong causal dependencies: certain theorems must precede others to establish necessary constructions or relations. To encode this inductive bias, we introduce the Theorem Precedence Graph, a directed graph ${G}=(V,E)$ , where each node $v\in V\subseteq\mathcal{L}$ represents a theorem. A directed edge $(u\rightarrow v)\in E$ exists if the conclusion of theorem $u$ provides a necessary prerequisite for applying theorem $v$ in a problem.
Instead of casting theorem selection as an unstructured classification problem, the TPG imposes a topological prior that converts search into a guided traversal over admissible partial orders. For each solved instance, the TPG is induced from the solution trace by extracting symbolic dependencies among theorem applications. Rather than constraining the model to fixed paths, the TPG serves as a global structural reference, aligning local theorem predictions with the overarching logic of geometric deduction.
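As a rough illustration of how a TPG might be induced from a single solution trace, the sketch below adds an edge from the step that first derived a predicate to any later step that consumes it. The trace serialization (name, preconditions, conclusions) and the helper name are hypothetical, chosen only to mirror the definition above.

```python
def induce_tpg(trace):
    """Induce TPG edges (u -> v) from one solution trace.

    `trace` is a list of (name, preconditions, conclusions) tuples in
    application order. An edge u -> v is added when a conclusion of an
    earlier step u appears among the preconditions of a later step v.
    """
    edges = set()
    produced = {}  # predicate -> name of the step that first derived it
    for name, pre, post in trace:
        for p in pre:
            if p in produced and produced[p] != name:
                edges.add((produced[p], name))
        for q in post:
            produced.setdefault(q, name)  # keep the earliest producer
    return edges
```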
#### Search Space Factorization.
In a straightforward search setting, the action space at each reasoning step spans the entire library $|\mathcal{L}|$ , leading to an intractable combinatorial explosion $|\mathcal{L}|^{H}$ where $H$ is the number of reasoning steps. We introduce a structure-dependent prior derived from the TPG to prune the search space. Crucially, instead of applying this prior as a static global constraint, we achieve precise guidance by transforming this structure into a query-adaptive prior that captures problem-contextual dependencies, and a state-aware prior that provides granular guidance at each prediction step. In the following sections, we formally detail the formulation of the query-adaptive and state-aware priors.
Figure 2: Overview of our Pri-TPG workflow, where we successively refine the structural prior to provide precise guidance.
### 3.3 Query-Adaptive Prior via Multimodal Retrieval
Recognizing that theorem precedence is inherently context-sensitive, we propose to induce a query-specific TPG by leveraging historical reasoning patterns from semantically similar problems. Given a target query $P_{q}$ , we employ a multimodal encoder $\phi(\cdot)$ Günther et al. (2025) to map its textual description, visual diagrams, and symbolic initial states into a joint embedding space. To identify relevant structural priors, we perform a nearest-neighbor search over a problem database $\mathcal{B}=\{P_{i}\}_{i=1}^{N}$ , retrieving the top- $K$ most analogous instances,
$$
\mathcal{N}_{K}(P_{q})=\arg\text{ top-}K_{P_{i}\in\mathcal{B}}\ \text{sim}(\phi(P_{q}),\phi(P_{i})). \tag{1}
$$
The retrieved set provides a curated candidate space,
$$
\mathcal{L}_{q}=\bigcup_{P_{i}\in\mathcal{N}_{K}(P_{q})}\text{Theorems}(P_{i}), \tag{2}
$$
where $\text{Theorems}(P_{i})$ denotes the set of theorems used to solve $P_{i}$ . Further, we synthesize the local structures into a unified query-specific graph,
$$
G_{q}=\bigcup_{P_{i}\in\mathcal{N}_{K}(P_{q})}\text{TPG}(P_{i}), \tag{3}
$$
resulting in $G_{q}=(V,E)$ , where $V\subseteq\mathcal{L}_{q}$ , and each directed edge $(u\rightarrow v)$ is assigned a normalized weight $w(u\rightarrow v)$ according to its frequency,
$$
w(u\rightarrow v)=\frac{1}{K}\sum_{i=1}^{K}\mathbb{I}\big[(u\rightarrow v)\in E_{i}\big]. \tag{4}
$$
Finally, we augment $G_{q}$ with a virtual START node, which encodes the prior probabilities for initiating theorem sequences, thereby providing a grounded entry point for the subsequent state-aware reasoning.
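Equations (2)–(4) and the virtual START node can be combined into one small routine. The sketch below assumes each retrieved neighbour is summarized as (theorems used, TPG edges, opening theorem); that serialization is our assumption, not the paper's data format.

```python
from collections import Counter

START = "<START>"

def build_query_graph(retrieved_traces):
    """Merge the TPGs of the K retrieved neighbours into G_q.

    `retrieved_traces` maps a neighbour id to (theorems, edges, first_theorem).
    Edge weights are the fraction of neighbours whose TPG contains the edge
    (Eq. 4); the virtual START node records how often each theorem opens a proof.
    """
    K = len(retrieved_traces)
    candidate_set, counts = set(), Counter()
    for theorems, edges, first in retrieved_traces.values():
        candidate_set |= set(theorems)   # L_q: union of used theorems (Eq. 2)
        counts.update(set(edges))        # count each edge once per neighbour
        counts[(START, first)] += 1      # prior over opening theorems
    weights = {e: c / K for e, c in counts.items()}
    return candidate_set, weights
```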
### 3.4 State-Aware Prior via Symbolic Validation
Instead of generating an entire theorem sequence in a single pass, an approach susceptible to error accumulation, we formulate theorem selection as an interleaved, iterative decision process. By tightly coupling the LLM with the symbolic solver, the framework exploits real-time execution feedback to construct a state-aware prior that progressively refines the search space. Specifically, at each reasoning step $t$ , we apply two complementary filtering mechanisms,
- Symbolic Pruning of $\mathcal{L}_{q}$ to $\mathcal{L}_{qt}$ : Each candidate theorem $v\in\mathcal{L}_{q}$ is verified by the symbolic solver against the current symbolic state $\mathcal{S}_{t}$ . Only theorems whose prerequisites are type-consistent are retained, thereby eliminating mathematically invalid actions.
- Structural Localization of $G_{q}$ to $G_{qt}$ . Let $a_{t-1}$ be the theorem applied at the previous step (or the START node when $t=1$ ). We only retain the descendants of $a_{t-1}$ / START in the query-specific graph $G_{q}$ .
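A minimal sketch of these two filters, under assumed interfaces: a precondition table `pre_of` standing in for the symbolic solver's check, and the weighted edge set of $G_q$. Structural localization here retains all graph descendants of the previous action; the fallback to the symbolically valid set when localization would empty it is our own assumption, not stated in the paper.

```python
def descendants(weights, root):
    """All theorems reachable from `root` in the weighted query graph."""
    adj = {}
    for (u, v) in weights:
        adj.setdefault(u, []).append(v)
    seen, stack = set(), [root]
    while stack:
        for v in adj.get(stack.pop(), []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def state_aware_filter(candidates, weights, state, prev_action, pre_of):
    """One step of Sec. 3.4: symbolic pruning, then structural localization."""
    valid = {v for v in candidates if pre_of[v] <= state}    # L_q  -> L_qt
    localized = valid & descendants(weights, prev_action)    # G_q  -> G_qt
    return localized or valid  # assumed fallback if localization empties the set
```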
#### Candidate Prioritization.
To further steer the LLMâs decision process, the remaining valid candidates are prioritized using a composite scoring function,
$$
\Psi(v)=\alpha s_{\text{goal}}(v)+\beta s_{\text{graph}}(v)-\gamma s_{\text{hist}}(v), \tag{5}
$$
where (i) $s_{\text{goal}}$ measures the semantic similarity between theorem $v$'s expected conclusion and the target goal $g$ , encouraging backward-oriented reasoning; (ii) $s_{\text{graph}}$ exploits the edge weights in $G_{q}$ to promote the first- and second-order successors of the previous action $a_{t-1}$ , assigning them additional scores proportional to the corresponding edge weights and thereby biasing selection toward locally coherent, empirically supported theorem transitions; and (iii) $s_{\text{hist}}$ penalizes theorems that have already been invoked, serving as an anti-looping mechanism that prevents redundant reasoning trajectories.
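Equation (5) is straightforward to implement. In the sketch below, the coefficient values and the dictionary-based inputs (similarity scores, edge weights from $a_{t-1}$, invocation counts) are illustrative placeholders rather than the paper's settings.

```python
def score(v, goal_sim, edge_weight_from_prev, times_used,
          alpha=1.0, beta=1.0, gamma=0.5):
    """Composite priority Psi(v) from Eq. (5); coefficients are illustrative."""
    return alpha * goal_sim + beta * edge_weight_from_prev - gamma * times_used

def prioritize(cands, goal_sim, w_prev, hist):
    """Rank candidate theorems by descending Psi(v)."""
    return sorted(cands, key=lambda v: -score(v,
                                              goal_sim.get(v, 0.0),
                                              w_prev.get(v, 0.0),
                                              hist.get(v, 0)))
```

Note how the history penalty can demote a theorem that is semantically close to the goal but has already been tried, which is the anti-looping behaviour described above.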
### 3.5 Structural Prior-Augmented LLM Prediction
We formulate each reasoning step as a constrained generation task. The LLM prompt is augmented with the top-ranked candidate set $\mathcal{L}_{qt}$ , the pruned TPG $G_{qt}$ , and the execution history from the previous five steps. This structured context enables the LLM to act as a high-level reasoner, selecting the most promising action $a_{t}$ from a verified and prioritized subset, thereby bridging flexible neural generation and rigorous symbolic deduction. Figure 2 presents the overall pipeline, with additional implementation details and prompt templates provided in the Appendix.
### 3.6 Discussion
Structural Induction over Parametric Heuristics. State-of-the-art solvers such as AlphaGeometry Trinh et al. (2024) rely on parametric strategy functions trained over millions of synthetic proofs. In contrast, our framework adopts a non-parametric approach grounded in structural induction. While still leveraging a corpus of historical solutions, it entirely avoids gradient-based optimization. Our central insight is that theorem applications exhibit strong local structural regularities, which we explicitly capture through a precedence graph formulation.
Space Contraction over Unconstrained Search. The proposed framework fundamentally reconfigures the search dynamics of symbolic solvers. Supplying the LLM with a pruned and ranked candidate set reduces the per-step selection complexity from $O(|\mathcal{L}|)$ to $O(|\mathcal{L}_{qt}|)$ , where $|\mathcal{L}_{qt}|\ll|\mathcal{L}|$ . In our FormalGeo7K experiments, for example, $|\mathcal{L}_{qt}|\approx 30$ versus $|\mathcal{L}|=300$ , so the instantaneous search space is pruned by 90% at each reasoning step. This contraction compounds over longer derivations, effectively mitigating the "search-depth bottleneck" that typically plagues long-chain symbolic reasoning.
Table 1: Comparison between different methods on FormalGeo7K (accuracy, %). All methods are evaluated on the standard test set (1,400 problems) under the same per-problem timeout (600 s) and receive ground-truth parsed formal representations as input. Our proposed method, Pri-TPG (GPT-5.2), achieves state-of-the-art performance, significantly outperforming the strongest LLM-only baseline and even surpassing the best training-based neural-symbolic solver.
## 4 Experiments
### 4.1 Settings
Datasets. We evaluate on three formal geometry benchmarks: FormalGeo7K Zhang et al. (2024), Geometry3K Lu et al. (2021), and GeoQA Chen et al. (2021). Geometry3K contains three thousand diagram-text geometry problems with formal-language parsing annotations and a standard train/validation/test split; it covers diverse geometric primitives (e.g., lines, triangles, circles, quadrilaterals). GeoQA provides nearly five thousand geometry problems with annotated programs that describe explicit solving procedures, serving as a testbed for structured numerical reasoning. FormalGeo7K is a larger-scale formal geometry dataset with natural language descriptions, geometric diagrams, ground-truth formal language annotations, and theorem-sequence supervision, designed to support scalable theorem prediction under a consistent formal system. Our main experiments are on FormalGeo7K using its standard test split of 1,400 problems. Following Zhang et al. (2025), we stratify test problems into six difficulty levels (L1–L6): L1 ( $l\leq 2$ ), L2 ( $3\leq l\leq 4$ ), L3 ( $5\leq l\leq 6$ ), L4 ( $7\leq l\leq 8$ ), L5 ( $9\leq l\leq 10$ ), and L6 ( $l\geq 11$ ), where $l$ is the number of theorems invoked in the problem.
Evaluation. We report accuracy (%) as the proportion of problems solved within a fixed per-problem timeout of 600 seconds. A problem is considered solved if the solverâs final formal conclusion exactly matches the ground-truth target under symbolic verification. To isolate the reasoning and search capabilities, we intentionally exclude the evaluation of upstream parsing and formalization components. Accordingly, all solvers are provided with ground-truth formal inputs, including the textual problem specification and the corresponding diagram representation. This evaluation protocol is applied uniformly across all baselines to ensure a fair and controlled comparison.
Baselines. We compare our method against representative baselines. LLM-only baselines solve geometry problems in an end-to-end fashion using LLMs. These approaches generate answers solely through natural language reasoning, without invoking a symbolic solver, and thus rely entirely on the model's implicit reasoning capability. Neural-symbolic solvers integrate neural models with a symbolic reasoning engine to perform theorem-driven search. We further divide them into training-based and training-free variants. Training-based neural-symbolic solvers fine-tune a pre-trained neural network to learn a policy for theorem selection, which is then combined with graph- or tree-based symbolic search to heuristically explore the solution space. Training-free neural-symbolic solvers leverage pre-trained LLMs for theorem prediction and interface them with a symbolic solver at inference time, avoiding any task-specific training. Our direct baseline, Vanilla ICL, retains the step-wise executor interaction protocol but removes the structural prior, requiring the planner to select candidate theorems from the full library at each step.
### 4.2 Results on FormalGeo7K
Table 1 presents the performance of our proposed framework compared to various baselines on the FormalGeo7K dataset Zhang et al. (2024). Our proposed method, particularly Pri-TPG (GPT-5.2), achieves state-of-the-art performance with an overall accuracy of 89.29%. This significantly outperforms the strongest LLM-only baseline, Claude 4.5 Sonnet (75.79%), and surpasses the best training-based neural-symbolic solver, FGeo-HyperGNet (88.36%). Notably, our framework achieves these results in a training-free manner, demonstrating the superior generalization ability of our state-aware prior guidance.
Comparison with LLM-only Direct Solving. The results highlight a substantial "reasoning gap" in the generative capabilities of LLMs. While GPT-5.2 achieves 73.14% through direct solving, our framework boosts its performance to 89.29% (+16.15%). The improvement is even more pronounced for smaller models. Pri-TPG (GPT-5 mini) reaches 84.42%, which is nearly 20% higher than its direct-solving counterpart (64.79%). This suggests that without external symbolic scaffolding, LLMs are prone to error accumulation, leading to invalid reasoning paths.
Table 2: Comparison between different methods on Geometry3K and GeoQA (accuracy, %). Our method achieves the best reported accuracy on Geometry3K, while remaining highly competitive on GeoQA. '–' indicates results not reported. '†' denotes evaluation on the subset parsed into the required formal language.
Comparison with Training-based Solvers. Our training-free framework demonstrates superior zero-shot generalization and robustness in mid-range complexity tasks. Specifically, Pri-TPG (GPT-5.2) achieves near-perfect performance in levels L1–L3 (e.g., 99.16% at L1), consistently outperforming fine-tuned baselines by leveraging the broad reasoning priors of LLMs constrained by the structural prior. Although a performance gap emerges in the highest difficulty tiers (L4–L6), our approach offers a more scalable and resource-efficient alternative that bypasses the intensive data dependency and computational overhead of task-specific training.
Performance across Difficulty Levels (L1–L6). As problem complexity increases, all methods exhibit a consistent performance degradation. Our approach remains robust through L5, where Pri-TPG (GPT-5.2) attains 66.13%, surpassing the strongest training-based baseline by nearly 10%. At the most challenging level (L6), our performance declines relative to FGeo-HyperGNet, indicating that specialized architectures may retain advantages in extremely long-horizon reasoning. Notably, the collapse of Vanilla ICL at L5–L6 demonstrates that exemplar-based prompting alone is insufficient for complex formal proofs, underscoring the necessity of the TPG's structured guidance to effectively constrain the exponentially expanding search space.
### 4.3 Results on Geometry3K and GeoQA
Table 2 reports comparison with representative training-based neural-symbolic geometry solvers on Geometry3K Lu et al. (2021) and GeoQA Chen et al. (2021). In particular, our method achieves the best reported accuracy on Geometry3K, while remaining highly competitive on GeoQA. This performance gap is notable given that most competing methods rely on dataset-specific training, whereas our framework operates in a training-free manner.
### 4.4 Analysis
We conduct comprehensive ablation studies to systematically analyze the contribution of each component in our framework. To balance clarity and completeness while avoiding wide tables, we report macro-averaged accuracy on Easy (L1–L2), Medium (L3–L4), and Hard (L5–L6) subsets. Macro-averaging gives equal weight to each difficulty level. For cost-efficiency, all ablations are performed using GPT-5 mini within our framework.
Figure 3: Illustrating the effect of symbolic feedback. Single-pass predicts all theorems in one pass with the query-adaptive prior. While it outperforms Vanilla ICL, it substantially underperforms Pri-TPG, demonstrating the importance of iterative symbolic feedback.
Symbolic Feedback is Essential for Logical Error Correction. To evaluate the role of symbolic feedback, we compare our framework with a single-pass variant in which the LLM generates the entire theorem sequence in a single forward pass, while still conditioned on the query-adaptive TPG, and the symbolic executor verifies it only post hoc. This design eliminates intermediate feedback and thus precludes mid-course correction. As shown in Figure 3, removing iterative interaction causes a catastrophic performance drop: overall accuracy falls to 34.3% and collapses to 0 on high-difficulty problems. These results demonstrate that formal reasoning is inherently a closed-loop process, in which real-time symbolic feedback is indispensable for successful theorem proving.
RAG and TPG are Effective Non-parametric Structural Priors. We evaluate the impact of non-parametric structural priors by ablating key components of our framework. The results are shown in Table 3. In Vanilla ICL, the iterative symbolic feedback is retained, but both RAG and TPG are removed, forcing the planner to select from the entire theorem library at each step. Accuracy under this setting drops sharply to 26.29% overall and 0% on Hard problems, indicating that iterative verification alone cannot overcome the combinatorial complexity of a large, weakly structured action space. Introducing RAG, which provides solver-validated candidate theorems at each step, markedly improves performance to 72.64%, highlighting the importance of narrowing the action space using RAG. Incorporating our TPG further guides theorem selection, raising accuracy to 84.42%. This result demonstrates that candidate narrowing alone is insufficient: explicit precedence priors encoded by TPG are essential for directing reasoning trajectories, preventing unproductive detours, and sustaining efficiency as proof length increases.
Increasingly Precise Structural Priors Systematically Improve Multi-Step Reasoning. We analyze how progressively refined structural priors enhance multi-step theorem reasoning, with quantitative results summarized in Table 4. Starting from Vanilla ICL, which lacks explicit prior guidance, we first introduce a global prior that encodes all theorem precedence patterns, providing overall structural regularization and yielding a clear performance gain. We then incorporate a query-adaptive prior via multimodal retrieval over semantically similar problems, which further improves accuracy by conditioning the prior on problem-specific semantics. Finally, we introduce a state-aware prior through tight integration with symbolic validation, enabling step-wise refinement aligned with both the query context and the evolving reasoning state. As shown in Table 4, each successive refinement consistently improves performance, demonstrating that increasingly precise priors effectively constrain the reasoning trajectory, reduce unproductive exploration, and guide the model toward valid proofs. Overall, these results highlight that, in reasoning systems with large action spaces, precise prior guidance is essential to harness the modelâs full potential.
Table 3: Illustrating the effect of non-parametric structural priors on multi-step theorem reasoning. Accuracy drops sharply when both RAG and TPG are removed; introducing RAG and TPG progressively improves performance, highlighting the importance of candidate narrowing and explicit precedence priors.
| Method | Feedback | RAG | TPG | Overall | Hard |
| --- | --- | --- | --- | --- | --- |
| Vanilla ICL | ✓ | ✗ | ✗ | 26.29 | 0.00 |
| w/o TPG | ✓ | ✓ | ✗ | 72.64 | 22.95 |
| Pri-TPG | ✓ | ✓ | ✓ | 84.42 | 40.98 |
Table 4: Illustrating the effect of increasingly precise structural priors, showing that each successive refinement consistently improves performance.
| Method | Overall | Easy | Medium | Hard |
| --- | --- | --- | --- | --- |
| Vanilla ICL | 26.29 | 39.65 | 6.86 | 0.00 |
| Global prior | 29.71 | 40.70 | 14.89 | 4.10 |
| Query-adaptive prior | 58.43 | 72.40 | 41.61 | 18.85 |
| Pri-TPG (state-aware prior) | 84.42 | 96.14 | 73.05 | 40.98 |
Table 5: Illustrating the effect of different LLMs, showing robust performance and highlighting our methodâs plug-and-play nature.
Robust Performance Gains Across Different LLMs. To demonstrate that our framework serves as a general-purpose reasoning scaffold rather than a model-specific optimization, we evaluate it across a diverse set of LLM backbones under identical agent configurations. As shown in Table 5, the resulting performance gains are highly consistent across models. Although absolute accuracy scales with the intrinsic capacity of the base model, as expected for LLM-driven planners, the framework consistently achieves strong success rates. These results demonstrate the robustness and scalability of the proposed approach, highlighting its plug-and-play nature and its ability to seamlessly benefit from future advances in LLM capabilities without retraining.
Table 6: Illustrating the effect of retrieval pool size $K$ , showing that accuracy increases monotonically with larger $K$ .
| $K$ | Total | Easy | Medium | Hard |
| --- | --- | --- | --- | --- |
| 15 | 71.86 | 83.37 | 52.96 | 28.69 |
| 30 | 79.00 | 94.39 | 61.64 | 30.33 |
| 100 | 80.29 | 95.32 | 64.30 | 30.33 |
| 200 | 84.42 | 96.14 | 73.05 | 40.98 |
The Effect of Retrieval Pool Size $K$ . We examine the effect of retrieval pool size by varying the number of retrieved neighbors $K$ while holding all other components fixed. (1) As shown in Table 6, overall accuracy increases monotonically with larger $K$ , with the largest gains observed on the Medium and Hard subsets. This pattern reflects a clear recall effect: harder problems typically require longer and more specialized reasoning chains. When $K$ is small, critical theorems are often omitted from the candidate set, creating a hard bottleneck that prevents proof completion regardless of the LLM's reasoning capability. Increasing $K$ alleviates this issue by improving the likelihood that the necessary theorems are retrieved. (2) We further analyze retrieval effectiveness using coverage-based metrics that are independent of the executor. Specifically, we report two complementary measures as functions of top- $K$ retrieval: (i) problem coverage, defined as the fraction of test problems for which all ground-truth required theorems are contained in the retrieved candidate set,
$$
\mathcal{C}_{prob}=\frac{1}{M}\sum_{i=1}^{M}\mathbb{I}(\mathcal{T}_{i}^{*}\subseteq\mathcal{L}_{P_{i}}), \tag{6}
$$
and (ii) theorem coverage, defined as the fraction of required ground-truth theorems, aggregated over all test problems, that appear in the corresponding retrieved sets,
$$
\mathcal{C}_{th}=\frac{\sum_{i=1}^{M}|\mathcal{T}_{i}^{*}\cap\mathcal{L}_{P_{i}}|}{\sum_{i=1}^{M}|\mathcal{T}_{i}^{*}|}, \tag{7}
$$
where $\mathcal{T}_{i}^{*}$ is the set of ground-truth theorems used to solve $P_{i}$ , and $M$ is the total number of test problems. Figure 4 shows that both coverage metrics increase steadily with $K$ and begin to saturate around $K\approx 100$–$200$, closely mirroring the accuracy trends observed in Table 6.
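Both coverage metrics in Eqs. (6) and (7) can be computed directly from the ground-truth theorem sets and the retrieved candidate pools; a minimal sketch (assuming both are available as Python sets, which is an illustrative data layout rather than the actual implementation):

```python
def coverage_metrics(gt_theorems, retrieved_pools):
    """Compute problem coverage (Eq. 6) and theorem coverage (Eq. 7).

    gt_theorems[i]     : set of ground-truth theorems T_i* for problem P_i
    retrieved_pools[i] : retrieved candidate set L_{P_i} for problem P_i
    """
    M = len(gt_theorems)
    # Eq. (6): fraction of problems whose required theorems are ALL retrieved
    c_prob = sum(t.issubset(l) for t, l in zip(gt_theorems, retrieved_pools)) / M
    # Eq. (7): fraction of required theorems (summed over problems) retrieved
    hits = sum(len(t & l) for t, l in zip(gt_theorems, retrieved_pools))
    total = sum(len(t) for t in gt_theorems)
    return c_prob, hits / total
```

Note that problem coverage is the stricter metric: a single missing theorem fails the whole problem, mirroring the hard bottleneck described above.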
The Effect of Retrieval Query Conditions. We compare different retrieval query conditions in Figure 4. Notably, naive multi-modal fusion (text+image) does not yield additive gains, likely due to modality gaps in representation. Our approach grounds both modalities in a unified formal language, effectively bridging this gap and achieving the highest coverage. However, coverage alone is insufficient: even when all relevant theorems are retrieved, the planner must still correctly select and order them across multiple steps. Our TPG guidance introduces an explicit theorem-precedence prior that complements retrieval, stabilizes long-horizon planning, and proves particularly effective in challenging settings.
Figure 4: Illustrating the problem coverage and theorem coverage according to different $K$ under various query conditions.
## 5 Conclusion
We studied multi-step theorem prediction from a training-free perspective and identified structural drift as a core limitation of vanilla in-context learning. To address this issue, we proposed Pri-TPG, a training-free framework that leverages retrieved Theorem Precedence Graphs, a non-parametric structural prior that captures temporal theorem dependencies and constrains inference. Combined with retrieval-guided graph construction and a stepwise symbolic executor, our framework enables LLMs to function as structured planners without parameter updates. Experiments on the FormalGeo7k benchmark show that our approach substantially outperforms ICL baselines and achieves performance comparable to state-of-the-art supervised methods. More broadly, these results highlight the importance of explicit topological priors for scalable symbolic reasoning, offering a promising paradigm for large, structured domains in general.
Limitations. Despite its effectiveness, the proposed framework has several limitations. First, it depends on LLM reasoning quality and inference efficiency; repeated calls during multi-step planning dominate computation, limiting efficiency and scalability. This constraint is expected to ease as LLMs continue to improve in both reasoning quality and inference speed. Second, performance on the Hard (L5–L6) tiers remains a frontier for improvement. These problems demand long-horizon global consistency, where a single error can invalidate the entire proof. Although the proposed TPG provides structural guidance, it primarily encodes local precedence rather than global reasoning depth. Developing non-parametric mechanisms to incorporate reasoning depth is an important direction for future work.
## Impact Statement
This work studies education-oriented formal geometry solving by combining LLM-based planning with symbolic verification and explicit structural priors in a training-free framework. Its potential benefit is to support tutoring and feedback systems that provide more reliable, checkable solution traces and reduce the cost of maintaining solvers as theorem libraries evolve. We will open-source the code to facilitate reproducibility and follow-up research.
Potential negative impacts include academic misconduct and over-reliance on automated reasoning, which may reduce genuine practice and learning. Since the system cannot guarantee solving every problem, its outputs should not be treated as authoritative. We recommend restricting full-solution outputs in assessment settings and emphasizing step-level guidance over answer delivery in educational use.
## References
- C. Alvin, S. Gulwani, R. Majumdar, and S. Mukhopadhyay (2017) Synthesis of solutions for shaded area geometry problems. In 30th International Florida Artificial Intelligence Research Society Conference, pp. 14–19. Cited by: §2.
- Anthropic (2025) Claude sonnet 4.5. Note: https://www.anthropic.com/news/claude-sonnet-4-5 Cited by: Table 9, §3.6, Table 5.
- S. Bai, Y. Cai, R. Chen, K. Chen, X. Chen, Z. Cheng, L. Deng, W. Ding, C. Gao, C. Ge, W. Ge, Z. Guo, Q. Huang, J. Huang, F. Huang, B. Hui, S. Jiang, Z. Li, M. Li, M. Li, K. Li, Z. Lin, J. Lin, X. Liu, J. Liu, C. Liu, Y. Liu, D. Liu, S. Liu, D. Lu, R. Luo, C. Lv, R. Men, L. Meng, X. Ren, X. Ren, S. Song, Y. Sun, J. Tang, J. Tu, J. Wan, P. Wang, P. Wang, Q. Wang, Y. Wang, T. Xie, Y. Xu, H. Xu, J. Xu, Z. Yang, M. Yang, J. Yang, A. Yang, B. Yu, F. Zhang, H. Zhang, X. Zhang, B. Zheng, H. Zhong, J. Zhou, F. Zhou, J. Zhou, Y. Zhu, and K. Zhu (2025) Qwen3-vl technical report. arXiv preprint arXiv:2511.21631. Cited by: §3.6.
- F. B. Baldassini, M. Shukor, M. Cord, L. Soulier, and B. Piwowarski (2024) What makes multimodal in-context learning work?. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1539–1550. Cited by: §1.
- L. Berglund, M. Tong, M. Kaufmann, M. Balesni, A. C. Stickland, T. Korbak, and O. Evans (2023) The reversal curse: llms trained on "a is b" fail to learn "b is a". arXiv preprint arXiv:2309.12288. Cited by: §1.
- T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. Advances in neural information processing systems 33, pp. 1877–1901. Cited by: §1.
- ByteDance (2025) Doubao seed 1.8. Note: https://research.doubao.com/en/seed1_8 Cited by: §3.6.
- J. Chen, J. Tang, J. Qin, X. Liang, L. Liu, E. Xing, and L. Lin (2021) GeoQA: a geometric question answering benchmark towards multimodal numerical reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, C. Zong, F. Xia, W. Li, and R. Navigli (Eds.), Online, pp. 513–523. External Links: Document Cited by: §2, §3.6, §4.1, §4.3.
- Y. Chervonyi, T. H. Trinh, M. Olšák, X. Yang, H. H. Nguyen, M. Menegali, J. Jung, J. Kim, V. Verma, Q. V. Le, et al. (2025) Gold-medalist performance in solving olympiad geometry with alphageometry2. Journal of Machine Learning Research 26 (241), pp. 1–39. Cited by: §1, §2.
- S. Chou, X. Gao, and J. Zhang (1993) Automated production of traditional proofs for constructive geometry theorems. In [1993] Proceedings Eighth Annual IEEE Symposium on Logic in Computer Science, pp. 48–56. Cited by: §1.
- S. Chou, X.S. Gao, and J.Z. Zhang (1994) Machine proofs in geometry: automated production of readable proofs for geometry theorems. World Scientific, Singapore. External Links: ISBN 978-9810213346 Cited by: §2.
- S. Chou and X. Gao (2001) Automated reasoning in geometry. Handbook of automated reasoning 1, pp. 707–749. Cited by: §1.
- DeepMind (2025) Gemini 3 pro. Note: https://deepmind.google/models/gemini/pro/ Cited by: Table 9, Table 5.
- N. Dziri, X. Lu, M. Sclar, X. L. Li, L. Jiang, B. Y. Lin, S. Welleck, P. West, C. Bhagavatula, R. Le Bras, et al. (2023) Faith and fate: limits of transformers on compositionality. Advances in Neural Information Processing Systems 36, pp. 70293–70332. Cited by: §1.
- D. Edge, H. Trinh, N. Cheng, J. Bradley, A. Chao, A. Mody, S. Truitt, D. Metropolitansky, R. O. Ness, and J. Larson (2024) From local to global: a graph rag approach to query-focused summarization. arXiv preprint arXiv:2404.16130. Cited by: §2.
- M. Günther, S. Sturua, M. K. Akram, I. Mohr, A. Ungureanu, B. Wang, S. Eslami, S. Martens, M. Werk, N. Wang, et al. (2025) Jina-embeddings-v4: universal embeddings for multimodal multilingual retrieval. In Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025), pp. 531–550. Cited by: §3.3.
- K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang (2020) Retrieval augmented language model pre-training. In International conference on machine learning, pp. 3929–3938. Cited by: §2.
- Y. He, J. Zou, X. Zhang, N. Zhu, and T. Leng (2024) Fgeo-tp: a language model-enhanced solver for euclidean geometry problems. Symmetry 16 (4), pp. 421. Cited by: §3.6.
- Z. Huang, Q. Liu, W. Gao, J. Wu, Y. Yin, H. Wang, and E. Chen (2020) Neural mathematical solver with enhanced formula structure. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1729–1732. Cited by: §2.
- G. Irving, C. Szegedy, A. A. Alemi, N. Een, F. Chollet, and J. Urban (2016) DeepMath - deep sequence models for premise selection. In Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.), Vol. 29. Cited by: §1.
- B. Jimenez Gutierrez, Y. Shu, Y. Gu, M. Yasunaga, and Y. Su (2024) Hipporag: neurobiologically inspired long-term memory for large language models. Advances in Neural Information Processing Systems 37, pp. 59532–59569. Cited by: §2.
- I. Kabir, M. A. Reza, and S. Billah (2025) Logic-rag: augmenting large multimodal models with visual-spatial knowledge for road scene understanding. arXiv preprint arXiv:2503.12663. Cited by: §2.
- T. Kipf, Y. Li, H. Dai, V. Zambaldi, A. Sanchez-Gonzalez, E. Grefenstette, P. Kohli, and P. Battaglia (2019) Compile: compositional imitation learning and execution. In International Conference on Machine Learning, pp. 3418–3428. Cited by: §1.
- G. Lample, T. Lacroix, M. Lachaux, A. Rodriguez, A. Hayat, T. Lavril, G. Ebner, and X. Martinet (2022) Hypertree proof search for neural theorem proving. Advances in neural information processing systems 35, pp. 26337–26349. Cited by: §1.
- P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. Yih, T. Rocktäschel, et al. (2020) Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in neural information processing systems 33, pp. 9459–9474. Cited by: §1, §2.
- Z. Li, M. Zhang, F. Yin, and C. Liu (2024) LANS: a layout-aware neural solver for plane geometry problem. In Findings of the Association for Computational Linguistics ACL 2024, pp. 2596–2608. Cited by: §2.
- A. Liu, A. Mei, B. Lin, B. Xue, B. Wang, B. Xu, B. Wu, B. Zhang, C. Lin, C. Dong, et al. (2025) Deepseek-v3.2: pushing the frontier of open large language models. arXiv preprint arXiv:2512.02556. Cited by: Table 9, §3.6, Table 5.
- N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang (2024) Lost in the middle: how language models use long contexts. Transactions of the Association for Computational Linguistics 12, pp. 157–173. Cited by: §1.
- P. Lu, R. Gong, S. Jiang, L. Qiu, S. Huang, X. Liang, and S. Zhu (2021) Inter-gps: interpretable geometry problem solving with formal language and symbolic reasoning. In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), Cited by: §1, §2, §3.6, §4.1, §4.3.
- M. Ning, Q. Wang, K. Huang, and X. Huang (2023) A symbolic characters aware model for solving geometry problems. In Proceedings of the 31st ACM International Conference on Multimedia, pp. 7767–7775. Cited by: Table 2.
- OpenAI (2025a) GPT-5. Note: https://openai.com/gpt-5/ Cited by: Table 9, §3.6, Table 5.
- OpenAI (2025b) Introducing gpt-5.2. Note: https://openai.com/index/introducing-gpt-5-2/ Cited by: Table 9, §3.6, Table 5.
- B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro (2023) Art: automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014. Cited by: §2.
- S. Peng, D. Fu, Y. Liang, L. Gao, and Z. Tang (2023) GeoDRL: a self-learning framework for geometry problem solving using reinforcement learning in deductive reasoning. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, pp. 13468–13480. Cited by: §1, §2, Table 2.
- B. Ping, M. Luo, Z. Dang, C. Wang, and C. Jia (2025) Autogps: automated geometry problem solving via multimodal formalization and deductive reasoning. arXiv preprint arXiv:2505.23381. Cited by: §2.
- P. Sarthi, S. Abdullah, A. Tuli, S. Khanna, A. Goldie, and C. D. Manning (2024) Raptor: recursive abstractive processing for tree-organized retrieval. In The Twelfth International Conference on Learning Representations, Cited by: §2.
- D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. (2017) Mastering the game of go without human knowledge. Nature 550 (7676), pp. 354–359. Cited by: §1.
- T. H. Trinh, Y. Wu, Q. V. Le, H. He, and T. Luong (2024) Solving olympiad geometry without human demonstrations. Nature 625 (7995), pp. 476–482. Cited by: §1, §2, §3.6.
- K. Valmeekam, M. Marquez, S. Sreedharan, and S. Kambhampati (2023) On the planning abilities of large language models-a critical investigation. Advances in Neural Information Processing Systems 36, pp. 75993–76005. Cited by: §1.
- J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. (2022) Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems 35, pp. 24824–24837. Cited by: §1, §1.
- W. Wu (1986) Basic principles of mechanical theorem proving in elementary geometries. Journal of Automated Reasoning 2 (3), pp. 221–252. External Links: Document Cited by: §2.
- W. Wu, L. Zhang, J. Liu, X. Tang, Y. Wang, S. Wang, and Q. Wang (2024) E-gps: explainable geometry problem solving via top-down solver and bottom-up generator. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13828–13837. Cited by: §1, §2.
- R. Xia, M. Li, H. Ye, W. Wu, H. Zhou, J. Yuan, T. Peng, X. Cai, X. Yan, B. Wang, et al. (2024) GeoX: geometric problem solving through unified formalized vision-language pre-training. arXiv preprint arXiv:2412.11863. Cited by: §1.
- T. Xiao, J. Liu, Z. Huang, J. Wu, J. Sha, S. Wang, and E. Chen (2024) Learning to solve geometry problems via simulating human dual-reasoning process. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, pp. 6559–6568. Cited by: §3.6, Table 2.
- H. Xin, D. Guo, Z. Shao, Z. Ren, Q. Zhu, B. Liu, C. Ruan, W. Li, and X. Liang (2024) Deepseek-prover: advancing theorem proving in llms through large-scale synthetic data. arXiv preprint arXiv:2405.14333. Cited by: §2.
- K. Yang, A. Swope, A. Gu, R. Chalamala, P. Song, S. Yu, S. Godil, R. J. Prenger, and A. Anandkumar (2023) Leandojo: theorem proving with retrieval-augmented language models. Advances in Neural Information Processing Systems 36, pp. 21573–21612. Cited by: §2.
- X. Zhang, Y. Li, N. Zhu, C. Qin, Z. Zeng, and T. Leng (2025) FGeo-hypergnet: geometric problem solving integrating formalgeo symbolic system and hypergraph neural network. IJCAI '25. External Links: ISBN 978-1-956792-06-5, Document Cited by: §1, §1, §2, §2, §3.1, §3.6, §4.1, Table 2.
- X. Zhang, N. Zhu, C. Qin, Y. Li, Z. Zeng, and T. Leng (2024) Formal representation and solution of plane geometric problems. In The 4th Workshop on Mathematical Reasoning and AI at NeurIPS'24, Cited by: §1, §2, §3.6, §3.6, §4.1, §4.2.
- N. Zhu, X. Zhang, Q. Huang, F. Zhu, Z. Zeng, and T. Leng (2024) FGeo-parser: autoformalization and solution of plane geometric problems. Symmetry 17 (1), pp. 8. Cited by: §2.
- J. Zou, X. Zhang, Y. He, N. Zhu, and T. Leng (2024) Fgeo-drl: deductive reasoning for geometric problems through deep reinforcement learning. Symmetry 16 (4), pp. 437. Cited by: §1, §2, §3.6.
## Appendix
## Appendix A Implementation Details
### A.1 Retrieval Engineering Pipeline
While the main methodology describes the retrieval-augmented framework, we provide specific details regarding the construction of the candidate theorem pool $\mathcal{L}_{q}$ . The pipeline consists of five stages:
1. Multimodal Embedding. For each problem in the training set and the query set, we construct a representation by concatenating the problem text, formal state, and the diagram image. This composite is encoded using jina-embeddings-v4-base-en to produce a unified dense vector.
1. Similarity Search. We perform a brute-force $k$ -Nearest Neighbor (kNN) search using cosine similarity to identify the top- $K$ most similar problems from the training set.
1. Theorem Extraction. From the retrieved $K$ neighbors, we extract the ground-truth theorem sequences used in their solutions.
1. Candidate Aggregation. We aggregate all unique theorems appearing in the retrieved solutions to form the initial query-specific candidate pool.
1. Ranking and Filtering. The aggregated candidates are ranked by their frequency of occurrence in the top- $K$ neighbors. We retain the top $N$ frequent theorems to form the final candidate set $\mathcal{L}_{q}$ , which is then dynamically filtered by the solver at each step based on precondition satisfaction.
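The core of stages 2-5 can be sketched in a few lines; the following is a simplified illustration assuming precomputed embeddings stored as plain lists (the function name and data layout are placeholders, not the actual implementation, and step-wise precondition filtering is omitted):

```python
from collections import Counter
from math import sqrt

def build_candidate_pool(query_emb, train_embs, train_theorem_seqs, K=200, N=50):
    """Stages 2-5: brute-force kNN search, theorem extraction,
    aggregation, and frequency-based ranking."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
        return dot / (norm + 1e-12)

    # Stage 2: top-K most similar training problems by cosine similarity
    topk = sorted(range(len(train_embs)),
                  key=lambda i: cos(query_emb, train_embs[i]), reverse=True)[:K]
    # Stages 3-4: aggregate all theorems used in the retrieved solutions
    counts = Counter(t for i in topk for t in train_theorem_seqs[i])
    # Stage 5: keep the N most frequent theorems as the candidate pool L_q
    return [t for t, _ in counts.most_common(N)]
```

In the actual system, the resulting pool is further filtered by the solver at each step according to precondition satisfaction, as described in stage 5.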
### A.2 Hyperparameters
Table 7 lists the default hyperparameters used in our experiments on the FormalGeo7k benchmark.
Table 7: Inference hyperparameters.
| Parameter | Value |
| --- | --- |
| Retrieval Module | |
| Embedding Model | jina-embeddings-v4-base-en |
| Retrieval Size ( $K$ ) | 200 |
| Solver Constraints | |
| Max Steps ( $T_{\max}$ ) | 20 |
| Time Limit per Problem ( $\Delta$ ) | 600s |
| Max Recovery Attempts ( $r$ ) | 3 |
| LLM Generation | |
| Temperature | 0.1 |
### A.3 Implementation Details of Candidate Prioritization
The composite scoring function is instantiated with $\alpha=5$ , $\beta=10$ , and $\gamma=5$ . This mechanism does not impose a strict ordering for the LLM to adhere to; rather, it reorders the candidate list to surface more contextually relevant theorems earlier, thereby enabling efficient truncation of the candidate window passed to the LLM while preserving high-quality options.
Goal Alignment ( $s_{\text{goal}}$ ). A theorem receives $\alpha=5$ if its name contains keywords semantically aligned with the problem's goal type, e.g., "angle", "parallel", or "perpendicular" for angular goals; "length", "pythagorean", or "similar" for metric goals; "area" or "ratio" for areal goals.
Graph-Based Successor Promotion ( $s_{\text{graph}}$ ). Immediate successors of $a_{t-1}$ in $G_{q}$ receive a base score of $\beta=10$ , scaled by their normalized edge frequency $w\in[0,1]$ : $\beta\cdot(0.5+w)$ . Second-order successors (reachable via two-hop paths) receive a reduced score of $4$ , modulated by the product of intervening edge weights. This biases the selection toward empirically validated transitions while maintaining a simple linear structure.
History Penalty ( $s_{\text{hist}}$ ). Each prior application of theorem $v$ accumulates a penalty of $\gamma=5$ . In recovery mode (triggered when $a_{t-1}$ yields no new predicates), the failed theorem incurs an additional penalty of $20$ , enforcing exploration of alternative paths.
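A minimal sketch of how the three scoring components could combine (the function signature and the successor/keyword data structures are illustrative assumptions about the interface; the coefficients follow the values stated above):

```python
ALPHA, BETA, GAMMA = 5, 10, 5  # coefficients from Section A.3

def score_candidate(theorem, goal_keywords, successors, two_hop, history, failed):
    """Composite score: goal alignment + graph promotion - history penalty.

    successors : {name: w} first-order successors of a_{t-1} in G_q,
                 with normalized edge frequency w in [0, 1]
    two_hop    : {name: p} second-order successors, with p the product
                 of intervening edge weights
    """
    # Goal alignment: keyword match against the problem's goal type
    s_goal = ALPHA if any(k in theorem for k in goal_keywords) else 0
    # Graph-based successor promotion: beta*(0.5+w), or a reduced score of 4
    # modulated by the two-hop weight product
    if theorem in successors:
        s_graph = BETA * (0.5 + successors[theorem])
    elif theorem in two_hop:
        s_graph = 4 * two_hop[theorem]
    else:
        s_graph = 0
    # History penalty, with an extra penalty of 20 in recovery mode
    s_hist = GAMMA * history.count(theorem) + (20 if theorem in failed else 0)
    return s_goal + s_graph - s_hist
```

The candidates would then be sorted by this score and truncated to the top-$k$ window passed to the LLM, which preserves the LLM's agency over the final selection.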
Importantly, this scoring mechanism serves primarily to facilitate candidate truncation: by ranking theorems, we can limit the input window to the LLM (e.g., top- $k$ candidates) without discarding high-potential actions. As shown in Table 8, candidate prioritization substantially improves performance across all difficulty levels, with the largest gains on harder instances, confirming that better truncation preserves useful actions under a limited context window.
Table 8: Effect of candidate prioritization on multi-step theorem reasoning across difficulty levels.
| Method | Total | L1 | L2 | L3 | L4 | L5 | L6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Similarity rank | 70.79 | 92.07 | 76.59 | 55.26 | 50.32 | 41.94 | 16.67 |
| Pri-TPG (Ours) | 84.42 | 98.54 | 93.09 | 78.20 | 64.33 | 58.06 | 23.33 |
All coefficients are derived from general properties of theorem dependency structures rather than dataset-specific tuning, ensuring that the LLM retains primary agency in final theorem selection based on its broader contextual reasoning.
## Appendix B Additional Material
### B.1 Extended Results
Due to space and formatting constraints, several tables in the main text report only aggregated metrics or a subset of difficulty levels. For completeness and reproducibility, we provide the full breakdown results (Total and L1–L6) in this appendix.
Table 9: Backbone comparison for our method across difficulty levels.
Table 10: Effect of non-parametric structural priors on multi-step theorem reasoning across difficulty levels.
| Method | RAG | TPG | Prioritization | Total | L1 | L2 | L3 | L4 | L5 | L6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vanilla ICL | ✗ | ✗ | ✗ | 26.29 | 52.19 | 23.67 | 7.89 | 5.10 | 0.00 | 0.00 |
| w/o TPG | ✓ | ✗ | ✗ | 72.64 | 95.62 | 76.06 | 73.68 | 50.32 | 33.87 | 11.67 |
| Pri-TPG | ✓ | ✓ | ✓ | 84.42 | 98.54 | 93.09 | 78.20 | 64.33 | 58.06 | 23.33 |
## Appendix C Prompt Interfaces
To ensure reproducibility, we provide the specific prompts used for both the end-to-end baseline (Direct Solve) and our proposed stepwise solver.
### C.1 Baseline: Direct Solve Prompt
For the direct generation baseline, we employ a Chain-of-Thought (CoT) style prompt that instructs the LLM to output a final answer directly.
Direct Solve System Prompt
You are an expert in plane geometry problem solving. You will be given a geometry problem that includes both a natural language description and a formal language description (including geometric images and problem text). Your task is to analyze the given geometric problem, understand the known conditions and the goal, and provide a step-by-step solution with clear reasoning and accurate calculations. Execution Plan: 1. Understand the Problem: Identify known conditions and goal (brief, 1-2 sentences). 2. Plan the Solution: State which theorem or formula to use (brief, 1-2 sentences). 3. Solve Step-by-Step: Show key calculations (maximum 5 steps, be concise). 4. Verify the Solution: Quick check if necessary (1 sentence, optional). 5. Final Answer: You MUST end with "FINAL ANSWER: [value]". IMPORTANT: Keep your entire response under 300 words. Be concise but clear. Answer Format (CRITICAL): FINAL ANSWER: [value only, no explanations]
- Integer: 55
- Fraction: 31/2
- Square root: 2*sqrt(21)
- With pi: 7139*pi/72
- Expression: 96-8*pi

Format Rules: Use * for multiplication, sqrt() for roots, and pi for $\pi$ . NO degree symbols, units, or explanations after the answer.
Direct Solve User Prompt Template
Problem text: {problem_text} Formal description of problem image: {image_cdl} Formal description of problem text: {text_cdl} Formal description of problem goal: {goal_cdl}
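Because the prompt mandates a terminal `FINAL ANSWER:` line in a restricted symbolic grammar, the baseline's output can be extracted with a small parser before symbolic comparison. A sketch (the function name is our own; downstream evaluation of the extracted expression is omitted):

```python
import re

def extract_final_answer(response):
    """Extract the value from the mandated 'FINAL ANSWER: [value]' line.

    The last match wins, so mentions of the pattern earlier in the chain
    of thought do not shadow the terminal answer line.
    """
    matches = re.findall(r"FINAL ANSWER:\s*(.+)", response)
    return matches[-1].strip() if matches else None
```

Returning `None` when no answer line is present lets the harness score such responses as failures rather than crashing.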
### C.2 Pri-TPG: Iterative Solver Prompt
Our method uses a constrained interaction loop where the LLM selects a single theorem call at each step.
Iterative System Prompt
You should propose your most promising theorems to try for the current geometry problem. DO NOT output any object names or argument bindings. Critical: Candidate theorems are ranked by similarity and executability (readiness scores), but this does NOT mean you should pick the first one. Analyze the goal, current state, and choose theorems that best advance toward solving the problem. Rules:
- Only use theorem names from candidate_theorems.
- Do NOT provide geometry objects or arguments; the solver enumerates valid instances automatically.
- Check theorem_dependencies (ready vs. blocked) and follow planning_suggestions.
- Base your decision on problem, state, history, and delta_from_last_step.
- In recovery mode, avoid calls listed in recent_failure.

Return STRICT JSON only: {"calls": ["theorem"]}. No commentary.
Iterative User Prompt Body
[INPUT] problem: … (id, statement, formal CDL, goal) state: … (summary, delta) [CANDIDATES] - top_M: - call: <theorem> | status: <ready/blocked> | score: <score> … [HISTORY] - recent steps: … [FAILURE] (conditional) - recent failed: … [OUTPUT] Exactly one theorem call from candidates, including its branch index.
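The interaction protocol above can be summarized as a simple control loop; the sketch below is a schematic illustration in which the `llm` callable and the `executor` object (with `build_prompt`, `apply`, and `goal_reached` methods) are hypothetical interfaces, not the actual implementation. The strict-JSON contract and the recovery behavior follow the system prompt.

```python
import json

def solve(problem, llm, executor, max_steps=20, max_recovery=3):
    """Stepwise loop: the LLM proposes one theorem; the executor validates it."""
    history, failures = [], []
    for _ in range(max_steps):
        prompt = executor.build_prompt(problem, history, failures)
        reply = llm(prompt)                    # strict JSON: {"calls": ["theorem"]}
        call = json.loads(reply)["calls"][0]
        new_predicates = executor.apply(call)  # enumerates valid instantiations
        if new_predicates:
            history.append(call)
            failures.clear()
            if executor.goal_reached():
                return history                 # proof found: theorem sequence
        else:
            failures.append(call)              # recovery mode: avoid this call
            if len(failures) > max_recovery:
                break
    return None
```

Note that arguments are never requested from the LLM; the symbolic executor enumerates valid instantiations itself, which keeps the LLM's action space at the level of theorem names.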