# FCoReBench: Can Large Language Models Solve Challenging First-Order Combinatorial Reasoning Problems?
## Abstract
Can large language models (LLMs) solve challenging first-order combinatorial reasoning problems such as graph coloring, knapsack, and cryptarithmetic? By first-order, we mean that these problems can be instantiated with a potentially infinite number of problem instances of varying sizes. They are also challenging, being NP-hard and requiring several reasoning steps to reach a solution. While existing work has focused on building hard benchmarks, there is limited work that exploits the first-order nature of the problem structure. To address this gap, we present FCoReBench, a dataset of $40$ such challenging problems, along with scripts to generate problem instances of varying sizes and to automatically verify and generate their solutions. We first observe that LLMs, even when aided by symbolic solvers, perform rather poorly on our dataset, being unable to leverage the underlying structure of these problems; we specifically observe a drop in performance with increasing problem size. In response, we propose a new approach, SymPro-LM, which combines LLMs with both symbolic solvers and program interpreters, along with feedback from a few solved examples, to achieve large performance gains. Our proposed approach is robust to changes in problem size, and has the unique characteristic of not requiring any LLM call at inference time, unlike earlier approaches. As an additional experiment, we also demonstrate SymPro-LM's effectiveness on other logical reasoning benchmarks.
## 1 Introduction
Recent works have shown that large language models (LLMs) can reason like humans (Wei et al., 2022a) and solve diverse natural language reasoning tasks without the need for any fine-tuning (Wei et al., 2022c; Zhou et al., 2023; Zheng et al., 2023). We note that, while impressive, these are simple reasoning tasks, generally requiring only a handful of reasoning steps to reach a solution.
We are motivated by the goal of assessing the reasoning limits of modern-day LLMs. In this paper, we study computationally intensive, first-order combinatorial problems posed in natural language. These problems (e.g., sudoku, knapsack, graph coloring, cryptarithmetic) have long served as important testbeds to assess the intelligence of AI systems (Russell and Norvig, 2010), and strong traditional AI methods have been developed for them. Can LLMs solve these directly? If not, can they solve these with the help of symbolic AI systems like SMT solvers? To answer these questions, we release a dataset named FCoReBench, consisting of $40$ such problems (see Figure 1).
We refer to such problems as fcore (first-order combinatorial reasoning) problems. Fcore problems can be instantiated with any number of instances of varying sizes, e.g., 9 $\times$ 9 and 16 $\times$ 16 sudoku. Most of the problems in FCoReBench are NP-hard, and solving them requires extensive planning and search over a large number of combinations. We provide scripts to generate instances for each problem and to verify/generate their solutions. Across all problems, we generate 1354 test instances of varying sizes for evaluation, and also provide 596 smaller solved instances as a training set. We present a detailed comparison with existing benchmarks in the related work (Section 2).
Not surprisingly, our initial experiments reveal that even the largest LLMs can solve less than a third of these instances. We then turn to recent approaches that augment LLMs with tools for better reasoning. Program-aided Language models (PAL) (Gao et al., 2023) use LLMs to generate programs, offloading execution to a program interpreter. Logic-LM (Pan et al., 2023) and SAT-LM (Ye et al., 2023) use LLMs to convert questions to symbolic representations, and external symbolic solvers perform the actual reasoning. Our experiments show that, by themselves, these approaches are not that strong on FCoReBench. At the same time, the two methods demonstrate complementary strengths: PAL handles first-order structure well, whereas Logic-LM is better at complex reasoning. In response, we propose a new approach named SymPro-LM, which combines the powers of both PAL and symbolic solvers with LLMs to effectively solve fcore problems. In particular, the LLM generates an instance-agnostic program for an fcore problem that converts any problem instance to a symbolic representation. The program passes this representation to a symbolic solver, which returns a solution back to the program. The program then converts the symbolic solution to the desired output representation, as per the natural language instructions. Interestingly, in contrast to LLMs paired directly with symbolic solvers, once this program is generated, inference on new fcore instances (of any size) can be done without any LLM calls.
SymPro-LM outperforms few-shot prompting by $21.61$, PAL by $3.52$, and Logic-LM by $16.83$ percentage points on FCoReBench, with GPT-4-Turbo as the LLM. Given the structured nature of fcore problems, we find that using feedback from small solved examples to correct the generated programs, over just four rounds, yields a further $21.02$ percentage point gain for SymPro-LM, compared to $12.5$ points for PAL.
We further evaluate SymPro-LM on three (non-first-order) logical reasoning benchmarks from the literature (Tafjord et al., 2021; bench authors, 2023; Saparov and He, 2023a). SymPro-LM consistently outperforms existing baselines by large margins on two datasets, and is competitive on the third, underscoring the value of integrating LLMs with symbolic solvers through programs. We perform additional analyses to understand the impact of hyperparameters on SymPro-LM and the errors it makes. We release the dataset and code for further research. We summarize our contributions below:
- We formally define the task of natural language first-order combinatorial reasoning and present FCoReBench, a corresponding benchmark.
- We provide a thorough evaluation of LLM prompting techniques for fcore problems, offering new insights into existing techniques.
- We propose a novel approach, SymPro-LM, demonstrating its effectiveness on fcore problems as well as other datasets, along with an in-depth analysis of its performance.
<details>
<summary>extracted/6211530/Images/puzzle-bench.png Details</summary>
Image: six illustrative FCoReBench problems: Knapsack, Graph Coloring, KenKen, Cryptarithmetic (SEND + MORE = MONEY), Shinro, and Job-Shop Scheduling.
</details>
Figure 1: Examples of problems in FCoReBench (represented as images for illustration).
## 2 Related Work
Neuro-Symbolic AI: Our work falls in the broad category of neuro-symbolic AI (Yu et al., 2023), which builds models leveraging the complementary strengths of neural and symbolic methods. Several prior works build neuro-symbolic models for solving combinatorial reasoning problems (Palm et al., 2018; Wang et al., 2019; Paulus et al., 2021; Nandwani et al., 2022a, b). These develop specialized problem-specific modules (that are typically not size-invariant), which are trained over large training datasets. In contrast, SymPro-LM uses LLMs, bypasses problem-specific architectures, generalizes to problems of varying sizes, and is trained with very few solved instances.
Reasoning with Language Models: The earlier paradigm for reasoning was fine-tuning LLMs (Clark et al., 2021; Tafjord et al., 2021; Yang et al., 2022), but as LLMs scaled, they were found to reason well when provided with in-context examples, without any fine-tuning (Brown et al., 2020; Wei et al., 2022b). Since then, many prompting approaches have been developed that leverage in-context learning. Prominent ones include Chain of Thought (CoT) prompting (Wei et al., 2022c; Kojima et al., 2022), Least-to-Most prompting (Zhou et al., 2023), Progressive-Hint prompting (Zheng et al., 2023) and Tree-of-Thoughts (ToT) prompting (Yao et al., 2023).
Tool Augmented Language Models: Augmenting LLMs with external tools has emerged as a way to solve complex reasoning problems (Schick et al., 2023; Paranjape et al., 2023). The idea is to offload a part of the task to specialized external tools, thereby reducing error rates. Program-aided Language models (Gao et al., 2023) invoke a Python interpreter over a program generated by an LLM. Logic-LM (Pan et al., 2023) and SAT-LM (Ye et al., 2023) integrate reasoning of symbolic solvers with LLMs, which convert the natural language problem into a symbolic representation. SymPro-LM falls in this category and combines LLMs with both program interpreters and symbolic solvers.
Logical Reasoning Benchmarks: There are several reasoning benchmarks in the literature, such as LogiQA (Liu et al., 2020) for mixed reasoning, GSM8K (Cobbe et al., 2021) for arithmetic reasoning, FOLIO (Han et al., 2022) for first-order logic, PrOntoQA (Saparov and He, 2023b) and ProofWriter (Tafjord et al., 2021) for deductive reasoning, and AR-LSAT (Zhong et al., 2021) for analytical reasoning. These datasets are not first-order, i.e., each problem is accompanied by a single instance (even when the rules are potentially described in first-order logic). We propose FCoReBench, which substantially extends the complexity of these benchmarks by investigating computationally hard, first-order combinatorial reasoning problems. Among recent works, NLGraph (Wang et al., 2023) studies structured reasoning problems but is limited to graph-based problems and has only 8 problems in its dataset. NPHardEval (Fan et al., 2023) studies problems through the lens of computational complexity, but works with a relatively small set of 10 problems. In contrast, we study the broader area of first-order reasoning, investigate the associated complexities of structured reasoning, and have a much larger problem set (40 problems). Notably, all the NP-hard problems in these two datasets are also present in our benchmark.
## 3 Problem Setup: Natural Language First-order Combinatorial Reasoning
<details>
<summary>extracted/6211530/Images/puzzle-bench-example-sudoku.png Details</summary>
Image: the sudoku problem card, showing the natural language rules $NL(\mathcal{C})$, the input format $NL(\mathcal{X})$ (an $n\times n$ grid with 0s for empty cells), the output format $NL(\mathcal{Y})$ (the fully filled $n\times n$ grid), and two solved input-output examples $\mathcal{D}_{\mathcal{P}}$.
</details>
Figure 2: FCoReBench Example: Filling an $n\times n$ Sudoku board, along with its rules, input-output format, and a couple of sample input-output pairs.
A first-order combinatorial reasoning problem $\mathcal{P}$ has three components: a space of legal input instances ( $\mathcal{X}$ ), a space of legal outputs ( $\mathcal{Y}$ ), and a set of constraints ( $\mathcal{C}$ ) that every input-output pair must satisfy. E.g., for sudoku, $\mathcal{X}$ is the space of partially-filled grids with $n\times n$ cells, $\mathcal{Y}$ is the space of fully-filled grids of the same size, and $\mathcal{C}$ comprises row, column, and box alldiff constraints, with input cell persistence. To communicate a structured problem instance (or its output) to an NLP system, it must be serialized in text. We overload $\mathcal{X}$ and $\mathcal{Y}$ to also denote the formats for these serialized input and output instances. Two instances for sudoku are shown in Figure 2 (grey box). We are also provided (serialized) training data of input-output instance pairs, $\mathcal{D}_{\mathcal{P}}$ $=\{(x^{(i)},y^{(i)})\}_{i=1}^{N}$ , where $x^{(i)}\in\mathcal{X},y^{(i)}\in\mathcal{Y}$ , such that $(x^{(i)},y^{(i)})$ honors all constraints in $\mathcal{C}$ .
Further, we verbalize all three components – input-output formats and constraints – in natural language instructions. We denote these instructions by $NL(\mathcal{X})$ , $NL(\mathcal{Y})$ , and $NL(\mathcal{C})$ , respectively. Figure 2 illustrates these for sudoku. With this notation, we summarize our setup as follows. For an fcore problem $\mathcal{P}=\langle\mathcal{X},\mathcal{Y},\mathcal{C}\rangle$ , we are provided $NL(\mathcal{X})$ , $NL(\mathcal{Y})$ , $NL(\mathcal{C})$ and training data $\mathcal{D}_{\mathcal{P}}$ , and our goal is to learn a function $\mathcal{F}$ , which maps any (serialized) $x\in\mathcal{X}$ to its corresponding (serialized) solution $y\in\mathcal{Y}$ such that $(x,y)$ honors all constraints in $\mathcal{C}$ .
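As a concrete illustration of this setup (ours, for exposition; not one of the dataset scripts), the sketch below parses the serialization of Figure 2 and checks a candidate pair $(x, y)$ against the sudoku constraints $\mathcal{C}$, including input cell persistence:

```python
import math

def parse_grid(text: str) -> list[list[int]]:
    """Parse a serialized grid: one line per row, space-separated values."""
    return [[int(v) for v in line.split()] for line in text.strip().splitlines()]

def verify_sudoku(x: list[list[int]], y: list[list[int]]) -> bool:
    """Check that the pair (x, y) honors the sudoku constraints C."""
    n = len(y)
    b = math.isqrt(n)                       # sub-grid size; n is a perfect square
    target = set(range(1, n + 1))
    # Input cell persistence: pre-filled cells of x must be unchanged in y.
    if any(x[i][j] not in (0, y[i][j]) for i in range(n) for j in range(n)):
        return False
    rows_ok = all(set(row) == target for row in y)
    cols_ok = all({y[i][j] for i in range(n)} == target for j in range(n))
    boxes_ok = all(
        {y[r + i][c + j] for i in range(b) for j in range(b)} == target
        for r in range(0, n, b) for c in range(0, n, b))
    return rows_ok and cols_ok and boxes_ok
```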
## 4 FCoReBench : Dataset Construction
First, we shortlisted computationally challenging first-order problems from various sources. We manually scanned Wikipedia's list of NP-complete problems (https://en.wikipedia.org/wiki/List_of_NP-complete_problems) for NP-hard algorithmic problems and logical puzzles. We also took challenging logical puzzles from publishing houses such as Nikoli (https://www.nikoli.co.jp/en/puzzles/), and real-world problems from the operations research community and the industrial track of the annual SAT competition (https://satcompetition.github.io/). From this set, we selected problems (1) that can be described in natural language (we removed problems where some rules are inherently visual), and (2) for which the training and test datasets can be created with reasonable programming effort. This led to $40$ fcore problems (see Table 7 for a complete list), of which 30 are known to be NP-hard and the rest have unknown complexity. 10 problems are graph-based (e.g., graph coloring), 18 are grid-based (e.g., sudoku), 5 are set-based (e.g., knapsack), 5 are real-world settings (e.g., car sequencing), and 2 are miscellaneous (e.g., cryptarithmetic).
Two authors of the paper, both with formal backgrounds in automated reasoning and logic, then created the natural language instructions and the input-output format for each problem. First, for each problem, one author created the input-output formats and the corresponding instructions ($NL(\mathcal{X})$, $NL(\mathcal{Y})$). Second, the same author created the natural language rules ($NL(\mathcal{C})$) by referring to the respective sources and re-writing the rules. The other author verified these rules, ensuring that they were correct, i.e., that the meaning of the problem did not change and the rules were unambiguous. The rules were re-written so that an LLM cannot easily invoke its prior knowledge about the same problem; for the same reason, the name of the problem was hidden.
In case of errors in the natural language descriptions, feedback was given to the author who wrote them. Typically no corrections were required, except for 3 problems whose descriptions were corrected within a single round of feedback. A third, independent annotator was then employed to read the natural language descriptions and solve the input instances in the training set. Their solutions were verified to confirm that the rules were correctly written and comprehensible to a human. The annotator solved all instances correctly, indicating that the descriptions were sound. The guidelines for re-writing the rules were to use crisp, concise English, avoid technical jargon, and avoid ambiguity. The rules were intended to be understood by any person with reasonable comprehension of the language and did not contain any formal specifications or mathematical formulas. Appendices A.2 and A.3 have detailed examples of rules and formats, respectively.
Next, we created train/test data for each problem. These instances are generated programmatically by scripts written by the authors. For each problem, one author also wrote a solver and a verification script, and the other verified these scripts, suggesting corrections if needed. In all but one case, the second author found the scripts to be correct. The scripts (after correction) were also verified through manually curated test cases, and were then used to ensure the feasibility of instances.
Since a single problem instance can have multiple correct solutions (Nandwani et al., 2021), all solutions are provided for each training input. The instances in the test set are typically larger than those in training. Because of their size, test instances may have too many solutions, and computing all of them can be expensive; instead, the verification script can be used, which reports the correctness of a candidate solution for any test instance. The scripts are part of the dataset and can generate any number of instances of varying complexity for each problem, making the dataset easy to extend. Keeping the prohibitive experimentation costs of LLMs in mind, we generate around 15 training instances and around 34 test instances on average per problem. In total, FCoReBench has 596 training instances and 1354 test instances.
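To give a flavour of these scripts, here is a minimal sketch in their spirit (not taken from the dataset; the function names and the edge-probability parameterization are illustrative) of an instance generator and a verification script for graph coloring:

```python
import random

def generate_instance(n: int, p: float, seed: int = 0) -> list[tuple[int, int]]:
    """Sample a random graph instance: n nodes, each edge kept with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

def verify(n: int, edges: list[tuple[int, int]], k: int, coloring: list[int]) -> bool:
    """Verification script: the candidate solution uses at most k colors and
    assigns different colors to the two endpoints of every edge."""
    return (len(coloring) == n
            and all(0 <= c < k for c in coloring)
            and all(coloring[u] != coloring[v] for u, v in edges))
```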
## 5 SymPro-LM
Preliminaries: In the following, we assume access to an LLM $\mathcal{L}$, which can work with various prompting strategies; a program interpreter $\mathcal{I}$, which can execute programs written in its language; and a symbolic solver $\mathcal{S}$, which takes as input a pair of the form $(E,V)$, where $E$ is a set of equations (constraints) specified in the language of $\mathcal{S}$ and $V$ is the set of (free) variables in $E$, and produces an assignment $\mathcal{A}$ to the variables in $V$ that satisfies the equations in $E$. Given an fcore problem $\mathcal{P}=\langle\mathcal{X},\mathcal{Y},\mathcal{C}\rangle$ described by $NL(\mathcal{C})$, $NL(\mathcal{X})$, $NL(\mathcal{Y})$ and $\mathcal{D_{\mathcal{P}}}$, we would like to make effective use of $\mathcal{L}$, $\mathcal{I}$ and $\mathcal{S}$ to learn the mapping $\mathcal{F}$, which takes any input $x\in\mathcal{X}$ and maps it to $y\in\mathcal{Y}$, such that $(x,y)$ honors the constraints in $\mathcal{C}$.
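For concreteness, the following minimal sketch shows this $(E,V)$-to-assignment interface with Z3, the SMT solver we use as $\mathcal{S}$ in our experiments (Section 6), through its Python bindings:

```python
from z3 import Ints, Solver, sat

# V: the free variables; E: constraints written in the solver's language.
a, b = Ints("a b")
E = [a + b == 10, a - b == 4]

S = Solver()
S.add(E)
if S.check() == sat:
    A = S.model()              # satisfying assignment A: a = 7, b = 3
    print(A[a], A[b])
```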
Background: We consider the following possible representations for $\mathcal{F}$ which cover existing work.
- Exclusively LLM: Many prompting strategies (Wei et al., 2022c; Zhou et al., 2023) make exclusive use of $\mathcal{L}$ to represent $\mathcal{F}$ . $\mathcal{L}$ is supplied with a prompt consisting of the description of $\mathcal{P}$ via $NL(\mathcal{C})$ , $NL(\mathcal{X})$ , $NL(\mathcal{Y})$ , the input $x$ , along with specific instructions on how to solve the problem and asked to output $y$ directly. This puts the entire burden of discovering $\mathcal{F}$ on the LLM.
- LLM $\rightarrow$ Program: In strategies such as PAL (Gao et al., 2023), the LLM is prompted to output a program, which then is interpreted by $\mathcal{I}$ on the input $x$ , to produce the output $y$ .
- LLM + Solver: Strategies such as Logic-LM (Pan et al., 2023) and SAT-LM (Ye et al., 2023) make use of both the LLM $\mathcal{L}$ and the symbolic solver $\mathcal{S}$. The primary role of $\mathcal{L}$ is to act as an interface, translating the problem description for $\mathcal{P}$ and the input $x$ into the language of the solver $\mathcal{S}$. The primary burden of solving the problem is on $\mathcal{S}$, whose output is then parsed as $y$.
### 5.1 Our Approach
<details>
<summary>extracted/6211530/Images/puzzle-lm.png Details</summary>
Image: the SymPro-LM architecture. The LLM receives $NL(\mathcal{C})$, $NL(\mathcal{X})$, $NL(\mathcal{Y})$ and emits a Python program; the program converts an input $x$ to $(E_x, V_x)$ for the symbolic solver, the solver returns an assignment $A_x$, and the program produces the prediction $\hat{y}$. A feedback agent compares $\hat{y}$ against the gold output $y$ and sends feedback back to the LLM (dotted pathway).
</details>
Figure 3: SymPro-LM: Solid lines indicate the main flow and dotted lines indicate feedback pathways.
Our approach can be seen as a combination of the LLM $\rightarrow$ Program and LLM + Solver strategies described above. While the primary role of the LLM remains interfacing between the natural language description of the problem $\mathcal{P}$ and the solver's language, the task of actually solving the problem is delegated to the solver $\mathcal{S}$, as in the LLM + Solver strategy. But unlike those methods, where the LLM directly calls the solver, we prompt it to write a program, $\psi$, which can work with any given input $x\in\mathcal{X}$ of any size. This removes LLM calls at inference time, resulting in a "lifted" implementation. The program $\psi$ internally represents the specification of the problem. It takes as argument an input $x$ and converts it, according to the inferred specification of the problem, to a set of equations $(E_{x},V_{x})$ in the language of the solver $\mathcal{S}$. The solver $\mathcal{S}$ then outputs an assignment $A_{x}$ in its own representation, which is passed back to the program $\psi$; $\psi$ converts it to the desired output format specified by $\mathcal{Y}$ and produces output $\hat{y}$. Broadly, our pipeline consists of three components, which we describe next in detail.
- Prompting LLMs: The LLM is prompted with $NL(\mathcal{C})$, $NL(\mathcal{X})$, $NL(\mathcal{Y})$ (see Figure 2) to generate an input-agnostic program $\psi$. The LLM is instructed to write $\psi$ so that it reads an input from a file, converts it to a symbolic representation according to the inferred specification of the problem, passes the symbolic representation to the solver, and then uses the solver's solution to generate the output in the desired format. The LLM is also prompted with information about the solver and its underlying language. Optionally, we can also provide the LLM with a subset of $\mathcal{D}_{\mathcal{P}}$ (see Appendix B.3 for exact prompts).
- Symbolic Solver: $\psi$ can convert any input instance $x$ to $(E_{x},V_{x})$, which it passes to the symbolic solver. The solver is agnostic to how the representation $(E_{x},V_{x})$ was created; it tries to find an assignment $A_{x}$ to $V_{x}$ that satisfies $E_{x}$, which is then passed back to $\psi$ (see Appendix E.1 for sample generated programs).
- Generating the Final Output: $\psi$ then uses $A_{x}$ to generate the predicted output $\hat{y}$. This step is needed because the symbolic representation was created by $\psi$, so it must recover the desired output representation from $A_{x}$, which might not be straightforward for all problem representations. (A sketch of one such program is shown below.)
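To illustrate the shape of such a program, here is a minimal hand-written example of what a $\psi$ can look like for sudoku, using Z3's Python bindings; the actual programs in our experiments are generated by the LLM, and the file-based I/O convention mirrors our prompting setup:

```python
import math
import sys
from z3 import Int, Solver, Distinct, sat

def solve(in_path: str, out_path: str) -> None:
    # Read the serialized input x (0 denotes an empty cell).
    with open(in_path) as f:
        x = [[int(v) for v in line.split()] for line in f if line.strip()]
    n, b = len(x), math.isqrt(len(x))

    # Convert x to (E_x, V_x) in the language of the solver S.
    V = [[Int(f"c_{i}_{j}") for j in range(n)] for i in range(n)]
    E = Solver()
    for i in range(n):
        for j in range(n):
            E.add(1 <= V[i][j], V[i][j] <= n)
            if x[i][j] != 0:                                # input cell persistence
                E.add(V[i][j] == x[i][j])
        E.add(Distinct(V[i]))                               # row alldiff
        E.add(Distinct([V[r][i] for r in range(n)]))        # column alldiff
    for r in range(0, n, b):                                # box alldiff
        for c in range(0, n, b):
            E.add(Distinct([V[r + i][c + j] for i in range(b) for j in range(b)]))

    # The solver returns an assignment A_x, which psi serializes as y-hat.
    if E.check() == sat:
        A = E.model()
        with open(out_path, "w") as f:
            for row in V:
                f.write(" ".join(str(A[cell]) for cell in row) + "\n")

if __name__ == "__main__":
    solve(sys.argv[1], sys.argv[2])
```

Note that the loops over rows, columns, and boxes are what make $\psi$ instance-agnostic: the same program scales from 4 $\times$ 4 to 25 $\times$ 25 boards without any further LLM involvement.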
Refinement via Solved Examples: We use $\mathcal{D}_{\mathcal{P}}$ to verify and (if needed) correct $\psi$. For each solved input-output pair $(x,y)\in\mathcal{D}_{\mathcal{P}}$, we run $\psi$ on $x$ to generate the prediction $\hat{y}$, during which one of the following can happen: 1) errors during execution of $\psi$; 2) the solver is unable to find $A_{x}$ within a certain time limit; 3) $\hat{y}\neq y$, i.e., the predicted output is incorrect; 4) $\hat{y}=y$, i.e., the predicted output is correct. If one of the first three cases occurs for any training input, we provide automated feedback to the LLM through prompts to improve and generate a new program. This process is repeated until all training examples are solved correctly or a maximum number of feedback rounds is reached. The feedback is simple in nature and includes the nature of the error, the actual error message from the interpreter/symbolic solver, and the input instance on which the error occurred. For example, when the output does not match the gold output, we prompt the LLM with the solved example it got wrong and the expected solution. Appendix B contains details of feedback prompts.
A single run of SymPro-LM (along with feedback) may fail to generate a correct solution for all training examples, so we restart SymPro-LM multiple times for a given problem. Given the probabilistic nature of LLMs, a new program is generated at each restart, and the feedback process begins anew. As the final program, we pick the best program generated across these runs, as judged by accuracy on the training set. Figure 3 describes our entire approach diagrammatically.
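Schematically, the refinement-and-restart loop can be sketched as follows; `generate_program`, `revise_program`, `run_with_timeout`, and `train_accuracy` are hypothetical helpers standing in for our prompting, execution, and scoring machinery:

```python
def sympro_lm(llm, train_pairs, max_rounds=4, max_runs=5):
    """Sketch of refinement via solved examples with multiple restarts."""
    best_prog, best_acc = None, -1.0
    for _ in range(max_runs):                   # fresh restart (temperature > 0)
        prog = llm.generate_program()
        for _ in range(max_rounds):
            feedback = []
            for x, solutions in train_pairs:    # all gold solutions are provided
                try:
                    y_hat = run_with_timeout(prog, x)
                except Exception as err:        # cases 1 and 2: crash or timeout
                    feedback.append((x, f"error: {err}"))
                    continue
                if y_hat not in solutions:      # case 3: incorrect output
                    feedback.append((x, f"got {y_hat}, expected one of {solutions}"))
            if not feedback:                    # case 4 for every example: done
                return prog
            prog = llm.revise_program(prog, feedback)
        acc = train_accuracy(prog, train_pairs)
        if acc > best_acc:                      # keep the best program across runs
            best_prog, best_acc = prog, acc
    return best_prog
```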
SymPro-LM for Non-First-Order Reasoning Datasets: For datasets that are not first-order in nature, no single program exists that can solve all problems, so we prompt the LLM to generate a new program for each test set instance. We therefore cannot use feedback from solved examples and only use feedback to correct syntactic mistakes (if any). The prompt contains an instruction to write a program which will use a symbolic solver to solve the problem, together with details about the solver to be used and in-context examples demonstrating sample programs for other logical reasoning questions. The LLM must parse the logical reasoning question and extract the corresponding facts/rules, which it passes to the solver (via the program). Once the solver returns an answer, it is passed back to the program to generate the final output.
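As a hypothetical illustration of such a per-instance program, the facts and rules of a toy deduction question (in the style of ProofWriter/PrOntoQA, not taken from either dataset) can be encoded and queried with Z3 via an entailment check:

```python
from z3 import Bool, Implies, Not, Solver, sat

# Toy question: "Anne is kind. If someone is kind, they are nice. Is Anne nice?"
anne_kind, anne_nice = Bool("anne_kind"), Bool("anne_nice")
s = Solver()
s.add(anne_kind)                       # fact extracted from the question
s.add(Implies(anne_kind, anne_nice))   # rule extracted from the question

# The query is entailed iff the facts/rules plus its negation are unsatisfiable.
s.push()
s.add(Not(anne_nice))
print("True" if s.check() != sat else "Unknown or False")
s.pop()
```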
## 6 Experimental Setup
Our experiments answer the following research questions: (1) How does SymPro-LM compare with other LLM-based reasoning approaches on fcore problems? (2) How useful are feedback from solved examples and multiple runs for fcore problems? (3) How does SymPro-LM compare with other methods on existing (non-first-order) logical reasoning benchmarks? (4) What is the nature of the errors made by SymPro-LM and the baselines?
Baselines: On FCoReBench, we compare our method with 4 baselines: 1) standard LLM prompting, which leverages in-context learning to directly answer the questions; 2) Program-aided Language Models (PAL), which use imperative programs for reasoning and offload the solution step to a program interpreter; 3) Logic-LM, which offloads the reasoning to a symbolic solver; and 4) Tree-of-Thoughts (ToT) (Yao et al., 2023), a search-based prompting technique. Such techniques (Yao et al., 2023; Hao et al., 2023) involve considerable manual effort in writing specialized prompts for each problem and are estimated to be 2-3 orders of magnitude more expensive than the other baselines; we therefore present a separate comparison with ToT on a subset of FCoReBench (see Appendix C.1.1 for more details on the ToT experiments). We use Z3 (De Moura and Bjørner, 2008), an efficient SMT solver, for experiments with Logic-LM and SymPro-LM, and the Python interpreter for experiments with PAL and SymPro-LM. We also evaluate refinement for PAL and SymPro-LM, using 5 runs each with 4 rounds of feedback on solved examples per problem, and for Logic-LM, by providing 4 rounds of feedback to correct syntactic errors in constraints (if any) for each problem instance. We do not evaluate SAT-LM given its conceptual similarity to Logic-LM, the two having been proposed concurrently.
Models: We experiment with 3 LLMs: GPT-4-Turbo (gpt-4-0125-preview) (OpenAI, 2023), a state-of-the-art LLM by OpenAI; GPT-3.5-Turbo (gpt-3.5-turbo-0125), a relatively smaller LLM by OpenAI; and Mixtral 8x7B (open-mixtral-8x7b) (Jiang et al., 2024), an open-source mixture-of-experts model developed by Mistral AI. We set the temperature to $0$ for few-shot prompting and Logic-LM, for reproducibility, and to $0.7$ for PAL and SymPro-LM, to sample several runs.
Prompting LLMs: Each method's prompt includes the natural language description of the problem's rules and the input-output format, along with two solved examples. No additional intermediate supervision (e.g., SMT or Python programs) is given in the prompt. For few-shot prompting, we directly prompt the LLM to solve each test set instance separately. For PAL, we prompt the LLM to write an input-agnostic Python program which reads the input from a file, reasons to solve the input, and writes the solution to another file; the generated program is run on each test set instance. For Logic-LM, for each test set instance we prompt the LLM to convert it into its symbolic representation, which is then fed to a symbolic solver; the prompt additionally contains a description of the solver's language. We then prompt the LLM with the solution from the solver and ask it to generate the output in the desired format (see Section 5). Prompt templates are detailed in Appendix B, and other experimental details can be found in Appendix C.
Metrics: For each problem, we use the associated verification script to check the correctness of the candidate solution for each test instance, with binary marking: 1 for a correct solution and 0 for an incorrect one. Accuracy is the fraction of test instances solved correctly, and we report the macro-average of test set accuracies across all problems in FCoReBench.
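Concretely, the reported number corresponds to the following small sketch, where the hypothetical `results` maps each problem to the per-instance verdicts of its verification script:

```python
def macro_average_accuracy(results: dict[str, list[bool]]) -> float:
    # Per-problem accuracy: fraction of test instances solved correctly.
    per_problem = [sum(verdicts) / len(verdicts) for verdicts in results.values()]
    # Macro-average: unweighted mean across all problems in FCoReBench.
    return sum(per_problem) / len(per_problem)
```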
Additional Datasets: Apart from FCoReBench, we also evaluate SymPro-LM on 3 additional logical reasoning datasets: (1) LogicalDeduction from the BigBench (bench authors, 2023) benchmark, (2) ProofWriter (Tafjord et al., 2021), and (3) PrOntoQA (Saparov and He, 2023a). In addition to the other baselines, we also compare with Chain-of-Thought (CoT) prompting (Wei et al., 2022c), as it performs significantly better than standard prompting on such datasets. Recall that these benchmarks are not first-order in nature, i.e., each problem is accompanied by a single instance (despite the rules potentially being first-order), and hence we run SymPro-LM (and other methods) separately for each test instance (see Appendix C.2 for more details).
## 7 Results
Table 1 describes the main results for FCoReBench. Unsurprisingly, GPT-4-Turbo is far better than the other LLMs. Mixtral 8x7B struggles on our benchmark, indicating that smaller LLMs (even with mixture-of-experts) are not as effective at complex reasoning; it often does worse than random guessing, especially when used without refinement. PAL and SymPro-LM tend to perform better than the other baselines, benefiting from the vast pre-training of LLMs on code (Chen et al., 2021). Logic-LM performs rather poorly with smaller LLMs, indicating that they struggle to invoke symbolic solvers directly.
Table 1: Results for FCoReBench. - / + indicate before / after refinement. Performance for random guessing is 20.13%.
| Model | Few-shot | PAL − | PAL + | Logic-LM − | Logic-LM + | SymPro-LM − | SymPro-LM + |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mixtral 8x7B | 25.06% | 14.98% | 36.09% | 0.21% | 2.04% | 8.08% | 30.09% |
| GPT-3.5-Turbo | 27.02% | 32.66% | 49.19% | 6.04% | 6.58% | 17.08% | 50.35% |
| GPT-4-Turbo | 29.33% | 47.42% | 66.40% | 34.11% | 38.51% | 50.94% | 83.37% |
Hereafter, we focus primarily on GPT-4-Turbo's performance, since it is far superior to the other models. SymPro-LM outperforms few-shot prompting and Logic-LM across all problems in FCoReBench; on average, the improvements (with refinement) are an impressive $54.04$ percentage points over few-shot prompting and $44.86$ points over Logic-LM. Few-shot prompting solves less than a third of the instances with GPT-4-Turbo, suggesting that even the largest LLMs cannot directly perform complex reasoning. While Logic-LM performs better, its performance is still weak, indicating that combining LLMs with symbolic solvers is not enough for such reasoning problems.
Table 2: Logic-LM’s performance on FCoReBench evaluated with refinement.
| | GPT-3.5-Turbo | GPT-4-Turbo |
| --- | --- | --- |
| Correct Output | 6.58% | 38.51% |
| Incorrect Output | 62.11% | 52.06% |
| Timeout Error | 2.375% | 2.49% |
| Syntactic Error | 29.04% | 6.91% |
Table 3: Error analysis at a program level for GPT-4-Turbo before and after refinement for PAL and SymPro-LM. Results are averaged over all runs for a problem and further over all problems in FCoReBench.
| | PAL (− / +) | SymPro-LM (− / +) |
| --- | --- | --- |
| Incorrect Program | 70% / 57% | 58% / 38% |
| Semantically Incorrect Program | 62% / 49.5% | 29% / 20.5% |
| Python Runtime Error | 7% / 4.5% | 13.5% / 5.5% |
| Timeout | 1% / 3% | 15.5% / 12% |
Further qualitative analysis suggests that Logic-LM gets confused handling the structure of fcore problems. As problem instance size grows, it tends to make syntactic mistakes with smaller LLMs (Table 2). With larger LLMs, syntactic mistakes reduce, but constraints still remain semantically incorrect and do not get corrected through feedback.
Often this is because LLMs are error-prone when enumerating combinatorial constraints, i.e., they struggle to execute implicit for-loops and conditionals (see Appendix F). In contrast, SymPro-LM and PAL manage first-order structure well: writing code for a loop or conditional is not hard, and the correct loop execution is done by a program interpreter. These (size-invariant) programs are then used independently, without any LLM call at inference time, to solve any input instance, easily generalizing to larger instances and highlighting the benefit of using a program interpreter for such combinatorial problems.
At the same time, PAL is not as effective on FCoReBench. Table 4 compares the effect of feedback and multiple runs on PAL and SymPro-LM. SymPro-LM outperforms PAL by $16.97$ percentage points on FCoReBench (with refinement). When LLMs are forced to write programs for performing complicated reasoning, they tend to produce brute-force solutions that are often either incorrect or slow (see Table 8 in the appendix). This highlights the value of offloading reasoning to a symbolic solver. Interestingly, feedback from solved examples and re-runs is more effective for SymPro-LM (Table 3), as also shown by larger gains with increasing numbers of feedback rounds and runs (Table 4). We hypothesize that this is because declarative programs (generated by SymPro-LM) are easier to correct than imperative programs (produced by PAL).
Table 4: Comparative analysis between PAL and SymPro-LM on FCoReBench for GPT-4-Turbo.
| Method | 0 rounds | 1 round | 2 rounds | 3 rounds | 4 rounds |
| --- | --- | --- | --- | --- | --- |
| PAL | 47.42% | 54.00% | 57.09% | 58.82% | 59.92% |
| SymPro-LM | 50.94% | 62.54% | 68.52% | 71.12% | 71.96% |
| Gain | $\uparrow$ 3.52% | $\uparrow$ 8.54% | $\uparrow$ 11.43% | $\uparrow$ 12.3% | $\uparrow$ 12.04% |
(a) Effect of feedback rounds for a single run
| Method | 1 run | 2 runs | 3 runs | 4 runs | 5 runs |
| --- | --- | --- | --- | --- | --- |
| PAL | 59.92% | 62.54% | 63.95% | 65.19% | 66.40% |
| SymPro-LM | 71.96% | 77.21% | 80.06% | 82.06% | 83.37% |
| Gain | $\uparrow$ 12.04% | $\uparrow$ 14.67% | $\uparrow$ 16.11% | $\uparrow$ 16.87% | $\uparrow$ 16.97% |
(b) Effect of multiple runs each with 4 feedback rounds
Table 5: Accuracy and cost comparison between ToT prompting and SymPro-LM with GPT-4-Turbo for 3 problems in FCoReBench. Costs are per test instance for ToT and one-time costs per problem for SymPro-LM.
| Problem | Size | ToT Accuracy | ToT Cost | SymPro-LM Accuracy | SymPro-LM Cost |
| --- | --- | --- | --- | --- | --- |
| Latin Squares | 3x3 | 46.33% | $0.1235 | 100% | $0.02 |
| | 4x4 | 32.5% | $0.5135 | 100% | $0.02 |
| Magic Square | 3x3 | 26.25% | $0.4325 | 100% | $0.02 |
| | 4x4 | 8% | $0.881 | 100% | $0.02 |
| Sujiko | 3x3 | 7.5% | $0.572 | 100% | $0.02 |
| | 4x4 | 0% | $1.676 | 100% | $0.02 |
Comparison with ToT Prompting: Table 5 compares SymPro-LM with ToT prompting on 3 problems. SymPro-LM is far superior in both accuracy and cost, indicating that even the largest LLMs cannot perform complex reasoning on problems with large search depths and branching factors, despite being called multiple times with search-based prompting. Due to its programmatic nature, SymPro-LM generalizes better to larger instances and is also hugely cost-effective, as there is no need to call an LLM for each instance separately. We do not perform further experiments with ToT prompting, due to cost considerations.
<details>
<summary>extracted/6211530/Images/size-vs-algo.png Details</summary>
Image: accuracy vs. board size for Few-Shot, Logic-LM, PAL, and SymPro-LM on sudoku (4x4 to 25x25), sujiko (3x3 to 5x5), and magic-square (3x3 to 5x5). SymPro-LM remains at 100% throughout, PAL degrades gradually, and Few-Shot and Logic-LM degrade sharply with size.
</details>
Figure 4: Effect of increasing problem instance size on baselines and SymPro-LM for GPT-4-Turbo.
Effect of Problem Instance Size: We now report the performance of SymPro-LM and other baselines against varying problem instance sizes (see Figure 4) for 3 problems in FCoReBench (sudoku, sujiko, and magic-square). Increasing the problem instance size increases the number of variables, the accompanying constraints, and the reasoning steps required to reach the solution. We observe that, being programmatic, SymPro-LM and PAL are relatively robust to increases in input instance size, whereas the performance of Logic-LM and few-shot prompting declines sharply. PAL programs are often inefficient and may see performance drops when they fail to find a solution within the time limit.
<details>
<summary>extracted/6211530/Images/feedback-effect.png Details</summary>
Image: test accuracy vs. number of feedback rounds (0 to 4) for individual problems (K-Clique, Keisuke, Number Link, Shinro, Sujiko), with the FCoReBench average rising from 50.94% to 71.96%.
</details>
(a) Effect of feedback
<details>
<summary>extracted/6211530/Images/effect-runs.png Details</summary>
Image: test accuracy vs. number of runs (1 to 5) for individual problems (Car-Sequencing, Dosun Fuwari, K-Metric-Centre, Number Link, Survo), with the FCoReBench average rising from 71.96% to 83.37%.
</details>
(b) Effect of multiple runs
<details>
<summary>extracted/6211530/Images/solved-examples-count.png Details</summary>
Image: test accuracy vs. number of feedback rounds when 0, 1, 4, 7, or 10 solved examples are used. With no solved examples, accuracy stays flat at 50.94%; gains grow with more examples and saturate around 7 (71.73% vs. 71.96% for 10 examples at round 4).
</details>
(c) Effect of # of solved examples
Figure 5: Effect of feedback and multiple runs with GPT-4-Turbo. (a) and (b) show results with 10 solved examples for feedback, where dashed lines show results for individual problems in FCoReBench, coloured lines highlight specific problems, and the bold red line represents the average effect across all problems. (c) shows the effect of the number of solved examples used for feedback in a single run.
Effect of Feedback on Solved Examples: Figure 5(a) shows the effect of multiple rounds of feedback for SymPro-LM. Feedback helps performance significantly; utilizing 4 feedback rounds improves performance by $21.02\%$. Even the largest LLMs commit errors, making it important to verify and correct their work. But feedback on its own is not enough: a single run might end up on a wrong reasoning path that feedback cannot correct, making it important to utilize multiple runs for effective reasoning. Utilizing 5 runs improves performance by an additional $11.41\%$ (Figure 5(b)), after which the gains saturate. Performance also increases with the number of solved examples (Figure 5(c)), as each solved example helps detect and correct different errors. However, performance saturates at 7 solved examples, beyond which no new errors are discovered or corrected, even with additional training data.
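For intuition, the sketch below shows one plausible shape of this verify-and-refine loop. `llm_generate`, `llm_refine`, and `run_program` are hypothetical helpers (the first two wrap LLM calls, the last executes a candidate program on a single instance), and selecting the best run by its score on the solved examples is an assumption of the sketch, not a description of our exact implementation.
```python
def solve_with_feedback(problem_prompt, solved_examples, runs=5, rounds=4):
    """Sketch: generate a program, refine it with feedback, keep the best run."""
    best_program, best_score = None, -1.0
    for _ in range(runs):                          # multiple runs escape bad reasoning paths
        program = llm_generate(problem_prompt)     # hypothetical LLM call
        failures = []
        for _ in range(rounds):                    # feedback rounds over solved examples
            failures = []
            for inp, want in solved_examples:
                got = run_program(program, inp)    # hypothetical program execution
                if got != want:
                    failures.append((inp, want, got))
            if not failures:
                break                              # program reproduces every solved example
            program = llm_refine(program, failures)  # hypothetical feedback prompt
        score = 1 - len(failures) / max(len(solved_examples), 1)
        if score > best_score:
            best_program, best_score = program, score
    return best_program                            # reused at inference with no LLM calls
```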
### 7.1 Results on Other Datasets
Table 6: Results for baselines & SymPro-LM on other benchmarks. Best results with each LLM are highlighted.
| Dataset | Few-shot | CoT | PAL | Logic-LM | SymPro-LM | Few-shot | CoT | PAL | Logic-LM | SymPro-LM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Logical Deduction | 39.66% | 50.66% | 66.33% | 71.00% | **78.00%** | 65.33% | 76.00% | 81.66% | 82.67% | **94.00%** |
| ProofWriter | 40.50% | 57.16% | 50.50% | 70.16% | **74.17%** | 46.50% | 61.66% | 76.29% | 74.83% | **89.83%** |
| PrOntoQA | 49.60% | 83.20% | **98.40%** | 72.20% | 97.40% | 83.00% | 98.80% | **99.80%** | 91.20% | 97.80% |
Table 6 reports performance on the non-first-order datasets. SymPro-LM outperforms all other baselines on ProofWriter and LogicalDeduction, most notably Logic-LM. This showcases the value of integrating LLMs with symbolic solvers through programs, even for standard reasoning tasks. These experiments suggest that LLMs translate natural language questions into programs that use solvers much more effectively than into symbolic formulations directly. We attribute this to the vast pre-training of LLMs on code (Brown et al., 2020; Chen et al., 2021). For instance, on the LogicalDeduction benchmark, while Logic-LM does not make syntactic errors during translation, it often makes logical errors; these errors decrease significantly when LLMs are prompted to produce programs instead (Figure 6(b)). Error analysis on ProofWriter and PrOntoQA reveals that for more complex natural language questions, LLMs also start making syntactic errors during translation as the number of rules/facts increases. With SymPro-LM these errors are vastly reduced because, apart from the benefit of pre-training on code, LLMs also utilize programming constructs like dictionaries and loops to make the most of the structure in these problems (Figure 6(a)). PAL and CoT perform marginally better on PrOntoQA because problems in this dataset involve forward-chained reasoning, which aligns with PAL's and CoT's style of reasoning. Integrating symbolic solvers is not as useful for this dataset, but SymPro-LM still achieves competitive performance.
<details>
<summary>extracted/6211530/Images/prontoQA-example.png Details</summary>

### Visual Description
## Logical Problem Representation: Wumpus World Knowledge Base
### Overview
The image presents a formal knowledge base for a modified Wumpus World problem, combining logical assertions with programming code. It includes:
1. A problem statement with 14 logical propositions
2. Two code implementations (Logic-LM and SymPro-LM)
3. Predicate definitions, facts, and inference rules
### Components/Axes
**Problem Statement Section**
- Labels: "Problem" (green header)
- Content: 14 propositional statements about entities (wumpus, dumpus, numpus, etc.)
**Logic-LM Section**
- Header: "Logic-LM" (orange background)
- Subsections:
- Predicates (7 definitions)
- Facts (1 entry)
- Rules (4 implications)
**SymPro-LM Section**
- Header: "SymPro-LM" (blue background)
- Content: Python-style code with:
- Property definitions
- Rule addition logic
### Detailed Analysis
**Problem Statement**
```
Jompus are not dull. Every wumpus is opaque. Wumpuses are dumpuses. Every dumpus is not floral. Dumpuses are numpuses. Each numpus is not luminous. Each numpus is a vumpus. Every vumpus is large. Vumpuses are dumpuses. Every tumpus is not orange. Every tumpus is a zumpus. Zumpus are dull. Every zumpus is an impus. Every impus is spicy. Every impus is a rompus. Rompus are not temperate. Every rompus is a yumpus. Sam is a dumpus. Is Sam dull?
```
**Logic-LM Implementation**
```python
Predicates:
- Wumpus(x, bool) ::: Does x belong to Wumpus?
- Opaque(x, bool) ::: Is x opaque?
- Numpus(x, bool) ::: Does x belong to Numpus?
- Vumpus(x, bool) ::: Does x belong to Vumpus?
- Large(x, bool) ::: Is x large?
- Tumpus(x, bool) ::: Does x belong to Tumpus?
Facts:
- Dumpus(Sam, True)
Rules:
- Wumpus(x, True) >>> Opaque(x, True)
- Wumpuses(x, True) >>> Dumpus(x, True)
- Vumpus(x, True) >>> Large(x, True)
- Vumpuses(x, True) >>> Tumpus(x, True)
```
**SymPro-LM Implementation**
```python
# Define properties using dictionaries.
# (Imports and solver setup added here to make the sketch runnable; the
# properties dict is abbreviated to the entries visible in the image.)
from z3 import Bool, Implies, Not, Solver

properties = {
    "jompus": {"dull": False},
    "wumpus": {"opaque": True, "dumpus": True},
    "dumpus": {"floral": False, "numpus": True},
    "vumpus": {"large": True, "tumpus": True}
}

s = Solver()
# Add rules using for loops and dicts
for entity, props in properties.items():
    for prop, value in props.items():
        if value:
            s.add(Implies(Bool(f'{entity}_Sam'), Bool(f'{prop}_Sam')))
        else:
            s.add(Implies(Bool(f'{entity}_Sam'), Not(Bool(f'{prop}_Sam'))))
```
### Key Observations
1. **Entity Hierarchy**:
- Wumpus → Dumpus → Numpus → Vumpus
- Tumpus → Zumpus → Impus → Rompus → Yumpus
2. **Stated Properties**:
- Zumpus are explicitly defined as "dull" (True)
- Wumpus are defined as "opaque" (True)
3. **Rule Structure**:
- All rules follow the pattern: EntityProperty → TargetProperty
- Uses boolean implications for knowledge inference
### Interpretation
This knowledge base demonstrates:
1. **Logical Modeling**: Translates natural language propositions into formal logic
2. **Programming Implementation**: Shows two approaches to represent the same logic:
- Logic-LM: Traditional predicate logic
- SymPro-LM: Object-oriented approach with property dictionaries
3. **Inference Capability**: The system can answer "Is Sam Dull?" by tracing:
- Sam is a Dumpus (fact)
- Dumpus are Numpus (rule)
- Numpus are Vumpus (rule)
- Vumpus are Large (rule)
- Final conclusion: Sam is not dull (from Jompus definition)
The dual implementation suggests a comparison between symbolic logic systems and object-oriented knowledge representation, with the problem statement containing 14 distinct entity relationships and 4 inference rules.
</details>
(a) PrOntoQA
<details>
<summary>extracted/6211530/Images/logicaldeduction-example.png Details</summary>

### Visual Description
## Textual Problem with Programmatic Solutions: Antique Car Show Age Ranking
### Overview
The image presents a logic puzzle about ranking five antique vehicles by age, followed by two programmatic representations (Logic-LM and SymPro-LM) for solving it. The problem involves deductive reasoning to determine the second-oldest vehicle among a truck, motorcycle, limousine, station wagon, and sedan.
### Components/Axes
1. **Problem Section (Green Box)**
- **Text**: Describes five vehicles and their age relationships:
- Station wagon is the oldest
- Limousine > truck
- Limousine < sedan
- Sedan > motorcycle
- Limousine is not the second-oldest
2. **Logic-LM Section (Orange Box)**
- **Domain**: Integer values 1 (oldest) to 5 (newest)
- **Variables**:
- `truck [IN] [1,2,3,4,5]`
- `motorcycle [IN] [1,2,3,4,5]`
- (Implied variables for limousine, station_wagon, sedan)
- **Constraints**:
- `station_wagon == 1` (oldest)
- `limousine > truck` (limousine is older than truck)
- `limousine > sedan` (limousine is newer than sedan)
3. **SymPro-LM Section (Blue Box)**
- **Domain**: Same as Logic-LM but with commented numbering
- **Variables**: Explicitly listed as `['truck', 'motorcycle', 'limousine', 'station_wagon', 'sedan']`
- **Constraints**:
- `station_wagon == 1`
- `limousine < truck` (written as `limousine < truck` in lambda notation)
- `limousine < sedan` (implied by `limousine > sedan` in Logic-LM)
### Detailed Analysis
**Problem Section**
- Vehicles: Truck, motorcycle, limousine, station wagon, sedan
- Age relationships:
1. Station wagon is oldest (position 1)
2. Limousine > truck (limousine is older than truck)
3. Limousine < sedan (limousine is newer than sedan)
4. Sedan > motorcycle (sedan is older than motorcycle)
5. Limousine is not second-oldest
**Logic-LM Structure**
- Uses integer domains (1-5) for age ranking
- Explicit constraints:
- `station_wagon == 1`
- `limousine > truck`
- `limousine > sedan`
**SymPro-LM Structure**
- Uses commented domain numbering (#1=oldest, #5=newest)
- Implements constraints via lambda functions:
- `lambda station_wagon: station_wagon == 1`
- `lambda limousine, truck: limousine < truck`
### Key Observations
1. **Contradiction in Constraints**:
- Logic-LM states `limousine > sedan` while SymPro-LM uses `limousine < sedan` - this appears to be a transcription error in one of the implementations.
2. **Positional Logic**:
- Station wagon is fixed at position 1 (oldest)
- Limousine must be older than both truck and sedan
- Sedan must be older than motorcycle
- Limousine cannot be second-oldest
3. **Implementation Differences**:
- Logic-LM uses direct inequality constraints
- SymPro-LM uses lambda functions for constraint definition
- SymPro-LM includes commented domain numbering
### Interpretation
The problem requires determining the second-oldest vehicle through logical deduction. The programmatic solutions attempt to model this using constraint satisfaction:
1. **Logical Deduction**:
- Station wagon (1st)
- Limousine must be older than truck and sedan but not second-oldest
- Possible positions for limousine: 3rd or 4th
- Sedan must be older than motorcycle
- Truck must be younger than limousine
2. **Programmatic Approach**:
- Both implementations use integer domains for age ranking
- Logic-LM uses direct constraints while SymPro-LM uses functional programming constructs
- The contradiction in limousine/sedan ordering suggests a potential error in one implementation
3. **Solution Path**:
- Valid ranking must satisfy:
- 1: station_wagon
- 2: Not limousine
- 3-5: Remaining vehicles ordered by constraints
- Second-oldest must be either sedan or motorcycle based on constraints
This represents a classic logic puzzle that can be solved through constraint programming, though the conflicting limousine/sedan constraints in the implementations highlight the importance of careful constraint formulation.
</details>
(b) LogicalDeduction
Figure 6: Examples highlighting the benefits of integrating LLMs with symbolic solvers through programs.
## 8 Discussion
We analyze FCoReBench to identify where LLMs excel and where the largest models still struggle. Based on SymPro-LM’s performance, we categorize FCoReBench problems into three broad groups.
1) Problems that SymPro-LM solves with 100% accuracy without any feedback. There are 8 such problems out of the $40$, including vertex-cover and latin-square. These problems have a one-to-one correspondence between the natural language description of the rules and the program for generating the constraints, so the LLM essentially has to perform a pure translation task, at which it excels.
2) Problems that SymPro-LM solves with 100% accuracy, but only after feedback from solved examples. There are 20 such problems. They typically lack a one-to-one correspondence between rule descriptions and code, thus requiring some reasoning to encode the problem in the solver’s language; for example, one must define auxiliary variables and/or compose several primitives to encode a single natural language rule. GPT-4-Turbo initially misses constraints or encodes the problem incorrectly, but with feedback it can spot its mistakes and correct its programs. Examples include k-clique and binairo. In binairo, for instance, GPT-4-Turbo initially encodes the constraint that all rows and columns be distinct incorrectly, but fixes this mistake after feedback (see Figure 17 in the appendix). LLMs can leverage their vast pre-training to discover non-trivial encodings for several interesting problems, and solved examples can help guide LLMs to correct solutions in case of mistakes.
3) Problems where performance stays below 100% and is not corrected through feedback or multiple runs. For these 12 problems, the LLM finds it difficult to encode some natural language constraint into SMT. Examples include number-link and hamiltonian-path, where GPT-4-Turbo is not able to figure out how to encode the existence of a path as SMT constraints. In our opinion, these conversions are non-obvious and may be hard even for an average CS student. We hope that further analysis of these 12 domains opens up research directions for neuro-symbolic reasoning with LLMs.
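To make the difficulty concrete, the snippet below sketches one standard position-based SMT encoding of hamiltonian-path in Z3. This is our own minimal illustration on a made-up 4-vertex graph, not model-generated code: each vertex gets a distinct position along the path, and vertices that are not adjacent in the graph are forbidden from occupying consecutive positions.
```python
from z3 import Int, Solver, Distinct, And, sat

n = 4
edges = {(0, 1), (1, 2), (1, 3), (2, 3)}            # made-up example graph
pos = [Int(f"pos_{v}") for v in range(n)]           # position of vertex v on the path

s = Solver()
s.add([And(0 <= p, p < n) for p in pos])            # positions range over 0..n-1
s.add(Distinct(pos))                                # each vertex visited exactly once
for u in range(n):
    for v in range(u + 1, n):
        if (u, v) not in edges:                     # non-adjacent vertices can never
            s.add(pos[u] - pos[v] != 1,             # be consecutive on the path
                  pos[v] - pos[u] != 1)

print("YES" if s.check() == sat else "NO")          # YES: e.g., 0-1-2-3 is a valid path
```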
## 9 Conclusion and Limitations
We investigate the reasoning abilities of LLMs on structured first-order combinatorial reasoning problems. We formally define the task, present FCoReBench, a novel benchmark of $40$ such problems, and find that existing tool-augmented techniques, such as Logic-LM and PAL, fare poorly. In response, we propose SymPro-LM, a new technique that aids LLMs with both program interpreters and symbolic solvers. It uses LLMs to convert text into executable code, which is then processed by interpreters to define constraints, allowing symbolic solvers to efficiently tackle the reasoning tasks. Our extensive experiments show that SymPro-LM’s integrated approach leads to superior performance on our dataset as well as on existing benchmarks. Error analysis reveals that SymPro-LM struggles on a certain class of problems where conversion to a symbolic representation is not straightforward. In such cases simple feedback strategies do not improve reasoning; exploring methods to alleviate this is a promising direction for future work. Another direction is to extend the dataset to include images of inputs and outputs, instead of serialized text representations, and assess the reasoning abilities of vision-language models, like GPT-4V.
Limitations: While we study a wide variety of fcore problems, more such problems always exist, and adding them to FCoReBench remains a direction of future work. Additionally, we assume that input instances and their outputs have a fixed, pre-defined (serialized) representation, which may not always be easy to find. Another limitation is that encoding many problems in the solver’s language can be complicated; our method relies on the pre-training of LLMs to achieve this without any training or fine-tuning, and addressing this is a direction for future work.
## References
- bench authors [2023] BIG bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.
- Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
- Chen et al. [2021] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021.
- Clark et al. [2021] Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI’20, 2021. ISBN 9780999241165.
- Cobbe et al. [2021] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021.
- Colbourn [1984] Charles J. Colbourn. The complexity of completing partial latin squares. Discrete Applied Mathematics, 8(1):25–30, 1984. ISSN 0166-218X. doi: https://doi.org/10.1016/0166-218X(84)90075-1.
- De Biasi [2013] Marzio De Biasi. Binary puzzle is NP-complete, July 2013.
- De Moura and Bjørner [2008] Leonardo De Moura and Nikolaj Bjørner. Z3: An efficient smt solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 337–340. Springer, 2008.
- Demaine and Rudoy [2018] Erik D. Demaine and Mikhail Rudoy. A simple proof that the $(n^2-1)$-puzzle is hard. Theoretical Computer Science, 732:80–84, 2018. ISSN 0304-3975. doi: https://doi.org/10.1016/j.tcs.2018.04.031.
- Eppstein [1987] David Eppstein. On the NP-completeness of cryptarithms. ACM SIGACT News, 18(3):38–40, 1987. doi: 10.1145/24658.24662.
- Fan et al. [2023] Lizhou Fan, Wenyue Hua, Lingyao Li, Haoyang Ling, and Yongfeng Zhang. Nphardeval: Dynamic benchmark on reasoning ability of large language models via complexity classes. arXiv preprint arXiv:2312.14890, 2023.
- Gao et al. [2023] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR, 2023.
- Garey et al. [1976a] M. R. Garey, D. S. Johnson, and Ravi Sethi. The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2):117–129, 1976a. doi: 10.1287/moor.1.2.117.
- Garey et al. [1976b] M. R. Garey, D. S. Johnson, and Ravi Sethi. The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2):117–129, 1976b. ISSN 0364765X, 15265471.
- Gent et al. [2017] Ian P. Gent, Christopher Jefferson, and Peter Nightingale. Complexity of n-queens completion. Journal of Artificial Intelligence Research (JAIR), 58:1–16, 2017. doi: 10.1613/jair.5512.
- Han et al. [2022] Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Shafiq R. Joty, Alexander R. Fabbri, Wojciech Kryscinski, Xi Victoria Lin, Caiming Xiong, and Dragomir Radev. FOLIO: natural language reasoning with first-order logic. CoRR, abs/2209.00840, 2022. doi: 10.48550/ARXIV.2209.00840.
- Hao et al. [2023] Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model, 2023.
- Haraguchi and Ono [2015] Kazuya Haraguchi and Hirotaka Ono. How simple algorithms can solve latin square completion-type puzzles approximately. Journal of Information Processing, 23(3):276–283, 2015. doi: 10.2197/ipsjjip.23.276.
- Higuchi and Kimura [2019] Yuta Higuchi and Kei Kimura. NP-completeness of fill-a-pix and completeness of its fewest clues problem. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E102.A(11):1490–1496, 2019. doi: 10.1587/transfun.E102.A.1490.
- Itai et al. [1982] Alon Itai, Christos H. Papadimitriou, and Jayme Luiz Szwarcfiter. Hamilton paths in grid graphs. SIAM Journal on Computing, 11(4):676–686, 1982. doi: 10.1137/0211056.
- Iwamoto and Ibusuki [2018] Chuzo Iwamoto and Tatsuaki Ibusuki. Dosun-fuwari is np-complete. Journal of Information Processing, 26:358–361, 2018. doi: 10.2197/ipsjjip.26.358.
- Jiang et al. [2024] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024.
- Kis [2004] Tamás Kis. On the complexity of the car sequencing problem. Operations Research Letters, 32(4):331–335, 2004. ISSN 0167-6377. doi: https://doi.org/10.1016/j.orl.2003.09.003.
- Kojima et al. [2022] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
- Liu et al. [2020] Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3622–3628. ijcai.org, 2020. doi: 10.24963/IJCAI.2020/501.
- Lloyd et al. [2022] Huw Lloyd, Matthew Crossley, Mark Sinclair, and Martyn Amos. J-pop: Japanese puzzles as optimization problems. IEEE Transactions on Games, 14(3):391–402, 2022. doi: 10.1109/TG.2021.3081817.
- Nandwani et al. [2021] Yatin Nandwani, Deepanshu Jindal, Mausam, and Parag Singla. Neural learning of one-of-many solutions for combinatorial problems in structured output spaces. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
- Nandwani et al. [2022a] Yatin Nandwani, Vidit Jain, Mausam, and Parag Singla. Neural models for output-space invariance in combinatorial problems. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a.
- Nandwani et al. [2022b] Yatin Nandwani, Rishabh Ranjan, Mausam, and Parag Singla. A solver-free framework for scalable learning in neural ILP architectures. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022b.
- OpenAI [2023] OpenAI. Gpt-4 technical report, 2023.
- Palm et al. [2018] Rasmus Berg Palm, Ulrich Paquet, and Ole Winther. Recurrent relational networks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 3372–3382, 2018.
- Pan et al. [2023] Liangming Pan, Alon Albalak, Xinyi Wang, and William Wang. Logic-lm: Empowering large language models with symbolic solvers for faithful logical reasoning. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 3806–3824. Association for Computational Linguistics, 2023.
- Paranjape et al. [2023] Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Túlio Ribeiro. ART: automatic multi-step reasoning and tool-use for large language models. CoRR, abs/2303.09014, 2023. doi: 10.48550/ARXIV.2303.09014.
- Paulus et al. [2021] Anselm Paulus, Michal Rolínek, Vít Musil, Brandon Amos, and Georg Martius. Comboptnet: Fit the right np-hard problem by learning integer programming constraints. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8443–8453. PMLR, 2021.
- Russell and Norvig [2010] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 3 edition, 2010.
- Saparov and He [2023a] Abulhair Saparov and He He. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023a.
- Saparov and He [2023b] Abulhair Saparov and He He. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023b.
- Schick et al. [2023] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023. doi: 10.48550/ARXIV.2302.04761.
- Tafjord et al. [2021] Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. Proofwriter: Generating implications, proofs, and abductive statements over natural language. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 3621–3634. Association for Computational Linguistics, 2021. doi: 10.18653/V1/2021.FINDINGS-ACL.317.
- Wang et al. [2023] Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, and Yulia Tsvetkov. Can language models solve graph problems in natural language? In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
- Wang et al. [2019] Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6545–6554. PMLR, 09–15 Jun 2019.
- Wei et al. [2022a] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022a.
- Wei et al. [2022b] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022b.
- Wei et al. [2022c] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022c.
- Yang et al. [2022] Kaiyu Yang, Jia Deng, and Danqi Chen. Generating natural language proofs with verifier-guided search. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 89–105, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.7.
- Yao et al. [2023] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601, 2023. doi: 10.48550/ARXIV.2305.10601.
- Yato and Seta [2003] Takayuki Yato and Takahiro Seta. Complexity and completeness of finding another solution and its application to puzzles. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E86-A, May 2003.
- Ye et al. [2023] Xi Ye, Qiaochu Chen, Isil Dillig, and Greg Durrett. Satlm: Satisfiability-aided language models using declarative prompting. In Proceedings of NeurIPS, 2023.
- Yu et al. [2023] Dongran Yu, Bo Yang, Dayou Liu, Hui Wang, and Shirui Pan. A survey on neural-symbolic learning systems. Neural Networks, 166:105–126, 2023. doi: 10.1016/J.NEUNET.2023.06.028.
- Zheng et al. [2023] Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting improves reasoning in large language models. CoRR, abs/2304.09797, 2023. doi: 10.48550/ARXIV.2304.09797.
- Zhong et al. [2021] Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. AR-LSAT: investigating analytical reasoning of text. CoRR, abs/2104.06598, 2021.
- Zhou et al. [2023] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
## Appendix A FCoReBench
### A.1 Dataset Details and Statistics
Our dataset, FCoReBench, consists of $40$ different fcore problems collected from various sources: some are logical puzzles from publishing houses like Nikoli, some come from the operations research literature, some are from the annual SAT competition, and others are well-known computational problems from the computer science literature, such as hamiltonian-path and minimum dominating set. Table 7 gives the details of all problems in our dataset. To create our training and test sets, we write scripts to synthetically generate problem instances; these can be used to extend the dataset as needed with any number of instances of any size. For experimentation, we generate a set of solved training instances and a separate set of test instances. Each problem also has a natural language description of its rules and a natural language description of the input format, which specify how input problem instances and their solutions are represented in text. The next few sections give illustrative examples and other details.
### A.2 Natural Language Description of Rules
This section describes how we create the natural language descriptions of rules for problems in FCoReBench. We extract rules from sources such as the Wikipedia/Nikoli pages of the corresponding problems. These rules are reworded by a human expert to reduce dataset contamination, and another human expert ensures that the reworded descriptions contain no ambiguities. The rules are generalized when needed (e.g., from a $9\times 9$ Sudoku to an $n\times n$ Sudoku). The following sections provide a few examples.
#### A.2.1 Example Problem: Survo
<details>
<summary>extracted/6211530/Images/survo-example.png Details</summary>

### Visual Description
## Data Table: Survo Instance and Its Solution
### Overview
The image shows two side-by-side 3×4 tables with columns labeled A-D and rows labeled 1-3, connected by a rightward arrow. Each table carries the required row sums on the right and the required column sums along the bottom. The left table is the partially filled input board; the right table is the fully solved board.
### Components/Axes
- **Columns**: Labeled A, B, C, D.
- **Rows**: Labeled 1, 2, 3, with a column-sum row at the bottom.
- **Row sums**: Listed to the right of each row (30, 18, 30).
- **Arrow**: Indicates solving the instance, from left (input) to right (solution).
### Detailed Analysis
#### Left Table (input)
| Row | A | B | C | D | Row Sum |
|-----|----|----|----|----|---------|
| 1 | | 6 | | | 30 |
| 2 | 8 | 1 | | | 18 |
| 3 | | 9 | 3 | | 30 |
| **Column Sum** | 27 | 16 | 10 | 25 | |
#### Right Table (solution)
| Row | A | B | C | D | Row Sum |
|-----|----|----|----|----|---------|
| 1 | 12 | 6 | 2 | 10 | 30 |
| 2 | 8 | 1 | 5 | 4 | 18 |
| 3 | 7 | 9 | 3 | 11 | 30 |
| **Column Sum** | 27 | 16 | 10 | 25 | |
### Key Observations
1. **Given cells preserved**: Every filled cell of the input (B1=6, A2=8, B2=1, B3=9, C3=3) appears unchanged in the solution.
2. **Each number used once**: The solution uses each number from 1 to 12 ($m*n = 3\times 4$) exactly once.
3. **Target sums met**: Every row and column of the solution adds up to its prescribed sum (rows: 30, 18, 30; columns: 27, 16, 10, 25).
### Interpretation
The arrow denotes solving the puzzle: the empty cells of the input board are filled so that all given values are kept, each number from 1 to 12 appears exactly once, and every row and column reaches its required sum. The figure thus depicts a constraint-satisfaction step, not an arbitrary redistribution of values.
</details>
Figure 7: Conversion of an input survo problem instance to its solution.
Survo (Figure 7) is an example problem from FCoReBench. The task is to fill an $m\times n$ rectangular board with the numbers $1$ to $m*n$ such that each row and column sums to an intended target (Survo-Wikipedia). The box given below describes the rules of Survo more formally in natural language.
We are given a partially filled $m\times n$ rectangular board, intended row sums and column sums.
- Empty cells are to be filled with numbers
- Numbers in the solved board can range from $1$ to $m*n$
- Numbers present in filled cells on the input board cannot be removed
- Each number from 1 to m*n must appear exactly once on the solved board
- All the empty cells should be filled such that each row and each column of the solved board must sum to the respective row sum and column sum as specified in the input
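To illustrate the kind of program SymPro-LM must produce from such a description, the snippet below is a minimal hand-written Z3 sketch (not the model's actual output) that encodes these rules for the instance shown in Figure 7.
```python
from z3 import Int, Solver, Distinct, Sum, And, sat

# Instance from Figure 7; 0 marks an empty cell.
board = [[0, 6, 0, 0],
         [8, 1, 0, 0],
         [0, 9, 3, 0]]
row_sums, col_sums = [30, 18, 30], [27, 16, 10, 25]
m, n = len(board), len(board[0])

X = [[Int(f"x_{i}_{j}") for j in range(n)] for i in range(m)]
s = Solver()
for i in range(m):
    for j in range(n):
        s.add(And(1 <= X[i][j], X[i][j] <= m * n))   # values range over 1..m*n
        if board[i][j] != 0:
            s.add(X[i][j] == board[i][j])            # given cells cannot change
s.add(Distinct([X[i][j] for i in range(m) for j in range(n)]))  # each number once
for i in range(m):
    s.add(Sum(X[i]) == row_sums[i])                  # row sums
for j in range(n):
    s.add(Sum([X[i][j] for i in range(m)]) == col_sums[j])      # column sums

if s.check() == sat:
    model = s.model()
    for row in X:
        print(" ".join(str(model[x]) for x in row))  # prints a solved board
```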
#### A.2.2 Example Problem: Hamiltonian Path
<details>
<summary>extracted/6211530/Images/hamiltonian-path-example.png Details</summary>

### Visual Description
## Diagram: Network Topology Illustration
### Overview
The image depicts a network topology diagram with five yellow circular nodes connected by black and red lines. No textual labels, legends, or axis markers are present.
### Components/Axes
- **Nodes**: Five yellow circles (no labels or identifiers).
- **Edges**:
- Black lines: Primary connections between nodes.
- Red lines: Highlighted path (e.g., between nodes 1→2→3→4→5).
- **No legends, axis titles, or textual annotations** are visible.
### Detailed Analysis
- **Node Placement**:
- Nodes are arranged in a roughly linear sequence (left to right: Node 1, Node 2, Node 3, Node 4, Node 5).
- Node 3 is centrally located, acting as a hub with multiple connections.
- **Edge Patterns**:
- Black lines form a complete graph (all nodes interconnected).
- Red lines trace a specific route: Node 1 → Node 2 → Node 3 → Node 4 → Node 5.
- **No numerical values, scales, or categorical labels** are present.
### Key Observations
1. The red-highlighted edges form a single route that passes through every node exactly once.
2. The black lines are the remaining edges of the graph, which the highlighted route does not use.
3. The absence of labels means vertices are identified only by their position in the drawing.
### Interpretation
Read together with the caption below, the diagram depicts an input graph for the hamiltonian-path problem: the yellow circles are vertices, the black lines are edges, and the red line traces a hamiltonian path, i.e., a path that visits each vertex exactly once.
</details>
Figure 8: A sample input graph instance and its solution to the hamiltonian-path problem. Vertices are represented by yellow circles and the hamiltonian path is represented by the red line.
Hamiltonian path is a well-known problem in graph theory in which we have to find a path in an undirected, unweighted graph such that each vertex is visited exactly once. We consider the decision variant of this problem, which is equally hard in terms of computational complexity. The box below shows the formal rules for this problem expressed in natural language.
We are given an undirected and unweighted graph.
- We have to determine if the graph contains a path that visits every vertex exactly once.
#### A.2.3 Example Problem: Dosun Fuwari
<details>
<summary>extracted/6211530/Images/dosun-fuwari-example.png Details</summary>

### Visual Description
## Diagram: Grid Transformation Process
### Overview
The image depicts a two-step visual transformation between two 4x4 grid structures. The left grid contains black squares in specific positions, while the right grid replaces these with black circles in altered positions. A black arrow indicates a directional relationship from left to right.
### Components/Axes
- **Left Grid**:
- 4x4 grid of white squares.
- Black squares located at:
- Row 2, Column 2
- Row 4, Column 4
- **Right Grid**:
- 4x4 grid of white squares.
- Black circles located at:
- Row 1, Column 2
- Row 2, Column 4
- Row 3, Column 3
- Row 4, Column 1
- **Arrow**:
- Black arrow pointing from the left grid to the right grid, positioned centrally between the two grids.
### Detailed Analysis
- **Left Grid**:
- Two black squares occupy diagonal positions (2,2) and (4,4).
- All other squares are white.
- **Right Grid**:
- Four black circles occupy positions (1,2), (2,4), (3,3), and (4,1).
- No overlapping or adjacent black circles.
- **Arrow**:
- Positioned horizontally between the two grids, aligned with the center of the grids.
### Key Observations
1. The transformation replaces black squares with black circles.
2. The positions of the black elements shift significantly:
- Original square at (2,2) → circle at (1,2) (up one row).
- Original square at (4,4) → circle at (3,3) (diagonal shift).
- Additional circles appear at (2,4) and (4,1), not present in the original grid.
3. No textual labels, legends, or axis markers are present.
### Interpretation
Read together with the caption, the figure shows a dosun fuwari instance being solved: the left grid is the input board, with blackened cells drawn as black squares, and the right grid shows the solved board, with circles marking cells where balloons and iron balls have been placed. The rearrangement is therefore governed by the puzzle's placement rules rather than by an arbitrary visual transformation.
</details>
Figure 9: Conversion of an input dosun fuwari problem instance to its solution.
Dosun Fuwari (Nikoli), shown in Figure 9, is another example problem from FCoReBench. We are given a square board with regions (cells enclosed in bold lines), and we have to fill the board with balloons and iron balls such that exactly one balloon and one iron ball are placed in each region. Balloons are light and float, so they must be placed in one of the cells at the top, in a cell right under a black cell (filled-in cell), or under other balloons. Iron balls are heavy and sink, so they must be placed in one of the cells at the bottom, in a cell right over a black cell, or over other iron balls. The box given below describes the rules of dosun fuwari more formally in natural language.
We are given a partially filled n*n square board. We are also given subgrids of the input board. Cells in the input board can either be empty or filled (that is, nothing can be placed in them, they are blackened) or can be balloons or iron balls.
- The only thing we can do is place balloons or iron balls in some of or all of the empty cells
- Each subgrid specified in the input should have exactly one balloon and iron ball in the solved board
- Because balloons are buoyant, they should be positioned either in one of the cells located at the top of the board or in a cell directly below a filled cell (i.e., one of the blackened cells in the input) or below other balloons.
- Iron balls, being dense, will sink and should therefore be positioned either directly on one of the cells located at the bottom of the input board, or on a cell directly above a filled cell (i.e., one of the blackened cells in the input), or above another iron ball.
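To illustrate the automatic verification such rules admit, below is a minimal sketch of a solution checker for dosun fuwari (our released verification scripts may differ). The board uses the 0-3 cell codes introduced in Section A.3.2, and the example instance reuses the sample from that section.
```python
def verify_dosun_fuwari(board, subgrids):
    """Check the placement rules: 0 empty, 1 blackened, 2 balloon, 3 iron ball."""
    n = len(board)
    val = lambda k: board[k // n][k % n]             # cells numbered in row-major order
    for cells in subgrids:                           # one balloon and one iron ball each
        if sum(val(k) == 2 for k in cells) != 1 or sum(val(k) == 3 for k in cells) != 1:
            return False
    for i in range(n):
        for j in range(n):
            if board[i][j] == 2 and i > 0 and board[i - 1][j] not in (1, 2):
                return False   # balloon not at the top, under a black cell, or under a balloon
            if board[i][j] == 3 and i < n - 1 and board[i + 1][j] not in (1, 3):
                return False   # iron ball not at the bottom, over a black cell, or over an iron ball
    return True

# Solved sample instance from Section A.3.2
solution = [[2, 3, 0, 2], [2, 1, 0, 2], [0, 2, 3, 3], [3, 0, 3, 1]]
subgrids = [[0, 1], [2, 3, 6, 10], [4, 8, 12], [5, 9, 13, 14], [15, 7, 11]]
print(verify_dosun_fuwari(solution, subgrids))       # True
```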
### A.3 Natural Language Description of Input and Output format
For many of the problems we consider, input-output instances are typically not represented as text. For each problem we therefore describe, in natural language, a straightforward conversion of the input and output space to text. The following sections consider examples of a few problems from FCoReBench.
#### A.3.1 Example Problem: Survo
<details>
<summary>extracted/6211530/Images/survo-input-representation.png Details</summary>

### Visual Description
## Grid-to-Text Conversion: Survo Input Representation
### Overview
The image shows the partially filled survo grid from Figure 7 on the left (a 3×4 board with row sums 30, 18, 30 and column sums 27, 16, 10, 25), connected by an arrow to its textual encoding on the right.
### Components/Axes
- **Left grid**: Rows labeled 1-3 and columns labeled A-D, with the given cell values and the required row and column sums.
- **Right grid**: The same instance written as lines of space-separated integers, with 0 in place of every empty cell.
### Key Observations
1. Each of the first three text lines encodes one board row: four cell values (0 for empty cells) followed by that row's required sum.
2. The last text line lists the four required column sums (27 16 10 25).
3. Given cells keep their values in the encoding; only empty cells become 0.
### Interpretation
The figure illustrates the straightforward serialization of a survo instance: the grid maps line-by-line to the text format described below, preserving all given values and the target sums.
</details>
Figure 10: Representation of input instances of survo as text.
Figure 10 represents the conversion of the inputs to survo, originally represented as grid images, to text. Here empty cells are denoted by 0s and the filled cells have their corresponding values. For a given $m\times n$ board, each of the first $m$ lines has $n+1$ space-separated integers, with the first $n$ integers representing one row of the input board and the $(n+1)^{th}$ integer representing the row sum. The last line contains $n$ integers representing the column sums. The box below describes this conversion more formally in natural language.
Input Format:
- The input will have $m+1$ lines
- The first $m$ lines will have $n+1$ space-separated integers
- Each of these $m$ lines represents one row of the partially solved input board ($n$ integers), followed by the required row sum (a single integer)
- The last line of the input will have $n$ space-separated integers, each of which represents the required column sum in the solved board

Sample Input:
```
0 6 0 0 30
8 1 0 0 18
0 9 3 0 30
27 16 10 25
```
Output Format:
- The output should have $m$ lines, each representing one row of the solved board
- Each of these $m$ lines should have $n$ space-separated integers representing the cells of the solved board
- Each integer should be from $1$ to $m*n$

Sample Output:
```
12 6 2 10
8 1 5 4
7 9 3 11
```
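To make the mapping concrete, here is a small parsing sketch (our released scripts may differ) that reads this text format into the board and target sums consumed, for example, by the encoding sketch in Section A.2.1.
```python
def parse_survo_input(text):
    rows = [list(map(int, line.split())) for line in text.strip().splitlines()]
    board = [r[:-1] for r in rows[:-1]]       # first m lines: n cell values...
    row_sums = [r[-1] for r in rows[:-1]]     # ...each followed by the row sum
    col_sums = rows[-1]                       # last line: the n column sums
    return board, row_sums, col_sums

board, row_sums, col_sums = parse_survo_input(
    "0 6 0 0 30\n8 1 0 0 18\n0 9 3 0 30\n27 16 10 25")
```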
#### A.3.2 Example Problem: Dosun Fuwari
<details>
<summary>extracted/6211530/Images/dosun-fuwari-input-representation.png Details</summary>

### Visual Description
## Grid-to-Text Conversion: Dosun Fuwari Input Representation
### Overview
The image shows a 4×4 dosun fuwari input board with blackened cells on the left, connected by an arrow to its textual encoding on the right.
### Components/Axes
- **Grid**: 4×4 cells, some of them blackened.
- **Text encoding**: Four lines of four space-separated integers (the board), a dashed separator line, and then one line of cell numbers per subgrid.
### Key Observations
1. Board cells are written as 0 (empty) or 1 (blackened); balloons and iron balls, when present, would appear as 2 and 3.
2. The dashed line (--------) separates the board from the subgrid list.
3. Each subgrid line lists its cells using row-major numbering starting from 0, so the rows of digits that resemble binary strings are simply the 0/1 board encoding.
### Interpretation
The figure illustrates the serialization of a dosun fuwari instance into the text format described below, preserving the board contents and the subgrid structure.
</details>
Figure 11: Representation of inputs instances to dosun-fuwari as text.
Figure 11 represents the conversion of the inputs to dosun fuwari, originally represented as grid images, to text. The first few lines represent the input board, followed by a separator line of dashes (--------), after which each line contains space-separated integers representing one subgrid of the input board. Cells are numbered in row-major order starting from 0, and this numbering is used to represent cells in each of the lines describing the subgrids. In the lines representing the input board, 0s denote the empty cells that must be filled, 1s denote blackened cells, 2s denote balloons, and 3s denote iron balls. The box below describes these rules more formally in natural language.
Input-Format:
- The first few lines represent the input board, followed by a line containing --------, which acts as a separator, followed by several lines where each line represents one subgrid
- Each of the lines representing the input board will have space-separated integers ranging from 0 to 3
- 0 denotes empty cells, 1 denotes a filled cell (blackened cell), 2 denotes a cell with a balloon, 3 denotes a cell with an iron ball
- After the board, there is a separator line containing --------
- Each of the following lines has space-separated elements representing the subgrids on the input board
- Each of these lines has integers representing cells of a subgrid
- Cells are numbered in row-major order starting from 0, and this numbering is used to represent cells in each of the lines describing the subgrids

Sample-Input:
```
0 0 0 0
0 1 0 0
0 0 0 0
0 0 0 1
--------
0 1
2 3 6 10
4 8 12
5 9 13 14
15 7 11
```

Output Format:
- The output should contain as many lines as the size of the input board, each representing one row of the solved board
- Each row should have n space-separated integers (ranging from 0-3) where n is the size of the input board
- Empty cells will be denoted by 0s, filled cells (blackened) by 1s, balloons by 2s and iron balls by 3s

Sample-Output:
```
2 3 0 2
2 1 0 2
0 2 3 3
3 0 3 1
```
#### A.3.3 Example Problem: Hamiltonian Path
<details>
<summary>extracted/6211530/Images/hamiltonian-path-input-representation.png Details</summary>

### Visual Description
## Network Diagram: Node Connections and Adjacency Matrix
</details>
Figure 12: Representation of input instances to hamiltonian-path as text.
Figure 12 represents the conversion of the inputs to hamiltonian-path, originally represented as graph images, to text. The first line denotes the number of vertices N in the graph, with nodes numbered from 0 to N-1. Each subsequent line represents an edge of the graph and contains two space-separated integers (according to this numbering). The output is a single word (YES/NO) indicating whether a hamiltonian path exists in the graph. The box below describes this more formally in natural language.
Input Format:
- The first line will contain a single integer N, the number of nodes in the graph
- The nodes of the graph will be numbered from 0 to N-1
- Each of the subsequent lines will represent an edge of the graph and will contain two space-separated integers (according to the numbering defined above)

Sample-Input:
```
5
0 1
1 2
2 3
3 4
```

Output Format:
- The output should contain a single line with a single word
- The word should be YES if a path exists in the input graph according to the constraints specified above and NO otherwise

Sample Output:
```
YES
```
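Similarly, a small parsing sketch (again, our released scripts may differ) converts this text format into the vertex count and edge set consumed by an encoding like the one sketched in Section 8.
```python
def parse_hamiltonian_input(text):
    lines = [line for line in text.strip().splitlines() if line.strip()]
    n = int(lines[0])                         # first line: number of nodes
    edges = set()
    for line in lines[1:]:                    # remaining lines: one edge each
        u, v = map(int, line.split())
        edges.add((min(u, v), max(u, v)))     # undirected: store each edge once
    return n, edges

n, edges = parse_hamiltonian_input("5\n0 1\n1 2\n2 3\n3 4")
```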
Table 7: Names of problems in FCoReBench, number of samples in the training set, number of samples in the test set, average size of input instances in the training set, average size of input instances in the test set, and computational complexity. The brackets in the size columns describe how input instance sizes are measured. A ? in the computational complexity column indicates that results are not available for the corresponding problem.
| Problem | # Train | # Test | Avg. Train Instance Size | Avg. Test Instance Size | Computational Complexity |
| --- | --- | --- | --- | --- | --- |
| 3-Partition (Non Decision) | 15 | 30 | 12 (array size) | 17.7 | NP-Hard |
| 3-Partition (Decision) | 15 | 30 | 12 (array size) | 17.7 | NP-Complete |
| Binairo | 15 | 50 | 4.0 $\times$ 4.0 (grid size) | 6.96 $\times$ 6.96 | NP-Hard [De Biasi, 2013] |
| Car-Sequencing | 15 | 30 | 6.96, 3.66, 4.33 (# of cars, # of options, # of classes) | 9.06, 5.66, 6.33 | NP-Hard [Kis, 2004] |
| Clique Cover | 15 | 30 | 6.26, 9.4 (# of nodes, # of edges) | 12.9, 31.4 | NP-Complete |
| Cryptarithmetic | 15 | 30 | 4.32 (average # of digits in the two operands) | 4.26 | NP-Hard [Eppstein, 1987] |
| Dosun Fuwari | 15 | 30 | 3.066 $\times$ 3.066 (grid size) | 5.23 $\times$ 5.23 | NP-Hard [Iwamoto and Ibusuki, 2018] |
| Futoshiki | 15 | 47 | 5 $\times$ 5 (grid size) | 7.57 $\times$ 7.57 | NP-Hard [Lloyd et al., 2022] |
| Fill-a-pix | 15 | 35 | $2.87\times 2.87$ (grid size) | $4.1\times 4.1$ | NP-Hard [Higuchi and Kimura, 2019] |
| Flow-Shop | 15 | 30 | 6.06, 3.4 (# of jobs, # of machines) | 3.83, 9.13 | NP-Complete [Garey et al., 1976a] |
| Factory Workers | 15 | 30 | 5.73, 12.66 (# of factories, # of workers) | 12.35, 30.0 | ? |
| Graph Coloring | 15 | 30 | 5.13, 6.8 (# of nodes, # of edges) | 9, 21.06 | NP-Complete [Gent et al., 2017] |
| Hamiltonian Path | 15 | 30 | 5.93, 8.6 (# of nodes, # of edges) | 13.0, 19.77 | NP-Complete |
| Hamiltonian Cycle | 15 | 30 | 5.93, 8.6 (# of nodes, # of edges) | 11.07, 18.67 | NP-Complete |
| Hidato | 15 | 45 | $2.87\times 2.87$ (grid size) | $4.1\times 4.1$ | NP-Hard [Itai et al., 1982] |
| Independent Set | 12 | 30 | 5.8, 7.2 (# of nodes, # of edges) | 14.2, 29.8 | NP-Complete |
| Inshi-No-Heya | 15 | 49 | 5.0 $\times$ 5.0 (grid size) | 6.5 $\times$ 6.5 | ? |
| Job-Shop | 15 | 30 | 3.66, 3.66 (# of jobs, # of machines) | 9, 9 | NP-Complete [Garey et al., 1976b] |
| K-Clique | 15 | 31 | 4.87, 7.6 (# of nodes, # of edges) | 8.84, 26.97 | NP-Complete |
| Keisuke | 15 | 30 | 4.33 $\times$ 4.33 (grid size) | 5.83 $\times$ 5.83 | ? |
| Ken Ken | 15 | 20 | 3.26 $\times$ 3.26 (grid size) | 5.2 $\times$ 5.2 | NP-Hard [Haraguchi and Ono, 2015] |
| Knapsack | 15 | 30 | 4.8 (array size) | 24.56 | NP-Hard |
| K Metric Centre | 15 | 30 | 4.5 (# of nodes) | 7 | NP-Hard |
| Latin Square | 15 | 50 | 6 $\times$ 6.0 (grid size) | 14.3 $\times$ 14.3 | NP-Hard [Colbourn, 1984] |
| Longest Path Problem | 15 | 30 | 6.2, 5.87 (# of nodes, # of edges) | 12.6, 16.3 | NP-Complete |
| Magic Square | 15 | 30 | 3.0 $\times$ 3.0 (grid size) | 4.33 $\times$ 4.33 | ? |
| Minimum Dominating Set | 15 | 30 | 6.0, 17.73 (# of nodes, # of edges) | 14.53, 45.0 | NP-Complete |
| N-Queens | 15 | 30 | 3.8 $\times$ 3.8 (grid size) | 6.33 $\times$ 6.33 | NP-Hard [Gent et al., 2017] |
| Number Link | 15 | 50 | 4 $\times$ 4 (grid size) | 7.1 $\times$ 7.1 | NP-Hard |
| Partition Problem | 15 | 35 | 7.06 (array size) | 15 | NP-Complete |
| PRP | 15 | 30 | 4.93, 12.6 (# of units, # of days) | 6.7, 23.9 | ? |
| Shinro | 15 | 30 | 5.13 $\times$ 5.13 (grid size) | 9.2 $\times$ 9.2 | ? |
| Subset Sum | 15 | 30 | 3.67 (array size) | 11.87 | NP-Complete |
| Summle | 15 | 20 | 2.33 (# of equations) | 3.75 | ? |
| Sudoku | 15 | 50 | 4.0 $\times$ 4.0 (grid size) | 13.3 $\times$ 13.3 | NP-Hard [Yato and Seta, 2003] |
| Sujiko | 15 | 45 | 3.0 $\times$ 3.0 (grid size) | 4.0 $\times$ 4.0 | ? |
| Survo | 15 | 47 | 13.5 (area of grid) | 20.25 | ? |
| Symmetric Sudoku | 15 | 30 | 4 $\times$ 4 (grid size) | 6.5 $\times$ 6.5 | ? |
| Sliding Tiles | 15 | 30 | $2.66\times 2.66$ , 6.13 (grid size, search depth) | $3.63\times 3.63$ , 8.83 | NP-Complete [Demaine and Rudoy, 2018] |
| Vertex Cover | 14 | 30 | 6.4, 13.4 (# of nodes, # of edges) | 12.6, 40.4 | NP-Complete |
## Appendix B Prompt Templates
In this section we provide the prompt templates used for our experiments on FCoReBench, including templates for the baselines we experimented with, for SymPro-LM, and for providing feedback.
### B.1 Few-Shot Prompt Template
Task: <Description of the Rules of the problem>
Input-Format: <Description of Textual Representation of Inputs>
<Input Few Shot Example-1>
<Input Few Shot Example-2>
........................
<Input Few Shot Example-n>
Output-Format: <Description of Textual Representation of Outputs>
<Output of Few Shot Example-1>
<Output of Few Shot Example-2>
............................
<Output of Few Shot Example-n>
Input problem instance to be solved:
<Problem Instance from the Test Set>
### B.2 PAL Prompt Template
The following box describes the base prompt template used for PAL experiments with FCoReBench.
Write a Python program to solve the following problem:
Task: <Description of the Rules of the problem>
Input-Format: <Description of Textual Representation of Inputs>
Sample-Input: <Sample Input from Feedback Set>
Output-Format: <Description of Textual Representation of Outputs>
Sample-Output: <Output of Sample Input from Feedback Set>
Don’t write anything apart from the Python program; use Python comments if needed. The Python program is expected to read the input from input.txt and write the output to a file named output.txt. The Python program must only use standard Python libraries.
### B.3 SymPro-LM Template
#### B.3.1 Base Prompt
Write a Python program to solve the following problem: Task: <Description of the Rules of the problem>
Input-Format: <Description of Textual Representation of Inputs> Sample-Input: <Sample Input from Feedback Set>
Output-Format: <Description of Textual Representation of Outputs> Sample-Output: <Output of Sample Input from Feedback Set>
The Python program must read the input from input.txt and convert that particular input to the corresponding constraints, which it should pass to the Z3 solver, and then it should use the Z3 solver’s output to write the solution to a file named output.txt.
Don’t write anything apart from the Python program; use Python comments if needed.
### B.4 Feedback Prompt Templates
These prompt templates are used to provide feedback in the case of SymPro-LM or PAL.
#### B.4.1 Programming Errors
Your code is incorrect and produces the following runtime error: <RUN TIME ERROR> for the following input: <INPUT>. Rewrite your code and fix the mistake.
#### B.4.2 Verification Error
Your code is incorrect: when run on the input <INPUT>, the output produced is <OUTPUT-GENERATED>, which is incorrect, whereas one of the correct outputs is <GOLD-OUTPUT>. Rewrite your code and fix the mistake.
#### B.4.3 Timeout Error
Your code was inefficient and took more than <TIME-LIMIT> seconds to execute for the following input: <INPUT>. Rewrite the code and optimize it.
### B.5 Logic-LM Prompt Template
The following box describes the prompt for Logic-LM experiments with FCoReBench; the prompt is used to convert an input instance to its symbolic representation.
Task: <Description of the Rules of the problem>
Input-Format: <Description of Textual Representation of Inputs>
Sample-Input: <Sample Input from Feedback Set>
Output-Format: <Description of Textual Representation of Outputs>
Sample-Output: <Output of Sample Input from Feedback Set>
Input problem to be solved: <Problem Instance from the Test Set>
The task is to declare variables and the corresponding constraints on them in SMT2 for the input mentioned above. The variables and constraints should be such that once the variables are solved for, one can use the solution to the variables (which satisfies the constraints) to get to the output in the desired format for the above-mentioned input. Only write the SMT2 code and nothing else. Write the complete set of SMT2 variables and constraints. Enclose the SMT2 code in ```smt2 ```
### B.6 ToT
In this section we give an example of the ToT prompts used for experiments on FCoReBench. We use Latin square as the running example.
#### B.6.1 Propose Prompt
This prompt is called for each search node to get the possible next states.
Task: We are given an n $\times$ n partially solved board and have to solve it according to the following rules:
- We need to replace the 0s with numbers from 1-n.
- Non-zero numbers on the board cannot be replaced.
- Each number from 1-n must appear exactly once in each column and row in the solved board.
Given a board, decide which cell to fill in next and the number to fill it with; each possible next step is separated by a new line. You can output up to 3 next steps. If the input board is fully filled or no valid next step exists, output only 'END'.
Sample-Input-1:
1 0 3 2 0 0 0 1 2
Possible next steps for Sample-Input-1:
1 2 3 2 0 0 0 1 2
1 0 3 2 0 0 3 1 2
1 0 3 2 3 0 0 1 2
Sample-Input-2:
1 2 3 2 3 1 3 1 2
Possible next steps for Sample-Input-2:
END
Input: <node from the search tree>
Possible next steps for Input:
#### B.6.2 Value Prompt
This prompt is called for each search node to evaluate how likely it is to get to the solution from that node. We use this to prune the search tree.
Task: We are given an n $\times$ n partially solved board and have to solve it according to the following rules:
- We need to replace the 0s with numbers from 1-n.
- Non-zero numbers already on the board cannot be replaced.
- Each number from 1-n must appear exactly once in each column and row in the solved board.
Given a partially filled board, evaluate how likely it is to reach a valid solution (sure/likely/impossible).
Output-Format: The output should have two lines as follows:
<Reasoning>
<Sure/Likely/Impossible>
Sample-Input-1:
0 0 0 0 0 0 0 0 0
Board is empty, hence it is always possible to get to a solution.
Sure
Sample-Input-2:
1 0 3 2 0 0 0 1 2
No constraint is violated so far and it is likely to get to a solution.
Likely
Sample-Input-3:
1 1 3 2 0 0 0 1 2
Constraint violated in first row.
Impossible
Input: <node from the search tree>
## Appendix C Experimental Details
### C.1 FCoReBench
All methods are evaluated zero-shot, meaning no in-context demonstrations of the task are provided to the LLM. We choose the zero-shot setting for FCoReBench because of the structured nature of the problems: providing demonstrations of highly related problem instances would give the LLM an unfair advantage. The LLM is only given a description of the rules of the problem and the task it has to perform. For PAL and SymPro-LM we present results with 10 solved examples used for feedback.
#### C.1.1 ToT prompting
We evaluate ToT prompting [Yao et al., 2023] on 3 problems in FCoReBench. Our implementation closely resembles the official implementation, which we adapt for grid-based logical puzzles. We use a BFS-based approach with propose and value prompts; an example prompt for Latin square can be found in Appendix B.6. Problems in our benchmark have huge branching factors, so to reduce experimentation cost we greedily prune the search frontier to 5 nodes at each depth, based on the scores assigned by the LLM during the value stage. Additionally, during the propose stage we prompt the LLM to output at most 3 possible next steps. The temperature is set to 0.0 for reproducibility. Unlike in the original implementation, problems in our benchmark can have varying search depths, so we explicitly ask the LLM to output 'END' once a terminal node is reached. If, at any depth, a terminal node is among the best nodes, we terminate the search and return the terminal nodes at that depth; otherwise we search up to a maximum depth governed by the problem instance. A minimal sketch of this search loop follows.
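The following is a minimal sketch (ours, with hypothetical helper names) of the pruned BFS loop described above; `propose` and `value` stand in for LLM calls made with the propose and value prompts of Appendix B.6.

```python
# Minimal sketch of the pruned-BFS ToT search described above (assumptions ours).
# propose(node) and value(node) are hypothetical stand-ins for LLM calls using
# the propose and value prompts; value maps sure/likely/impossible to a score.
def tot_search(root, propose, value, beam_width=5, max_depth=10):
    frontier = [root]
    for _ in range(max_depth):
        candidates = []
        for node in frontier:
            next_steps = propose(node)      # at most 3 next boards, or ["END"]
            if next_steps == ["END"]:
                return node                 # terminal node among the best nodes
            candidates.extend(next_steps)
        if not candidates:
            return None
        # Greedily prune the frontier to the highest-valued nodes at this depth.
        candidates.sort(key=value, reverse=True)
        frontier = candidates[:beam_width]
    return None
```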
### C.2 Other Datasets
We evaluate SymPro-LM on 3 other datasets apart from FCoReBench. Our evaluation closely follows that of Logic-LM [Pan et al., 2023]. For the baselines we use the same prompts as Logic-LM. Logic-LM did not evaluate PAL, so we write PAL prompts ourselves, modeled on the CoT prompts used by Logic-LM. For SymPro-LM we also write prompts ourselves, using the same in-context examples as Logic-LM. We instruct the LLM to write a Python program that parses the input problem, sets up variables/constraints, passes them to a symbolic solver, calls the solver, and uses the solver's output to print the final answer. For LogicalDeduction we use the python-constraint package (https://github.com/python-constraint/python-constraint), a CSP solver; for the other datasets we use the Z3 solver (https://pypi.org/project/z3-solver/). Since all problems are single-answer MCQ questions, we use accuracy as our metric. Like Logic-LM, if there is an error during execution of the program generated by the LLM, we fall back on chain-of-thought to predict the answer. The following sections describe the datasets used.
#### C.2.1 PrOntoQA
PrOntoQA [Saparov and He, 2023a] is a recent synthetic dataset created to analyze the deductive reasoning capacity of LLMs. We use the fictional-characters version of the dataset, the hardest version according to the results in [Saparov and He, 2023a]. Each version is divided into subsets according to the number of reasoning hops required; we evaluate on the hardest, 5-hop subset. Each question in PrOntoQA aims to validate a new fact's veracity, such as "True or false: Alex is not shy." The following box provides an example:
Context: Each jompus is fruity. Every jompus is a wumpus. Every wumpus is not transparent. Wumpuses are tumpuses. Tumpuses are mean. Tumpuses are vumpuses. Every vumpus is cold. Each vumpus is a yumpus. Yumpuses are orange. Yumpuses are numpuses. Numpuses are dull. Each numpus is a dumpus. Every dumpus is not shy. Impuses are shy. Dumpuses are rompuses. Each rompus is liquid. Rompuses are zumpuses. Alex is a tumpus Question: True or false: Alex is not shy. Options: A) True B) False
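To illustrate the kind of program SymPro-LM produces for such questions (see also Figure 19), the following is a hand-written sketch, not the generated program, that encodes only the rules on the relevant reasoning chain of the example above as Z3 implications; variable names are ours.

```python
# Hand-written sketch (not the generated program in Figure 19) encoding the
# relevant chain of the PrOntoQA example above with Z3.
from z3 import Bools, Implies, Not, Solver, unsat

tumpus, vumpus, yumpus, numpus, dumpus, shy = Bools(
    "tumpus vumpus yumpus numpus dumpus shy")

s = Solver()
s.add(tumpus)                       # Alex is a tumpus
s.add(Implies(tumpus, vumpus))      # Tumpuses are vumpuses
s.add(Implies(vumpus, yumpus))      # Each vumpus is a yumpus
s.add(Implies(yumpus, numpus))      # Yumpuses are numpuses
s.add(Implies(numpus, dumpus))      # Each numpus is a dumpus
s.add(Implies(dumpus, Not(shy)))    # Every dumpus is not shy

# "Alex is not shy" is entailed iff asserting its negation is unsatisfiable.
s.add(shy)
print("A) True" if s.check() == unsat else "B) False")
```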
#### C.2.2 ProofWriter
ProofWriter [Tafjord et al., 2021] is another commonly used dataset for deductive logical reasoning. Compared with PrOntoQA, its problems are expressed in a more naturalistic language form. We use the open-world assumption (OWA) subset, in which each example is a (problem, goal) pair and the label is one of PROVED, DISPROVED, or UNKNOWN. The dataset is divided into five parts, requiring 0, $\leq$ 1, $\leq$ 2, $\leq$ 3, and $\leq$ 5 hops of reasoning, respectively. We evaluate on the hardest depth-5 subset. To reduce overall experimentation cost, we randomly sample 600 examples from the test set, ensuring a balanced label distribution. The following box provides an example:
Context: Anne is quiet. Erin is furry. Erin is green. Fiona is furry. Fiona is quiet. Fiona is red. Fiona is rough. Fiona is white. Harry is furry. Harry is quiet. Harry is white. Young people are furry. If Anne is quiet then Anne is red. Young, green people are rough. If someone is green then they are white. If someone is furry and quiet then they are white. If someone is young and white then they are rough. All red people are young. Question: Based on the above information, is the following statement true, false, or unknown? Anne is white. Options: A) True B) False C) Uncertain
#### C.2.3 LogicalDeduction
LogicalDeduction [BIG-bench authors, 2023] is a challenging logical reasoning task from the BIG-bench collaborative benchmark. The problems are mostly about deducing the order of a sequence of objects from a minimal set of conditions. We use the full test set consisting of 300 examples. The following box provides an example:
Context: The following paragraphs each describe a set of three objects arranged in a fixed order. The statements are logically consistent within each paragraph. In an antique car show, there are three vehicles: a station wagon, a convertible, and a minivan. The station wagon is the oldest. The minivan is newer than the convertible. Question: Which of the following is true? Options: A) The station wagon is the second-newest. B) The convertible is the second-newest. C) The minivan is the second-newest.
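As an illustration (distinct from the generated program in Figure 21), a SymPro-LM-style program for this instance might encode positions with the python-constraint package as follows; the encoding and names are ours.

```python
# Hand-written sketch for the LogicalDeduction example above, using the
# python-constraint CSP solver. Position 1 = oldest, position 3 = newest.
from constraint import Problem, AllDifferentConstraint

problem = Problem()
vehicles = ["station wagon", "convertible", "minivan"]
for v in vehicles:
    problem.addVariable(v, [1, 2, 3])
problem.addConstraint(AllDifferentConstraint(), vehicles)
problem.addConstraint(lambda sw: sw == 1, ["station wagon"])           # oldest
problem.addConstraint(lambda m, c: m > c, ["minivan", "convertible"])  # newer

solution = problem.getSolutions()[0]
second_newest = next(v for v, pos in solution.items() if pos == 2)
print(second_newest)  # "convertible", i.e. option B
```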
### C.3 Hardware Details
All experiments were conducted on an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz, 32 cores, 64-bit, with 512 KiB L1 cache, 16 MiB L2 cache, and 22 MiB L3 cache. We accessed GPT-4-Turbo and GPT-3.5-Turbo via the OpenAI API. Mixtral 8x7B was accessed via the Mistral AI API; although its weights are publicly available, we preferred the API over running the model locally for ease of setup and for consistency with our other experiments.
## Appendix D Additional Results
### D.1 Inference Time
The following table reports the average inference time for test-set instances of a few illustrative problems in FCoReBench.
| Problem | PAL | SymPro-LM |
| --- | --- | --- |
| Sudoku | 2.01 | 0.215 |
| Latin Square | 5.46 | 0.2 |
| Cryptarithmetic | 0.83 | 0.73 |
| Independent Set | 1.438 | 0.106 |
| Minimum Dominating Set | 0.98 | 0.112 |
| Sujiko | 0.742 | 0.102 |
| Vertex Cover | 1.58 | 0.105 |
Table 8: Average inference time in seconds of PAL and SymPro-LM for test-set instances of selected problems in FCoReBench.
SymPro-LM is much faster than PAL because PAL programs often tend to be brute-force and inefficient, whereas with SymPro-LM the symbolic solver can exploit the structure of the input instance while performing the reasoning.
## Appendix E Examples
### E.1 SymPro-LM
#### E.1.1 FCoReBench
This section includes example programs generated by SymPro-LM for some illustrative problems in FCoReBench. Each program reads the input from a file, generates the corresponding constraints, calls the solver internally, and then uses the solver's solution to write the output in the desired format to a file.
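To give a flavor of the structure shared by these programs, here is a hand-written sketch for subset sum (compare Figure 18), not a reproduction of any figure; the input format assumed here (target sum on the first line, the array on the second) is hypothetical.

```python
# Hand-written sketch of the SymPro-LM program structure for subset sum
# (compare Figure 18); the input format assumed here is hypothetical.
from z3 import Bool, If, Solver, Sum, is_true, sat

with open("input.txt") as f:
    target = int(f.readline())                   # target sum
    nums = list(map(int, f.readline().split()))  # the array

s = Solver()
picks = [Bool(f"x_{i}") for i in range(len(nums))]  # x_i: element i is chosen
s.add(Sum([If(p, n, 0) for p, n in zip(picks, nums)]) == target)

with open("output.txt", "w") as f:
    if s.check() == sat:
        m = s.model()
        chosen = [str(n) for p, n in zip(picks, nums)
                  if is_true(m.evaluate(p, model_completion=True))]
        f.write(" ".join(chosen))
    else:
        f.write("NO SOLUTION")
```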
Figure 13: SymPro-LM example: correct program for sudoku generated by GPT-4-Turbo.
Figure 14: SymPro-LM example: correct program for keisuke generated by GPT-4-Turbo.
Figure 15: SymPro-LM example: correct program for hamiltonian path generated by GPT-4-Turbo.
Figure 16: SymPro-LM example: correct program for vertex-cover generated by GPT-4-Turbo.
Figure 17: SymPro-LM example: snippet of incorrect program for binairo generated by GPT-4-Turbo and same snippet after correction by feedback.
Figure 18: SymPro-LM example: correct program for subset-sum generated by GPT-4-Turbo.
#### E.1.2 Other Datasets
Figure 19: SymPro-LM PrOntoQA Example Program.
Figure 20: SymPro-LM ProofWriter Example Program.
Figure 21: SymPro-LM LogicalDeduction Example Program.
### E.2 PAL
This section includes example programs generated by PAL for some illustrative problems in FCoReBench. Each program reads the input from a file, performs the reasoning and writes the output to another text file.
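To contrast with the solver-based programs above, here is a hand-written sketch of a typical PAL-style brute-force program for subset sum, under the same hypothetical input format as the SymPro-LM sketch in E.1.1.

```python
# Hand-written sketch of a PAL-style program for subset sum: pure Python
# enumeration, no symbolic solver. Input format as in the sketch in E.1.1.
from itertools import combinations

with open("input.txt") as f:
    target = int(f.readline())
    nums = list(map(int, f.readline().split()))

answer = None
for r in range(1, len(nums) + 1):        # enumerate subsets of every size
    for subset in combinations(nums, r):
        if sum(subset) == target:
            answer = subset
            break
    if answer is not None:
        break

with open("output.txt", "w") as f:
    f.write(" ".join(map(str, answer)) if answer else "NO SOLUTION")
```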
Figure 22: PAL example: correct program for sudoku generated by GPT-4-Turbo.
Figure 23: PAL example: correct program for futoshiki generated by GPT-4-Turbo.
Figure 24: PAL example: correct program for hamiltonian path generated by GPT-4-Turbo.
Figure 25: PAL example: correct program for vertex cover generated by GPT-4-Turbo.
Figure 26: PAL example: correct program for subset sum generated by GPT-4-Turbo.
## Appendix F Logic-LM
This section describes example runs of Logic-LM for certain problems in FCoReBench.
Figure 27: Logic-LM example: correct constraints for a sudoku instance generated by GPT-4-Turbo.
Figure 28: Logic-LM example: correct constraints for a subset sum instance generated by GPT-4-Turbo.
Figure 29: Logic-LM example: correct constraints for a graph coloring instance generated by GPT-4-Turbo.
Figure 30: Logic-LM example: syntax error (highlighted by a comment) in constraints for a sudoku instance generated by GPT-3.5-Turbo.
Figure 31: Logic-LM example: errors (highlighted by comments) in constraints for a sudoku instance generated by GPT-4-Turbo.