# FCoReBench: Can Large Language Models Solve Challenging First-Order Combinatorial Reasoning Problems?
Abstract
Can large language models (LLMs) solve challenging first-order combinatorial reasoning problems such as graph coloring, knapsack, and cryptarithmetic? By first-order, we mean that these problems can be instantiated with a potentially infinite number of problem instances of varying sizes. They are also challenging, being NP-hard and requiring several reasoning steps to reach a solution. While existing work has focused on building hard benchmark datasets, there is limited work that exploits the first-order nature of the problem structure. To address this challenge, we present FCoReBench, a dataset of $40$ such challenging problems, along with scripts to generate problem instances of varying sizes and to automatically verify and generate their solutions. We first observe that LLMs, even when aided by symbolic solvers, perform rather poorly on our dataset, being unable to leverage the underlying structure of these problems. In particular, we observe a drop in performance with increasing problem size. In response, we propose a new approach, SymPro-LM, which combines LLMs with both symbolic solvers and program interpreters, along with feedback from a few solved examples, to achieve large performance gains. Our proposed approach is robust to changes in problem size, and has the unique characteristic of not requiring any LLM calls at inference time, unlike earlier approaches. As an additional experiment, we also demonstrate SymPro-LM's effectiveness on other logical reasoning benchmarks.
1 Introduction
Recent works have shown that large language models (LLMs) can reason like humans (Wei et al., 2022a) and solve diverse natural language reasoning tasks without the need for any fine-tuning (Wei et al., 2022c; Zhou et al., 2023; Zheng et al., 2023). We note that, while impressive, these are relatively simple reasoning problems, generally requiring only a handful of reasoning steps to reach a solution.
We are motivated by the goal of assessing the reasoning limits of modern-day LLMs. In this paper, we study computationally intensive, first-order combinatorial problems posed in natural language. These problems (e.g., sudoku, knapsack, graph coloring, cryptarithmetic) have long served as important testbeds to assess the intelligence of AI systems (Russell and Norvig, 2010), and strong traditional AI methods have been developed for them. Can LLMs solve these directly? If not, can they solve these with the help of symbolic AI systems like SMT solvers? To answer these questions, we release a dataset named FCoReBench, consisting of $40$ such problems (see Figure 1).
We refer to such problems as fcore (first-order combinatorial reasoning) problems. Fcore problems can be instantiated with any number of instances of varying sizes, e.g., 9 $×$ 9 and 16 $×$ 16 sudoku. Most of the problems in FCoReBench are NP-hard, and solving them requires extensive planning and search over a large number of combinations. We provide scripts to generate instances for each problem and to verify/generate their solutions. Across all problems we generate 1354 test instances of varying sizes for evaluation and also provide 596 smaller solved instances as a training set. We present a detailed comparison with existing benchmarks in the related work (Section 2).
Not surprisingly, our initial experiments reveal that even the largest LLMs can only solve less than a third of these instances. We then turn to recent approaches that augment LLMs with tools for better reasoning. Program-aided Language models (PAL) (Gao et al., 2023) use LLMs to generate programs, offloading execution to a program interpreter. Logic-LM (Pan et al., 2023) and SAT-LM (Ye et al., 2023) use LLMs to convert questions to symbolic representations, and external symbolic solvers perform the actual reasoning. Our experiments show that, by themselves, neither approach performs strongly on FCoReBench. At the same time, both demonstrate complementary strengths: PAL can handle first-order structure well, whereas Logic-LM is better at complex reasoning. In response, we propose a new approach named SymPro-LM, which combines the strengths of PAL and symbolic solvers with LLMs to effectively solve fcore problems. In particular, the LLM generates an instance-agnostic program for an fcore problem that converts any problem instance to a symbolic representation. The program passes this representation to a symbolic solver, which returns a solution back to the program. The program then converts the symbolic solution to the desired output representation, as per the natural language instruction. Interestingly, in contrast to LLMs with symbolic solvers, once this program is generated, inference on new fcore instances (of any size) can be done without any LLM calls.
SymPro-LM outperforms few-shot prompting by $21.61$, PAL by $3.52$, and Logic-LM by $16.83$ percentage points on FCoReBench, with GPT-4-Turbo as the LLM. Given the structured nature of fcore problems, we find that using feedback from small solved examples to correct the generated programs, over just four rounds, yields a further $21.02$ percentage-point gain for SymPro-LM, compared to $12.5$ points for PAL.
We further evaluate SymPro-LM on three (non-first-order) logical reasoning benchmarks from the literature (Tafjord et al., 2021; bench authors, 2023; Saparov and He, 2023a). SymPro-LM consistently outperforms existing baselines by large margins on two of the datasets and is competitive on the third, underscoring the value of integrating LLMs with symbolic solvers through programs. We perform additional analyses to understand the impact of hyperparameters on SymPro-LM and to characterize its errors. We release the dataset and code for further research. We summarize our contributions below:
- We formally define the task of natural language first-order combinatorial reasoning and present FCoReBench, a corresponding benchmark.
- We provide a thorough evaluation of LLM prompting techniques for fcore problems, offering new insights into existing techniques.
- We propose a novel approach, SymPro-LM, demonstrating its effectiveness on fcore problems as well as other datasets, along with an in-depth analysis of its performance.
Figure 1: Illustrative examples of problems in FCoReBench (represented as images for illustration).
2 Related Work
Neuro-Symbolic AI: Our work falls in the broad category of neuro-symbolic AI (Yu et al., 2023), which builds models leveraging the complementary strengths of neural and symbolic methods. Several prior works build neuro-symbolic models for solving combinatorial reasoning problems (Palm et al., 2018; Wang et al., 2019; Paulus et al., 2021; Nandwani et al., 2022a, b). These develop specialized problem-specific modules (that are typically not size-invariant), which are trained over large training datasets. In contrast, SymPro-LM uses LLMs, bypasses problem-specific architectures, generalizes to problems of varying sizes, and is trained with only a few solved instances.
Reasoning with Language Models: The earlier paradigm for reasoning was fine-tuning LLMs (Clark et al., 2021; Tafjord et al., 2021; Yang et al., 2022); as LLMs scaled, they were found to reason well when provided with in-context examples, without any fine-tuning (Brown et al., 2020; Wei et al., 2022b). Since then, many prompting approaches have been developed that leverage in-context learning. Prominent ones include Chain of Thought (CoT) prompting (Wei et al., 2022c; Kojima et al., 2022), Least-to-Most prompting (Zhou et al., 2023), Progressive-Hint prompting (Zheng et al., 2023) and Tree-of-Thoughts (ToT) prompting (Yao et al., 2023).
Tool Augmented Language Models: Augmenting LLMs with external tools has emerged as a way to solve complex reasoning problems (Schick et al., 2023; Paranjape et al., 2023). The idea is to offload a part of the task to specialized external tools, thereby reducing error rates. Program-aided Language models (Gao et al., 2023) invoke a Python interpreter over a program generated by an LLM. Logic-LM (Pan et al., 2023) and SAT-LM (Ye et al., 2023) integrate reasoning of symbolic solvers with LLMs, which convert the natural language problem into a symbolic representation. SymPro-LM falls in this category and combines LLMs with both program interpreters and symbolic solvers.
Logical Reasoning Benchmarks: There are several reasoning benchmarks in the literature, such as LogiQA (Liu et al., 2020) for mixed reasoning, GSM8K (Cobbe et al., 2021) for arithmetic reasoning, FOLIO (Han et al., 2022) for first-order logic, PrOntoQA (Saparov and He, 2023b) and ProofWriter (Tafjord et al., 2021) for deductive reasoning, and AR-LSAT (Zhong et al., 2021) for analytical reasoning. These datasets are not first-order, i.e., each problem is accompanied by a single instance (even though the rules may be describable in first-order logic). We propose FCoReBench, which substantially extends the complexity of these benchmarks by investigating computationally hard, first-order combinatorial reasoning problems. Among recent works, NLGraph (Wang et al., 2023) studies structured reasoning problems but is limited to graph-based problems, with only 8 problems in its dataset. NPHardEval (Fan et al., 2023) studies problems from the lens of computational complexity, but works with a relatively small set of 10 problems. In contrast, we study the broader area of first-order reasoning, investigate the associated complexities of structured reasoning, and have a much larger problem set (40 problems). Notably, all the NP-hard problems in these two datasets are also present in our benchmark.
3 Problem Setup: Natural Language First-order Combinatorial Reasoning
Figure 2: FCoReBench Example: Filling an $n× n$ Sudoku board, along with its rules, input-output format, and a couple of sample input-output pairs.
A first-order combinatorial reasoning problem $\mathcal{P}$ has three components: a space of legal input instances ( $\mathcal{X}$ ), a space of legal outputs ( $\mathcal{Y}$ ), and a set of constraints ( $\mathcal{C}$ ) that every input-output pair must satisfy. E.g., for sudoku, $\mathcal{X}$ is the space of partially-filled grids with $n× n$ cells, $\mathcal{Y}$ is the space of fully-filled grids of the same size, and $\mathcal{C}$ comprises row, column, and box alldiff constraints, with input cell persistence. To communicate a structured problem instance (or its output) to an NLP system, it must be serialized in text. We overload $\mathcal{X}$ and $\mathcal{Y}$ to also denote the formats for these serialized input and output instances. Two instances for sudoku are shown in Figure 2 (grey box). We are also provided (serialized) training data of input-output instance pairs, $\mathcal{D}_{\mathcal{P}}$ $=\{(x^{(i)},y^{(i)})\}_{i=1}^{N}$ , where $x^{(i)}∈\mathcal{X},y^{(i)}∈\mathcal{Y}$ , such that $(x^{(i)},y^{(i)})$ honors all constraints in $\mathcal{C}$ .
Further, we verbalize all three components – input-output formats and constraints – in natural language instructions. We denote these instructions by $NL(\mathcal{X})$ , $NL(\mathcal{Y})$ , and $NL(\mathcal{C})$ , respectively. Figure 2 illustrates these for sudoku. With this notation, we summarize our setup as follows. For an fcore problem $\mathcal{P}=\langle\mathcal{X},\mathcal{Y},\mathcal{C}\rangle$ , we are provided $NL(\mathcal{X})$ , $NL(\mathcal{Y})$ , $NL(\mathcal{C})$ and training data $\mathcal{D}_{\mathcal{P}}$ , and our goal is to learn a function $\mathcal{F}$ , which maps any (serialized) $x∈\mathcal{X}$ to its corresponding (serialized) solution $y∈\mathcal{Y}$ such that $(x,y)$ honors all constraints in $\mathcal{C}$ .
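For concreteness, a hypothetical $4×4$ sudoku pair $(x,y)$ in this serialization (0 marks an empty cell, as in Figure 2) could look as follows; the actual pairs in $\mathcal{D}_{\mathcal{P}}$ follow the formats described by $NL(\mathcal{X})$ and $NL(\mathcal{Y})$.

```
Input x:
0 2 3 4
3 0 1 2
2 1 0 3
4 3 2 0

Output y:
1 2 3 4
3 4 1 2
2 1 4 3
4 3 2 1
```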
4 FCoReBench: Dataset Construction
First, we shortlisted computationally challenging first-order problems from various sources. We manually scanned Wikipedia's list of NP-complete problems (https://en.wikipedia.org/wiki/List_of_NP-complete_problems) for NP-hard algorithmic problems and logical puzzles. We also took challenging logical puzzles from other publishing houses such as Nikoli (https://www.nikoli.co.jp/en/puzzles/), and real-world problems from the operations research community and the industrial track of the annual SAT competition (https://satcompetition.github.io/). From this set, we selected problems (1) that can be described in natural language (we remove problems where some rules are inherently visual), and (2) for which the training and test datasets can be created with reasonable programming effort. This led to $40$ fcore problems (see Table 7 for a complete list), of which 30 are known to be NP-hard and the others have unknown complexity. 10 problems are graph-based (e.g., graph coloring), 18 are grid-based (e.g., sudoku), 5 are set-based (e.g., knapsack), 5 are real-world settings (e.g., car sequencing), and 2 are miscellaneous (e.g., cryptarithmetic).
Two authors of the paper, with formal backgrounds in automated reasoning and logic, then created the natural language instructions and the input-output format for each problem. First, for each problem, one author created the input-output formats and the instructions for them ($NL(\mathcal{X})$, $NL(\mathcal{Y})$). Second, the same author created the natural language rules ($NL(\mathcal{C})$) by referring to the respective sources and re-writing the rules. These rules were verified by the other author to ensure that they were correct, i.e., the meaning of the problem did not change, and that they were unambiguous. The rules were re-written so that an LLM cannot easily invoke its prior knowledge about the same problem; for the same reason, the name of the problem was hidden.
In the case of errors in the natural language descriptions, feedback was given to the author who wrote them. Corrections were needed for only 3 problems, and in each case the descriptions were fixed within a single round of feedback. A third, independent annotator was then tasked with reading the natural language descriptions and solving the input instances in the training set. Their solutions were verified to confirm that the rules were written correctly and were comprehensible to a human. The annotator solved all instances correctly, indicating that the descriptions were sound. The guidelines for re-writing the rules were to use crisp, concise English, avoid technical jargon, and avoid ambiguities. The rules were intended to be understood by any person with reasonable comprehension of the language and did not contain any formal specifications or mathematical formulas. Appendices A.2 and A.3 have detailed examples of rules and formats, respectively.
Next, we created train/test data for each problem. These instances were generated programmatically by scripts written by the authors. For each problem, one author also wrote a solver and a verification script, and the other author checked these scripts and suggested corrections if needed. In all but one case the scripts were found to be correct. These scripts (after correction) were also verified through manually curated test cases and were then used to ensure the feasibility of instances.
Since a single problem instance can potentially have multiple correct solutions (Nandwani et al., 2021), all solutions are provided for each training input. The instances in the test set are typically larger than those in training. Because of their size, test instances may have too many solutions, and computing all of them can be expensive; instead, the verification script can be used, which reports the correctness of a candidate solution for any test instance. The scripts are part of the dataset and can be used to generate any number of instances of varying complexity for each problem, making the dataset easy to extend. Keeping the prohibitive costs of experimenting with LLMs in mind, we generate around 15 training instances and around 34 test instances on average per problem. In total, FCoReBench has 596 training instances and 1354 test instances.
5 SymPro-LM
Preliminaries: In the following, we assume access to an LLM $\mathcal{L}$, which can work with various prompting strategies; a program interpreter $\mathcal{I}$, which can execute programs written in its language; and a symbolic solver $\mathcal{S}$, which takes as input a pair $(E,V)$, where $E$ is a set of equations (constraints) specified in the language of $\mathcal{S}$ and $V$ is the set of (free) variables in $E$, and produces an assignment $\mathcal{A}$ to the variables in $V$ that satisfies the equations in $E$. Given an fcore problem $\mathcal{P}=\langle\mathcal{X},\mathcal{Y},\mathcal{C}\rangle$ described by $NL(\mathcal{C})$, $NL(\mathcal{X})$, $NL(\mathcal{Y})$ and $\mathcal{D_{\mathcal{P}}}$, we would like to make effective use of $\mathcal{L}$, $\mathcal{I}$ and $\mathcal{S}$ to learn the mapping $\mathcal{F}$, which takes any input $x∈\mathcal{X}$ and maps it to $y∈\mathcal{Y}$ such that $(x,y)$ honors the constraints in $\mathcal{C}$.
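As a minimal illustration of the solver interface (a sketch assuming $\mathcal{S}$ is the Z3 SMT solver used in our experiments, Section 6), the pair $(E,V)$ below consists of two integer variables and three constraints, and the solver returns a satisfying assignment $\mathcal{A}$:

```python
from z3 import Ints, Solver, sat

x, y = Ints("x y")                       # V: the free variables
solver = Solver()
solver.add(x + y == 10, x > 0, y > x)    # E: the constraints over V
if solver.check() == sat:
    A = solver.model()                   # A: a satisfying assignment, e.g., x = 1, y = 9
    print(A[x], A[y])
```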
Background: We consider the following possible representations for $\mathcal{F}$ which cover existing work.
- Exclusively LLM: Many prompting strategies (Wei et al., 2022c; Zhou et al., 2023) make exclusive use of $\mathcal{L}$ to represent $\mathcal{F}$. $\mathcal{L}$ is supplied with a prompt consisting of the description of $\mathcal{P}$ via $NL(\mathcal{C})$, $NL(\mathcal{X})$, $NL(\mathcal{Y})$ and the input $x$, along with specific instructions on how to solve the problem, and is asked to output $y$ directly. This puts the entire burden of discovering $\mathcal{F}$ on the LLM.
- LLM $→$ Program: In strategies such as PAL (Gao et al., 2023), the LLM is prompted to output a program, which is then executed by $\mathcal{I}$ on the input $x$ to produce the output $y$.
- LLM + Solver: Strategies such as Logic-LM (Pan et al., 2023) and SAT-LM (Ye et al., 2023) make use of both the LLM $\mathcal{L}$ and the symbolic solver $\mathcal{S}$. The primary role of $\mathcal{L}$ is to act as an interface, translating the problem description for $\mathcal{P}$ and the input $x$ into the language of the solver $\mathcal{S}$. The primary burden of solving the problem is on $\mathcal{S}$, whose output is then parsed as $y$.
5.1 Our Approach
Figure 3: SymPro-LM: Solid lines indicate the main flow and dotted lines indicate feedback pathways.
Our approach can be seen as a combination of the LLM $→$ Program and LLM + Solver strategies described above. As in the LLM + Solver strategy, the primary role of the LLM is to interface between the natural language description of the problem $\mathcal{P}$ and the solver, while the task of actually solving the problem is delegated to the solver $\mathcal{S}$. But unlike those approaches, where the LLM directly produces the solver input, we prompt it to write a program, $\psi$, which can work with any given input $x∈\mathcal{X}$ of any size. This allows us to dispense with LLM calls at inference time, resulting in a "lifted" implementation. The program $\psi$ internally represents the specification of the problem. It takes as argument an input $x$ and converts it, according to the inferred specification of the problem, into a set of equations $(E_{x},V_{x})$ in the language of the solver $\mathcal{S}$. The solver $\mathcal{S}$ then outputs an assignment $A_{x}$ in its own representation, which is passed back to the program $\psi$; $\psi$ converts it to the desired output format specified by $\mathcal{Y}$ and produces the output $\hat{y}$. Broadly, our pipeline consists of 3 components, described in detail below (an illustrative sketch of such a program follows the list).
- Prompting LLMs: The LLM is prompted with $NL(\mathcal{C})$, $NL(\mathcal{X})$, $NL(\mathcal{Y})$ (see Figure 2) to generate an input-agnostic program $\psi$. The LLM is instructed to write $\psi$ to read an input from a file, convert it to a symbolic representation according to the inferred specification of the problem, pass this representation to the solver, and then use the solver's solution to generate the output in the desired format. The LLM is also prompted with information about the solver and its underlying language. Optionally, we can also provide the LLM with a subset of $\mathcal{D}_{\mathcal{P}}$ (see Appendix B.3 for exact prompts).
- Symbolic Solver: $\psi$ converts any input instance $x$ to $(E_{x},V_{x})$, which it passes to the symbolic solver. The solver is agnostic to how the representation $(E_{x},V_{x})$ was created; it tries to find an assignment $A_{x}$ to $V_{x}$ that satisfies $E_{x}$, which is then passed back to $\psi$ (see Appendix E.1 for sample generated programs).
- Generating the Final Output: $\psi$ then uses $\mathcal{A}_{x}$ to generate the predicted output $\hat{y}$. This step is needed because the symbolic representation was created by $\psi$, and it must recover the desired output representation from $\mathcal{A}_{x}$, which might not be straightforward for all problem representations.
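To make the pipeline concrete, below is a simplified sketch of what a generated program $\psi$ might look like for graph coloring, assuming Z3 as the solver $\mathcal{S}$ and a hypothetical input format in which the first line gives the number of nodes and colors and each subsequent line gives one edge (0-indexed); the actual LLM-generated programs are shown in Appendix E.1.

```python
import sys
from z3 import Int, And, Solver, sat

def psi(input_path: str, output_path: str) -> None:
    # Parse the instance x: first line "n k", then one edge "u v" per line.
    with open(input_path) as f:
        n, k = map(int, f.readline().split())
        edges = [tuple(map(int, line.split())) for line in f if line.strip()]

    # Build (E_x, V_x): one integer variable per node (V_x), with domain
    # constraints and difference constraints on adjacent nodes (E_x).
    color = [Int(f"color_{i}") for i in range(n)]
    solver = Solver()
    for c in color:
        solver.add(And(1 <= c, c <= k))
    for u, v in edges:
        solver.add(color[u] != color[v])

    # Invoke S and convert its assignment A_x back to the output format for y-hat.
    if solver.check() == sat:
        model = solver.model()
        with open(output_path, "w") as out:
            out.write(" ".join(str(model[c]) for c in color))

if __name__ == "__main__":
    psi(sys.argv[1], sys.argv[2])
```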
Refinement via Solved Examples: We use $\mathcal{D}_{\mathcal{P}}$ to verify and, if needed, correct $\psi$. For each solved input-output pair $(x,y)∈\mathcal{D}_{\mathcal{P}}$, we run $\psi$ on $x$ to generate the prediction $\hat{y}$, during which one of the following can happen: 1) an error occurs during the execution of $\psi$; 2) the solver is unable to find $\mathcal{A}_{x}$ within a time limit; 3) $\hat{y}≠ y$, i.e., the predicted output is incorrect; 4) $\hat{y}=y$, i.e., the predicted output is correct. If one of the first three cases occurs for any training input, we provide automated feedback to the LLM through prompts to generate an improved program. This process is repeated until all training examples are solved correctly or a maximum number of feedback rounds is reached. The feedback is simple in nature and includes the kind of error, the actual error message from the interpreter/symbolic solver, and the input instance on which the error occurred. For example, when the output does not match the gold output, we prompt the LLM with the solved example it got wrong and the expected solution. Appendix B contains details of the feedback prompts.
A single run of SymPro-LM (with feedback) may fail to produce a program that solves all training examples, so we restart SymPro-LM multiple times for a given problem. Given the probabilistic nature of LLMs, a new program is generated at each restart and a fresh feedback process follows. As the final program, we pick the best program generated across these runs, as judged by accuracy on the training set. Figure 3 depicts the entire approach.
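The refinement-and-restart loop can be summarized by the following sketch, where generate_program, refine_program, and run_program are hypothetical stand-ins for the initial LLM call, the feedback-driven LLM call, and the execution of $\psi$ (program interpreter plus symbolic solver), respectively:

```python
def sympro_lm(generate_program, refine_program, run_program,
              train_data, max_rounds=4, max_runs=5):
    """Sketch of SymPro-LM's refinement and restart strategy (hypothetical helpers)."""
    best_program, best_acc = None, -1.0
    for _ in range(max_runs):                            # independent restarts
        program = generate_program()                     # LLM writes an initial psi
        for _ in range(max_rounds):                      # automated feedback rounds
            failures = []
            for x, y in train_data:
                status, y_hat = run_program(program, x)  # interpreter + solver
                if status != "ok" or y_hat != y:         # error, timeout, or wrong output
                    failures.append((x, y, status, y_hat))
            if not failures:                             # all training pairs solved
                break
            program = refine_program(program, failures)  # LLM corrects psi
        acc = sum(run_program(program, x)[1] == y for x, y in train_data) / len(train_data)
        if acc > best_acc:                               # keep the best program across runs
            best_program, best_acc = program, acc
    return best_program
```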
SymPro-LM for Non-First-Order Reasoning Datasets: For datasets that are not first-order in nature, no single program can solve all test instances, so we prompt the LLM to generate a new program for each test instance. Consequently, we cannot use feedback from solved examples and only use feedback to correct syntactic mistakes (if any). The prompt contains an instruction to write a program that uses a symbolic solver to solve the problem, along with details about the solver to be used. The prompt also contains in-context examples demonstrating sample programs for other logical reasoning questions. The LLM must parse the logical reasoning question and extract the corresponding facts/rules, which the program passes to the solver. Once the solver returns an answer, it is passed back to the program to generate the final output.
6 Experimental Setup
Our experiments answer these research questions. (1) How does SymPro-LM compare with other LLM-based reasoning approaches on fcore problems? (2) How useful is using feedback from solved examples and multiple runs for fcore problems? (3) How does SymPro-LM compare with other methods on other existing (non-first order) logical reasoning benchmarks? (4) What is the nature of errors made by SymPro-LM and other baselines?
Baselines: On FCoReBench, we compare our method with 4 baselines: 1) standard few-shot prompting, which leverages in-context learning to directly answer the questions; 2) Program-aided Language Models (PAL), which use imperative programs for reasoning and offload the solution step to a program interpreter; 3) Logic-LM, which offloads the reasoning to a symbolic solver; and 4) Tree-of-Thoughts (ToT) (Yao et al., 2023), a search-based prompting technique. Such techniques (Yao et al., 2023; Hao et al., 2023) involve considerable manual effort to write specialized prompts for each problem and are estimated to be 2-3 orders of magnitude more expensive than the other baselines; we therefore present a separate comparison with ToT on a subset of FCoReBench (see Appendix C.1.1 for details of the ToT experiments). We use Z3 (De Moura and Bjørner, 2008), an efficient SMT solver, for experiments with Logic-LM and SymPro-LM, and the Python interpreter for experiments with PAL and SymPro-LM. We also evaluate refinement for PAL and SymPro-LM using 5 runs, each with 4 rounds of feedback on solved examples per problem, and evaluate refinement for Logic-LM by providing 4 rounds of feedback to correct syntactic errors in constraints (if any) for each problem instance. We do not evaluate SAT-LM given its conceptual similarity to Logic-LM, the two having been proposed concurrently.
Models: We experiment with 3 LLMs: GPT-4-Turbo (gpt-4-0125-preview) (OpenAI, 2023), a state-of-the-art LLM from OpenAI; GPT-3.5-Turbo (gpt-3.5-turbo-0125), a relatively smaller LLM from OpenAI; and Mixtral 8x7B (open-mixtral-8x7b) (Jiang et al., 2024), an open-source mixture-of-experts model developed by Mistral AI. We set the temperature to $0$ for few-shot prompting and Logic-LM for reproducibility, and to $0.7$ to sample multiple runs for PAL and SymPro-LM.
Prompting LLMs: Each method's prompt includes the natural language description of the problem's rules and the input-output format, along with two solved examples. No additional intermediate supervision (e.g., an SMT or Python program) is given in the prompt. For few-shot prompting, we directly prompt the LLM to solve each test instance separately. For PAL, we prompt the LLM to write an input-agnostic Python program that reads the input from a file, reasons to solve it, and writes the solution to another file; the generated program is run on each test instance. For Logic-LM, for each test instance we prompt the LLM to convert it into its symbolic representation, which is then fed to a symbolic solver; the prompt additionally contains a description of the solver's language. We then prompt the LLM with the solution from the solver and ask it to generate the output in the desired format (see Section 5). Prompt templates are detailed in Appendix B and other experimental details can be found in Appendix C.
Metrics: For each problem, we use the associated verification script to check the correctness of the candidate solution for each test instance. Accuracy for a problem is the fraction of its test instances solved correctly, with binary marking: 1 for a correct solution and 0 for an incorrect one. We report the macro-average of test set accuracies across all problems in FCoReBench.
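In symbols, if $T_{\mathcal{P}}$ denotes the test set of problem $\mathcal{P}$, $\hat{y}_x$ the predicted solution for instance $x$, and $\mathbb{P}$ the set of 40 problems in FCoReBench (notation introduced here for clarity), the reported metric is

$$\text{Accuracy} = \frac{1}{|\mathbb{P}|} \sum_{\mathcal{P} \in \mathbb{P}} \frac{1}{|T_{\mathcal{P}}|} \sum_{x \in T_{\mathcal{P}}} \mathbf{1}\left[(x,\hat{y}_x)\ \text{satisfies}\ \mathcal{C}\right],$$

where the indicator is computed by the problem's verification script.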
Additional Datasets: Apart from FCoReBench, we also evaluate SymPro-LM on 3 additional logical reasoning datasets: (1) LogicalDeduction from the BigBench benchmark (bench authors, 2023), (2) ProofWriter (Tafjord et al., 2021), and (3) PrOntoQA (Saparov and He, 2023a). In addition to the other baselines, we also compare with Chain-of-Thought (CoT) prompting (Wei et al., 2022c), as it performs significantly better than standard prompting on such datasets. Recall that these benchmarks are not first-order in nature, i.e., each problem is accompanied by a single instance (despite the rules potentially being first-order), and hence we have to run SymPro-LM (and other methods) separately for each test instance (see Appendix C.2 for more details).
7 Results
Table 1 presents the main results for FCoReBench. Unsurprisingly, GPT-4-Turbo is substantially better than the other LLMs. Mixtral 8x7B struggles on our benchmark, indicating that smaller LLMs (even with mixture-of-experts) are not as effective at complex reasoning; it often does worse than random guessing, especially without refinement. PAL and SymPro-LM tend to perform better than the other baselines, benefiting from the vast pre-training of LLMs on code (Chen et al., 2021). Logic-LM performs rather poorly with smaller LLMs, indicating that they struggle to drive symbolic solvers directly.
Table 1: Results for FCoReBench. - / + indicate before / after refinement. Performance for random guessing is 20.13%.
| LLM | Few-shot | PAL (-) | PAL (+) | Logic-LM (-) | Logic-LM (+) | SymPro-LM (-) | SymPro-LM (+) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mixtral 8x7B | 25.06% | 14.98% | 36.09% | 0.21% | 2.04% | 8.08% | 30.09% |
| GPT-3.5-Turbo | 27.02% | 32.66% | 49.19% | 6.04% | 6.58% | 17.08% | 50.35% |
| GPT-4-Turbo | 29.33% | 47.42% | 66.40% | 34.11% | 38.51% | 50.94% | 83.37% |
Hereafter, we focus primarily on GPT-4-Turbo's performance, since it is far superior to the other models. SymPro-LM outperforms few-shot prompting and Logic-LM across all problems in FCoReBench; on average, the improvements (with refinement) are an impressive $54.04$ percentage points over few-shot prompting and $44.86$ points over Logic-LM. Few-shot prompting solves less than a third of the instances with GPT-4-Turbo, suggesting that even the largest LLMs cannot directly perform complex reasoning. While Logic-LM performs better, it is still far from adequate, indicating that combining LLMs with symbolic solvers is not enough for such reasoning problems.
Table 2: Logic-LM’s performance on FCoReBench evaluated with refinement.
| Outcome | GPT-3.5-Turbo | GPT-4-Turbo |
| --- | --- | --- |
| Correct Output | 6.58% | 38.51% |
| Incorrect Output | 62.11% | 52.06% |
| Timeout Error | 2.375% | 2.49% |
| Syntactic Error | 29.04% | 6.91% |
Table 3: Error analysis at a program level for GPT-4-Turbo before and after refinement for PAL and SymPro-LM. Results are averaged over all runs for a problem and further over all problems in FCoReBench.
| Error Type | PAL (- / +) | SymPro-LM (- / +) |
| --- | --- | --- |
| Incorrect Program | 70% / 57% | 58% / 38% |
| Semantically Incorrect Program | 62% / 49.5% | 29% / 20.5% |
| Python Runtime Error | 7% / 4.5% | 13.5% / 5.5% |
| Timeout | 1% / 3% | 15.5% / 12% |
Further qualitative analysis suggests that Logic-LM gets confused in handling the structure of fcore problems. As problem instance size grows, it tends to make syntactic mistakes with smaller LLMs (Table 2). With larger LLMs, syntactic mistakes reduce, but constraints often remain semantically incorrect and do not get corrected through feedback.
Often this is because LLMs are error-prone when enumerating combinatorial constraints, i.e., they struggle to execute implicit for-loops and conditionals (see Appendix F). In contrast, SymPro-LM and PAL handle first-order structure well: writing code for a loop or conditional is not hard, and the actual loop execution is done by a program interpreter. These (size-invariant) programs are then used independently, without any LLM call at inference time, to solve any input instance, generalizing easily to larger instances and highlighting the benefit of using a program interpreter for such combinatorial problems.
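As a simple illustration of this contrast (a sketch in Z3's Python API, not taken from a generated program): the row and column constraints of an $n×n$ sudoku reduce to two short loops that work unchanged for any $n$, whereas a purely textual encoding must spell out every constraint for the given size.

```python
from z3 import Int, Distinct, Solver

n = 9  # the same code handles 16x16, 25x25, ...
cell = [[Int(f"cell_{r}_{c}") for c in range(n)] for r in range(n)]
solver = Solver()
for r in range(n):
    solver.add(Distinct(*cell[r]))                          # each row: all different
for c in range(n):
    solver.add(Distinct(*[cell[r][c] for r in range(n)]))   # each column: all different
```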
At the same time, PAL is also not as effective on FCoReBench. Table 4 shows the effect of feedback and multiple runs on PAL and SymPro-LM: with refinement, SymPro-LM outperforms PAL by $16.97$ percentage points. When LLMs are forced to write programs that perform the complicated reasoning themselves, they tend to produce brute-force solutions that are often either incorrect or slow (see Table 8 in the appendix). This highlights the value of offloading reasoning to a symbolic solver. Interestingly, feedback from solved examples and re-runs is more effective for SymPro-LM (Table 3), as also shown by its larger gains with an increasing number of feedback rounds and runs (Table 4). We hypothesize that this is because declarative programs (generated by SymPro-LM) are easier to correct than imperative programs (produced by PAL).
Table 4: Comparative analysis between PAL and SymPro-LM on FCoReBench for GPT-4-Turbo.
| Feedback rounds | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| PAL | 47.42% | 54.00% | 57.09% | 58.82% | 59.92% |
| SymPro-LM | 50.94% | 62.54% | 68.52% | 71.12% | 71.96% |
| Gain ($\uparrow$) | 3.52% | 8.54% | 11.43% | 12.3% | 12.04% |

(a) Effect of feedback rounds for a single run

| Runs | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| PAL | 59.92% | 62.54% | 63.95% | 65.19% | 66.40% |
| SymPro-LM | 71.96% | 77.21% | 80.06% | 82.06% | 83.37% |
| Gain ($\uparrow$) | 12.04% | 14.67% | 16.11% | 16.87% | 16.97% |

(b) Effect of multiple runs, each with 4 feedback rounds
Table 5: Accuracy and cost comparison between ToT prompting and SymPro-LM with GPT-4-Turbo for 3 problems in FCoReBench. Costs are per test instance for ToT and one time costs per problem for SymPro-LM.
| Problem | Size | ToT Accuracy | ToT Cost | SymPro-LM Accuracy | SymPro-LM Cost |
| --- | --- | --- | --- | --- | --- |
| Latin Squares | 3x3 | 46.33% | $0.1235 | 100% | $0.02 |
| Latin Squares | 4x4 | 32.5% | $0.5135 | 100% | $0.02 |
| Magic Square | 3x3 | 26.25% | $0.4325 | 100% | $0.02 |
| Magic Square | 4x4 | 8% | $0.881 | 100% | $0.02 |
| Sujiko | 3x3 | 7.5% | $0.572 | 100% | $0.02 |
| Sujiko | 4x4 | 0% | $1.676 | 100% | $0.02 |
Comparison with ToT Prompting: Table 5 compares SymPro-LM with ToT prompting on 3 problems. SymPro-LM is far superior in both cost and accuracy, indicating that even the largest LLMs cannot perform complex reasoning on problems with large search depths and branching factors, despite being called multiple times with search-based prompting. Due to its programmatic nature, SymPro-LM generalizes better to larger instances and is also hugely cost-effective, as there is no need to call an LLM for each instance separately. We do not perform further experiments with ToT prompting due to cost considerations.
Figure 4: Effect of increasing problem instance size on baselines and SymPro-LM for GPT-4-Turbo.
Effect of Problem Instance Size: We now report the performance of SymPro-LM and other baselines as problem instance size varies (see Figure 4) for 3 problems in FCoReBench (sudoku, sujiko, and magic square). Increasing the instance size increases the number of variables, the accompanying constraints, and the reasoning steps required to reach a solution. We observe that, being programmatic, SymPro-LM and PAL are relatively robust to increases in input instance size, whereas the performance of Logic-LM and few-shot prompting declines sharply. PAL programs are often inefficient and may see their performance drop when they fail to find a solution within the time limit.
(Figure: test accuracy vs. number of feedback rounds for K-Clique, Keisuke, Number Link, Shinro, Sujiko, and their average.)
(a) Effect of feedback
(Figure: test accuracy vs. number of runs for Car-Sequencing, Dosun Fuwari, K-Metric-Centre, Number Link, Survo, and their average.)
(b) Effect of multiple runs
<details>
<summary>extracted/6211530/Images/solved-examples-count.png Details</summary>

Line chart: percentage of solved test instances versus number of feedback rounds (0 to 4), with one line per number of solved examples used for feedback (0, 1, 4, 7, and 10).
</details>
(c) Effect of # of solved examples
Figure 5: Effect of feedback and multiple runs with GPT-4-Turbo. (a) and (b) show results with 10 solved examples used for feedback; dashed lines show results for individual problems in FCoReBench, coloured lines highlight specific problems, and the bold red line shows the average across all problems. (c) shows the effect of the number of solved examples used for feedback in a single run.
Effect of Feedback on Solved Examples: Figure 5(a) shows the effect of multiple rounds of feedback for SymPro-LM. Feedback helps significantly: using 4 feedback rounds improves performance by $21.02\%$. Even the largest LLMs commit errors, so it is important to verify and correct their work. But feedback on its own is not enough: a single run might end up on a wrong reasoning path that feedback does not correct, which makes multiple runs important for effective reasoning. Using 5 runs improves performance by an additional $11.41\%$ (Figure 5(b)), after which the gains saturate. Performance also increases with the number of solved examples (Figure 5(c)), as each solved example helps detect and correct different errors. However, performance tends to saturate at 7 solved examples, beyond which no new errors are discovered or corrected, even with additional training data.
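For concreteness, the following snippet gives a minimal sketch of how such a feedback and multi-run loop can be organized. The helper names `generate_program`, `run_program`, and `verify` are placeholders for the LLM call, program execution, and the solution-checking script; they are passed in by the caller and are not part of any library.

```python
def best_of_runs(problem, solved_examples, generate_program, run_program, verify,
                 num_runs=5, feedback_rounds=4):
    """Sketch of the feedback / multi-run loop: return the first program that
    reproduces all solved examples, otherwise fall back to the last attempt."""
    fallback = None
    for _ in range(num_runs):                         # independent runs
        program = generate_program(problem, feedback=None)
        for _ in range(feedback_rounds):              # rounds of feedback
            failures = []
            for inp, expected in solved_examples:
                predicted = run_program(program, inp)
                if not verify(inp, predicted, expected):
                    failures.append((inp, predicted, expected))
            if not failures:                          # passes every solved example
                return program
            # show the failing examples to the LLM and ask it to revise the program
            program = generate_program(problem, feedback=(program, failures))
        fallback = fallback or program
    return fallback
```

A run stops early as soon as its program reproduces every solved example, which is consistent with the saturation observed for additional rounds and runs.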
7.1 Results on Other Datasets
Table 6: Results for baselines & SymPro-LM on other benchmarks. Best results with each LLM are highlighted.
| Dataset | | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Logical Deduction | 39.66% | 50.66% | 66.33% | 71.00% | 78.00% | 65.33% | 76.00% | 81.66% | 82.67% | 94.00% |
| ProofWriter | 40.50% | 57.16% | 50.5% | 70.16% | 74.167% | 46.5% | 61.66% | 76.29% | 74.83% | 89.83% |
| PrOntoQA | 49.60% | 83.20% | 98.40% | 72.20% | 97.40% | 83.00% | 98.80% | 99.80% | 91.20% | 97.80% |
Table 6 reports the performance on non-first-order datasets. SymPro-LM outperforms all other baselines on ProofWriter and LogicalDeduction, particularly Logic-LM. This showcases the value of integrating LLMs with symbolic solvers through programs, even for standard reasoning tasks. These experiments suggest that LLMs translate natural language questions into programs that use solvers much more effectively than into symbolic formulations directly. We attribute this to the vast pre-training of LLMs on code (Brown et al., 2020; Chen et al., 2021). For instance, on the LogicalDeduction benchmark, while Logic-LM does not make syntactic errors during translation, it often makes logical errors. These errors decrease significantly when LLMs are prompted to produce programs instead (Figure 6(b)). Error analysis on ProofWriter and PrOntoQA reveals that for more complex natural language questions, LLMs also start making syntactic errors during translation as the number of rules and facts increases. With SymPro-LM these errors are vastly reduced because, apart from the benefit of pre-training, LLMs also use programming constructs like dictionaries and loops to make the most of the structure in these problems (Figure 6(a)). PAL and CoT perform marginally better on PrOntoQA because problems in this dataset require forward-chaining reasoning, which aligns with PAL's and CoT's style of reasoning. Integrating symbolic solvers is not as useful for this dataset, but SymPro-LM still achieves competitive performance.
<details>
<summary>extracted/6211530/Images/prontoQA-example.png Details</summary>

The figure shows a PrOntoQA problem and contrasts how Logic-LM and SymPro-LM encode it: Logic-LM writes explicit predicates, facts, and one rule per statement, while SymPro-LM stores the statements in a property dictionary and generates the solver rules with a loop.

**Problem:** Jumpuses are not dull. Every wumpus is opaque. Wumpuses are dumpuses. Every dumpus is not floral. Dumpuses are numpuses. Each numpus is not luminous. Each numpus is a vumpus. Every vumpus is large. Vumpuses are tumpuses. Every tumpus is not orange. Every tumpus is a zumpus. Zumpuses are dull. Every zumpus is an impus. Every impus is spicy. Every impus is a rompus. Rompuses are not temperate. Every rompus is a yumpus. Sam is a dumpus. Is Sam dull?

**Logic-LM:**
* Predicates: `Wumpus($x, bool)`, `Opaque($x, bool)`, `Numpus($x, bool)`, `Vumpus($x, bool)`, `Large($x, bool)`, `Tumpus($x, bool)`, ...
* Facts: `Dumpus(Sam, True)`
* Rules (one per statement): `Wumpuses($x, True) >>> Opaque($x, True)`, `Wumpuses($x, True) >>> Dumpus($x, True)`, `Vumpuses($x, True) >>> Large($x, True)`, `Vumpuses($x, True) >>> Tumpus($x, True)`, ...

**SymPro-LM:**
* Properties: `"jumpus": {"dull": False}`, `"wumpus": {"opaque": True, "dumpus": True}`, `"dumpus": {"floral": False, "numpus": True}`, `"vumpus": {"large": True, "tumpus": True}`, ...
* Code that generates the rules:
```python
# Add rules using for loops and dicts
for entity, props in properties.items():
    for prop, value in props.items():
        if value:
            s.add(Implies(Bool(f'{entity}_Sam'), Bool(f'{prop}_Sam')))
        else:
            s.add(Implies(Bool(f'{entity}_Sam'), Not(Bool(f'{prop}_Sam'))))
```
Both encodings represent the same problem, but SymPro-LM exploits the shared structure of the statements through a dictionary and a loop instead of writing each rule separately.
</details>
(a) PrOntoQA
<details>
<summary>extracted/6211530/Images/logicaldeduction-example.png Details</summary>

The figure shows a LogicalDeduction puzzle and contrasts the Logic-LM and SymPro-LM encodings of it.

**Problem:** "In an antique car show, there are five vehicles: a truck, a motorcycle, a limousine, a station wagon, and a sedan. The limousine is older than the truck. The sedan is newer than the motorcycle. The station wagon is the oldest. The limousine is newer than the sedan. Which of the following is the second-oldest?"

**Logic-LM:** a declarative formulation with domain `[1, 2, 3, 4, 5]` (1 = oldest, 5 = newest), one variable per vehicle over this domain, and one constraint per statement, e.g. `station_wagon = 1` for "the station wagon is the oldest" and `limousine > truck` for "the limousine is older than the truck".

**SymPro-LM:** a program over the same domain and variables that adds the constraints through a constraint-programming API, e.g.
* `problem.addVariables(['truck', 'motorcycle', 'limousine', 'station_wagon', 'sedan'], domain)`
* `problem.addConstraint(lambda station_wagon: station_wagon == 1, ('station_wagon',))` (station wagon is the oldest)
* `problem.addConstraint(lambda limousine, truck: limousine < truck, ('limousine', 'truck'))` (limousine is older than the truck)

Note that with 1 denoting the oldest vehicle, "the limousine is older than the truck" requires `limousine < truck`; the `limousine > truck` constraint in the Logic-LM formulation is an example of the logical translation errors discussed in Section 7.1, which prompting for programs avoids.
</details>
(b) LogicalDeduction
Figure 6: Examples highlighting the benefits of integrating LLMs with symbolic solvers through programs.
8 Discussion
We analyze FCoReBench to identify where LLMs excel and where the largest models still struggle. Based on SymPro-LM ’s performance, we categorize FCoReBench problems into three broad groups.
1) Problems that SymPro-LM solved with 100% accuracy without any feedback. There are 8 such problems out of the $40$, including vertex-cover and latin-square. These problems have a one-to-one correspondence between the natural language description of the rules and the program that generates the constraints, so the LLM essentially has to perform a pure translation task, which LLMs excel at.
2) Problems that SymPro-LM solved with 100% accuracy, but only after feedback from solved examples. There are 20 such problems. They typically do not have a one-to-one correspondence between rule descriptions and code, thus requiring some reasoning to encode the problem in the solver's language. For example, one must define auxiliary variables and/or compose several primitives to encode a single natural language rule. GPT-4-Turbo initially misses constraints or encodes the problem incorrectly, but with feedback it can spot its mistakes and correct its programs. Examples include k-clique and binairo. In binairo, for instance, GPT-4-Turbo initially encodes the constraint that all columns and rows must be distinct incorrectly, but fixes this mistake after feedback (see Figure 17 in the appendix; a correct encoding of this constraint is sketched after this list). LLMs can leverage their vast pre-training to discover non-trivial encodings for several interesting problems, and solved examples can help guide LLMs to correct solutions in case of mistakes.
3) Problems with accuracy below 100% that is not improved by feedback or multiple runs. For these 12 problems, the LLM finds it difficult to encode some natural language constraint in SMT. Examples include number-link and hamiltonian-path, where GPT-4-Turbo is unable to figure out how to encode the existence of paths as SMT constraints (one standard encoding is sketched after this list). In our opinion, these encodings are non-obvious and may be hard even for average CS students. We hope that further analysis of these 12 domains opens up research directions for neuro-symbolic reasoning with LLMs.
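For illustration, the snippet below sketches one correct way to state binairo's requirement that all rows of the $n \times n$ 0/1 grid be pairwise distinct, over Z3 integer grid variables `X[r][c]` (a sketch of ours, not GPT-4-Turbo's generated program); the analogous constraint is added for the columns. Two rows are distinct iff they differ in at least one column.

```python
from z3 import Or

def add_distinct_rows(s, X, n):
    # two rows are distinct iff they differ in at least one column
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            s.add(Or([X[r1][c] != X[r2][c] for c in range(n)]))
```

Similarly, the sketch below shows one standard position-based SMT encoding of the Hamiltonian-path decision problem, to illustrate the kind of encoding that GPT-4-Turbo does not discover on its own; it assumes `edges` is a list of undirected edges over vertices `0..n-1`.

```python
from z3 import Solver, Int, Distinct, And, sat

def has_hamiltonian_path(n, edges):
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    pos = [Int(f"pos_{v}") for v in range(n)]  # position of vertex v along the path
    s = Solver()
    s.add([And(pos[v] >= 0, pos[v] <= n - 1) for v in range(n)])
    s.add(Distinct(pos))  # every vertex occupies a distinct position
    # vertices in consecutive positions must be joined by an edge:
    # for every non-adjacent ordered pair (u, v), forbid pos[u] == pos[v] + 1
    for u in range(n):
        for v in range(n):
            if u != v and (u, v) not in adj:
                s.add(pos[u] - pos[v] != 1)
    return s.check() == sat
```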
9 Conclusion and Limitations
We investigate the reasoning abilities of LLMs on structured first-order combinatorial reasoning problems. We formally define the task, present FCoReBench, a novel benchmark of $40$ such problems, and find that existing tool-augmented techniques, such as Logic-LM and PAL, fare poorly. In response, we propose SymPro-LM, a new technique that aids LLMs with both program interpreters and symbolic solvers. It uses LLMs to convert text into executable code, which is then run by interpreters to define constraints, allowing symbolic solvers to efficiently tackle the reasoning task. Our extensive experiments show that SymPro-LM ’s integrated approach leads to superior performance on our dataset as well as on existing benchmarks. Error analysis reveals that SymPro-LM struggles on a certain class of problems where the conversion to a symbolic representation is not straightforward. In such cases simple feedback strategies do not improve reasoning; exploring methods to alleviate this is a promising direction for future work. Another direction is to extend the dataset to include images of inputs and outputs, instead of serialized text representations, and to assess the reasoning abilities of vision-language models such as GPT-4V.
Limitations: While we study a wide variety of fcore problems, more such problems always exist, and adding them to FCoReBench remains a direction for future work. Additionally, we assume that input instances and their outputs have a fixed pre-defined (serialized) representation, which may not always be easy to find. Another limitation is that encoding some problems in the solver's language can be complicated; our method relies on the pre-training of LLMs to achieve this without any training or fine-tuning, and addressing this is a direction for future work.
References
- bench authors [2023] BIG bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.
- Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
- Chen et al. [2021] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021.
- Clark et al. [2021] Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI’20, 2021. ISBN 9780999241165.
- Cobbe et al. [2021] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021.
- Colbourn [1984] Charles J. Colbourn. The complexity of completing partial latin squares. Discrete Applied Mathematics, 8(1):25–30, 1984. ISSN 0166-218X. doi: https://doi.org/10.1016/0166-218X(84)90075-1.
- De Biasi [2013] Marzio De Biasi. Binary puzzle is np–complete, 07 2013.
- De Moura and Bjørner [2008] Leonardo De Moura and Nikolaj Bjørner. Z3: An efficient smt solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 337–340. Springer, 2008.
- Demaine and Rudoy [2018] Erik D. Demaine and Mikhail Rudoy. Theoretical Computer Science, 732:80–84, 2018. ISSN 0304-3975. doi: https://doi.org/10.1016/j.tcs.2018.04.031.
- Epstein [1987] D. Epstein. On the np-completeness of cryptarithms. ACM SIGACT News, 18(3):38–40, 1987. doi: 10.1145/24658.24662.
- Fan et al. [2023] Lizhou Fan, Wenyue Hua, Lingyao Li, Haoyang Ling, and Yongfeng Zhang. Nphardeval: Dynamic benchmark on reasoning ability of large language models via complexity classes. arXiv preprint arXiv:2312.14890, 2023.
- Gao et al. [2023] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR, 2023.
- Garey et al. [1976a] M. R. Garey, D. S. Johnson, and Ravi Sethi. The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2):117–129, 1976a. doi: 10.1287/moor.1.2.117.
- Garey et al. [1976b] M. R. Garey, D. S. Johnson, and Ravi Sethi. The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2):117–129, 1976b. ISSN 0364765X, 15265471.
- Gent et al. [2017] Ian P. Gent, Christopher Jefferson, and Peter Nightingale. Complexity of n-queens completion. Journal of Artificial Intelligence Research (JAIR), 58:1–16, 2017. doi: 10.1613/jair.5512.
- Han et al. [2022] Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Shafiq R. Joty, Alexander R. Fabbri, Wojciech Kryscinski, Xi Victoria Lin, Caiming Xiong, and Dragomir Radev. FOLIO: natural language reasoning with first-order logic. CoRR, abs/2209.00840, 2022. doi: 10.48550/ARXIV.2209.00840.
- Hao et al. [2023] Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model, 2023.
- Haraguchi and Ono [2015] Kazuya Haraguchi and Hirotaka Ono. How simple algorithms can solve latin square completion-type puzzles approximately. Journal of Information Processing, 23(3):276–283, 2015. doi: 10.2197/ipsjjip.23.276.
- Higuchi and Kimura [2019] Yuta Higuchi and Kei Kimura. Np-completeness of fill-a-pix and completeness of its fewest clues problem. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E102.A(11):1490–1496, 2019. doi: 10.1587/transfun.E102.A.1490.
- Itai et al. [1982] Alon Itai, Christos H. Papadimitriou, and Jayme Luiz Szwarcfiter. Hamilton paths in grid graphs. SIAM Journal on Computing, 11(4):676–686, 1982. doi: 10.1137/0211056.
- Iwamoto and Ibusuki [2018] Chuzo Iwamoto and Tatsuaki Ibusuki. Dosun-fuwari is np-complete. Journal of Information Processing, 26:358–361, 2018. doi: 10.2197/ipsjjip.26.358.
- Jiang et al. [2024] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024.
- Kis [2004] Tamás Kis. On the complexity of the car sequencing problem. Operations Research Letters, 32(4):331–335, 2004. ISSN 0167-6377. doi: https://doi.org/10.1016/j.orl.2003.09.003.
- Kojima et al. [2022] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
- Liu et al. [2020] Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3622–3628. ijcai.org, 2020. doi: 10.24963/IJCAI.2020/501.
- Lloyd et al. [2022] Huw Lloyd, Matthew Crossley, Mark Sinclair, and Martyn Amos. J-pop: Japanese puzzles as optimization problems. IEEE Transactions on Games, 14(3):391–402, 2022. doi: 10.1109/TG.2021.3081817.
- Nandwani et al. [2021] Yatin Nandwani, Deepanshu Jindal, Mausam, and Parag Singla. Neural learning of one-of-many solutions for combinatorial problems in structured output spaces. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
- Nandwani et al. [2022a] Yatin Nandwani, Vidit Jain, Mausam, and Parag Singla. Neural models for output-space invariance in combinatorial problems. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a.
- Nandwani et al. [2022b] Yatin Nandwani, Rishabh Ranjan, Mausam, and Parag Singla. A solver-free framework for scalable learning in neural ILP architectures. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022b.
- OpenAI [2023] OpenAI. Gpt-4 technical report, 2023.
- Palm et al. [2018] Rasmus Berg Palm, Ulrich Paquet, and Ole Winther. Recurrent relational networks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 3372–3382, 2018.
- Pan et al. [2023] Liangming Pan, Alon Albalak, Xinyi Wang, and William Wang. Logic-lm: Empowering large language models with symbolic solvers for faithful logical reasoning. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 3806–3824. Association for Computational Linguistics, 2023.
- Paranjape et al. [2023] Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Túlio Ribeiro. ART: automatic multi-step reasoning and tool-use for large language models. CoRR, abs/2303.09014, 2023. doi: 10.48550/ARXIV.2303.09014.
- Paulus et al. [2021] Anselm Paulus, Michal Rolínek, Vít Musil, Brandon Amos, and Georg Martius. Comboptnet: Fit the right np-hard problem by learning integer programming constraints. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8443–8453. PMLR, 2021.
- Russell and Norvig [2010] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 3 edition, 2010.
- Saparov and He [2023a] Abulhair Saparov and He He. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023a.
- Saparov and He [2023b] Abulhair Saparov and He He. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023b.
- Schick et al. [2023] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023. doi: 10.48550/ARXIV.2302.04761.
- Tafjord et al. [2021] Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. Proofwriter: Generating implications, proofs, and abductive statements over natural language. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 3621–3634. Association for Computational Linguistics, 2021. doi: 10.18653/V1/2021.FINDINGS-ACL.317.
- Wang et al. [2023] Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, and Yulia Tsvetkov. Can language models solve graph problems in natural language? In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
- Wang et al. [2019] Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6545–6554. PMLR, 09–15 Jun 2019.
- Wei et al. [2022a] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022a.
- Wei et al. [2022b] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022b.
- Wei et al. [2022c] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022c.
- Yang et al. [2022] Kaiyu Yang, Jia Deng, and Danqi Chen. Generating natural language proofs with verifier-guided search. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 89–105, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.7.
- Yao et al. [2023] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601, 2023. doi: 10.48550/ARXIV.2305.10601.
- Yato and Seta [2003] Takayuki Yato and Takahiro Seta. Complexity and completeness of finding another solution and its application to puzzles. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E86-A, 05 2003.
- Ye et al. [2023] Xi Ye, Qiaochu Chen, Isil Dillig, and Greg Durrett. Satlm: Satisfiability-aided language models using declarative prompting. In Proceedings of NeurIPS, 2023.
- Yu et al. [2023] Dongran Yu, Bo Yang, Dayou Liu, Hui Wang, and Shirui Pan. A survey on neural-symbolic learning systems. Neural Networks, 166:105–126, 2023. doi: 10.1016/J.NEUNET.2023.06.028.
- Zheng et al. [2023] Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting improves reasoning in large language models. CoRR, abs/2304.09797, 2023. doi: 10.48550/ARXIV.2304.09797.
- Zhong et al. [2021] Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. AR-LSAT: investigating analytical reasoning of text. CoRR, abs/2104.06598, 2021.
- Zhou et al. [2023] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
Appendix A FCoReBench
A.1 Dataset Details and Statistics
Our dataset, FCoReBench, contains $40$ different fcore problems collected from various sources. Some are logic puzzles from publishing houses like Nikoli, some come from the operations research literature, some are from the annual SAT competition, and others are well-known computational problems from the Computer Science literature, such as hamiltonian-path and minimum-dominating-set. Table 7 gives the details of all problems in our dataset. To create our training and test sets, we write scripts to synthetically generate problem instances; these can be used to extend the dataset as needed with any number of instances of any size. For experimentation, we generate a set of solved training instances and a separate set of test instances. Each problem also has a natural language description of its rules and a natural language description of the input format, which specifies how input problem instances and their solutions are represented in text. The next few sections give illustrative examples and other details.
A.2 Natural Language Description of Rules
This section describes how we create the natural language descriptions of rules for problems in FCoReBench. We extract rules from sources such as the Wikipedia or Nikoli pages of the corresponding problems. These rules are reworded by a human expert to reduce dataset contamination, and another human expert ensures that there are no ambiguities in the reworded description. The rules are generalized when needed (e.g., from a $9× 9$ Sudoku to an $n× n$ Sudoku). The following sections provide a few examples.
A.2.1 Example Problem: Survo
<details>
<summary>extracted/6211530/Images/survo-example.png Details</summary>

A partially filled Survo board with its required row and column sums (left) and the corresponding completed board (right).
</details>
Figure 7: Conversion of an input survo problem instance to its solution.
Survo (Figure 7) is an example problem from FCoReBench. The task is to fill an $m× n$ rectangular board with numbers from $1$ to $m*n$ such that each row and column sums to an intended target (Survo-Wikipedia). The box given below describes the rules of Survo more formally in natural language.
We are given a partially filled $m× n$ rectangular board, along with intended row sums and column sums.
- Empty cells are to be filled with numbers.
- Numbers in the solved board can range from $1$ to $m*n$.
- Numbers present in filled cells of the input board cannot be removed.
- Each number from $1$ to $m*n$ must appear exactly once on the solved board.
- All empty cells should be filled such that each row and each column of the solved board sums to the respective row sum and column sum specified in the input.
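For illustration, these rules map quite directly onto solver constraints; the following is a minimal sketch of a Z3 encoding (ours, for illustration, not an LLM-generated program). It assumes `board` is an m x n grid with 0 marking empty cells, and that `row_sums` and `col_sums` hold the required sums.

```python
from z3 import Solver, Int, Distinct, And, Sum, sat

def solve_survo(board, row_sums, col_sums):
    m, n = len(board), len(board[0])
    X = [[Int(f"x_{r}_{c}") for c in range(n)] for r in range(m)]
    s = Solver()
    # numbers range from 1 to m*n
    s.add([And(X[r][c] >= 1, X[r][c] <= m * n) for r in range(m) for c in range(n)])
    # each number appears exactly once
    s.add(Distinct([X[r][c] for r in range(m) for c in range(n)]))
    # numbers already present in the input cannot be changed
    s.add([X[r][c] == board[r][c]
           for r in range(m) for c in range(n) if board[r][c] != 0])
    # row and column sums must match the targets
    s.add([Sum(X[r]) == row_sums[r] for r in range(m)])
    s.add([Sum([X[r][c] for r in range(m)]) == col_sums[c] for c in range(n)])
    if s.check() == sat:
        model = s.model()
        return [[model[X[r][c]].as_long() for c in range(n)] for r in range(m)]
    return None
```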
A.2.2 Example Problem: Hamiltonian Path
<details>
<summary>extracted/6211530/Images/hamiltonian-path-example.png Details</summary>

A small undirected graph whose vertices are drawn as yellow circles; the red edges trace a path that visits every vertex exactly once.
</details>
Figure 8: A sample input graph instance and its solution to the hamiltonian-path problem. Vertices are represented by yellow circles and the hamiltonian path is represented by the red line.
Hamiltonian path is a well-known problem in graph theory in which we have to find a path in an undirected, unweighted graph such that each vertex is visited exactly once. We consider the decision variant of this problem, which is equally hard in terms of computational complexity. The box below shows the rules for this problem expressed formally in natural language.
We are given an undirected and unweighted graph.
- We have to determine whether the graph contains a path that visits every vertex exactly once.
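For reference, verifying a candidate path against these rules is straightforward: it must visit every vertex exactly once, and every consecutive pair of vertices on it must be joined by an edge. A minimal sketch, assuming vertices are numbered 0 to n-1 and the path is given as a list of vertex indices:

```python
def is_hamiltonian_path(n, edges, path):
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    if sorted(path) != list(range(n)):        # every vertex exactly once
        return False
    return all((path[i], path[i + 1]) in adj  # consecutive vertices are adjacent
               for i in range(len(path) - 1))
```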
A.2.3 Example Problem: Dosun Fuwari
<details>
<summary>extracted/6211530/Images/dosun-fuwari-example.png Details</summary>

A Dosun Fuwari board with blackened cells and bold subgrid boundaries (left) and its solution with balloons and iron balls placed according to the rules (right).
</details>
Figure 9: Conversion of an input dosun fuwari problem instance to its solution.
Dosun Fuwari (Nikoli), shown in Figure 9, is another example problem from FCoReBench. We are given a square board with regions (cells enclosed by bold lines) and have to fill the board with balloons and iron balls such that one balloon and one iron ball are placed in each region. Balloons are light and float, so they must be placed in one of the cells at the top, in a cell right under a black cell (filled-in cell), or under other balloons. Iron balls are heavy and sink, so they must be placed in one of the cells at the bottom, in a cell right over a black cell, or over other iron balls. The box given below gives a more formal description of the rules of dosun fuwari in natural language.
We are given a partially filled n*n square board, along with the subgrids of the input board. Cells in the input board can be empty, filled (blackened, nothing can be placed in them), balloons, or iron balls.
- The only allowed action is to place balloons or iron balls in some or all of the empty cells.
- Each subgrid specified in the input must contain exactly one balloon and one iron ball in the solved board.
- Because balloons are buoyant, they must be positioned either in one of the cells at the top of the board, or in a cell directly below a filled cell (one of the blackened cells in the input), or below other balloons.
- Iron balls, being dense, sink and must therefore be positioned either in one of the cells at the bottom of the board, or in a cell directly above a filled cell, or above another iron ball.
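To illustrate how these rules can be expressed in an SMT solver's language, the sketch below (ours, not an LLM-generated program) adds the constraints over Z3 integer grid variables `X[r][c]` with values 0 to 3 for empty, filled, balloon, and iron ball; `grid` holds the input board and `subgrids` lists the row-major cell indices of each region.

```python
from z3 import And, Or, Implies, Sum, If

EMPTY, FILLED, BALLOON, IRON = 0, 1, 2, 3

def add_dosun_fuwari_constraints(s, X, grid, subgrids, n):
    for r in range(n):
        for c in range(n):
            s.add(And(X[r][c] >= 0, X[r][c] <= 3))
            if grid[r][c] != EMPTY:
                s.add(X[r][c] == grid[r][c])      # given cells cannot change
            else:
                s.add(X[r][c] != FILLED)          # we may only add balloons or iron balls
            # balloons float: top row, or directly below a filled cell or another balloon
            if r > 0:
                s.add(Implies(X[r][c] == BALLOON,
                              Or(X[r - 1][c] == FILLED, X[r - 1][c] == BALLOON)))
            # iron balls sink: bottom row, or directly above a filled cell or another iron ball
            if r < n - 1:
                s.add(Implies(X[r][c] == IRON,
                              Or(X[r + 1][c] == FILLED, X[r + 1][c] == IRON)))
    # exactly one balloon and one iron ball per subgrid
    for cells in subgrids:
        rc = [divmod(i, n) for i in cells]        # row-major index -> (row, col)
        s.add(Sum([If(X[r][c] == BALLOON, 1, 0) for r, c in rc]) == 1)
        s.add(Sum([If(X[r][c] == IRON, 1, 0) for r, c in rc]) == 1)
```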
A.3 Natural Language Description of Input and Output format
Many of the problems we consider have input and output instances that are typically not represented as text. For each problem, we therefore describe a straightforward conversion of the input and output space to text in natural language. The following sections give examples for a few problems from FCoReBench.
A.3.1 Example Problem: Survo
<details>
<summary>extracted/6211530/Images/survo-input-representation.png Details</summary>

A partially filled Survo board shown as a grid (left) and its serialized text form (right), with empty cells written as 0 and the row and column sums included.
</details>
Figure 10: Representation of input instances of survo as text.
Figure 10 illustrates the conversion of survo inputs, originally represented as grid images, to text. Empty cells are denoted by 0s and filled cells carry their corresponding values. For a given $m× n$ board, each of the first $m$ lines has $n+1$ space-separated integers, with the first $n$ integers representing one row of the input board and the $(n+1)^{th}$ integer representing the required row sum. The last line contains $n$ integers representing the column sums. The box below describes this conversion more formally in natural language.
Input Format:
- The input will have $m+1$ lines.
- The first $m$ lines will have $n+1$ space-separated integers.
- Each of these $m$ lines represents one row of the partially solved input board ($n$ integers), followed by the required row sum (a single integer).
- The last line of the input will have $n$ space-separated integers, each of which represents the required column sum in the solved board.

Sample Input: 0 6 0 0 0 30 8 1 0 0 0 17 0 9 3 0 30 27 16 10 25

Output Format:
- The output should have $m$ lines, each representing one row of the solved board.
- Each of these $m$ lines should have $n$ space-separated integers representing the cells of the solved board.
- Each integer should be from $1$ to $m*n$.

Sample Output:
12 6 2 10
8 1 5 4
7 9 3 11
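A minimal sketch of a parser for this serialized input format (assuming the whole input is given as a single string):

```python
def parse_survo_input(text):
    lines = [line.split() for line in text.strip().splitlines()]
    board = [[int(x) for x in row[:-1]] for row in lines[:-1]]   # first n entries per line
    row_sums = [int(row[-1]) for row in lines[:-1]]              # last entry of each board line
    col_sums = [int(x) for x in lines[-1]]                       # final line: column sums
    return board, row_sums, col_sums
```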
A.3.2 Example Problem: Dosun Fuwari
<details>
<summary>extracted/6211530/Images/dosun-fuwari-input-representation.png Details</summary>

A Dosun Fuwari board with its subgrids (left) and the corresponding text representation (right): the board rows, a separator line, and one line of row-major cell indices per subgrid.
</details>
Figure 11: Representation of input instances to dosun-fuwari as text.
Figure 11 illustrates the conversion of inputs to dosun-fuwari, originally represented as grid images, to text. The first few lines represent the input board, followed by a separator line containing --------, after which each line contains space-separated integers describing one subgrid of the input board. Cells are numbered in row-major order starting from 0, and this numbering is used to identify the cells listed in each subgrid line. In the lines representing the input board, 0s denote the empty cells that must be filled, 1s denote blackened cells, 2s denote balloons, and 3s denote iron balls. The box below describes these rules more formally in natural language. {mdframed}
Input-Format:
- The first few lines represent the input board, followed by a line containing --------, which acts as a separator, followed by several lines where each line represents one subgrid
- Each of the lines representing the input board will have space-separated integers ranging from 0 to 3
- 0 denotes empty cells, 1 denotes a filled cell (blackened cell), 2 denotes a cell with a balloon, 3 denotes a cell with an iron ball
- After the board, there is a separator line containing --------
- Each of the following lines has space-separated elements representing the subgrids on the input board
- Each of these lines has integers representing cells of a subgrid
- Cells are numbered in row-major order starting from 0, and this numbering is used to represent cells in each of the lines describing the subgrids

Sample-Input:
0 0 0 0
0 1 0 0
0 0 0 0
0 0 0 1
--------
0 1
2 3 6 10
4 8 12
5 9 13 14 15
7 11

Output-Format:
- The output should contain as many lines as the size of the input board, each representing one row of the solved board
- Each row should have n space-separated integers (ranging from 0-3) where n is the size of the input board
- Empty cells will be denoted by 0s, filled cells (blackened) by 1s, balloons by 2s and iron balls by 3s

Sample-Output:
2 3 0 2
2 1 0 2
0 2 3 3
3 0 3 1
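For concreteness, the following is a minimal parsing sketch of this textual format; it is illustrative only (the function name and file path are arbitrary, and it is not one of the generated programs shown in the figures).

```python
# Minimal sketch (illustrative only): parse the dosun-fuwari text format above.
def parse_dosun_fuwari(path="input.txt"):
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    sep = lines.index("--------")  # separator between the board and the subgrids
    board = [[int(x) for x in row.split()] for row in lines[:sep]]
    subgrids = [[int(x) for x in row.split()] for row in lines[sep + 1:]]
    return board, subgrids
```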
A.3.3 Example Problem: Hamiltonian Path
Figure 12: Representation of input instances to hamiltonian-path as text.
Figure 12 illustrates the conversion of inputs to hamiltonian-path, originally represented as graph images, to text. The first line denotes the number of vertices N in the graph, and the nodes are numbered from 0 to N-1. Each subsequent line represents an edge of the graph and contains two space-separated integers (using this numbering). The output is a single word (YES/NO) indicating whether a Hamiltonian path exists in the graph. The box below describes this more formally in natural language. {mdframed}
Input-Format:
- The first line will contain a single integer N, the number of nodes in the graph
- The nodes of the graph will be numbered from 0 to N-1
- Each of the subsequent lines will represent an edge of the graph and will contain two space-separated integers (according to the numbering defined above)

Sample-Input:
5
0 1
1 2
2 3
3 4

Output-Format:
- The output should contain a single line with a single word
- The word should be YES if a path exists in the input graph according to constraints specified above and NO otherwise

Sample-Output:
YES
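As with dosun-fuwari, a short illustrative sketch (ours, not taken from the figures; names are arbitrary) of reading this format into an adjacency list:

```python
# Minimal sketch (illustrative only): parse the hamiltonian-path text format above.
def parse_graph(path="input.txt"):
    with open(path) as f:
        tokens = f.read().split()
    n = int(tokens[0])
    adjacency = {v: [] for v in range(n)}
    # The remaining tokens come in pairs, one pair per undirected edge.
    for u, v in zip(map(int, tokens[1::2]), map(int, tokens[2::2])):
        adjacency[u].append(v)
        adjacency[v].append(u)
    return n, adjacency
```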
Table 7: Names of problems in FCoReBench, number of samples in the training set, number of samples in the test set, average size of input instances in the training set, average size of input instances in the test set, and computational complexity. The brackets in the 4th column describe how input instance sizes are measured. A '?' in the computational complexity column indicates that no complexity result is available for the corresponding problem.
| Problem | # Train | # Test | Avg. train instance size | Avg. test instance size | Complexity |
| --- | --- | --- | --- | --- | --- |
| 3-Partition (Non Decision) | 15 | 30 | 12 (array size) | 17.7 | NP-Hard |
| 3-Partition (Decision) | 15 | 30 | 12 (array size) | 17.7 | NP-Complete |
| Binairo | 15 | 50 | 4.0 $×$ 4.0 (grid size) | 6.96 $×$ 6.96 | NP-Hard [De Biasi, 2013] |
| Car-Sequencing | 15 | 30 | 6.96, 3.66, 4.33 (# of cars, # of options, # of classes) | 9.06, 5.66, 6.33 | NP-Hard [Kis, 2004] |
| Clique Cover | 15 | 30 | 6.26, 9.4 (# of nodes, # of edges) | 12.9, 31.4 | NP-Complete |
| Cryptarithmetic | 15 | 30 | 4.32 (Average # of digits in the two operands ) | 4.26 | NP-Hard [Epstein, 1987] |
| Dosun Fuwari | 15 | 30 | 3.066 $×$ 3.066 (grid size) | 5.23 $×$ 5.23 | NP-Hard [Iwamoto and Ibusuki, 2018] |
| Futoshiki | 15 | 47 | 5 $×$ 5 (grid size) | 7.57 $×$ 7.57 | NP-Hard [Lloyd et al., 2022] |
| Fill-a-pix | 15 | 35 | $2.87× 2.87$ (grid size) | $4.1× 4.1$ | NP-Hard [Higuchi and Kimura, 2019] |
| Flow-Shop | 15 | 30 | 6.06, 3.4 (# of jobs, # of machines) | 3.83, 9.13 | NP-Complete [Garey et al., 1976a] |
| Factory Workers | 15 | 30 | 5.73, 12.66 (# of factories, # of workers) | 12.35, 30.0 | ? |
| Graph Coloring | 15 | 30 | 5.13, 6.8 (# of nodes, # of edges) | 9, 21.06 | NP-Complete [Gent et al., 2017] |
| Hamiltonian Path | 15 | 30 | 5.93, 8.6 (# of nodes, # of edges) | 13.0, 19.77 | NP-Complete |
| Hamiltonian Cycle | 15 | 30 | 5.93, 8.6 (# of nodes, # of edges) | 11.07, 18.67 | NP-Complete |
| Hidato | 15 | 45 | $2.87× 2.87$ (grid size) | $4.1× 4.1$ | NP-Hard [Itai et al., 1982] |
| Independent Set | 12 | 30 | 5.8, 7.2 (# of nodes, # of edges) | 14.2, 29.8 | NP-Complete |
| Inshi-No-Heya | 15 | 49 | 5.0 $×$ 5.0 (grid size) | 6.5 $×$ 6.5 | ? |
| Job-Shop | 15 | 30 | 3.66, 3.66 (# of jobs, # of machines) | 9, 9 | NP-Complete [Garey et al., 1976b] |
| K-Clique | 15 | 31 | 4.87, 7.6 (# of nodes, # of edges) | 8.84, 26.97 | NP-Complete |
| Keisuke | 15 | 30 | 4.33 $×$ 4.33 (grid size) | 5.83 $×$ 5.83 | ? |
| Ken Ken | 15 | 20 | 3.26 $×$ 3.26 (grid size) | 5.2 $×$ 5.2 | NP-Hard [Haraguchi and Ono, 2015] |
| Knapsack | 15 | 30 | 4.8 (array size) | 24.56 | NP-Hard |
| K Metric Centre | 15 | 30 | 4.5 (# of nodes) | 7 | NP-Hard |
| Latin Square | 15 | 50 | 6 $×$ 6.0 (grid size) | 14.3 $×$ 14.3 | NP-Hard [Colbourn, 1984] |
| Longest Path Problem | 15 | 30 | 6.2, 5.87 (# of nodes, # of edges) | 12.6, 16.3 | NP-Complete |
| Magic Square | 15 | 30 | 3.0 $×$ 3.0 (grid size) | 4.33 $×$ 4.33 | ? |
| Minimum Dominating Set | 15 | 30 | 6.0, 17.73 (# of nodes, # of edges) | 14.53, 45.0 | NP-Complete |
| N-Queens | 15 | 30 | 3.8 $×$ 3.8 (grid size) | 6.33 $×$ 6.33 | NP-Hard [Gent et al., 2017] |
| Number Link | 15 | 50 | 4 $×$ 4 (grid size) | 7.1 $×$ 7.1 | NP-Hard |
| Partition Problem | 15 | 35 | 7.06 (array size) | 15 | NP-Complete |
| PRP | 15 | 30 | 4.93, 12.6 (# of units, # of days) | 6.7, 23.9 | ? |
| Shinro | 15 | 30 | 5.13 $×$ 5.13 (grid size) | 9.2 $×$ 9.2 | ? |
| Subset Sum | 15 | 30 | 3.67 (array size) | 11.87 | NP-Complete |
| Summle | 15 | 20 | 2.33 (# of equations) | 3.75 | ? |
| Sudoku | 15 | 50 | 4.0 $×$ 4.0 (grid size) | 13.3 $×$ 13.3 | NP-Hard [Yato and Seta, 2003] |
| Sujiko | 15 | 45 | 3.0 $×$ 3.0 (grid size) | 4.0 $×$ 4.0 | ? |
| Survo | 15 | 47 | 13.5 (area of grid) | 20.25 | ? |
| Symmetric Sudoku | 15 | 30 | 4 $×$ 4 (grid size) | 6.5 $×$ 6.5 | ? |
| Sliding Tiles | 15 | 30 | $2.66× 2.66$ , 6.13 (grid size, search depth) | $3.63× 3.63$ , 8.83 | NP-Complete [Demaine and Rudoy, 2018] |
| Vertex Cover | 14 | 30 | 6.4, 13.4 (# of nodes, # of edges) | 12.6, 40.4 | NP-Complete |
Appendix B Prompt Templates
In this section we provide the prompt templates used for our experiments on FCoReBench, including the templates for the baselines, for SymPro-LM, and for providing feedback.
B.1 Few-Shot Prompt Template
{mdframed}
Task: <Description of the Rules of the problems>
Input-Format: <Description of Textual Representation of Inputs>
<Input Few Shot Example-1>
<Input Few Shot Example-2>
........................
........................
<Input Few Shot Example-n>

Output-Format: <Description of Textual Representation of Outputs>
<Output of Few Shot Example-1>
<Output of Few Shot Example-2>
............................
............................
<Output of Few Shot Example-n>
Input problem instance to be solved: <Problem Instance from the Test Set>
B.2 PAL Prompt Template
The following box describes the base prompt template used for PAL experiments with FCoReBench. {mdframed}
Write a Python program to solve the following problem: Task: <Description of the Rules of the problem>
Input-Format: <Description of Textual Representation of Inputs> Sample-Input: <Sample Input from Feedback Set>
Output-Format: <Description of Textual Representation of Outputs> Sample-Output: <Output of Sample Input from Feedback Set>
Don’t write anything apart from the Python program; use Python comments if needed. The Python program is expected to read the input from input.txt and write the output to a file named output.txt. The Python program must only use standard Python libraries.
B.3 SymPro-LM Template
B.3.1 Base Prompt
{mdframed}
Write a Python program to solve the following problem: Task: <Description of the Rules of the problem>
Input-Format: <Description of Textual Representation of Inputs> Sample-Input: <Sample Input from Feedback Set>
Output-Format: <Description of Textual Representation of Outputs> Sample-Output: <Output of Sample Input from Feedback Set>
The Python program must read the input from input.txt and convert that particular input to the corresponding constraints, which it should pass to the Z3 solver, and then it should use the Z3 solver’s output to write the solution to a file named output.txt
Don’t write anything apart from the Python program; use Python comments if needed.
B.4 Feedback Prompt Templates
These prompt templates are used to provide feedback in the case of SymPro-LM or PAL.
B.4.1 Programming Errors
Your code is incorrect and produces the following runtime error: <RUN TIME ERROR> for the following input: <INPUT>. Rewrite your code and fix the mistake.
B.4.2 Verification Error
Your code is incorrect: when run on the input <INPUT>, the output produced is <OUTPUT-GENERATED>, which is incorrect, whereas one of the correct outputs is <GOLD-OUTPUT>. Rewrite your code and fix the mistake.
B.4.3 Timeout Error
Your code was inefficient and took more than <TIME-LIMIT> seconds to execute for the following input: <INPUT>. Rewrite the code and optimize it.
B.5 Logic-LM Prompt Template
The following box describes the prompt used for Logic-LM experiments with FCoReBench; the prompt converts the input to its symbolic representation. {mdframed}
Task: <Description of the Rules of the problem>
Input-Format: <Description of Textual Representation of Inputs> Sample-Input: <Sample Input from Feedback Set>
Output-Format: <Description of Textual Representation of Outputs> Sample-Output: <Output of Sample Input from Feedback Set>
Input problem to be solved: <Problem Instance from the Test Set>
The task is to declare variables and the corresponding constraints on them in SMT2 for the input mentioned above. The variables and constraints should be such that once the variables are solved for, one can use the solution to the variables (which satisfies the constraints) to get to the output in the desired format for the above mentioned input. Only Write the SMT2 code and nothing else. Write the complete set of SMT2 variables and constraints. Enclose SMT2 code in ‘‘‘smt2 ‘‘‘
B.6 ToT
In this section we give an example of the ToT prompts used for experiments on FCoReBench. We use latin square as the running example.
B.6.1 Propose Prompt
This prompt is called for each search node to get the possible next states. {mdframed}
Task: We are given a nxn partially solved board and have to solve it according to the following rules:
- We need to replace the 0s with numbers from 1-n.
- Non-zero numbers on the board cannot be replaced.
- Each number from 1-n must appear exactly once in each column and row in the solved board

Given a board, decide which cell to fill in next and the number to fill it with, each possible next step is separated by a new line. You can output up-to 3 next steps. If the input board is fully filled or no valid next step exists output only ’END’.

Sample-Input-1: 1 0 3 2 0 0 0 1 2
Possible next steps for Sample Input-1:
1 2 3 2 0 0 0 1 2
1 0 3 2 0 0 3 1 2
1 0 3 2 3 0 0 1 2

Sample-Input-2: 1 2 3 2 3 1 3 1 2
Possible next steps for Sample Input-2:
END

Input: <node from the search tree>
Possible next steps for Input:
B.6.2 Value Prompt
This prompt is called for each search node to evaluate how likely it is to get to the solution from that node. We use this to prune the search tree. {mdframed}
Task: We are given a nxn partially solved board and have to solve it according to the following rules:
- We need to replace the 0s with numbers from 1-n.
- Non-zero numbers already on the board cannot be replaced.
- Each number from 1-n must appear exactly once in each column and row in the solved board.

Given a partially filled board, evaluate how likely it is to reach a valid solution (sure/likely/impossible)

Output-Format: The output should have two lines as follows:
<Reasoning>
<Sure/Likely/Impossible>

Sample-Input-1: 0 0 0 0 0 0 0 0 0
Board is empty, hence it is always possible to get to a solution.
Sure

Sample-Input-2: 1 0 3 2 0 0 0 1 2
No constraint is violated till now and it is likely to get to a solution.
Likely

Sample-Input-3: 1 1 3 2 0 0 0 1 2
Constraint violated in first row.
Impossible

Input: <node from the search tree>
Appendix C Experimental Details
C.1 FCoReBench
All methods are evaluated zero-shot, meaning no in-context demonstrations of the task are provided to the LLM. We choose the zero-shot setting for FCoReBench because of the structured nature of the problems: providing demonstrations of highly related problem instances to the LLM would be unfair. The LLM is only given a description of the rules of the problem and the task it has to perform. For PAL and SymPro-LM, we present results with 10 solved examples used for feedback.
C.1.1 ToT prompting
We evaluate ToT prompting [Yao et al., 2023] on 3 problems in FCoReBench. Our implementation closely resembles the official implementation, which we adapt for grid-based logical puzzles. We use a BFS-based approach with propose and value prompts; example prompts for latin square can be found in Appendix B.6. Problems in our benchmark have huge branching factors, so to reduce experimentation cost we greedily prune the search frontier to 5 nodes at each depth based on the scores assigned by the LLM during the value stage. Additionally, during the propose stage we prompt the LLM to output at most 3 possible next steps. The temperature is set to 0.0 for reproducibility. Unlike the original implementation, problems in our benchmark can have varying search depths, so we explicitly ask the LLM to output ’END’ once a terminal node is reached. At any depth, if a terminal node is among the best nodes we terminate the search and return the terminal nodes at that depth; otherwise we search up to a maximum depth governed by the problem instance. A minimal sketch of this pruned BFS loop is given below.
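The following sketch is illustrative only (it is not the official ToT code); `llm_propose` and `llm_value` are hypothetical wrappers around the propose and value prompts of Appendix B.6.

```python
# Minimal sketch of the pruned BFS used for ToT (illustrative only).
# llm_propose(node) -> list of up to 3 candidate next boards, or ["END"] at a terminal node.
# llm_value(node)   -> numeric score (e.g., 2 for "sure", 1 for "likely", 0 for "impossible").

def tot_search(root, max_depth, llm_propose, llm_value, beam_size=5):
    frontier = [root]
    for _ in range(max_depth):
        children = []
        for node in frontier:
            steps = llm_propose(node)
            if steps == ["END"]:
                return node  # a terminal node is among the best nodes: stop and return it
            children.extend(steps)
        # Greedily keep the highest-scoring nodes according to the value prompt.
        frontier = sorted(children, key=llm_value, reverse=True)[:beam_size]
    return frontier[0] if frontier else None
```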
C.2 Other Datasets
We evaluate SymPro-LM on 3 other datasets apart from FCoReBench. Our evaluation closely follows Logic-LM's evaluation [Pan et al., 2023]. For the baselines we use the same prompts as Logic-LM. Logic-LM did not evaluate PAL, so we write PAL prompts ourselves, similar to the CoT prompts used by Logic-LM. For SymPro-LM we also write the prompts ourselves and use the same in-context examples as Logic-LM. We instruct the LLM to write a Python program that parses the input problem, sets up variables/constraints, passes them to a symbolic solver, calls the solver, and prints the final answer using the solver's output. For LogicalDeduction we use the python-constraint package (https://github.com/python-constraint/python-constraint), a CSP solver; for the other datasets we use the Z3 solver (https://pypi.org/project/z3-solver/). Since all problems are single-correct MCQ questions, we use accuracy as our metric. Like Logic-LM, if there is an error during execution of the program generated by the LLM, we fall back on chain-of-thought to predict the answer. The following sections provide descriptions of the datasets used.
C.2.1 PrOntoQA
PrOntoQA [Saparov and He, 2023a] is a recent synthetic dataset created to analyze the deductive reasoning capacity of LLMs. We use the hardest fictional characters version of the dataset, based on the results in [Saparov and He, 2023a]. Each version is divided into different subsets depending on the number of reasoning hops required. We use the hardest 5-hop subset for evaluation. Each question in PrOntoQA aims to validate a new fact’s veracity, such as “True or false: Alex is not shy.” The following box provides an example: {mdframed}
Context: Each jompus is fruity. Every jompus is a wumpus. Every wumpus is not transparent. Wumpuses are tumpuses. Tumpuses are mean. Tumpuses are vumpuses. Every vumpus is cold. Each vumpus is a yumpus. Yumpuses are orange. Yumpuses are numpuses. Numpuses are dull. Each numpus is a dumpus. Every dumpus is not shy. Impuses are shy. Dumpuses are rompuses. Each rompus is liquid. Rompuses are zumpuses. Alex is a tumpus.

Question: True or false: Alex is not shy.

Options:
A) True
B) False
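As an illustration of the solver-based formulation described above (this sketch is ours and is not the program actually generated in our experiments), the relevant facts and rules of this example can be encoded as Boolean implications in Z3, and the query checked by asserting its negation and testing for unsatisfiability. Since PrOntoQA questions are binary, failure to entail is mapped to option B here.

```python
# Hedged sketch: entailment check for the PrOntoQA example above with Z3.
from z3 import Bools, Implies, Not, Solver, unsat

tumpus, vumpus, yumpus, numpus, dumpus, shy = Bools(
    "tumpus vumpus yumpus numpus dumpus shy")  # properties of Alex

s = Solver()
s.add(tumpus)                      # Alex is a tumpus
s.add(Implies(tumpus, vumpus))     # Tumpuses are vumpuses
s.add(Implies(vumpus, yumpus))     # Each vumpus is a yumpus
s.add(Implies(yumpus, numpus))     # Yumpuses are numpuses
s.add(Implies(numpus, dumpus))     # Each numpus is a dumpus
s.add(Implies(dumpus, Not(shy)))   # Every dumpus is not shy

# Assert the negation of "Alex is not shy"; UNSAT means the statement is entailed.
s.add(shy)
print("A) True" if s.check() == unsat else "B) False")
```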
C.2.2 ProofWriter
ProofWriter [Tafjord et al., 2021] is another commonly used dataset for deductive logical reasoning. Compared with PrOntoQA, the problems are expressed in a more naturalistic language form. We use the open-world assumption (OWA) subset, in which each example is a (problem, goal) pair and the label is one of PROVED, DISPROVED, or UNKNOWN. The dataset is divided into five parts, requiring 0, $≤$ 1, $≤$ 2, $≤$ 3, and $≤$ 5 hops of reasoning, respectively. We evaluate on the hardest depth-5 subset. To reduce overall experimentation cost, we randomly sample 600 examples from the test set, ensuring a balanced label distribution. The following box provides an example: {mdframed}
Context: Anne is quiet. Erin is furry. Erin is green. Fiona is furry. Fiona is quiet. Fiona is red. Fiona is rough. Fiona is white. Harry is furry. Harry is quiet. Harry is white. Young people are furry. If Anne is quiet then Anne is red. Young, green people are rough. If someone is green then they are white. If someone is furry and quiet then they are white. If someone is young and white then they are rough. All red people are young.

Question: Based on the above information, is the following statement true, false, or unknown? Anne is white.

Options:
A) True
B) False
C) Uncertain
C.2.3 LogicalDeduction
LogicalDeduction [BIG-bench authors, 2023] is a challenging logical reasoning task from the BIG-bench collaborative benchmark. The problems mostly require deducing the order of a sequence of objects from a minimal set of conditions. We use the full test set, consisting of 300 examples. The following box provides an example: {mdframed}
Context: The following paragraphs each describe a set of three objects arranged in a fixed order. The statements are logically consistent within each paragraph. In an antique car show, there are three vehicles: a station wagon, a convertible, and a minivan. The station wagon is the oldest. The minivan is newer than the convertible.

Question: Which of the following is true?

Options:
A) The station wagon is the second-newest.
B) The convertible is the second-newest.
C) The minivan is the second-newest.
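For this example, a minimal sketch (ours, for illustration; not the program generated in our experiments) of the kind of CSP formulation SymPro-LM produces with the python-constraint package mentioned in Appendix C.2:

```python
# Illustrative sketch: CSP encoding of the antique car show example with python-constraint.
# Position 1 = oldest, 3 = newest.
from constraint import Problem, AllDifferentConstraint

problem = Problem()
vehicles = ["station_wagon", "convertible", "minivan"]
problem.addVariables(vehicles, [1, 2, 3])
problem.addConstraint(AllDifferentConstraint())
problem.addConstraint(lambda w: w == 1, ["station_wagon"])             # the station wagon is the oldest
problem.addConstraint(lambda m, c: m > c, ["minivan", "convertible"])  # the minivan is newer than the convertible

solution = problem.getSolution()
second_newest = [v for v, pos in solution.items() if pos == 2][0]
print(second_newest)  # convertible -> option B
```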
C.3 Hardware Details
All experiments were conducted on an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz (32 cores, 64-bit) with 512 KiB L1 cache, 16 MiB L2 cache, and 22 MiB L3 cache. We accessed GPT-4-Turbo and GPT-3.5-Turbo via the OpenAI API. Mixtral 8x7B was accessed via the Mistral AI API, although its weights are publicly available; we preferred the API over running the model locally for ease of setup and for consistency with our other experiments, which all used APIs.
Appendix D Additional Results
D.1 Inference Time
The following table reports the average inference time on test-set instances for a few illustrative problems in FCoReBench.
| Problem | PAL (s) | SymPro-LM (s) |
| --- | --- | --- |
| Sudoku | 2.01 | 0.215 |
| Latin Square | 5.46 | 0.2 |
| Cryptarithmetic | 0.83 | 0.73 |
| Independent Set | 1.438 | 0.106 |
| Minimum Dominating Set | 0.98 | 0.112 |
| Sujiko | 0.742 | 0.102 |
| Vertex Cover | 1.58 | 0.105 |
Table 8: Average inference time in seconds of SymPro-LM and PAL for test set instances for selected problems in FCoReBench
SymPro-LM's inference times are much lower than PAL's because PAL programs often tend to be brute-force and inefficient, whereas with SymPro-LM the symbolic solver can exploit the structure of the input instance while performing the reasoning.
Appendix E Examples
E.1 SymPro-LM
E.1.1 FCoReBench
This section includes example programs generated by SymPro-LM for some illustrative problems in FCoReBench. Each program reads the input from a file, generates the corresponding constraints, calls the solver internally and then uses the solution from the solver to write the output in the desired format to a file.
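Before the figures, we give a simplified sketch of what such a program looks like for subset sum. This sketch is written for this appendix only: the input format shown is assumed for illustration, and the actual generated programs appear in the figures below.

```python
# Illustrative SymPro-LM-style program for subset sum (simplified sketch, not from the figures).
# Assumed input format (for illustration only): first line is the target sum,
# second line lists the array elements.
from z3 import Bool, If, Sum, Solver, sat, is_true

def main():
    with open("input.txt") as f:
        lines = [line for line in f.read().splitlines() if line.strip()]
    target = int(lines[0])
    nums = [int(x) for x in lines[1].split()]

    # One Boolean per element: True means the element is included in the subset.
    picks = [Bool(f"pick_{i}") for i in range(len(nums))]
    solver = Solver()
    solver.add(Sum([If(p, n, 0) for p, n in zip(picks, nums)]) == target)

    with open("output.txt", "w") as out:
        if solver.check() == sat:
            model = solver.model()
            chosen = [str(n) for p, n in zip(picks, nums) if is_true(model.evaluate(p))]
            out.write(" ".join(chosen))
        else:
            out.write("NO SOLUTION")

if __name__ == "__main__":
    main()
```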
Figure 13: SymPro-LM example: correct program for sudoku generated by GPT-4-Turbo.
Figure 14: SymPro-LM example: correct program for keisuke generated by GPT-4-Turbo.
Figure 15: SymPro-LM example: correct program for hamiltonian path generated by GPT-4-Turbo.
Figure 16: SymPro-LM example: correct program for vertex-cover generated by GPT-4-Turbo.
Figure 17: SymPro-LM example: snippet of incorrect program for binairo generated by GPT-4-Turbo and same snippet after correction by feedback.
Figure 18: SymPro-LM example: correct program for subset-sum generated by GPT-4-Turbo.
E.1.2 Other Datasets
Figure 19: SymPro-LM PrOntoQA Example Program.
Figure 20: SymPro-LM ProofWriter Example Program.
Figure 21: SymPro-LM LogicalDeduction Example Program.
E.2 PAL
This section includes example programs generated by PAL for some illustrative problems in FCoReBench. Each program reads the input from a file, performs the reasoning and writes the output to another text file.
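For contrast with the SymPro-LM sketch above, the following is a simplified illustration (again ours, with the same assumed input format; the actual generated programs appear in the figures below) of the brute-force style typical of PAL programs, here for subset sum.

```python
# Illustrative PAL-style program for subset sum (simplified sketch, no solver).
# Assumed input format (for illustration only): first line is the target sum,
# second line lists the array elements.
from itertools import combinations

def main():
    with open("input.txt") as f:
        lines = [line for line in f.read().splitlines() if line.strip()]
    target = int(lines[0])
    nums = [int(x) for x in lines[1].split()]

    with open("output.txt", "w") as out:
        # Enumerate all subsets by increasing size until one matches the target.
        for r in range(1, len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == target:
                    out.write(" ".join(map(str, subset)))
                    return
        out.write("NO SOLUTION")

if __name__ == "__main__":
    main()
```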
Figure 22: PAL example: correct program for sudoku generated by GPT-4-Turbo.
Figure 23: PAL example: correct program for futoshiki generated by GPT-4-Turbo.
Figure 24: PAL example: correct program for hamiltonian path generated by GPT-4-Turbo.
Figure 25: PAL example: correct program for vertex cover generated by GPT-4-Turbo.
Figure 26: PAL example: correct program for subset sum generated by GPT-4-Turbo.
Appendix F Logic-LM
This section describes example runs of Logic-LM for certain problems in FCoReBench.
Figure 27: Logic-LM example: correct constraints for a sudoku instance generated by GPT-4-Turbo.
Figure 28: Logic-LM example: correct constraints for a subset sum instance generated by GPT-4-Turbo.
Figure 29: Logic-LM example: correct constraints for graph coloring instance generated by GPT-4-Turbo.
Figure 30: Logic-LM example: syntax error (highlighted by a comment) in constraints for sudoku instance generated by GPT-3.5-Turbo.
Figure 31: Logic-LM example: errors (highlighted comments) in constraints for sudoku instance generated by GPT-4-Turbo.