# Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
**Authors**: Wen-Chao Hu, Wang-Zhou Dai, Yuan Jiang, Zhi-Hua Zhou
## Abstract
Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition, modeling the intuitive System 1 with neural networks and the algorithmic System 2 with symbolic reasoning. However, for complex learning targets, NeSy systems often generate outputs inconsistent with domain knowledge and it is challenging to rectify them. Inspired by the human Cognitive Reflection, which promptly detects errors in our intuitive response and revises them by invoking the System 2 reasoning, we propose to improve NeSy systems by introducing Abductive Reflection (ABL-Refl) based on the Abductive Learning (ABL) framework. ABL-Refl leverages domain knowledge to abduce a reflection vector during training, which can then flag potential errors in the neural network outputs and invoke abduction to rectify them and generate consistent outputs during inference. ABL-Refl is highly efficient in contrast to previous ABL implementations. Experiments show that ABL-Refl outperforms state-of-the-art NeSy methods, achieving excellent accuracy with fewer training resources and enhanced efficiency.
## 1 Introduction
Human decision-making is generally recognized as an interaction between two systems: System 1 quickly generates an intuitive response, and System 2 engages in further algorithmic and slow reasoning (Frederick 2005; Kahneman 2011). In Neuro-Symbolic (NeSy) Artificial Intelligence (AI), neural networks often resemble System 1 for rapid pattern recognition, and symbolic reasoning mirrors System 2 to leverage domain knowledge and handle complex problems thoughtfully, yet in a slower and more controlled way (Bengio 2019). Like human System 1 reasoning, when facing complicated tasks, neural networks often produce unreliable outputs which cause inconsistencies with domain knowledge. These inconsistencies can then be reconciled with the help of the symbolic reasoning counterpart (Hitzler 2022).
To achieve the above process, some methods relax symbolic domain knowledge into constraints on neural networks (Xu et al. 2018; Yang, Lee, and Park 2022), while others attempt to approximate logical calculus using distributed representations within neural networks (Wang et al. 2019). However, full symbolic reasoning ability is often lost during such relaxation or approximation, hampering the ability to generate reliable outputs.
Abductive Learning (ABL) (Zhou 2019; Zhou and Huang 2022) is a framework for bridging machine learning and logical reasoning while preserving the full expressive power of each side. In ABL, the machine learning component first converts raw data into primitive symbolic outputs. These outputs can be utilized by the symbolic reasoning component, which leverages domain knowledge and performs abduction to generate a revised, more reliable output. However, previous implementations of ABL require a highly discrete combinatorial consistency optimization before applying abduction, and the high complexity of this optimization severely limits their efficiency and applicability to large-scale scenarios.
Human reasoning naturally exploits both sides efficiently. A hypothetical model for this process is Cognitive Reflection, in which the fast System 1 thinking quickly generates an approximate overall solution and then seamlessly hands the complicated parts to System 2 (Frederick 2005). The key to this process is the reflection mechanism, which promptly detects which parts of the intuitive response may be inconsistent with domain knowledge and invokes System 2 to rectify them. This reflection typically correlates positively with System 2 capabilities, as both are closely linked to an individual's mastery of domain knowledge (Sinayev and Peters 2015). After the reflection, step-by-step formal reasoning becomes far less complex: with a largely reduced search space, System 2 can derive the correct solution straightforwardly.
Inspired by this phenomenon, we propose a general enhancement, Abductive Reflection (ABL-Refl). Based on the ABL framework, ABL-Refl preserves the full expressive power of neural networks and symbolic reasoning while replacing the time-consuming consistency optimization with a reflection mechanism, thereby significantly improving efficiency and applicability. Specifically, in ABL-Refl, a reflection vector is generated concurrently with the neural network's intuitive output; it flags potential errors in the output and invokes symbolic reasoning to perform abduction, thereby rectifying these errors and generating a new output that is more consistent with domain knowledge. During model training, the training signal for the reflection derives from domain knowledge. In essence, the reflection vector is abduced from domain knowledge and serves as an attention mechanism that narrows the problem space of symbolic reasoning. The reflection can be trained without supervision, requiring only the same amount of domain knowledge as state-of-the-art NeSy systems and no extra training data.
We validate the effectiveness of ABL-Refl on Sudoku NeSy benchmarks in both symbolic and visual forms. Compared to previous NeSy methods, ABL-Refl performs significantly better, achieving higher reasoning accuracy efficiently with fewer training resources. We also compare our method to symbolic solvers and show that the reduced search space in ABL-Refl improves reasoning efficiency. Further experiments on solving combinatorial optimization problems on graphs validate that ABL-Refl can handle diverse types of data in varied dimensions and exploit knowledge bases in different forms.
## 2 Related Work
Recently, there has been notable progress in enhancing neural networks with reliable symbolic reasoning. Some methods use differentiable fuzzy logic (Serafini and Garcez 2016; Marra et al. 2020) or relax symbolic domain knowledge as constraints for neural network training (Xu et al. 2018; Yang, Lee, and Park 2022; Hoernle et al. 2022; Ahmed et al. 2022), while others learn constraints within neural networks by approximating logical reasoning with distributed representations (Amos and Kolter 2017; Selsam et al. 2018; Wang et al. 2019). These models tend to soften the requirements of symbolic reasoning, impacting the reliability of the generated output. Models like DeepProbLog (Manhaeve et al. 2018) and NeurASP (Yang, Ishay, and Lee 2020) interpret the neural network output as a distribution over symbols and then apply a symbolic solver, incurring substantial computational costs. Abductive Learning (ABL) (Zhou 2019; Zhou and Huang 2022) attempts to integrate machine learning and logical reasoning in a balanced and mutually supporting way. It features an easy-to-use open-source toolkit (Huang et al. 2024) with many practical applications (Huang et al. 2020; Cai et al. 2021; Wang et al. 2021; Gao et al. 2024). However, its consistency optimization has high complexity.
Another category of work related to our study follows a similar process of prediction, error identification, and reasoning (Nair et al. 2020; Nye et al. 2021; Han et al. 2023). These methods are usually constrained to a narrow scope of domain knowledge, confined to specific mathematical problems or bounded within a minimal world model.
Cornelio et al. (2023) generate a selection module to identify errors requiring symbolic-reasoning rectification. In contrast to their approach, which requires preparing a large synthetic dataset in advance, our approach automatically abduces the reflection vector during model training.
## 3 Abductive Reflection
This section presents the problem setting and the Abductive Reflection (ABL-Refl) method.
### 3.1 Problem Setting
The main task of this paper is as follows: The input is raw data $\boldsymbol{x}$, which can be in either symbolic or sub-symbolic form, and the target output is $\boldsymbol{y}=\left[y_{1},y_{2},\dots,y_{n}\right]$, with each $y_{i}$ being a symbol from a set $\mathcal{Y}$ that contains all possible output symbols. We assume two key components at our disposal: a neural network $f$ and a domain knowledge base $\mathcal{KB}$. $f$ can directly map $\boldsymbol{x}$ to $\boldsymbol{y}$, and $\mathcal{KB}$ holds constraints between the symbols in $\boldsymbol{y}$. $\mathcal{KB}$ can assume various forms, including propositional logic, first-order logic, mathematical or physical equations, etc., and can perform symbolic reasoning operations by exploiting the corresponding symbolic solver. The output $\boldsymbol{y}$ should adhere to the constraints in $\mathcal{KB}$; otherwise, it will inevitably contain errors that lead to inconsistencies with the domain knowledge and incorrect reasoning results.
This problem type has broad applications. For example, it can be used to solve Sudoku puzzles, where the output $\boldsymbol{{y}}$ consists of $n=81$ symbols from the set $\mathcal{Y}=\{1,2,\dots,9\}$ , and the constraints in $\mathcal{KB}$ are the rules of Sudoku. It can also be applied in deploying generative models for text generation, gene prediction, mathematical problem-solving, etc., producing outputs that adhere to intricate commonsense, biological, or mathematical logics in $\mathcal{KB}$ .
### 3.2 Brief Introduction to Abductive Learning
Figure 1: Abductive Learning (ABL) framework.
When Abductive Learning (ABL) receives an input $\boldsymbol{x}$ , it initially employs $f$ to map $\boldsymbol{x}$ into an intuitive output $\boldsymbol{\hat{y}}=\left[\hat{y}_{1},\hat{y}_{2},\dots,\hat{y}_{n}\right]$ . When $f$ is under-trained, $\boldsymbol{\hat{y}}$ might contain errors leading to inconsistencies with $\mathcal{KB}$ . ABL then tries to rectify them, and obtains a revised $\boldsymbol{\bar{y}}$ . As shown in Figure 1, the final output, $\boldsymbol{\bar{y}}$ , consists of two parts: the green part retains the results from neural network, and the blue part is the modified result obtained by abduction, a basic form of symbolic reasoning that seeks plausible explanations for observations based on $\mathcal{KB}$ .
Specifically, the process of obtaining $\boldsymbol{\bar{y}}$ can be divided into two sequential steps. The first step, consistency optimization, determines which positions in $\boldsymbol{\hat{y}}$ include elements that contain errors causing inconsistencies, so that performing abduction at these positions will yield a $\boldsymbol{\bar{y}}$ consistent with $\mathcal{KB}$ . Essentially, this process is pinpointing propositions (or ground atoms, etc.) which have incorrect truth assignments, and most neuro-symbolic tasks can be formalized into this form. Once these positions are determined, the second step is rectifying by abduction, which then becomes easy for $\mathcal{KB}$ and its corresponding symbolic solver.
#### Challenge.
In previous ABL implementations, consistency optimization has always been the computational bottleneck. It operates as an external module using zeroth-order optimization methods, independent from both $f$ and $\mathcal{KB}$ (Dai et al. 2019; Zhou and Huang 2022). For each inference, it repetitively selects various candidate positions and queries $\mathcal{KB}$ to see whether a consistent result can be inferred. Each query is an invocation of $\mathcal{KB}$ for slow symbolic reasoning. Moreover, since this is a combinatorial problem with a highly discrete nature, the number of required queries escalates exponentially as the data scale increases. This leads to a marked increase in time consumption, confining the applicability of ABL to small datasets, usually those with output dimension $n$ below 10.
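To make this bottleneck concrete, the following minimal sketch (ours, not the original implementation) enumerates candidate sets of positions to revise and issues one symbolic query per candidate; `kb_consistent` is a hypothetical oracle that checks whether a partially blanked output admits a completion consistent with $\mathcal{KB}$. The number of candidates, and hence of slow queries, grows combinatorially with $n$.

```python
from itertools import combinations

def naive_consistency_optimization(y_hat, kb_consistent):
    """Illustrative zeroth-order consistency optimization.

    Enumerates subsets of positions to leave blank (smallest first) and
    queries the KB once per subset to check whether a consistent
    completion exists.  `kb_consistent` is a hypothetical oracle; the
    query count explodes combinatorially with the output dimension n.
    """
    n = len(y_hat)
    for k in range(n + 1):
        for positions in combinations(range(n), k):
            blanked = [None if i in positions else y_hat[i] for i in range(n)]
            if kb_consistent(blanked):   # one slow symbolic query per candidate
                return set(positions)    # abduction will then fill exactly these blanks
    return set(range(n))
```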
### 3.3 Architecture
To address the challenges above, we propose Abductive Reflection (ABL-Refl). In this section, we will provide a detailed description of its architecture.
Let’s first revisit the role of the neural network $f$ when mapping the input to symbols from the set $\mathcal{Y}$. Typically, the raw data is first passed through the body block of the network, denoted by $f_{1}$, resulting in a high-dimensional embedding that encapsulates a wealth of feature information of the raw data. The form of $f_{1}$ varies, including recurrent layers, graph convolution layers, Transformers, etc. The result of $f_{1}$ is subsequently passed into several layers, usually linear layers, denoted by $f_{2}$, to obtain the intuitive output: $\boldsymbol{\hat{y}}=\text{argmax}(f_{2}(f_{1}(\boldsymbol{x})))\in\mathcal{Y}^{n}$.
Figure 2: Architecture of Abductive Reflection (ABL-Refl). It replaces the external consistency optimization module with an efficient reflection mechanism, which is abduced directly from $\mathcal{KB}$ .
Besides the structure described above, as shown in Figure 2, our architecture further incorporates a reflection layer $R$ after the body block $f_{1}$ , generating a reflection vector: $\boldsymbol{r}=\text{argmax}(R(f_{1}(\boldsymbol{x})))\in\{0,1\}^{n}$ . The reflection layer $R$ and reflection vector $\boldsymbol{r}$ together constitute the reflection mechanism. This vector $\boldsymbol{r}$ has the same dimensionality $n$ as the intuitive output $\boldsymbol{\hat{y}}$ , and each element, $r_{i}$ , acts as a binary classifier to indicate whether the corresponding element $\hat{y}_{i}$ is an error leading to inconsistencies with $\mathcal{KB}$ (flagged as 1 for an error, and 0 otherwise). The reflection vector $\boldsymbol{r}$ is generated concurrently with the intuitive response during inference, resonating with human cognition where cognitive reflection typically forms right upon generation of an intuitive response (Frederick 2005).
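As a concrete illustration, the following PyTorch-style sketch (module names and dimensions are ours) shows how the output layer $f_{2}$ and the reflection layer $R$ can share the embedding produced by the body block $f_{1}$ and emit $\boldsymbol{\hat{y}}$ and $\boldsymbol{r}$ in a single forward pass.

```python
import torch
import torch.nn as nn

class ABLReflNet(nn.Module):
    """Minimal sketch of the ABL-Refl architecture (module names are ours)."""

    def __init__(self, body: nn.Module, emb_dim: int, n_symbols: int):
        super().__init__()
        self.body = body                                # f1: raw input -> (n, emb_dim) embeddings
        self.out_layer = nn.Linear(emb_dim, n_symbols)  # f2: logits over the symbol set Y
        self.refl_layer = nn.Linear(emb_dim, 2)         # R: per-position error / no-error logits

    def forward(self, x):
        h = self.body(x)                                # shared embedding from the body block
        y_logits = self.out_layer(h)                    # intuitive-output logits
        r_logits = self.refl_layer(h)                   # reflection logits
        y_hat = y_logits.argmax(dim=-1)                 # intuitive output in Y^n
        r = r_logits.argmax(dim=-1)                     # reflection vector in {0, 1}^n
        return y_hat, r, y_logits, r_logits
```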
With the initial intuitive output $\boldsymbol{\hat{y}}$ and the corresponding reflection vector $\boldsymbol{r}$ , we seamlessly obtain the error-removed output $\hat{\boldsymbol{y}}^{\prime}$ : In $\hat{\boldsymbol{y}}^{\prime}$ , elements flagged as error by $\boldsymbol{r}$ are removed and left as blanks, while the rest are retained. Subsequently, $\mathcal{KB}$ applies abduction to fill in these blanks, thereby generating an output $\boldsymbol{\bar{y}}$ that is consistent with $\mathcal{KB}$ . That is:
$$
\bar{y}_{i}=\begin{cases}\hat{y}_{i}, & r_{i}=0\\ \delta(\hat{y}_{i}), & r_{i}=1\end{cases}\qquad i=1,2,\dots,n
$$
where $\delta$ denotes abduction. We treat $\boldsymbol{\bar{y}}=\left[\bar{y}_{1},\bar{y}_{2},\dots,\bar{y}_{n}\right]$ as the final output.
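At inference time, this rectification amounts to blanking the flagged positions and making a single call to the solver. The sketch below assumes a hypothetical `abduce` wrapper around $\mathcal{KB}$ that completes a partially blanked output; it is an illustration of the equation above, not the authors' exact interface.

```python
def rectify(y_hat, r, abduce):
    """Combine the intuitive output and the reflection vector.

    Positions with r[i] == 0 keep the neural prediction; positions with
    r[i] == 1 are left blank (None) and filled by one abduction call.
    `abduce` is a hypothetical KB/solver wrapper returning a completion
    consistent with the domain knowledge (or None if none exists).
    """
    blanked = [None if ri == 1 else yi for yi, ri in zip(y_hat, r)]
    return abduce(blanked)
```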
During model training, the reflection is abduced from $\mathcal{KB}$ by directly leveraging information from domain knowledge (discussed later in Section 3.4). It can be seen as an attention mechanism generated from neural networks, which can help quickly focus symbolic reasoning specifically on areas it identifies as errors, hence largely narrowing the problem space of deliberate symbolic reasoning (Zhang et al. 2020).
#### Benefits.
Compared to previous ABL implementations, ABL-Refl replaces the zeroth-order consistency optimization module with the reflection mechanism to address the computational bottleneck. In this way, the need for a substantial number of queries to $\mathcal{KB}$ is eliminated: after promptly pinpointing inconsistencies in the System 1 output, only a single invocation of $\mathcal{KB}$ is required to obtain a rectified, more consistent output, regardless of the data scale.
Another point worth noticing is that, in this architecture, the reflection layer connects directly to the body block, which helps it leverage information from the embeddings and link more closely with the raw data. Therefore, the reflection vector $\boldsymbol{r}$ establishes a more direct and tighter bridge between raw data and domain knowledge.
### 3.4 Training Paradigm
In this section, we discuss how to train ABL-Refl, especially its reflection mechanism.
Figure 3: Consistency measurements.
In ABL-Refl, when each input $\boldsymbol{x}$ is processed by the neural network, we obtain the intuitive output $\boldsymbol{\hat{y}}$ and the reflection vector $\boldsymbol{r}$, and subsequently obtain the error-removed (by $\boldsymbol{r}$) output $\boldsymbol{\hat{y}}^{\prime}$. With $\boldsymbol{\hat{y}}$ and $\boldsymbol{\hat{y}}^{\prime}$, we can measure their consistency with $\mathcal{KB}$, respectively. We denote these consistency measurements as $\text{Con}(\boldsymbol{\hat{y}},\mathcal{KB})$ and $\text{Con}(\boldsymbol{\hat{y}}^{\prime},\mathcal{KB})$, as shown in Figure 3. In the simplest case, if all elements in $\boldsymbol{\hat{y}}$ (or $\boldsymbol{\hat{y}}^{\prime}$) adhere to the constraints in $\mathcal{KB}$, the consistency measurement is 1; otherwise, it is 0.
Consequently, the improvement in consistency measurement after reflection, as denoted by
$$
\Delta\text{Con}_{\boldsymbol{r}}(\boldsymbol{\hat{y}})=\text{Con}(\boldsymbol{\hat{y}}^{\prime},\mathcal{KB})-\text{Con}(\boldsymbol{\hat{y}},\mathcal{KB})
$$
naturally indicates the effectiveness of the reflection vector: a higher value signifies that the reflection $\boldsymbol{r}$ more effectively detects inconsistencies within $\boldsymbol{\hat{y}}$. Our training goal is to guide the neural network’s parameters towards generating reflections that maximize this value. Given that $\Delta\text{Con}_{\boldsymbol{r}}(\boldsymbol{\hat{y}})$ is usually a discrete value, we employ the REINFORCE algorithm (Williams 1992) to achieve this goal; it optimizes the policy (implicitly defined by the neural network $f$) by maximizing a specified reward, in this case $\Delta\text{Con}_{\boldsymbol{r}}(\boldsymbol{\hat{y}})$. This process leads to the following consistency loss:
$$
L_{con}(\boldsymbol{x})=-\Delta\text{Con}_{\boldsymbol{r}}(\boldsymbol{\hat{y}})\cdot\nabla_{\theta}\log f_{\theta}\left(\boldsymbol{\hat{y}},\boldsymbol{r}\mid\boldsymbol{x}\right) \tag{1}
$$
where $\theta$ are parameters of neural network $f$ .
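In practice, Eq. (1) can be implemented as a standard REINFORCE surrogate whose gradient matches the expression above. The sketch below is one such implementation under our own simplifying assumptions (independent per-position distributions for $\boldsymbol{\hat{y}}$ and $\boldsymbol{r}$), with the reward $\Delta\text{Con}_{\boldsymbol{r}}(\boldsymbol{\hat{y}})$ supplied by the knowledge base; the original implementation may differ in detail.

```python
import torch
import torch.nn.functional as F

def consistency_loss(y_logits, r_logits, y_hat, r, delta_con):
    """REINFORCE-style surrogate for the consistency loss in Eq. (1).

    `delta_con` is the non-differentiable reward computed via the KB;
    the differentiable part is the log-probability of the sampled
    intuitive output y_hat and reflection vector r.
    """
    log_p_y = F.log_softmax(y_logits, dim=-1).gather(-1, y_hat.unsqueeze(-1)).sum()
    log_p_r = F.log_softmax(r_logits, dim=-1).gather(-1, r.unsqueeze(-1)).sum()
    return -delta_con * (log_p_y + log_p_r)
```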
Additionally, given that the time required for abduction often escalates with problem size, we want to invoke it judiciously during inference, applying it only when truly necessary. Therefore, we aim to prevent the reflection vector from flagging too many elements in $\boldsymbol{\hat{y}}$ as errors. To achieve this, we introduce a reflection size loss:
$$
L_{size}(\boldsymbol{x})=\Phi\!\left(C-\frac{1}{n}\sum_{i=1}^{n}\left(1-R\left(f_{1}(\boldsymbol{x})\right)_{i}\right)\right) \tag{2}
$$
where $\Phi(a)\triangleq\max(0,a)^{2}$ and $C$ is a hyperparameter ranging between 0 and 1. When $C$ is set at a higher value, the reflection vector tends to retain a greater number of intuitive output elements instead of flagging them as error and delegating to abduction.
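A direct reading of Eq. (2) gives the short sketch below, where `r_probs` is assumed to hold the relaxed (probabilistic) reflection output $R(f_{1}(\boldsymbol{x}))_{i}\in[0,1]$ for each of the $n$ positions.

```python
import torch

def reflection_size_loss(r_probs: torch.Tensor, C: float = 0.8) -> torch.Tensor:
    """Reflection size loss of Eq. (2): penalize flagging too many errors."""
    kept_fraction = (1.0 - r_probs).mean()                 # (1/n) * sum_i (1 - R(f1(x))_i)
    return torch.clamp(C - kept_fraction, min=0.0) ** 2    # Phi(a) = max(0, a)^2
```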
In addition to the above training methods, we also employ data-driven supervised training on labeled data, similar to the common neural network training paradigm. The loss function in this process, e.g., cross-entropy loss, is denoted by $L_{labeled}(\boldsymbol{x},\boldsymbol{y})$.
Therefore, combining all the training losses, the total loss for ABL-Refl is:
$$
\mathcal{L}=\frac{1}{|D_{l}|}\sum_{(\boldsymbol{x},\boldsymbol{y})\in D_{l}}L_{labeled}(\boldsymbol{x},\boldsymbol{y})+\frac{1}{|D_{l}\cup D_{u}|}\sum_{\boldsymbol{x}\in D_{l}\cup D_{u}}\left(\alpha L_{con}(\boldsymbol{x})+\beta L_{size}(\boldsymbol{x})\right) \tag{3}
$$
where $\alpha$ and $\beta$ are hyperparameters, $D_{l}=\{(\boldsymbol{x}_{1},\boldsymbol{y}_{1}),(\boldsymbol{x}_{2},\boldsymbol{y}_{2}),\dots\}$ is the labeled dataset, and $D_{u}=\{\boldsymbol{x}_{1},\boldsymbol{x}_{2},\dots\}$ is the unlabeled dataset.
Note that neither $L_{con}$ nor $L_{size}$, the loss functions specifically related to the reflection, incorporates information from the data labels. Instead, we leverage training information directly from $\mathcal{KB}$ to train the reflection. Also, despite sharing the preceding feature layers, the output layer $f_{2}$ and the reflection layer $R$ utilize different training information, thereby decoupling the objectives of intuitive problem-solving and inconsistency reflection.
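Putting the pieces together, one batch-level reading of Eq. (3) looks like the sketch below (our own simplification: the per-example losses are assumed to have been computed already, with labels available only for the $D_{l}$ portion).

```python
def total_loss(labeled_losses, con_losses, size_losses, alpha=1.0, beta=1.0):
    """Combined objective of Eq. (3) over one pass of the data.

    `labeled_losses` holds L_labeled for examples in D_l, while
    `con_losses` and `size_losses` hold L_con and L_size for all
    examples in D_l union D_u; only the first term needs labels.
    """
    supervised = sum(labeled_losses) / max(len(labeled_losses), 1)
    kb_driven = sum(alpha * lc + beta * ls
                    for lc, ls in zip(con_losses, size_losses))
    kb_driven = kb_driven / max(len(con_losses), 1)
    return supervised + kb_driven
```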
## 4 Experiments
In this section, we conduct several experiments. First, we test our method on the NeSy benchmark task of solving Sudoku to comprehensively verify its effectiveness. Next, we change the Sudoku input from symbols to images, which requires integrating and simultaneously reasoning over both sub-symbolic and symbolic elements, one of the most challenging tasks in this field. Finally, we tackle NP-hard combinatorial optimization problems on graphs, using a knowledge base of only mathematical definitions, to demonstrate our method’s versatility. Through these experiments, we aim to answer the following questions:
- Compared to existing neuro-symbolic learning methods, can ABL-Refl achieve better performance in tasks requiring complex reasoning?
- Can ABL-Refl reduce the training resources required?
- Can ABL-Refl narrow the problem space for symbolic reasoning to achieve acceleration?
- Does ABL-Refl possess the capability for broad application, such as handling diverse data scenarios or various forms of domain knowledge?
All experiments are performed on a server with an Intel Xeon Gold 6226R CPU and a Tesla A100 GPU. We simply set the hyperparameters $\alpha$ and $\beta$ in Eq. (3) to 1, since adjusting them does not have a noticeable impact on the results. For the hyperparameter $C$ in Eq. (2), we set it to 0.8; the discussion in Appendix C shows that any value within a broad moderate range (e.g., 0.6-0.9) is a recommended choice. All experiments are repeated 5 times.
### 4.1 Solving Sudoku
#### Dataset and Setting.
This task aims to solve a 9 $\times$ 9 Sudoku: given 81 digits from 0-9 (where 0 represents a blank space) on a 9 $\times$ 9 board, we aim to find a solution $\boldsymbol{y}\in\{1,2,\dots,9\}^{81}$ that adheres to the Sudoku rules: no duplicate numbers are allowed in any row, column, or 3 $\times$ 3 subgrid. In this section, we first consider inputs in symbolic form, $\boldsymbol{x}\in\{0,1,\dots,9\}^{81}$, and use datasets from a publicly available Kaggle site (Vopani 2019).
For the neural network $f$ , we use a simple graph neural network (GNN): the body block $f_{1}$ consists of one embedding layer and eight iterations of message-passing layers, resulting in a 128-dimensional embedding for each number, and then connects to both a linear output layer $f_{2}$ to obtain the intuitive output $\hat{\boldsymbol{y}}$ and a linear reflection layer $R$ to obtain the reflection vector ${\boldsymbol{r}}$ . We use the cross-entropy loss as $L_{labeled}$ . For the domain knowledge base $\mathcal{KB}$ , it contains the Sudoku rules mentioned above. We express $\mathcal{KB}$ in the form of propositional logic and utilize the MiniSAT solver (Sörensson 2010), an open-source SAT solver, as the symbolic solver to leverage $\mathcal{KB}$ and perform abduction.
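To make the abduction step concrete, the sketch below encodes the Sudoku rules as CNF and performs one solver call per board via the PySAT binding of MiniSAT. This is our own illustration, not the authors' exact solver interface, and the encoding is a standard (if unoptimized) one.

```python
from itertools import combinations
from pysat.solvers import Minisat22   # PySAT binding of MiniSAT

def var(r, c, d):
    """Propositional variable id for 'cell (r, c) holds digit d+1'."""
    return 81 * r + 9 * c + d + 1      # r, c, d all in 0..8

def sudoku_cnf():
    """Sudoku rules as CNF clauses (standard encoding)."""
    clauses = []
    for r in range(9):
        for c in range(9):
            clauses.append([var(r, c, d) for d in range(9)])            # at least one digit
            clauses += [[-var(r, c, d1), -var(r, c, d2)]
                        for d1, d2 in combinations(range(9), 2)]        # at most one digit
    groups = [[(r, c) for c in range(9)] for r in range(9)]             # rows
    groups += [[(r, c) for r in range(9)] for c in range(9)]            # columns
    groups += [[(3 * br + i, 3 * bc + j) for i in range(3) for j in range(3)]
               for br in range(3) for bc in range(3)]                   # 3x3 subgrids
    for cells in groups:
        for d in range(9):
            clauses += [[-var(r1, c1, d), -var(r2, c2, d)]
                        for (r1, c1), (r2, c2) in combinations(cells, 2)]
    return clauses

def abduce(blanked):
    """Fill blanks (None) in a length-81 board with one MiniSAT call.

    Retained cells are passed as assumptions; returns the completed board,
    or None if no completion is consistent with the Sudoku rules.
    """
    assumptions = [var(i // 9, i % 9, d - 1)
                   for i, d in enumerate(blanked) if d is not None]
    with Minisat22(bootstrap_with=sudoku_cnf()) as solver:
        if not solver.solve(assumptions=assumptions):
            return None
        model = set(solver.get_model())
        return [next(d + 1 for d in range(9) if var(i // 9, i % 9, d) in model)
                for i in range(81)]
```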
For the consistency measurement, we define it as follows: one point is awarded for each row, column and 3 $\times$ 3 subgrid with no duplicate numbers; additionally, ten points are awarded if the entire board has no inconsistencies with $\mathcal{KB}$. In this way, it is based entirely on $\mathcal{KB}$. Note that we deviate from the 0/1 measurement example in Section 3.4 to avoid a predominance of zero values of $\Delta\text{Con}_{\boldsymbol{r}}(\boldsymbol{\hat{y}})$ in Eq. (1), facilitating effective training with the REINFORCE algorithm. Similar considerations apply in subsequent experiments.
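Under these assumptions, the scoring can be written as a short function over a length-81 board (with None marking blanks left by the reflection); this is our reading of the measurement described above.

```python
def sudoku_consistency(board):
    """Consistency measurement for Sudoku as described above.

    One point per row, column and 3x3 subgrid without duplicate digits
    (blanks are ignored), plus ten bonus points if every group is
    duplicate-free, i.e. the whole board is consistent with the rules.
    """
    groups = [[board[9 * r + c] for c in range(9)] for r in range(9)]    # rows
    groups += [[board[9 * r + c] for r in range(9)] for c in range(9)]   # columns
    groups += [[board[9 * (3 * br + i) + 3 * bc + j]
                for i in range(3) for j in range(3)]
               for br in range(3) for bc in range(3)]                    # subgrids
    def no_duplicates(group):
        filled = [d for d in group if d is not None]
        return len(filled) == len(set(filled))
    score = sum(no_duplicates(g) for g in groups)
    if score == len(groups):
        score += 10
    return score
```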
#### Compared Methods and Results.
We compare ABL-Refl with the following baseline methods: 1) Recurrent Relational Network (RRN) (Palm, Paquet, and Winther 2018), a pure neural network method, 2) CL-STE (Yang, Lee, and Park 2022), a representative method of logic-based regularized loss, and 3) SATNet (Wang et al. 2019). A detailed description of these methods is provided in Appendix A. We also report the result for Simple GNN, which is the very same neural network used in our setting, yet directly treats the intuitive output $\hat{\boldsymbol{y}}$ as the final output.
| Method | Training time | Inference time | Accuracy (%) |
| --- | --- | --- | --- |
| RRN | 114.8 $\pm$ 7.8 | 0.19 $\pm$ 0.01 | 73.1 $\pm$ 1.2 |
| CL-STE | 173.6 $\pm$ 9.9 | 0.19 $\pm$ 0.02 | 76.5 $\pm$ 1.8 |
| SATNet | 140.3 $\pm$ 6.8 | 0.11 $\pm$ 0.01 | 74.1 $\pm$ 0.4 |
| ABL-Refl | 109.8 $\pm$ 10.8 | 0.22 $\pm$ 0.02 | 97.4 $\pm$ 0.3 |
| Simple GNN | 29.7 $\pm$ 2.6 | 0.02 $\pm$ 0.00 | 55.6 $\pm$ 0.3 |
Table 1: Training time (for a total of 100 epochs using 20K training data), inference time and accuracy (on 1K test data) on solving Sudoku.
We report the training time (for a total of 100 epochs using 20K training data), inference time (on 1K test data) and accuracy (the percentage of completely accurate Sudoku solution boards on test data) in Table 1. We may see that our method outperforms the baselines significantly, improving by over 20% while maintaining a comparable inference time. This suggests an answer to Q1: ABL-Refl can achieve better reasoning performance. This improvement is primarily due to the use of abduction to rectify the neural network’s output during inference.
Furthermore, our method reaches high accuracy in only a few epochs (the training curve is shown in Appendix B), significantly reducing training time. Even under identical training epochs, our total training time is less than that of the baseline methods, despite involving a time-consuming symbolic solver. This partly stems from the neural network in our approach being less complex than those in the baseline methods while still achieving high accuracy. Overall, this suggests an answer to Q2: ABL-Refl can reduce the training time required.
We also attempt to reduce the amount of labeled data, removing labels from 50%, 75%, and 90% of the training data. We record the inference accuracy in Table 2. It can be observed that even with only 2K labeled training data, our method still achieves far better accuracy than the baseline methods with 20K labeled training data. This suggests an answer to Q2 from another aspect: ABL-Refl can reduce the labeled training data required.
| Labeled data | Unlabeled data | Accuracy (%) |
| --- | --- | --- |
| 20K | 0 | 97.4 $\pm$ 0.3 |
| 10K | 10K | 96.3 $\pm$ 0.3 |
| 5K | 15K | 95.8 $\pm$ 0.6 |
| 2K | 18K | 94.7 $\pm$ 0.8 |
Table 2: Inference accuracy on solving Sudoku after reducing the amount of labeled data.
#### Comparing to Symbolic Solvers.
| $\mathcal{KB}$ form (Solver) | Method | Accuracy (%) | NN time | Abduction time | Total inference time |
| --- | --- | --- | --- | --- | --- |
| Propositional logic (MiniSAT) | Solver only | 100 $\pm$ 0 | - | 0.227 $\pm$ 0.024 | 0.227 $\pm$ 0.024 |
| | ABL-Refl | 97.4 $\pm$ 0.3 | 0.021 $\pm$ 0.004 | 0.196 $\pm$ 0.015 | 0.217 $\pm$ 0.019 |
| First-order logic (Prolog with CLP(FD)) | Solver only | 100 $\pm$ 0 | - | 105.81 $\pm$ 5.62 | 105.81 $\pm$ 5.62 |
| | ABL-Refl | 97.4 $\pm$ 0.3 | 0.021 $\pm$ 0.004 | 31.86 $\pm$ 1.88 | 31.88 $\pm$ 1.89 |
Table 3: Inference accuracy and time (on 1K test data) on solving Sudoku. For $\mathcal{KB}$ expressed in two different forms, ABL-Refl shows notable acceleration compared to symbolic solvers in both cases.
We next compare our method with merely employing symbolic solvers from scratch, to demonstrate its capability in accelerating symbolic reasoning. We perform inference on 1K test data and record the accuracy and time in Table 3. The inference time for our method includes the combined duration for data processing through both the neural network (NN time) and symbolic reasoning (abduction time).
As observed in the former two lines, our method achieves a notable acceleration in the abduction process, consequently decreasing the overall inference time, with only a minor compromise in accuracy. This efficiency gain is due to the fact that in ABL-Refl, after quickly generating an intuition through the neural network, abduction only needs to focus on areas identified as necessary by the reflection vector, whereas using only symbolic solvers requires abduction to reason through all blanks in a Sudoku puzzle. Overall, this suggests an answer to Q3: ABL-Refl can quickly generate the reflection, thereby reducing the symbolic reasoning search space and enhancing reasoning efficiency.
We also compare with the Prolog with CLP(FD) solver (Triska 2012), expressing the same $\mathcal{KB}$ as a first-order constraint logic program. As shown in the table, we again observe a significant reduction in abduction time and overall inference time, which provides further evidence for our answer to Q3 and also suggests an answer to Q4: ABL-Refl can effectively utilize the two most commonly used forms of symbolic knowledge representation, propositional logic and first-order logic.
### 4.2 Solving Visual Sudoku
#### Dataset and Setting.
In this section, we modify the input from 81 symbolic digits to 81 MNIST images (handwritten digits of 0-9). We use the dataset provided in SATNet (Wang et al. 2019) and use 9K Sudoku boards for training and 1K for testing.
In order to process image data, we first pass each image through a LeNet convolutional neural network (CNN) (LeCun et al. 1998) to obtain the probability of each digit. The rest of our setting follows from that described in Section 4.1.
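As a sketch of this preprocessing step (our own minimal LeNet-style network, not necessarily the exact architecture used), each 28x28 MNIST cell image is mapped to a distribution over the ten digit classes before the GNN and reflection pipeline of Section 4.1 is applied.

```python
import torch
import torch.nn as nn

class LeNetStyleCNN(nn.Module):
    """Minimal LeNet-style digit classifier for 28x28 Sudoku cell images."""

    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes),
        )

    def forward(self, x):                    # x: (batch, 1, 28, 28)
        logits = self.classifier(self.features(x))
        return logits.softmax(dim=-1)        # per-digit probabilities
```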
#### Compared Methods and Results.
We compare ABL-Refl with SATNet, as both methods allow for end-to-end training from visual inputs. We report the results in Table 4 and the training curve in Appendix B. Compared to SATNet, ABL-Refl shows notable improvement in reasoning accuracy within only a few training epochs. We then consider pretraining the CNN in advance using self-supervised learning methods (Chen et al. 2020) and find that this can further improve accuracy. Overall, the results further suggest positive answers to Q1 and Q2.
We also compare with CNN+Solver: each image is first mapped to symbolic form by a fully trained CNN (with 99.6% accuracy on the MNIST dataset) and then directly fed into the symbolic solver to fill in the blanks and derive the final output. In such scenarios, the problem space for the symbolic solver includes all the Sudoku blanks, and additionally, since the symbolic solver cannot revise errors from CNN, any inaccuracies in CNN’s output could lead the symbolic solver to crash (i.e., output no solution). Consequently, inference accuracy and time are adversely affected. This confirms the positive answer to Q3.
Finally, an overview of Sections 4.1 and 4.2 also suggests an answer to Q4: ABL-Refl is capable of handling both symbolic and sub-symbolic forms of input data.
| Method | Inference time | Accuracy (%) |
| --- | --- | --- |
| SATNet | 0.12 $\pm$ 0.01 | 63.5 $\pm$ 2.2 |
| CNN+Solver | 0.23 $\pm$ 0.02 | 67.8 $\pm$ 4.2 |
| ABL-Refl | 0.22 $\pm$ 0.02 | 77.8 $\pm$ 5.8 |
| ABL-Refl (with pretrained CNN) | 0.22 $\pm$ 0.02 | 93.5 $\pm$ 3.2 |
Table 4: Inference time (on 1K test data) and accuracy on solving visual Sudoku.
### 4.3 Solving Combinatorial Optimization Problems on Graphs
In this section, we will further expand the application domain of our method. We apply ABL-Refl to solving combinatorial optimization problems on graphs. We conduct the experiment on finding the maximum clique in this section, and provide an additional experiment in Appendix E.
#### Dataset and Setting.
In this task, we are given a graph $G=(V,E)$ with $|V|=n$ nodes and aim to output $\boldsymbol{y}\in\{0,1\}^{n}$, where each index corresponds to a node and the set of indices assigned the value 1 constitutes the maximum clique. Note that this is an NP-hard problem with extensive real-life applications, and it is generally considered challenging for neural networks (Zhang et al. 2023).
We use several datasets from the TUDatasets (Morris et al. 2020), with their basic information shown in Table 5. We use 80% of the data for training and 20% for testing.
In our method, the body block $f_{1}$ consists of a single GAT layer (Veličković et al. 2017) and 16 gated graph convolution layers (Li et al. 2015), and the output layer $f_{2}$ and reflection layer $R$ are both linear layers. We use the binary cross-entropy loss as $L_{labeled}$. The domain knowledge base $\mathcal{KB}$ expresses the mathematical definition of a maximum clique, i.e., every pair of vertices in the output set should be connected by an edge. We use the Gurobi solver, an efficient mixed-integer programming solver, to perform abduction. We define the consistency measurement as follows: one point is awarded for each pair of vertices if they are not connected by an edge; additionally, the size of the output set multiplied by 10 is added if the output set is indeed a clique.
| Method | ENZYMES (600/33/62) | PROTEINS (1113/39/73) | IMDB-Binary (1000/19/97) | COLLAB (5000/74/2457) |
| --- | --- | --- | --- | --- |
| Erdos | 0.883 $\pm$ 0.156 | 0.905 $\pm$ 0.133 | 0.936 $\pm$ 0.175 | 0.852 $\pm$ 0.212 |
| Neural SFE | 0.933 $\pm$ 0.148 | 0.926 $\pm$ 0.165 | 0.961 $\pm$ 0.143 | 0.781 $\pm$ 0.316 |
| ABL-Refl | 0.991 $\pm$ 0.017 | 0.985 $\pm$ 0.020 | 0.979 $\pm$ 0.029 | 0.982 $\pm$ 0.015 |
Table 5: Approximation ratios on finding the maximum clique on different datasets. Dataset columns show (number of graphs / avg. nodes per graph / avg. edges per graph).
#### Compared Methods and Results.
We compare our method with the following baselines: 1) Erdos (Karalias and Loukas 2020) and 2) Neural SFE (Karalias et al. 2022), both leading methods for solving graph combinatorial problems. Their detailed descriptions are provided in Appendix A.
We report the approximation ratios in Table 5. The approximation ratio, indicating the result set size relative to the actual maximum set size, is better when closer to 1. We may observe that our method outperforms the baseline methods, achieving near-perfect results on all datasets. This confirms the positive answer to Q1. Also, as the scale of the data increases, our method maintains a high level of accuracy, showing a more pronounced improvement compared to baseline methods. This suggests an answer to Q4: ABL-Refl is capable of handling scalable data scenarios, even in high-dimensional settings that are challenging for previous methods. Finally, an overview of this section provides another aspect to Q4: ABL-Refl can utilize a wide range of $\mathcal{KB}$ , not limited to logical expressions but can also operate effectively with just the basic mathematical formulations.
## 5 Effects of Reflection Mechanism
This section provides a further analysis on the reflection mechanism. In ABL-Refl, the reflection is abduced from domain knowledge, and acts as an efficient attention mechanism to direct the focus for symbolic search. This reflection is the key in our method to accomplish the NeSy reasoning rectification pipeline, i.e., a pipeline that detects errors in neural networks and then invokes symbolic reasoning to rectify these positions. To corroborate the effectiveness of the reflection, we conduct direct comparison with other methods that achieve the same pipeline:
1. ABL, minimizing the inconsistency between the intuitive output and the knowledge base with an external zeroth-order consistency optimization module, as detailed in Section 3.2;
2. NN Confidence, retaining the intuitive output elements with the top 80% confidence from the neural network (other retain thresholds are explored in Appendix D) and passing the remaining ones into symbolic reasoning;
3. NASR (Cornelio et al. 2023), using a Transformer-based external selection module to detect errors; the module is trained on a large synthetic dataset in advance.
We compare them on the solving visual Sudoku task in Section 4.2. For a fair comparison, all methods employ the same neural network, $\mathcal{KB}$ and MiniSAT solver setup. We report the recall (the percentage of errors from neural networks that can be identified), inference time and accuracy (on 1K test data) in Table 6. Note that “recall” directly evaluates the effectiveness of the detection module itself. The following analysis examines the results:
- The consistency optimization in ABL faces significant efficiency challenges due to the large data scale (output dimension $n=81$). In such scenarios, the potential rectifications can reach up to $2^{81}$, resulting in an overwhelmingly large search space for consistency optimization. Also, as an external module, its only way of interacting with $\mathcal{KB}$ is to treat it as a black box and repetitively submit queries for consistency evaluation. As a result, it may require more than $10^{9}$ queries to identify errors for each Sudoku example, taking several hours to complete inference on 1K test data.
- NN Confidence performs poorly at identifying outputs with errors. Since purely data-driven neural network training does not explicitly incorporate $\mathcal{KB}$ information, a low confidence does not necessarily indicate an inconsistency with the domain knowledge. This subsequently results in frequent crashes of the symbolic solver, hampering the overall inference time and accuracy. This result parallels human cognitive reflection abilities, which do not show much positive correlation with System 1 intuition (Pennycook et al. 2016). To further illustrate this point, we provide additional analysis, including a case study, in Appendix D.
- Our method also outperforms NASR, and notably, without the need of a synthetic dataset. This could be due to the fact that NASR’s error-selection module is trained independently from other components, and operates sequentially and separately during inference. Therefore, it can only rely on information from the output label, in contrast to our method, which can leverage information directly from the body block of neural network, establishing a deeper connection with the raw data. Additionally, in NASR, traversing the separate selection module takes additional time, whereas in ABL-Refl, the reflection is generated concurrently with the neural network output, avoiding efficiency loss.
| Method | Recall (%) | Inference time | Accuracy (%) |
| --- | --- | --- | --- |
| ABL | - | Timeout | - |
| NN Confidence | 82.64 $\pm$ 2.78 | 0.24 $\pm$ 0.03 | 64.3 $\pm$ 6.2 |
| NASR | 95.86 $\pm$ 0.96 | 0.26 $\pm$ 0.02 | 82.7 $\pm$ 4.4 |
| ABL-Refl | 99.04 $\pm$ 0.85 | 0.22 $\pm$ 0.02 | 93.5 $\pm$ 3.2 |
Table 6: Recall, inference time and accuracy. "Timeout" indicates that inference takes more than 1 hour.
## 6 Conclusion
In this paper, we present Abductive Reflection (ABL-Refl). It leverages domain knowledge to abduce a reflection vector, which flags potential errors in neural network outputs and then invokes abduction, serving as an attention mechanism for symbolic reasoning to focus on a much smaller problem space. Experiments show that ABL-Refl significantly outperforms other NeSy methods, achieving excellent reasoning accuracy with fewer training resources, and has successfully enhanced reasoning efficiency.
ABL-Refl preserves the integrity of both machine learning and logical reasoning with superior inference speed and high versatility. Therefore, it has the potential for broad application. In the future, it can be applied to large language models (Mialon et al. 2023) to help identify errors within their outputs, and subsequently exploit symbolic reasoning to enhance their trustworthiness and reliability.
## Acknowledgments
This research was supported by the NSFC (62176117, 62206124) and Jiangsu Science Foundation Leading-edge Technology Program (BK20232003).
## References
- Ahmed et al. (2022) Ahmed, K.; Wang, E.; Chang, K.-W.; and Van den Broeck, G. 2022. Neuro-symbolic entropy regularization. In Uncertainty in Artificial Intelligence, 43–53. PMLR.
- Amos and Kolter (2017) Amos, B.; and Kolter, J. Z. 2017. Optnet: Differentiable optimization as a layer in neural networks. In International Conference on Machine Learning, 136–145. PMLR.
- Bengio (2019) Bengio, Y. 2019. From system 1 deep learning to system 2 deep learning. In Neural Information Processing Systems.
- Cai et al. (2021) Cai, L.-W.; Dai, W.-Z.; Huang, Y.-X.; Li, Y.-F.; Muggleton, S. H.; and Jiang, Y. 2021. Abductive Learning with Ground Knowledge Base. In Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI’21), 1815–1821.
- Chen et al. (2020) Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, 1597–1607. PMLR.
- Cornelio et al. (2023) Cornelio, C.; Stuehmer, J.; Hu, S. X.; and Hospedales, T. 2023. Learning where and when to reason in neuro-symbolic inference. In The Eleventh International Conference on Learning Representations.
- Dai et al. (2019) Dai, W.-Z.; Xu, Q.; Yu, Y.; and Zhou, Z.-H. 2019. Bridging machine learning and logical reasoning by abductive learning. Advances in Neural Information Processing Systems, 32.
- Frederick (2005) Frederick, S. 2005. Cognitive reflection and decision making. Journal of Economic perspectives, 19(4): 25–42.
- Gao et al. (2024) Gao, E.-H.; Huang, Y.-X.; Hu, W.-C.; Zhu, X.-H.; and Dai, W.-Z. 2024. Knowledge-Enhanced Historical Document Segmentation and Recognition. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI’24), 8409–8416.
- Han et al. (2023) Han, Q.; Yang, L.; Chen, Q.; Zhou, X.; Zhang, D.; Wang, A.; Sun, R.; and Luo, X. 2023. A GNN-Guided Predict-and-Search Framework for Mixed-Integer Linear Programming. arXiv preprint arXiv:2302.05636.
- Hitzler (2022) Hitzler, P. 2022. Neuro-symbolic artificial intelligence: The state of the art. IOS Press.
- Hoernle et al. (2022) Hoernle, N.; Karampatsis, R. M.; Belle, V.; and Gal, K. 2022. Multiplexnet: Towards fully satisfied logical constraints in neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 5700–5709.
- Huang et al. (2020) Huang, Y.-X.; Dai, W.-Z.; Yang, J.; Cai, L.-W.; Cheng, S.; Huang, R.; Li, Y.-F.; and Zhou, Z.-H. 2020. Semi-Supervised Abductive Learning and Its Application to Theft Judicial Sentencing. In Proceedings of the 20th IEEE International Conference on Data Mining (ICDM’20), 1070–1075.
- Huang et al. (2024) Huang, Y.-X.; Hu, W.-C.; Gao, E.-H.; and Jiang, Y. 2024. ABLkit: A Python Toolkit for Abductive Learning. Frontiers of Computer Science, to appear.
- Kahneman (2011) Kahneman, D. 2011. Thinking, fast and slow. macmillan.
- Karalias and Loukas (2020) Karalias, N.; and Loukas, A. 2020. Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs. Advances in Neural Information Processing Systems, 33: 6659–6672.
- Karalias et al. (2022) Karalias, N.; Robinson, J.; Loukas, A.; and Jegelka, S. 2022. Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions. arXiv preprint arXiv:2208.04055.
- LeCun et al. (1998) LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11): 2278–2324.
- Li et al. (2015) Li, Y.; Tarlow, D.; Brockschmidt, M.; and Zemel, R. 2015. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493.
- Manhaeve et al. (2018) Manhaeve, R.; Dumancic, S.; Kimmig, A.; Demeester, T.; and De Raedt, L. 2018. Deepproblog: Neural probabilistic logic programming. advances in neural information processing systems, 31.
- Marra et al. (2020) Marra, G.; Giannini, F.; Diligenti, M.; and Gori, M. 2020. Integrating learning and reasoning with deep logic models. In Machine Learning and Knowledge Discovery in Databases, 517–532. Springer.
- Mialon et al. (2023) Mialon, G.; Fourrier, C.; Swift, C.; Wolf, T.; LeCun, Y.; and Scialom, T. 2023. GAIA: a benchmark for General AI Assistants. arXiv preprint arXiv:2311.12983.
- Morris et al. (2020) Morris, C.; Kriege, N. M.; Bause, F.; Kersting, K.; Mutzel, P.; and Neumann, M. 2020. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663.
- Nair et al. (2020) Nair, V.; Bartunov, S.; Gimeno, F.; Von Glehn, I.; Lichocki, P.; Lobov, I.; O’Donoghue, B.; Sonnerat, N.; Tjandraatmadja, C.; Wang, P.; et al. 2020. Solving mixed integer programs using neural networks. arXiv preprint arXiv:2012.13349.
- Nye et al. (2021) Nye, M.; Tessler, M.; Tenenbaum, J.; and Lake, B. M. 2021. Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. Advances in Neural Information Processing Systems, 34: 25192–25204.
- Palm, Paquet, and Winther (2018) Palm, R.; Paquet, U.; and Winther, O. 2018. Recurrent relational networks. Advances in Neural Information Processing Systems, 31.
- Pennycook et al. (2016) Pennycook, G.; Cheyne, J. A.; Koehler, D. J.; and Fugelsang, J. A. 2016. Is the cognitive reflection test a measure of both reflection and intuition? Behavior research methods, 48: 341–348.
- Selsam et al. (2018) Selsam, D.; Lamm, M.; Bünz, B.; Liang, P.; de Moura, L.; and Dill, D. L. 2018. Learning a SAT solver from single-bit supervision. arXiv preprint arXiv:1802.03685.
- Serafini and Garcez (2016) Serafini, L.; and Garcez, A. d. 2016. Logic tensor networks: Deep learning and logical reasoning from data and knowledge. arXiv preprint arXiv:1606.04422.
- Sinayev and Peters (2015) Sinayev, A.; and Peters, E. 2015. Cognitive reflection vs. calculation in decision making. Frontiers in psychology, 6: 532.
- Sörensson (2010) Sörensson, N. 2010. Minisat 2.2 and minisat++ 1.1. A short description in SAT Race.
- Triska (2012) Triska, M. 2012. The finite domain constraint solver of SWI-Prolog. In Functional and Logic Programming: 11th International Symposium, 307–316. Springer.
- Veličković et al. (2017) Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
- Vopani (2019) Vopani. 2019. 9 Million Sudoku Puzzles and Solutions. https://www.kaggle.com/datasets/rohanrao/sudoku. Accessed: 2024-08-01.
- Wang et al. (2021) Wang, J.; Deng, D.; Xie, X.; Shu, X.; Huang, Y.-X.; Cai, L.-W.; Zhang, H.; Zhang, M.-L.; Zhou, Z.-H.; and Wu, Y. 2021. Tac-Valuer: Knowledge-based Stroke Evaluation in Table Tennis. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD’21), 3688–3696.
- Wang et al. (2019) Wang, P.-W.; Donti, P.; Wilder, B.; and Kolter, Z. 2019. SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In International Conference on Machine Learning, 6545–6554. PMLR.
- Williams (1992) Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Reinforcement learning, 5–32.
- Xu et al. (2018) Xu, J.; Zhang, Z.; Friedman, T.; Liang, Y.; and Broeck, G. 2018. A semantic loss function for deep learning with symbolic knowledge. In International conference on machine learning, 5502–5511. PMLR.
- Yang, Ishay, and Lee (2020) Yang, Z.; Ishay, A.; and Lee, J. 2020. NeurASP: Embracing neural networks into answer set programming. In 29th International Joint Conference on Artificial Intelligence.
- Yang, Lee, and Park (2022) Yang, Z.; Lee, J.; and Park, C. 2022. Injecting logical constraints into neural networks via straight-through estimators. In International Conference on Machine Learning, 25096–25122. PMLR.
- Zhang et al. (2023) Zhang, B.; Luo, S.; Wang, L.; and He, D. 2023. Rethinking the expressive power of gnns via graph biconnectivity. arXiv preprint arXiv:2301.09505.
- Zhang et al. (2020) Zhang, W.; Sun, Z.; Zhu, Q.; Li, G.; Cai, S.; Xiong, Y.; and Zhang, L. 2020. NLocalSAT: Boosting local search with solution prediction. arXiv preprint arXiv:2001.09398.
- Zhou (2019) Zhou, Z.-H. 2019. Abductive learning: towards bridging machine learning and logical reasoning. Science China Information Sciences, 62: 1–3.
- Zhou and Huang (2022) Zhou, Z.-H.; and Huang, Y.-X. 2022. Abductive Learning. In Hitzler, P.; and Sarker, M. K., eds., Neuro-Symbolic Artificial Intelligence: The State of the Art, 353–369. Amsterdam: IOS Press.
## Appendix A Comparison Methods
In this section, we provide a brief supplementary introduction to the baseline methods compared in the experiments.
### A.1 Solving Sudoku
In the solving Sudoku experiments (Sections 4.1 and 4.2), we compare our method with the following baselines:
1. Recurrent Relational Network (RRN) (Palm, Paquet, and Winther 2018), a state-of-the-art pure neural network method tailored for this problem;
1. CL-STE (Yang, Lee, and Park 2022), injecting logical knowledge (defined in the same way as our $\mathcal{KB}$ ) as neural network constraints during the training of RRN;
1. SATNet (Wang et al. 2019), incorporating a differentiable MaxSAT solver into the neural network to perform reasoning.
Note that CL-STE is a representative logic-based regularization method, relaxing symbolic logic into a neural network loss. Among methods of this kind, CL-STE stands out in both accuracy and efficiency, partly because it avoids constructing complex SDDs, unlike other methods such as semantic loss (Xu et al. 2018).
Other lines of methods generally underperform the above baselines in scenarios where $n$ (the scale of $\boldsymbol{y}$) is large. For instance, ABL faces the challenge that consistency optimization must choose among exponentially many query candidates, resulting in runtimes thousands of times longer than those of other methods, as seen in Section 5. Take two other representative NeSy methods as examples: DeepProbLog (Manhaeve et al. 2018) involves substantial computational costs, taking days to solve Sudoku; NeurASP (Yang, Ishay, and Lee 2020) is also slow and lags in accuracy, as shown in Yang, Lee, and Park (2022).
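To make the scale of this search concrete, here is a back-of-the-envelope illustration (our own arithmetic, not figures from the original experiments). If consistency optimization may revise up to $k$ of the $n$ symbolic outputs, each ranging over $m$ possible values, the number of candidate revisions is

$$\sum_{j=0}^{k}\binom{n}{j}(m-1)^{j},$$

which for a $9\times 9$ Sudoku board ($n=81$, $m=9$) already exceeds $10^{21}$ at $k=10$; without a mechanism that localizes the errors, the optimization must search over this space.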
### A.2 Solving Combinatorial Optimization on Graphs
In the experiments on solving combinatorial optimization on graphs (Section 4.3 and Appendix E), we compare our method with the following baselines:
1. Erdos (Karalias and Loukas 2020), optimizing set functions using a neural network parametrizing a distribution over sets;
1. Neural SFE (Karalias et al. 2022), optimizing set functions by extending them onto high-dimensional continuous domains.
In this experiment, the above methods use the same graph neural network body block as our method.
## Appendix B Training Curve
In this section, we report the training curves for the experiments on solving Sudoku (Section 4.1) and visual Sudoku (Section 4.2). The training curves for the two scenarios are shown in Figures 4(a) and 4(b), with the horizontal axis representing training epochs and the vertical axis representing inference accuracy. We can see that our method achieves high accuracy within just a few epochs, significantly reducing training time compared to the baseline methods.
[Figure: line chart of inference accuracy (y-axis, 0–100) versus training epoch (x-axis, 0–100) on solving Sudoku, comparing Simple GNN, RRN, CL-STE, SATNet, and ABL-Refl (ours).]
(a) Sudoku.
[Figure: line chart of inference accuracy versus training epoch on visual Sudoku, comparing SATNet, ABL-Refl (ours), and ABL-Refl (ours) with pretrained CNN.]
(b) Visual Sudoku.
Figure 4: Training curve on solving Sudoku and visual Sudoku.
## Appendix C Discussion on Hyperparameter $C$
In this section, we discuss the effect of the hyperparameter $C$. In the previous experiments in Sections 4 and 5, $C$ was consistently set to 0.8; we now explore adjusting it. We report the extended results in Tables 7 and 8, which show that when $C$ is set within a wide range, ABL-Refl uniformly outperforms the baseline methods.
Intuitively, as mentioned in Section 3, setting $C$ lower delegates more elements to the solver for correction, thereby often enhancing reasoning accuracy. The results in Tables 7 and 8 also demonstrate this point.
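To make this role of $C$ concrete, below is a minimal Python sketch of how a reflection score could be thresholded into a retained part and a part delegated to abduction. It assumes the reflection vector is read as a per-element error score and, unlike the soft margin actually used by ABL-Refl, applies a hard cut; all names here (e.g., `split_by_reflection`) are hypothetical.

```python
import numpy as np

def split_by_reflection(y_pred, reflection, C=0.8):
    """Retain the fraction C of elements judged most reliable; delegate the rest.

    y_pred     : (n,) intuitive outputs from the neural network.
    reflection : (n,) reflection scores; higher = more likely inconsistent
                 with the knowledge base (assumed semantics for this sketch).
    C          : fraction of the intuitive output that is retained.
    """
    n = len(y_pred)
    k = int(round(C * n))                  # number of elements to retain
    order = np.argsort(reflection)         # most reliable elements first
    keep, revise = order[:k], order[k:]
    partial = {int(i): int(y_pred[i]) for i in keep}
    return partial, revise                 # `revise` is handed to abduction

# The symbolic solver then only searches over the `revise` positions, so a
# higher C leaves a smaller search space (faster) but relies more on the
# network's intuition (potentially less accurate), matching Tables 7-9.
```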
| Experiment | Method | Inference Time (s) | Inference Accuracy |
| --- | --- | --- | --- |
| Sudoku | Simple GNN | 0.02 $\pm$ 0.00 | 55.6 $\pm$ 0.3 |
| | RRN | 0.19 $\pm$ 0.01 | 73.1 $\pm$ 1.2 |
| | CL-STE | 0.19 $\pm$ 0.02 | 76.5 $\pm$ 1.8 |
| | SATNet | 0.11 $\pm$ 0.01 | 74.1 $\pm$ 0.4 |
| | ABL-Refl ($C=0.7$) | 0.24 $\pm$ 0.02 | 99.1 $\pm$ 0.2 |
| | ABL-Refl ($C=0.8$) | 0.22 $\pm$ 0.02 | 97.4 $\pm$ 0.3 |
| | ABL-Refl ($C=0.9$) | 0.21 $\pm$ 0.02 | 96.6 $\pm$ 0.5 |
| Visual Sudoku | SATNet | 0.12 $\pm$ 0.01 | 63.5 $\pm$ 2.2 |
| | CNN+Solver | 0.23 $\pm$ 0.02 | 67.8 $\pm$ 4.2 |
| | ABL-Refl ($C=0.7$) | 0.24 $\pm$ 0.02 | 95.9 $\pm$ 2.8 |
| | ABL-Refl ($C=0.8$) | 0.22 $\pm$ 0.02 | 93.5 $\pm$ 3.2 |
| | ABL-Refl ($C=0.9$) | 0.21 $\pm$ 0.02 | 90.6 $\pm$ 4.2 |
Table 7: Inference time and accuracy on solving Sudoku and visual Sudoku. For different values of the hyperparameter $C$ , ABL-Refl uniformly outperforms other baseline methods.
| Method | ENZYMES | PROTEINS | IMDB-Binary | COLLAB |
| --- | --- | --- | --- | --- |
| Erdos | 0.883 $\pm$ 0.156 | 0.905 $\pm$ 0.133 | 0.936 $\pm$ 0.175 | 0.852 $\pm$ 0.212 |
| Neural SFE | 0.933 $\pm$ 0.148 | 0.926 $\pm$ 0.165 | 0.961 $\pm$ 0.143 | 0.781 $\pm$ 0.316 |
| ABL-Refl ($C=0.7$) | 0.992 $\pm$ 0.012 | 0.988 $\pm$ 0.019 | 0.984 $\pm$ 0.026 | 0.986 $\pm$ 0.016 |
| ABL-Refl ($C=0.8$) | 0.991 $\pm$ 0.017 | 0.985 $\pm$ 0.020 | 0.979 $\pm$ 0.029 | 0.982 $\pm$ 0.015 |
| ABL-Refl ($C=0.9$) | 0.982 $\pm$ 0.023 | 0.975 $\pm$ 0.021 | 0.968 $\pm$ 0.035 | 0.971 $\pm$ 0.021 |
Table 8: Approximation ratios on finding maximum clique. For different values of the hyperparameter $C$ , ABL-Refl uniformly outperforms other baseline methods.
However, setting $C$ to extremely low values, while potentially further enhancing reasoning accuracy, risks weakening the role of reflection in accelerating reasoning, since more elements are delegated to symbolic reasoning. Therefore, we do not recommend excessively lowering $C$. We have also evaluated the effect of $C$ on computational efficiency: the runtimes after adjusting $C$ are reported in Table 9. It can be seen that setting $C$ to a higher value further narrows the search space for symbolic reasoning, thereby offering a more substantial efficiency improvement. (Conversely, setting $C$ to an extremely high value would essentially rely solely on the neural network's intuitive output, rendering the reflection vector ineffective; hence, such settings are not considered.)
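As a rough worst-case illustration of this efficiency effect (our own arithmetic, ignoring constraint propagation inside the solver): if a fraction $1-C$ of the $n$ outputs is delegated to abduction and each delegated element ranges over $m$ values, the unpruned assignment space is $m^{(1-C)n}$. For $9\times 9$ Sudoku ($n=81$, $m=9$), this is roughly $9^{24}\approx 10^{23}$ at $C=0.7$ but only $9^{8}\approx 4\times 10^{7}$ at $C=0.9$, consistent with the trend in abduction time observed in Table 9.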
| $\boldsymbol{\mathcal{KB}}$ Form | Solver | Method | Inference Accuracy | NN Time (s) | Abduction Time (s) | Overall Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| Propositional logic | MiniSAT | Solver only | 100 $\pm$ 0 | - | 0.227 $\pm$ 0.024 | 0.227 $\pm$ 0.024 |
| | | ABL-Refl ($C=0.7$) | 99.1 $\pm$ 0.2 | 0.021 $\pm$ 0.004 | 0.218 $\pm$ 0.019 | 0.239 $\pm$ 0.023 |
| | | ABL-Refl ($C=0.8$) | 97.4 $\pm$ 0.3 | 0.021 $\pm$ 0.004 | 0.196 $\pm$ 0.015 | 0.217 $\pm$ 0.019 |
| | | ABL-Refl ($C=0.9$) | 96.6 $\pm$ 0.5 | 0.021 $\pm$ 0.004 | 0.185 $\pm$ 0.017 | 0.206 $\pm$ 0.021 |
| First-order logic | Prolog with CLP(FD) | Solver only | 100 $\pm$ 0 | - | 105.81 $\pm$ 5.62 | 105.81 $\pm$ 5.62 |
| | | ABL-Refl ($C=0.7$) | 99.1 $\pm$ 0.2 | 0.021 $\pm$ 0.004 | 68.59 $\pm$ 3.31 | 68.61 $\pm$ 3.31 |
| | | ABL-Refl ($C=0.8$) | 97.4 $\pm$ 0.3 | 0.021 $\pm$ 0.004 | 31.86 $\pm$ 1.88 | 31.88 $\pm$ 1.89 |
| | | ABL-Refl ($C=0.9$) | 96.6 $\pm$ 0.5 | 0.021 $\pm$ 0.004 | 20.47 $\pm$ 1.23 | 20.49 $\pm$ 1.23 |
Table 9: Inference accuracy and time (on 1K test data) on solving Sudoku. Setting the hyperparameter $C$ to a higher value offers a more substantial efficiency improvement compared to symbolic solvers.
In summary, to exploit the reflection vector's role in bridging neural network outputs and symbolic reasoning, we advise setting $C$ within a moderate range. Experimental evidence suggests that within this broad range, e.g., 0.6–0.9, the specific value of $C$ does not significantly affect outcomes; it merely trades accuracy against computation time.
## Appendix D More Discussion on Comparison with Neural Network Confidence
The core idea of ABL-Refl is to identify areas in the neural network's intuitive output where inconsistencies with domain knowledge are most likely to occur. A straightforward approach might therefore seem to be letting the neural network itself highlight errors, i.e., treating elements with low confidence in the network's output as potential errors. However, Section 5 has shown that such a naive approach significantly underperforms our method. This is because neural networks cannot explicitly utilize symbolic knowledge during training, making it difficult to establish a correlation between confidence levels and inconsistencies with knowledge.
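For reference, the naive confidence-based flagging discussed above can be sketched as follows (a minimal illustration under our assumptions; the function and variable names are hypothetical, not taken from the paper's code):

```python
import numpy as np

def flag_by_confidence(probs, retain=0.8):
    """Naive baseline: flag the least confident predictions as potential errors.

    probs  : (n, m) softmax outputs of the neural network over m classes.
    retain : fraction of elements kept as-is (cf. the thresholds in Table 10).
    """
    y_pred = probs.argmax(axis=1)          # intuitive prediction per element
    conf = probs.max(axis=1)               # confidence of the chosen class
    k = int(round(retain * len(conf)))
    order = np.argsort(-conf)              # most confident elements first
    flagged = order[k:]                    # least confident -> potential errors
    return y_pred, flagged
```

As the case study below shows, such flags track whatever correlations the network happened to learn from the data rather than violations of $\mathcal{KB}$.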
To illustrate this more clearly, we present a case study from the solving Sudoku experiment: Figures 5(a) and 5(b) below depict a Sudoku problem and its correct solution. Figure 5(c) shows the intuitive output obtained from the GNN, where several numbers marked in red are incorrect. Figures 5(d) and 5(e) display the results using NN confidence and the reflection vector, respectively, with the identified potential error positions in blue.
[Figure: the input Sudoku problem grid.]
(a) Sudoku problem
[Figure: the correct solution to the Sudoku problem.]
(b) Sudoku solution
[Figure: the GNN's intuitive output, with incorrect numbers marked in red.]
(c) Neural network intuitive output
[Figure: potential error positions identified by NN confidence.]
(d) Errors identified by NN confidence
[Figure: potential error positions identified by the reflection vector (ABL-Refl).]
(e) Errors identified by ABL-Refl
Figure 5: A case study in the solving Sudoku experiment.
It can be seen that the errors marked by the reflection vector generally correspond to the constraints in $\mathcal{KB}$, i.e., duplicate numbers within a row, column, or subgrid. In contrast, the errors identified by NN confidence are difficult to align with such knowledge. Take the incorrect identification of the cell in the first row, first column as an example: after examining the dataset, we find that some Sudoku solutions contain a number “4” in the third row, third column and a number “7” in the first row, first column at the same time. Such irrelevant yet common data patterns are likely learned erroneously by the neural network during training. Hence, when an error occurs in the third row, third column, the confidence in the first row, first column also drops. This case study highlights that purely data-driven networks cannot explicitly utilize the knowledge in $\mathcal{KB}$: during training, they only have access to data labels, not the logical principles behind the data. Consequently, due to factors such as learning spurious data patterns or overfitting to noise, confidence values often misalign with compatibility with domain knowledge, making them unreliable for identifying errors. In contrast, the training information for the reflection vector is directly derived from $\mathcal{KB}$.
Furthermore, as discussed in Appendix C, adjusting the hyperparameter $C$ in ABL-Refl, which acts as a soft margin, determines how much of the neural network's output is retained. In Section 5, corresponding to $C=0.8$, the top 80% most confident portion of the neural network's output was retained. We now test adjusting this retention threshold and report the results in Table 10. As can be seen, regardless of the threshold value, our method consistently outperforms NN confidence.
| Retained Intuitive Output | Method | Recall | Inference Accuracy |
| --- | --- | --- | --- |
| 60% | NN Confidence | 93.18 $\pm$ 2.34 | 77.2 $\pm$ 5.5 |
| | ABL-Refl ($C=0.6$) | 99.31 $\pm$ 0.84 | 95.8 $\pm$ 2.8 |
| 70% | NN Confidence | 88.60 $\pm$ 2.66 | 70.1 $\pm$ 5.7 |
| | ABL-Refl ($C=0.7$) | 99.25 $\pm$ 0.84 | 94.5 $\pm$ 2.9 |
| 80% | NN Confidence | 82.64 $\pm$ 2.78 | 64.3 $\pm$ 6.2 |
| | ABL-Refl ($C=0.8$) | 99.04 $\pm$ 0.85 | 93.5 $\pm$ 3.2 |
| 90% | NN Confidence | 71.05 $\pm$ 3.01 | 52.1 $\pm$ 6.2 |
| | ABL-Refl ($C=0.9$) | 98.86 $\pm$ 0.89 | 91.2 $\pm$ 3.5 |
Table 10: Recall and inference accuracy for different thresholds of retained intuitive output (in ABL-Refl, the threshold is controlled by $C$ as a soft margin rather than a strict boundary).
## Appendix E Additional Experiment on Solving Combinatorial Optimization Problems on Graphs
In this section, we present an additional experiment on solving combinatorial optimization problems on graphs: finding the maximum independent set. This experiment demonstrates how easily our method extends across varied reasoning scenarios.
#### Dataset and Settings.
The input is the same as in Section 4.3 for finding the maximum clique: a graph $G=(V,E)$ with $|V|=n$ nodes. In this section, however, we aim for an output $\boldsymbol{y}\in\{0,1\}^{n}$ in which the nodes assigned value 1 collectively constitute the maximum independent set. While the two problems share similarities, they demand distinct reasoning capabilities: cliques rely on high homophily, whereas independent sets exhibit significant heterophily. Generally, it is challenging for graph neural networks to handle both scenarios effectively at the same time.
We utilize the same graph neural network structure as in Section 4.3. For the reasoning part, we continue to use Gurobi as the symbolic solver, and $\mathcal{KB}$ remains the basic mathematical definition of an independent set, i.e., no two selected nodes are connected by an edge. For the consistency measure, we adopt a definition similar to that in Section 4.3: one point is awarded for each pair of selected vertices not connected by an edge; additionally, if the output set is indeed an independent set, ten times its size is added (see the sketch below). Although the nature of the reasoning becomes entirely opposite to that of solving the maximum clique, we can flexibly transition to the new scenario with minimal changes.
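Below is a minimal sketch of this consistency measure under our reading of the description above (pairs are taken within the output set; names such as `consistency_score` are ours, not the authors' implementation):

```python
from itertools import combinations

def consistency_score(edges, selected):
    """Consistency of a candidate independent set, as described above.

    edges    : iterable of (u, v) pairs giving the graph's edges.
    selected : vertices assigned value 1 in the output y.
    """
    edge_set = {frozenset(e) for e in edges}
    selected = list(selected)
    # One point per pair of selected vertices NOT joined by an edge.
    score = sum(1 for u, v in combinations(selected, 2)
                if frozenset((u, v)) not in edge_set)
    # Bonus of 10 * |selected| if the selection is truly an independent set.
    if score == len(selected) * (len(selected) - 1) // 2:
        score += 10 * len(selected)
    return score

# Example: on a triangle graph, selecting two adjacent vertices scores 0,
# while selecting a single vertex scores 0 + 10 = 10.
print(consistency_score([(0, 1), (1, 2), (0, 2)], [0, 2]))  # -> 0
print(consistency_score([(0, 1), (1, 2), (0, 2)], [0]))     # -> 10
```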
#### Results.
We report the results in Table 11. Our method significantly outperforms the compared methods. Additionally, comparing with the results in Table 8, we observe that the performance of the other baselines declines when switching from finding maximum cliques to finding maximum independent sets, whereas the performance of ABL-Refl remains near-perfect.
| Method | ENZYMES | PROTEINS | IMDB-Binary | COLLAB |
| --- | --- | --- | --- | --- |
| Erdos | 0.821 $\pm$ 0.125 | 0.903 $\pm$ 0.114 | 0.515 $\pm$ 0.310 | 0.886 $\pm$ 0.198 |
| Neural SFE | 0.775 $\pm$ 0.155 | 0.729 $\pm$ 0.205 | 0.679 $\pm$ 0.287 | 0.392 $\pm$ 0.253 |
| ABL-Refl ($C=0.7$) | 0.989 $\pm$ 0.022 | 0.958 $\pm$ 0.029 | 0.964 $\pm$ 0.026 | 0.987 $\pm$ 0.016 |
| ABL-Refl ($C=0.8$) | 0.986 $\pm$ 0.026 | 0.954 $\pm$ 0.053 | 0.960 $\pm$ 0.037 | 0.985 $\pm$ 0.016 |
| ABL-Refl ($C=0.9$) | 0.980 $\pm$ 0.025 | 0.942 $\pm$ 0.051 | 0.952 $\pm$ 0.021 | 0.975 $\pm$ 0.021 |
Table 11: Approximation ratios on finding the maximum independent set.