## Diagram: Comparison of Neural-Symbolic (NeSy) Approaches for Autonomous Driving Logic
### Overview
The image is a technical diagram comparing two approaches to implementing a logical rule for autonomous driving. The left panel shows a real-world driving scene with annotated objects. The middle and right panels contrast a "State-of-the-Art" (SotA) Neural-Symbolic (NeSy) method with a proposed "RS-aware" method (the acronym is not expanded in the image; in the NeSy literature "RS" commonly abbreviates "reasoning shortcut"), using entropy charts and connection diagrams to illustrate their performance and internal reasoning.
### Components/Axes
The image is divided into three vertical panels separated by dashed lines.
**1. Left Panel (Problem Definition):**
* **Header Text:** `K = (pedestrian ∨ red ⇒ stop)`
* This is a logical rule: "If there is a pedestrian OR a red light, then stop."
* **Image Content:** A street scene from a vehicle's perspective.
* **Annotations:**
* A cyan bounding box around a traffic light on the left, labeled `green` in cyan text.
* A cyan bounding box around a traffic light further ahead, labeled `red` in cyan text.
* A magenta bounding box around two pedestrians crossing the street, labeled `red` in magenta text. This label most plausibly shows a model's predicted concept rather than a ground-truth annotation: the system outputs `red` for the pedestrians, a prediction that still satisfies the rule (both conditions trigger `stop`) while misidentifying the object. This mismatch is the crux of the comparison in the other two panels.
**2. Middle Panel (NeSy SotA):**
* **Header Text:** `NeSy SotA`
* **Top Chart - "Entropy":**
* A bar chart with three bars.
* **Bar 1 (Left):** Short blue bar with a green checkmark (✓) above it.
* **Bar 2 (Middle):** Taller blue bar with a red cross (✗) above it.
* **Bar 3 (Right):** Taller blue bar (similar height to Bar 2) with a red cross (✗) above it.
* **Bottom Diagram:**
* Three nodes at the top labeled: `g_l` (green light), `r_l` (red light), `pe` (pedestrian).
* Three colored circles at the bottom: a green circle, a red circle, and a neutral face (😐) circle.
* **Connections (Arrows):**
* `g_l` → green circle.
* `r_l` → red circle.
* `pe` → red circle.
* `pe` → neutral face circle.
**3. Right Panel (RS-aware):**
* **Header Text:** `RS-aware`
* **Top Chart - "Entropy":**
* A bar chart with three bars.
* **All Three Bars:** Equal, moderate height (shorter than the tall bars in the NeSy SotA chart), each with a green checkmark (✓) above it.
* **Bottom Diagram:**
* Same three nodes at the top: `g_l`, `r_l`, `pe`.
* Same three colored circles at the bottom: green, red, neutral face.
* **Connections (Arrows):**
* `g_l` → green circle.
* `r_l` → red circle.
* `r_l` → neutral face circle.
* `pe` → red circle.
* `pe` → neutral face circle.
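The rule `K` in the left panel can be made concrete by enumerating which concept assignments trigger its consequent. A minimal sketch (function and variable names are illustrative, not taken from the image):

```python
from itertools import product

def must_stop(pedestrian: bool, red: bool) -> bool:
    """Antecedent of K: a pedestrian OR a red light forces a stop."""
    return pedestrian or red

# All (pedestrian, red) concept assignments that trigger "stop".
triggers = [(p, r) for p, r in product([False, True], repeat=2)
            if must_stop(p, r)]
print(triggers)  # [(False, True), (True, False), (True, True)]
```

Three distinct concept assignments explain the same `stop` outcome, which is exactly the ambiguity the middle and right panels probe: supervision on the action alone cannot tell which concept was responsible.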
### Detailed Analysis
* **Logical Rule Application:** The left panel establishes the ground truth rule `K`. The scene contains a `green` light, a `red` light, and `pedestrians`. According to the rule, the presence of pedestrians (`pe`) should trigger the `stop` condition.
* **NeSy SotA Performance (Middle Panel):**
* **Entropy Chart:** Shows low entropy (high certainty) for the first condition (likely `g_l`), but high entropy (high uncertainty) for the second and third conditions (likely `r_l` and `pe`). The red crosses mark conditions the model classifies or reasons about incorrectly or with high uncertainty.
* **Connection Diagram:** `g_l` and `r_l` each map one-to-one onto their matching circles, while `pe` splits between the `red` circle and the neutral-face circle. The model cannot confidently decide whether a pedestrian should activate the "red" (stop) condition or a neutral state, so it fails to link the pedestrian firmly to the required logical outcome.
* **RS-aware Performance (Right Panel):**
* **Entropy Chart:** Shows moderate, uniform entropy across all three conditions, with green checkmarks indicating that each condition is handled correctly.
* **Connection Diagram:** Shows a richer, cross-connected mapping: both `r_l` and `pe` connect to *both* the `red` circle and the neutral-face circle. The model represents the shared semantic role of a red light and a pedestrian in triggering the stop rule, distributing its reasoning across both relevant outputs instead of forcing a single mapping.
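The "Entropy" axis in both bar charts can be read as the Shannon entropy of each predicted concept distribution. A small illustration (the distributions are invented for the sketch, not read off the image):

```python
import math

def entropy(dist):
    """Shannon entropy (in nats) of a categorical distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

peaked = [0.9, 0.05, 0.05]       # confident prediction: low entropy (short bar)
split = [0.05, 0.475, 0.475]     # mass split over two concepts: higher entropy (tall bar)

print(entropy(peaked) < entropy(split))  # True
```

A short bar thus corresponds to a peaked, confident concept distribution, and a tall bar to one that hedges between concepts.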
### Key Observations
1. **Spatial Grounding:** Each ✓/✗ symbol sits directly above its corresponding entropy bar, and each connection diagram sits directly below its panel's entropy chart.
2. **Trend Verification:** The NeSy SotA entropy trend is "low, high, high". The RS-aware entropy trend is "moderate, moderate, moderate". The connection complexity increases from NeSy SotA (mostly separate lines) to RS-aware (crossed lines).
3. **Component Isolation:** The left panel defines the problem. The middle and right panels are direct comparisons of solution methods, each with a performance metric (entropy) and a model of internal reasoning (connection diagram).
4. **Anomaly/Highlight:** The magenta label `red` on the pedestrians in the left image is a critical annotation. It visually forces the connection between the object "pedestrian" and the logical condition "red" (stop), which is the core challenge the diagrams explore.
### Interpretation
This diagram argues that a standard Neural-Symbolic (NeSy) approach struggles with the semantic ambiguity inherent in real-world rules. The rule `pedestrian ∨ red ⇒ stop` requires the system to understand that two different physical objects (a traffic light showing red, and a pedestrian) map to the same logical condition (`stop`). The NeSy SotA model shows high uncertainty (high entropy) when processing these ambiguous cases (`r_l` and `pe`), as its internal connections are too rigid.
The proposed "RS-aware" model handles this ambiguity explicitly. Its uniform, moderate entropy (lower than the SotA's two failure cases) indicates better-calibrated confidence. The cross-connected diagram reveals its strategy: it does not force a one-to-one mapping. Instead, it allows both `r_l` and `pe` to influence both the `red` (stop) and neutral outputs, better modeling the "OR" logic in which either input can trigger the same outcome. The diagram suggests that explicitly modeling how inputs relate to rule outcomes improves the robustness and calibration of AI systems in safety-critical tasks such as autonomous driving.
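The OR-ambiguity described above can also be seen numerically. Assuming independent concept probabilities (an assumption of this sketch, not stated in the image), the probability that the antecedent of `K` holds is identical for an overconfident "shortcut" model and a correctly grounded one, so supervision on the outcome alone cannot separate them:

```python
def p_antecedent(p_red: float, p_pedestrian: float) -> float:
    """P(red OR pedestrian) under independent concept probabilities."""
    return 1.0 - (1.0 - p_red) * (1.0 - p_pedestrian)

shortcut = p_antecedent(p_red=1.0, p_pedestrian=0.0)  # predicts "red" for pedestrians
grounded = p_antecedent(p_red=0.0, p_pedestrian=1.0)  # predicts "pedestrian" correctly
print(shortcut == grounded)  # True: both satisfy the rule equally well
```

Both models make the vehicle stop with probability 1, which is why a model that only optimizes rule satisfaction can silently learn the wrong concept grounding.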