## Heatmap Comparison: Decision Boundaries of Logic Gates and Neural Models
### Overview
The image is a composite figure containing ten individual heatmap subplots arranged in a 5-row by 2-column grid. The overall title is "Comparison of Hard Logic, NLU, and MLP (Dense + ReLU) Decision Boundaries." Each subplot visualizes the decision boundary of a specific logical function or neural network model over a 2D input space defined by variables \(x_1\) and \(x_2\), both ranging from 0.0 to 1.0. A vertical color bar on the right side of the figure provides a legend for the "Decision Output Intensity," mapping colors to numerical values from 0.000 (dark purple) to 1.000 (bright yellow).
### Components/Axes
* **Main Title:** "Comparison of Hard Logic, NLU, and MLP (Dense + ReLU) Decision Boundaries"
* **Subplot Titles (Row-wise, Left to Right):**
1. "Hard Logic OR", "Hard Logic AND"
2. "NLU Soft-OR (β=1, w=(0.5,0.5))", "NLU Soft-AND (β=1, w=(0.5,0.5))"
3. "NLU Soft-OR (β=10, w=(0.5,0.5))", "NLU Soft-AND (β=10, w=(0.5,0.5))"
4. "NLU Soft-OR (β=100, w=(0.5,0.5))", "NLU Soft-AND (β=100, w=(0.5,0.5))"
5. "Dense+ReLU (Bias=0.0, w=(0.5,0.5))", "Dense+ReLU (Bias=-0.5, w=(0.5,0.5))"
* **Axes (for all subplots):**
* **X-axis:** Label is "\(x_1\)". Ticks are at 0.0, 0.2, 0.4, 0.6, 0.8, 1.0.
* **Y-axis:** Label is "\(x_2\)". Ticks are at 0.0, 0.2, 0.4, 0.6, 0.8, 1.0.
* **Color Bar (Right side, spanning full height):**
* **Label:** "Decision Output Intensity"
* **Scale:** Linear, from 0.000 at the bottom to 1.000 at the top.
* **Tick Values:** 0.000, 0.111, 0.222, 0.333, 0.444, 0.556, 0.667, 0.778, 0.889, 1.000.
* **Color Gradient:** Transitions from dark purple (0.000) through blue, teal, and green to bright yellow (1.000).
### Detailed Analysis
**Row 1: Hard Logic Gates**
* **Left (Hard Logic OR):** Shows a sharp, step-function boundary. The region where \(x_1 > 0.5\) OR \(x_2 > 0.5\) is bright yellow (output ≈ 1.0). The region where both \(x_1 \le 0.5\) and \(x_2 \le 0.5\) is dark purple (output ≈ 0.0). The boundary forms an "L" shape.
* **Right (Hard Logic AND):** Shows a sharp, step-function boundary. The region where \(x_1 > 0.5\) AND \(x_2 > 0.5\) is bright yellow (output ≈ 1.0). All other regions are dark purple (output ≈ 0.0). The boundary forms a square in the top-right quadrant.
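The two hard-gate panels can be reproduced with a few lines of NumPy. This is a sketch of the presumed construction (the figure does not include code); the grid resolution is an assumption:

```python
import numpy as np

# Hypothetical reconstruction of the Row 1 panels: hard OR/AND gates
# thresholded at 0.5, evaluated on the same [0, 1] x [0, 1] input grid.
xs = np.linspace(0.0, 1.0, 101)          # grid resolution is assumed
x1, x2 = np.meshgrid(xs, xs)

hard_or = ((x1 > 0.5) | (x2 > 0.5)).astype(float)   # L-shaped bright region
hard_and = ((x1 > 0.5) & (x2 > 0.5)).astype(float)  # bright top-right square

# OR is bright everywhere except the bottom-left quadrant (~3/4 of the
# area); AND is bright only in the top-right quadrant (~1/4 of the area).
print(hard_or.mean(), hard_and.mean())
```

Plotting either array with `imshow` and the viridis colormap yields the purple/yellow step patterns described above.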
**Rows 2-4: NLU (Neural Logic Unit) Soft Gates**
These plots show smooth, continuous approximations of the hard logic gates. The parameter β controls the sharpness of the transition. The weight vector w is fixed at (0.5, 0.5).
* **Row 2 (β=1):** Boundaries are very diffuse. For Soft-OR, the output increases gradually from the bottom-left corner. For Soft-AND, the output increases gradually towards the top-right corner. The transition zone is wide.
* **Row 3 (β=10):** Boundaries become more defined. The transition from low to high output is sharper than at β=1, but still smooth. The shape begins to resemble the hard logic counterparts.
* **Row 4 (β=100):** Boundaries are very sharp, closely approximating the hard logic gates. The transition from purple to teal/green is abrupt, mimicking the step function. The Soft-OR boundary is a sharp "L", and the Soft-AND boundary is a sharp square in the top-right.
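The figure does not state the exact NLU formula, so the following is only one plausible parameterization consistent with the described behavior: each input passes through a sigmoid centered at 0.5 whose steepness is set by β, and the per-input activations are combined with the probabilistic (noisy) OR/AND. The factor of 2 on the weights is an assumption made so that w = (0.5, 0.5) leaves β as the effective slope:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical NLU soft gates (parameterization assumed, not from the figure):
# per-input sigmoids centred at 0.5 with sharpness beta, combined with the
# probabilistic OR / AND. w = (0.5, 0.5) as in the subplot titles.
def soft_or(x1, x2, beta, w=(0.5, 0.5)):
    p1 = sigmoid(beta * 2 * w[0] * (x1 - 0.5))
    p2 = sigmoid(beta * 2 * w[1] * (x2 - 0.5))
    return 1.0 - (1.0 - p1) * (1.0 - p2)   # noisy-OR combination

def soft_and(x1, x2, beta, w=(0.5, 0.5)):
    p1 = sigmoid(beta * 2 * w[0] * (x1 - 0.5))
    p2 = sigmoid(beta * 2 * w[1] * (x2 - 0.5))
    return p1 * p2                          # product (noisy-AND)
```

With this construction, β=1 gives the wide, diffuse transition zones of Row 2, while β=100 drives the sigmoids toward step functions and recovers the hard "L" and square shapes of Row 4.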
**Row 5: Dense Layer + ReLU Activation**
These plots show the decision boundary of a single-layer neural network with a linear (Dense) layer followed by a Rectified Linear Unit (ReLU) activation. The weight vector w is (0.5, 0.5).
* **Left (Bias=0.0):** With zero bias, the pre-activation \(0.5 x_1 + 0.5 x_2\) is non-negative everywhere on the unit square, so the ReLU clips nothing. The output is a linear ramp from 0.0 (dark purple) at the origin to 1.0 (bright yellow) at (1,1), with straight diagonal iso-intensity contours.
* **Right (Bias=-0.5):** The negative bias shifts the linear plane downward, and the ReLU clips the entire triangle below the line \(x_1 + x_2 = 1\) to zero. The decision boundary (where the output becomes positive) is the diagonal from (0,1) to (1,0). The output is 0.0 (purple) below this line, rising linearly to a maximum of 0.5 (teal) at the top-right corner (1,1).
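The Row 5 unit is simple enough to state exactly. A minimal sketch, using the weights and biases from the subplot titles:

```python
import numpy as np

# Sketch of the Row 5 units: a single affine map followed by ReLU,
# with w = (0.5, 0.5) and the two bias settings from the figure.
def dense_relu(x1, x2, bias, w=(0.5, 0.5)):
    return np.maximum(0.0, w[0] * x1 + w[1] * x2 + bias)

# bias = 0.0: the pre-activation 0.5*(x1 + x2) is non-negative on the
# unit square, so nothing is clipped -- a pure linear ramp from 0 to 1.
# bias = -0.5: everything below the line x1 + x2 = 1 is clipped to 0,
# and the maximum output at (1, 1) drops to 0.5.
print(dense_relu(1.0, 1.0, bias=0.0), dense_relu(1.0, 1.0, bias=-0.5))
```

Evaluating this on the same grid and colormap reproduces the ramp (left panel) and the clipped half-plane (right panel).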
### Key Observations
1. **Sharpness Progression:** The NLU models demonstrate a clear progression from very soft, diffuse decision boundaries (β=1) to very sharp, near-binary boundaries (β=100) that closely mimic the hard logic gates.
2. **Boundary Shape:** The fundamental shape of the decision boundary is determined by the logical operation (OR vs. AND). OR boundaries are "L"-shaped, encompassing the top and right. AND boundaries are square, confined to the top-right.
3. **ReLU Limitation:** The single Dense+ReLU unit with these weights cannot replicate the hard AND or OR boundaries: an affine map followed by ReLU can only produce a straight-line (half-plane) boundary. The bias term shifts this line but cannot bend it into the "L" or square shapes of the logic gates.
4. **Color-Value Consistency:** The color mapping is consistent across all plots. Bright yellow always corresponds to an output near 1.0, and dark purple to an output near 0.0. The intermediate colors (teal, green) represent the smooth transition zones in the soft models.
### Interpretation
This figure is a pedagogical comparison of how different computational models represent logical decisions.
* **Hard Logic** represents the classical, binary ideal: a decision is either fully true (1) or fully false (0) with an instantaneous switch.
* **NLU (Soft Logic)** provides a differentiable, continuous approximation of hard logic. The parameter β acts as a "temperature" or "sharpness" control. Low β creates a very gradual, uncertain transition (useful for gradient-based learning), while high β recovers the crisp, confident decision of hard logic. This illustrates how neural networks can learn to implement logical rules in a smooth, optimizable way.
* **Dense+ReLU** represents the standard building block of deep learning. The figure shows its inherent limitation: a single unit with a ReLU activation can only create a linear decision boundary. To approximate the more complex "L" or square boundaries of OR/AND logic, multiple such units (i.e., a deeper network) would be required to combine several linear boundaries.
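That last point can be sketched concretely. With two hidden ReLU units (hypothetical weights chosen here for illustration, not taken from the figure), each unit detects one input crossing 0.5, and their saturated sum yields the L-shaped OR region that a single unit cannot express:

```python
import numpy as np

# Hypothetical two-hidden-unit network approximating hard OR; the steep
# slope k plays the same role as beta in the NLU panels (assumed value).
def mlp_or(x1, x2, k=100.0):
    h1 = np.maximum(0.0, k * (x1 - 0.5))  # fires when x1 > 0.5
    h2 = np.maximum(0.0, k * (x2 - 0.5))  # fires when x2 > 0.5
    return np.minimum(1.0, h1 + h2)       # combine and saturate at 1

# Each hidden unit contributes one straight boundary; combining them
# produces the L-shaped region no single Dense+ReLU unit can draw.
```

An analogous two-unit construction (summing and thresholding near 2) approximates the AND gate's square region.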
**In summary,** the visualization argues that specialized "soft logic" units (like the NLU) can efficiently and intuitively model logical operations with a controllable degree of fuzziness, whereas standard neural network components require more complexity (more units/layers) to achieve the same functional representation. The progression of β elegantly shows the continuum between probabilistic/fuzzy reasoning and deterministic logic.