## Diagram Set: Network-on-Chip Link Controller and Core Connectivity
### Overview
The image is a technical figure composed of three interconnected panels (a, b, c) illustrating the architecture and connectivity of a network-on-chip (NoC) communication system. Panel **a** details the internal block diagram of a single link controller. Panel **b** shows the interconnection of two specific link controllers with surrounding processing cores. Panel **c** presents a connectivity matrix (heatmap) showing active transmission paths between cores across the entire chip grid.
### Components/Axes
**Panel a: Link Controller Block Diagram**
* **Main Block:** A purple rectangle representing the link controller.
* **Inputs:**
* "From LDPU" (top-left, white arrow).
* "Config Interface" (bottom-center, white arrow).
* "RX 1,2,...8" (bottom-right, 8 white arrows).
* **Outputs:**
* "To LDPU" (top-right, white arrow).
* "TX 1,2,...8" (bottom-left, 8 white arrows).
* **Internal Components (Labeled A-E):**
* **A:** "TX Preamble Insertion" (connected to the TX multiplexer).
* **B:** "TX Routing Registers" (connected to the TX multiplexer and Config Interface).
* **C:** "Preamble Registers" (connected to Config Interface and components D/E).
* **D:** "LDPU Preamble Check" (connected to RX inputs and component C).
* **E:** "Hopping Preamble Check" (connected to RX inputs and component C).
* **Legend (Top-Right):** A list mapping letters to component names (A through E as listed above).
* **Inset Diagram (Top-Right):** A small 8x8 grid labeled "Receiving column" (1-8, top) and "Transmitting column" (1-8, right). A diagonal pattern of green squares runs from (1,1) to (8,8).
**Panel b: Core Interconnection Diagram**
* **Central Elements:** Two purple rectangles labeled "Link controller - Core(3,5)" (top) and "Link controller - Core(4,5)" (bottom).
* **Connected Cores (Red Arrows - Transmitting):**
* From Core(2,5) (top-left, arrow pointing down to top link controller).
* From Core(3,3), Core(3,4), Core(4,3), Core(4,4) (left side, arrows pointing right to both link controllers).
* From Core(5,5) (bottom-left, arrow pointing up to bottom link controller).
* **Connected Cores (Blue Arrows - Receiving):**
* To Core(2,5) (top-right, arrow pointing up from top link controller).
* To Core(3,6), Core(3,7), Core(4,6), Core(4,7) (right side, arrows pointing right from both link controllers).
* To Core(5,5) (bottom-right, arrow pointing down from bottom link controller).
* **Cross-Connection:** Red and blue lines cross between the two link controllers, indicating a direct link between Core(3,5) and Core(4,5).
**Panel c: Connectivity Matrix (Heatmap)**
* **Axes:**
* **X-axis (Bottom):** "Receiving row", numbered 1 through 8.
* **Y-axis (Left):** "Transmitting row", numbered 1 through 8.
* **Grid Structure:** An 8x8 grid of large gray squares. Each large square is subdivided into an 8x8 sub-grid of smaller cells.
* **Data Representation:** Green squares within the sub-grids indicate an active connection from a specific transmitting column to a specific receiving column within the given row-to-row link.
* **Highlighted Rows/Columns:**
* **Red Horizontal Lines:** Highlight rows 3 and 4 on the Y-axis. Labels: "Core(3,5), TX" (next to row 3) and "Core(4,5), TX" (next to row 4).
* **Blue Vertical Lines:** Highlight columns 3 and 4 on the X-axis. Labels: "Core(3,5), RX" (below column 3) and "Core(4,5), RX" (below column 4).
* **Pattern:** The green squares form a repeating diagonal block pattern across the matrix. The pattern is consistent for most row-to-row connections, but the highlighted rows/columns show the specific connectivity for the cores detailed in panel **b**.
### Detailed Analysis
**Panel a - Link Controller Flow:**
1. **Transmit Path (Red Lines):** Data "From LDPU" enters, goes through "TX Preamble Insertion" (A), and is routed by a multiplexer controlled by "TX Routing Registers" (B) to one of the eight TX outputs.
2. **Receive Path (Blue Lines):** Data from eight RX inputs passes through "LDPU Preamble Check" (D) and "Hopping Preamble Check" (E). These checks use configuration data from "Preamble Registers" (C). Validated data is sent "To LDPU".
3. **Configuration:** The "Config Interface" writes to registers B and C to set up routing and preamble checking rules.
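The three flows above can be sketched as a small behavioral model. All names, preamble values, and the single-output routing model are illustrative assumptions; the figure does not specify register widths or frame formats.

```python
class LinkController:
    """Behavioral sketch of the panel (a) data path (assumed details)."""

    def __init__(self):
        self.tx_route = 0                             # B: TX routing register (selects 1 of 8 outputs)
        self.preamble = {"ldpu": 0xA5, "hop": 0x5A}   # C: preamble registers (hypothetical values)

    def configure(self, tx_route, ldpu_preamble, hop_preamble):
        # Config Interface: writes the routing (B) and preamble (C) registers.
        self.tx_route = tx_route
        self.preamble["ldpu"] = ldpu_preamble
        self.preamble["hop"] = hop_preamble

    def transmit(self, payload):
        # From LDPU -> A: TX preamble insertion -> mux (B) -> one of 8 TX outputs.
        frame = (self.preamble["ldpu"], payload)
        tx_lines = [None] * 8
        tx_lines[self.tx_route] = frame
        return tx_lines

    def receive(self, rx_lines):
        # 8 RX inputs -> D: LDPU preamble check / E: hopping preamble check
        # (both compare against the C registers) -> validated data "To LDPU".
        accepted = []
        for frame in rx_lines:
            if frame is None:
                continue
            preamble, payload = frame
            if preamble in (self.preamble["ldpu"], self.preamble["hop"]):
                accepted.append(payload)
        return accepted


lc = LinkController()
lc.configure(tx_route=3, ldpu_preamble=0xA5, hop_preamble=0x5A)
lines = lc.transmit("hello")    # frame driven only on TX output index 3
echoed = lc.receive(lines)      # looped back, the frame passes check D
```

The model treats routing as a single mux selection per controller, matching the one-multiplexer drawing in panel **a**; a real implementation could route per-flow.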
**Panel b - Core Connectivity:**
* The link controller for Core(3,5) handles transmissions **from** cores (2,5), (3,3), (3,4), (4,3), (4,4) and sends data **to** cores (2,5), (3,6), (3,7), (4,6), (4,7).
* The link controller for Core(4,5) handles transmissions **from** cores (3,3), (3,4), (4,3), (4,4), (5,5) and sends data **to** cores (3,6), (3,7), (4,6), (4,7), (5,5). (Per the panel, the four left-side and four right-side cores connect to **both** controllers.)
* There is a direct, bidirectional connection between the two link controllers (Core(3,5) and Core(4,5)).
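The panel-b fan-in/fan-out can be transcribed as adjacency sets. The coordinates come from the figure's arrows; the dict layout itself is just an illustrative encoding.

```python
# Fan-in / fan-out of the two panel (b) link controllers.
# Tuples are (row, column) core coordinates; the cross-connection between
# the controllers appears in both directions.
links = {
    (3, 5): {"rx_from": {(2, 5), (3, 3), (3, 4), (4, 3), (4, 4), (4, 5)},
             "tx_to":   {(2, 5), (3, 6), (3, 7), (4, 6), (4, 7), (4, 5)}},
    (4, 5): {"rx_from": {(3, 3), (3, 4), (4, 3), (4, 4), (5, 5), (3, 5)},
             "tx_to":   {(3, 6), (3, 7), (4, 6), (4, 7), (5, 5), (3, 5)}},
}

# Sanity check: the cross-connection is bidirectional.
cross_ok = ((4, 5) in links[(3, 5)]["tx_to"]
            and (3, 5) in links[(4, 5)]["tx_to"])
```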
**Panel c - Connectivity Matrix Data:**
* The matrix shows that for any given transmitting row `i` and receiving row `j`, communication is possible only between specific column pairs. This creates the diagonal green blocks.
* **For Core(3,5) as Transmitter (Row 3):** It can send to all eight receiving rows (1–8); the specific receiving columns vary per row, following the diagonal pattern.
* **For Core(3,5) as Receiver (Column 3):** It can receive from all eight transmitting rows (1–8); the specific transmitting columns vary per row.
* The same logic applies to Core(4,5) (Row 4 / Column 4). The pattern indicates a structured, likely dimension-ordered routing network.
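The majority diagonal pattern can be generated programmatically. This reproduces the simplest mapping consistent with panel **c** and the panel-**a** inset (transmitting column *k* drives receiving column *k* in every row-to-row block); it deliberately ignores the deviations in the highlighted rows/columns, and the real figure's column offsets may differ.

```python
N = 8  # 8x8 grid of cores

# conn[(tx_row, tx_col)] is the set of (rx_row, rx_col) cores reachable over
# one active link, under the assumed column-k -> column-k diagonal pattern.
conn = {}
for tx_row in range(1, N + 1):
    for col in range(1, N + 1):
        conn[(tx_row, col)] = {(rx_row, col) for rx_row in range(1, N + 1)}
```

Under this idealized pattern every core reaches exactly one core in each of the eight receiving rows, which is why each large square in the matrix carries exactly one green cell per sub-row.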
### Key Observations
1. **Hierarchical Design:** The system is shown at three levels: internal controller logic (a), local core cluster interconnect (b), and global chip-wide connectivity (c).
2. **Structured Routing:** The strict diagonal pattern in the heatmap (c) suggests a deterministic routing algorithm (e.g., XY routing), where a packet's path is determined by its source and destination coordinates.
3. **Dedicated Link Controllers:** Cores (3,5) and (4,5) are not just processing elements but also act as communication hubs (routers) for a local neighborhood of surrounding cores, as shown in panel **b**.
4. **Bidirectional & Multicast Capability:** Panel **b** shows a single link controller handling both incoming (RX) and outgoing (TX) traffic for multiple neighbor cores, and the cross-connection suggests direct core-to-core links.
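The deterministic XY routing named above can be illustrated with a generic sketch: a packet first corrects its column (X) coordinate, then its row (Y) coordinate. This is the textbook algorithm, not one spelled out in the figure.

```python
def xy_route(src, dst):
    """Generic XY (dimension-ordered) routing on a 2D mesh.

    Returns the sequence of (row, col) cores a packet traverses,
    endpoints included. Illustrative only.
    """
    row, col = src
    path = [src]
    while col != dst[1]:                  # X dimension first
        col += 1 if dst[1] > col else -1
        path.append((row, col))
    while row != dst[0]:                  # then Y dimension
        row += 1 if dst[0] > row else -1
        path.append((row, col))
    return path
```

Because every packet exhausts its X moves before making any Y move, no cyclic channel dependency can form, which is why dimension-ordered routing is deadlock-free on a mesh.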
### Interpretation
This figure describes a **2D mesh Network-on-Chip (NoC)** architecture. The data demonstrates a scalable communication infrastructure for a multi-core processor.
* **Panel a** reveals the **microarchitecture** of the network interface/router, handling packet encapsulation (preamble insertion/checking) and routing decisions.
* **Panel b** illustrates the **local view**, showing how specific router nodes (Cores 3,5 and 4,5) manage traffic for their adjacent cores. The red/blue color coding effectively separates transmit and receive paths.
* **Panel c** provides the **global, systemic view**. The connectivity matrix is a formal representation of the network's routing table. The repeating diagonal pattern is the visual signature of a **dimension-ordered routing** scheme (like XY routing), which is deadlock-free and commonly used in mesh NoCs. The highlighted rows and columns for Cores (3,5) and (4,5) directly map the local connections from panel **b** onto this global routing table, confirming their role as routers.
**Notable Anomaly/Insight:** The heatmap shows that a core in row `i` can communicate with cores in *all other rows* (1-8), not just adjacent ones. This indicates the network supports long-distance, multi-hop communication across the entire chip, with the link controllers in panel **a** handling the hop-by-hop forwarding. The system is designed for both local (neighbor) and global (chip-wide) data exchange.
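On a mesh with only nearest-neighbor links, the multi-hop cost implied above is the Manhattan distance between cores; a quick sketch (a general mesh property, not a number read from the figure):

```python
def min_hops(src, dst):
    """Minimum hop count between two cores on a 2D mesh with
    nearest-neighbor links only: the Manhattan distance."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

worst_case = min_hops((1, 1), (8, 8))   # opposite corners of the 8x8 grid
```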