# Encoder-Free Knowledge-Graph Reasoning with LLMs via Hyperdimensional Path Retrieval
**Authors**:
- Calvin Yeung, Mohsen Imani (University of California, Irvine)
Abstract
Recent progress in large language models (LLMs) has made knowledge-grounded reasoning increasingly practical, yet KG-based QA systems often pay a steep price in efficiency and transparency. In typical pipelines, symbolic paths are scored by neural encoders or repeatedly re-ranked by multiple LLM calls, which inflates latency and GPU cost and makes the decision process hard to audit. We introduce PathHD, an encoder-free framework for knowledge-graph reasoning that couples hyperdimensional computing (HDC) with a single LLM call per query. Given a query, PathHD represents relation paths as block-diagonal GHRR hypervectors, retrieves candidate paths using a calibrated blockwise cosine similarity with Top-$K$ pruning, and then performs a one-shot LLM adjudication that outputs the final answer together with supporting, citable paths. The design is enabled by three technical components: (i) an order-sensitive, non-commutative binding operator for composing multi-hop paths, (ii) a robust similarity calibration that stabilizes hypervector retrieval, and (iii) an adjudication stage that preserves interpretability while avoiding per-path LLM scoring. Across WebQSP, CWQ, and GrailQA, PathHD matches or improves Hits@1 compared to strong neural baselines while using only one LLM call per query, reduces end-to-end latency by 40-60%, and lowers GPU memory by $3$-$5\times$ due to encoder-free retrieval. Overall, the results suggest that carefully engineered HDC path representations can serve as an effective substrate for efficient and faithful KG-LLM reasoning, achieving a strong accuracy-efficiency-interpretability trade-off.
1 Introduction
Large Language Models (LLMs) have rapidly advanced reasoning over both text and structured knowledge. Typical pipelines follow a retrieve–then–reason pattern: they first surface evidence (documents, triples, or relation paths), then synthesize an answer using a generator or a verifier [21, 32, 46, 43, 47]. In knowledge-graph question answering (KGQA), this often becomes path-based reasoning: systems construct candidate relation paths that connect the topic entities to potential answers and pick the most plausible ones for final prediction [38, 17, 15, 16, 25]. While these approaches obtain strong accuracy on WebQSP, CWQ, and GrailQA, they typically depend on heavy neural encoders (e.g., Transformers or GNNs) or repeated LLM calls to rank paths, which makes them slow and expensive at inference time, especially when many candidates must be examined.
<details>
<summary>x1.png Details</summary>

### Visual Description
## Diagram: Two Major Pain Points in KG-LLM Reasoning
### Overview
The image is a diagram illustrating two major pain points in Knowledge Graph (KG) - Large Language Model (LLM) reasoning. It shows an example of incorrect reasoning by an LLM when answering a question about a knowledge graph, and it outlines the process of LLM-based path scoring, highlighting issues related to latency, cost, and scalability.
### Components/Axes
* **Title:** Two Major Pain Points in KG-LLM Reasoning
* **Input Query:** "Which company acquired SolarCity?"
* **Legend:**
* Green dashed arrow: correct (not chosen)
* Red solid arrow: chosen wrong
* **Knowledge Graph (Left Side):**
* Nodes: SolarCity, Tesla, Elon Musk, SpaceX, Renewable Energy, SolarEdge
* Edges: acquired\_by, founded\_by, CEO\_of, operates\_in, related\_to
* **LLM-based Path Scoring Methods (Right Side):**
* Candidate Path Generator
* LLM (appears twice)
* Score s1, Score s2
* Best Path
* **Outcome:** Incorrect reasoning despite access to KG (selected path mismatches the query relation).
* **Issues (Left Side):**
* Mismatch between query relation ("acquired\_by") and used path (founder/CEO).
* Hallucination due to non-faithful reasoning over KG.
* **Issues (Right Side):**
* Repeated LLM calls per candidate path -> high latency & cost.
* Evaluation is sequential / hard to parallelize.
* Scalability degrades as candidate count increases.
### Detailed Analysis
**Knowledge Graph (Left Side):**
* **SolarCity** is connected to **Tesla** via a green dashed arrow labeled "acquired\_by" with a green checkmark, indicating the correct answer.
* **SolarCity** is connected to **Elon Musk** via a red solid arrow labeled "founded\_by".
* **Elon Musk** is connected to **SpaceX** via a red solid arrow labeled "CEO\_of" with a red X, indicating an incorrect path.
* **SolarCity** is connected to **Renewable Energy** via a red solid arrow labeled "operates\_in".
* **Renewable Energy** is connected to **SolarEdge** via a red solid arrow labeled "related\_to" with a red X, indicating an incorrect path.
**LLM-based Path Scoring Methods (Right Side):**
* The **Candidate Path Generator** outputs two paths: Path #1 and Path #2.
* Each path is processed by an **LLM**.
* The LLM assigns a score to each path: Score s1 for Path #1 and Score s2 for Path #2.
* The path with the highest score is selected as the **Best Path**, indicated by a green checkmark.
**Issues (Left Side):**
* The diagram highlights that the LLM's reasoning is incorrect despite having access to the knowledge graph.
* The selected path (founder/CEO) mismatches the query relation (acquired\_by).
* The LLM exhibits hallucination due to non-faithful reasoning over the knowledge graph.
**Issues (Right Side):**
* Repeated LLM calls per candidate path lead to high latency and cost.
* Evaluation is sequential, making it hard to parallelize.
* Scalability degrades as the candidate count increases.
### Key Observations
* The diagram illustrates how an LLM can fail to correctly answer a question about a knowledge graph, even when the correct information is present.
* The LLM's incorrect reasoning is attributed to a mismatch between the query relation and the selected path, as well as hallucination.
* The LLM-based path scoring method suffers from issues related to latency, cost, and scalability.
### Interpretation
The diagram highlights the challenges of using LLMs for reasoning over knowledge graphs. While LLMs have the potential to answer complex questions based on structured knowledge, they can also make mistakes due to incorrect reasoning, hallucination, and scalability issues. The diagram suggests that further research is needed to improve the accuracy and efficiency of LLM-based knowledge graph reasoning methods. The issues on the right side of the diagram suggest that the process is computationally expensive and difficult to scale, which could limit its applicability in real-world scenarios.
</details>
Figure 1: Two major pain points in KG-LLM reasoning. Left: A path-based KGQA system selects a candidate path whose relation sequence does not match the query relation (“acquired_by”), leading to an incorrect answer even though the KG contains the correct evidence. Right: LLM-based scoring evaluates each candidate path in a separate LLM call, which is sequential, hard to parallelize, and incurs high latency and token cost as the candidate set grows.
Figure 1 highlights two recurring issues in KG–LLM reasoning. ❶ Path–query mismatch: Order-insensitive encodings, weak directionality, and noisy similarity often favor superficially related yet misaligned paths, blurring the question’s intended relation. ❷ Per-candidate LLM scoring: Many systems score candidates sequentially, so latency and token cost grow roughly linearly with set size; batching is limited by context/API, and repeated calls introduce instability, yet models can still over-weight long irrelevant chains, hallucinate edges, or flip relation direction. Most practical pipelines first detect a topic entity, enumerate $10\!\sim\!100$ length-$1$-$4$ paths, then score each with a neural model or LLM, sending top paths to a final step [38, 25, 16]. This hard-codes two inefficiencies: (i) neural scoring dominates latency (fresh encoding/prompt per candidate), and (ii) loose path semantics (commutative/direction-insensitive encoders conflate founded_by $\to$ CEO_of with its reverse), which compounds on compositional/long-hop questions.
Hyperdimensional Computing (HDC) offers a different lens: represent symbols as long, nearly-orthogonal hypervectors and manipulate structure with algebraic operations such as binding and bundling [18, 31]. HDC has been used for fast associative memory, robust retrieval, and lightweight reasoning because its core operations are elementwise or blockwise and parallelize extremely well on modern hardware [7]. Encodings tend to be noise-tolerant and compositional; similarity is computed by simple cosine or dot product; and both storage and computation scale linearly with dimensionality. Crucially for KGQA, HDC supports order-sensitive composition when the binding operator is non-commutative, allowing a path like $r_{1}\to r_{2}\to r_{3}$ to be distinguished from its permutations while remaining a single fixed-length vector. This makes HDC a promising substrate for ranking many candidate paths without invoking a neural model for each one.
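The two HDC properties used here, near-orthogonality of random hypervectors and dot-product similarity surviving bundling, can be sketched in a few lines of NumPy. The bipolar alphabet and dimensionality below are illustrative stand-ins, not the paper's GHRR setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000   # hypervector dimensionality (illustrative)

# Two independent random bipolar hypervectors are nearly orthogonal:
a = rng.choice([-1.0, 1.0], size=d)
b = rng.choice([-1.0, 1.0], size=d)
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(abs(cos))   # concentrates around 1/sqrt(d), i.e. ~0.01 here

# Bundling (elementwise sum) keeps each component retrievable by similarity:
bundle = a + b
print(bundle @ a / (np.linalg.norm(bundle) * np.linalg.norm(a)))   # ~1/sqrt(2)
```

The similarity of a bundle to each of its components stays high ($\approx 0.71$) while unrelated vectors score near zero, which is what makes plain cosine a usable retrieval signal in high dimensions.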
Motivated by these advantages, we introduce PathHD (HyperDimensional Path Retrieval), a lightweight retrieval-and-reason framework for efficient KGQA with LLMs. First, we map every relation to a block-diagonal unitary representation and encode a candidate path by non-commutative Generalized Holographic Reduced Representation (GHRR) binding [48]; this preserves order and direction in a single hypervector. In parallel, we encode the query into the same space to obtain a query hypervector. Second, we score all candidates via cosine similarity to the query hypervector and keep only the top-$K$ paths with a simple, parallel Top-$K$ selection. Finally, instead of per-candidate LLM calls, we make one LLM call that sees the question plus these top-$K$ paths (verbalized), and it outputs the answer along with cited supporting paths. In effect, PathHD addresses both pain points in Figure 1: order-aware binding reduces path–query mismatch, and vector-space scoring eliminates per-path LLM evaluation, cutting latency and token cost.
Our contributions are as follows:
- A fast, order-aware retriever for KG paths. We present PathHD, which uses GHRR-based, non-commutative binding to encode relation sequences into hypervectors and ranks candidates with plain cosine similarity, without learned neural encoders or per-path prompts. This design keeps a symbolic structure while enabling fully parallel scoring with $\mathcal{O}(Nd)$ complexity.
- An efficient one-shot reasoning stage. PathHD replaces many LLM scoring calls with a single LLM adjudication over the top- $K$ paths. This decouples retrieval from generation, lowers token usage, and improves wall-clock latency while remaining interpretable: the model cites the supporting path(s) it used.
- Extensive validation and operator study. On WebQSP, CWQ, and GrailQA, PathHD achieves competitive Hits@1 and F1 with markedly lower inference cost. An ablation on binding operators shows that our block-diagonal (GHRR) binding outperforms commutative binding and circular convolution, and additional studies analyze the impact of top- $K$ pruning and latency–accuracy trade-offs.
Overall, PathHD shows that carefully designed hyperdimensional representations can act as an encoder-free, training-free path scorer inside KG-based LLM reasoning systems, preserving competitive answer accuracy while substantially improving inference efficiency and providing explicit path-level rationales.
2 Method
The proposed PathHD follows a Plan $\to$ Encode $\to$ Retrieve $\to$ Reason pipeline (Figure 2). (i) We first generate or select relation plans that describe how an answer can be reached (schema enumeration optionally refined by a light prompt). (ii) Each plan is mapped to a hypervector via a non-commutative GHRR binding so that order and direction are preserved. (iii) We compute a blockwise cosine similarity in the hypervector space and apply Top-$K$ pruning. (iv) Finally, a single LLM call produces the answer with path-based explanations. This design keeps the heavy lifting in cheap vector operations, delegating semantic adjudication to one-shot LLM reasoning.
2.1 Problem Setup & Notation
Given a question $q$, a knowledge graph (KG) $\mathcal{G}$, and a set of relation schemas $\mathcal{Z}$, the goal is to predict an answer $a$. Formally, we write $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{R})$, where $\mathcal{V}$ is the set of entities, $\mathcal{R}$ is the set of relation types, and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{R}\times\mathcal{V}$ is the set of directed edges $(e,r,e^{\prime})$. We denote entities by $e\in\mathcal{V}$ and relations by $r\in\mathcal{R}$. A relation schema $z\in\mathcal{Z}$ is a sequence of relation types $z=(r_{1},\dots,r_{\ell})$. Instantiating a schema $z$ on $\mathcal{G}$ yields concrete KG paths of the form $(e_{0},r_{1},e_{1},\dots,r_{\ell},e_{\ell})$ such that $(e_{i-1},r_{i},e_{i})\in\mathcal{E}$ for all $i$. For a given question $q$, we denote by $\mathcal{P}(q)$ the set of candidate paths instantiated from schemas in $\mathcal{Z}$ and by $N=|\mathcal{P}(q)|$ its size. We write $d$ for the dimensionality of the hypervectors used to represent relations and paths.
A key challenge is to efficiently locate a small set of plausible paths for $q$ from this large candidate pool, and then let an LLM reason over only those paths. A summary of the notation used throughout the paper can be found in Appendix A.
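As a concrete illustration of schema instantiation, the sketch below expands a relation schema over a toy KG mirroring Figure 1. The triples and the `instantiate` helper are illustrative, not the paper's implementation:

```python
from collections import defaultdict

# Toy directed KG as (head, relation, tail) triples (illustrative only).
triples = [
    ("SolarCity", "acquired_by", "Tesla"),
    ("SolarCity", "founded_by", "Elon Musk"),
    ("Elon Musk", "CEO_of", "SpaceX"),
    ("SolarCity", "operates_in", "Renewable Energy"),
    ("Renewable Energy", "related_to", "SolarEdge"),
]

# Index edges by (head, relation) for fast expansion.
adj = defaultdict(list)
for h, r, t in triples:
    adj[(h, r)].append(t)

def instantiate(schema, start):
    """Expand a relation schema (r1, ..., rl) from a topic entity into
    concrete paths (e0, r1, e1, ..., rl, el)."""
    paths = [(start,)]
    for r in schema:
        paths = [p + (r, t) for p in paths for t in adj[(p[-1], r)]]
    return paths

print(instantiate(("founded_by", "CEO_of"), "SolarCity"))
# [('SolarCity', 'founded_by', 'Elon Musk', 'CEO_of', 'SpaceX')]
```

Each schema fans out into zero or more grounded paths; the candidate pool $\mathcal{P}(q)$ is the union of these expansions over all schemas under consideration.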
<details>
<summary>x2.png Details</summary>

### Visual Description
## Diagram: PathHD Hyperdimensional Encoding for Question Answering
### Overview
The image presents a diagram illustrating the PathHD Hyperdimensional Encoding approach for question answering. It outlines the process from inputting a query and knowledge graph (KG) to encoding paths, calculating similarity, pruning, reasoning, and ultimately selecting the answer. The diagram is divided into three main sections: Input & KG, PathHD Hyperdimensional Encoding, and Similarity, Pruning & Reasoning.
### Components/Axes
**1. Input & KG (Left Section - Pink Border)**
* **Input Query:** "Which company acquired SolarCity?"
* **Knowledge Graph (KG):** A network of entities and relations:
* Entities: SolarCity, Tesla, Elon Musk, SpaceX, Renewable Energy, SolarEdge
* Relations: acquired_by, founded_by, CEO_of, operates_in, related_to
* Edges connect entities based on these relations (e.g., SolarCity acquired_by Tesla).
* **Candidate Paths:** A list of possible paths extracted from the KG:
* acquired_by
* founded_by -> CEO_of
* operates_in -> related_to
* ... (Indicates more paths exist)
**2. PathHD Hyperdimensional Encoding (Middle Section - Blue Border)**
* **Initialize Relation HVs:**
* `founded_by`: Represented by a vertical bar with blue at the top and fading to white at the bottom.
* `CEO_of`: Represented by a vertical bar with red at the top and fading to white at the bottom.
* **Random HV assignment**
* **Obtain Query HV:**
* Question text: "Which company acquired SolarCity?"
* Dense embedding: A process involving "BERT" and "Projection to HDC space" to generate a "Query HV".
* **GHRR Binding & Path Encoding:**
* **Binding (GHRR):** A visual representation of the binding process using a grid. The top row is red, and the left column is blue. The diagonal elements are marked with a "-" symbol.
* **Path Composition:**
* Path HV = r1 @ r2 @ ... @ rn (where r represents relations)
* Path#1 HV = acquired_by
* Path#2 HV = founded_by @ CEO_of
* Path#3 HV = operates_in @ related_to
* ... (Indicates more paths exist)
* **Block-diagonal binding (GHRR):** Captures non-commutative relation binding.
**3. Similarity, Pruning & Reasoning (Right Section - Yellow Border)**
* **Similarity Calculation:**
* Cosine Similarity: The Query HV is compared to each Candidate Path HV using cosine similarity.
* Path#1 HV, Path#2 HV, Path#3 HV, ... (Candidate path HV)
* **Answer Selection:**
* Top-K candidates: The paths are ranked based on their similarity scores (s1, s2, s3).
* 1x LLM adjudication: The top-K candidates are processed by a Large Language Model (LLM) for final adjudication.
* Predicted answer: The final predicted answer is selected.
* **Legend:**
* Entity (head/tail): Rectangle
* Relation: Oval
* Operation: Circle
### Detailed Analysis
* **Input & KG:** The initial step involves defining the question and representing the relevant knowledge as a graph. The example question is "Which company acquired SolarCity?". The KG contains entities like "SolarCity" and "Tesla" and relations like "acquired_by".
* **PathHD Hyperdimensional Encoding:** This section focuses on encoding the query and the candidate paths into high-dimensional vectors (HVs). Relation HVs are initialized, and the query is converted into a Query HV using dense embeddings. The GHRR binding process is used to encode paths, capturing the order of relations.
* **Similarity, Pruning & Reasoning:** The similarity between the Query HV and each Candidate Path HV is calculated using cosine similarity. The paths are then ranked based on their similarity scores. The top-K candidates are passed to an LLM for final adjudication, and the predicted answer is selected.
### Key Observations
* The diagram illustrates a pipeline for question answering using hyperdimensional computing.
* The GHRR binding is used to capture the order of relations in the paths.
* Cosine similarity is used to measure the similarity between the query and the candidate paths.
* An LLM is used for final adjudication of the top-K candidates.
### Interpretation
The diagram presents a method for question answering that leverages hyperdimensional computing to encode knowledge graph paths and query information into high-dimensional vectors. The use of GHRR binding allows the model to capture the order of relations in the paths, which is crucial for accurate reasoning. The cosine similarity measure provides a way to compare the query and the candidate paths, and the LLM adjudication step helps to refine the answer selection process. This approach aims to improve the accuracy and efficiency of question answering systems by combining the strengths of knowledge graphs, hyperdimensional computing, and large language models.
</details>
Figure 2: Overview of PathHD: a Plan $\to$ Encode $\to$ Retrieve $\to$ Reason pipeline. A schema-based planner first generates relation plans over the KG; PathHD encodes these plans and instantiates candidate paths into order-aware GHRR hypervectors, ranks candidates with blockwise cosine similarity and Top-$K$ selection, and then issues a single LLM adjudication call to answer with cited paths, with most computation handled by vector operations and modest LLM use.
2.2 Hypervector Initialization
We work in a Generalized Holographic Reduced Representations (GHRR) space. Each atomic symbol $x$ (relation or, optionally, entity) is assigned a $d$-dimensional hypervector $\mathbf{v}_{x}\in\mathbb{C}^{d}$ constructed as a block vector of unitary matrices:
$$
\mathbf{v}_{x}\;=\;[A^{(x)}_{1};\dots;A^{(x)}_{D}],\qquad A^{(x)}_{j}\in\mathrm{U}(m),\;\;d=Dm^{2}. \tag{1}
$$
In practice, we sample each block from a simple unitary family for efficiency, e.g.,
$$
A^{(x)}_{j}=\operatorname{diag}\!\big(e^{i\phi_{j,1}},\ldots,e^{i\phi_{j,m}}\big),\qquad\phi_{j,\ell}\sim\operatorname{Unif}[0,2\pi),
$$
or a random Householder product. Blocks are $\ell_{2}$-normalized so that all hypervectors have unit norm. This initialization yields near-orthogonality among symbols, which concentrates with dimension (cf. Prop. 1). Hypervectors are sampled once and kept fixed; the retriever itself has no learned parameters.
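A minimal sketch of this initialization, assuming the diagonal-phase family above with blocks scaled to unit Frobenius norm (block counts and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, m = 64, 4   # D blocks of m x m unitaries; d = D * m**2

def random_symbol():
    """One GHRR hypervector: D diagonal unitary blocks diag(e^{i*phi}),
    scaled so every block has unit Frobenius norm."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(D, m))
    blocks = np.zeros((D, m, m), dtype=complex)
    idx = np.arange(m)
    blocks[:, idx, idx] = np.exp(1j * phases) / np.sqrt(m)
    return blocks

def sim(X, Y):
    """Blockwise cosine similarity: mean over blocks of the real part of
    the normalized Frobenius inner product."""
    num = np.real(np.sum(np.conj(X) * Y, axis=(1, 2)))
    den = np.linalg.norm(X, axis=(1, 2)) * np.linalg.norm(Y, axis=(1, 2))
    return float(np.mean(num / den))

r1, r2 = random_symbol(), random_symbol()
print(sim(r1, r1))   # a symbol matches itself (1.0)
print(sim(r1, r2))   # independent symbols score near 0
```

The second similarity fluctuates around zero with scale shrinking as $D$ and $m$ grow, which is the near-orthogonality that Prop. 1 quantifies.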
Query hypervector.
For a question $q$, we obtain a query hypervector in two ways, depending on the planning route used in Section 2: (i) plan-based: encode the selected relation plan $z_{q}=(r_{1},\dots,r_{\ell})$ using the same GHRR binding as paths (see Eq. (3)); or (ii) text-projection: embed $q$ with a sentence encoder (e.g., SBERT) to $\mathbf{h}_{q}\in\mathbb{R}^{d_{t}}$ and project it to the HDC space using a fixed random linear map $P\in\mathbb{R}^{d\times d_{t}}$, then block-normalize:
$$
\mathbf{v}_{q}\;=\;\mathcal{N}_{\text{block}}\!\big(P\,\mathbf{h}_{q}\big). \tag{2}
$$
Both choices produce a query hypervector compatible with GHRR scoring. Unless otherwise specified, we use the plan-based encoding in all main experiments, and report the text-projection variant only in ablations (Section J.2).
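The text-projection route of Eq. (2) reduces to one fixed random matrix multiply plus blockwise normalization. In the sketch below, `h_q` is a random stand-in for the SBERT embedding and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d_t, D, m = 384, 64, 4   # sentence-embedding dim; d = D * m**2
d = D * m * m

# Stand-in for a sentence embedding h_q (an SBERT encoder would produce this).
h_q = rng.normal(size=d_t)

# Fixed random projection P : R^{d_t} -> R^{d}, sampled once and frozen.
P = rng.normal(size=(d, d_t)) / np.sqrt(d_t)

def block_normalize(v, D, m):
    """Reshape to D blocks of m*m entries and scale each block to unit norm."""
    blocks = v.reshape(D, m * m)
    blocks = blocks / np.linalg.norm(blocks, axis=1, keepdims=True)
    return blocks.reshape(D, m, m)

v_q = block_normalize(P @ h_q, D, m)
print(v_q.shape)                                           # (64, 4, 4)
print(np.allclose(np.linalg.norm(v_q, axis=(1, 2)), 1.0))  # every block unit norm
```

Because $P$ is fixed and random, this route stays training-free while placing the query in the same blockwise-normalized space as the path hypervectors.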
2.3 GHRR Binding and Path Encoding
A GHRR hypervector is a block vector $\mathbf{H}=[A_{1};\dots;A_{D}]$ with $A_{j}\in\mathrm{U}(m)$. Given two hypervectors $\mathbf{X}=[X_{1};\dots;X_{D}]$ and $\mathbf{Y}=[Y_{1};\dots;Y_{D}]$, we define the block-wise binding operator $\circledast$ and the encoding of a length-$\ell$ relation path $z=(r_{1},\dots,r_{\ell})$ by:
$$
\mathbf{v}_{z}\;=\;\mathbf{v}_{r_{1}}\circledast\mathbf{v}_{r_{2}}\circledast\cdots\circledast\mathbf{v}_{r_{\ell}},\qquad\mathbf{X}\circledast\mathbf{Y}=[X_{1}Y_{1};\dots;X_{D}Y_{D}], \tag{3}
$$
followed by block-wise normalization to unit norm. Binding is applied left-to-right along the path, and because matrix multiplication is non-commutative ($X_{j}Y_{j}\neq Y_{j}X_{j}$), the encoding preserves the order and directionality of relations, which are critical for multi-hop KG reasoning.
Properties and choice of binding operator.
Although PathHD only uses forward binding for retrieval, GHRR also supports approximate unbinding: for $Z_{j}=X_{j}Y_{j}$ with unitary blocks, we have $X_{j}\approx Z_{j}Y_{j}^{\ast}$ and $Y_{j}\approx X_{j}^{\ast}Z_{j}$. This property enables inspection of the contribution of individual relations in a composed path and underpins our path-level rationales.
Classical HDC bindings (XOR, element-wise multiplication, circular convolution) are commutative, which collapses $r_{1}\to r_{2}$ and $r_{2}\to r_{1}$ to similar codes and hurts directional reasoning. GHRR is non-commutative, invertible at the block level, and offers higher representational capacity via unitary blocks, leading to better discrimination between paths with the same multiset of relations but a different order. We empirically validate this choice in the ablation study (Table 3), where GHRR consistently outperforms commutative bindings. Additional background on binding operations is provided in Appendix K.
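Both the order sensitivity and the blockwise unbinding described above can be checked numerically. The sketch samples general (non-diagonal) unitary blocks via QR, in the spirit of the Householder-product option of Section 2.2, since blockwise products of non-diagonal unitaries do not commute; sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D, m = 64, 4   # D unitary blocks of size m x m

def random_unitary_symbol():
    """D random m x m unitary blocks (QR of complex Gaussians); non-diagonal
    blocks keep the blockwise product order-sensitive."""
    G = rng.normal(size=(D, m, m)) + 1j * rng.normal(size=(D, m, m))
    return np.stack([np.linalg.qr(g)[0] for g in G])

def bind(X, Y):
    """GHRR-style binding: blockwise matrix product, (X * Y)_j = X_j Y_j."""
    return X @ Y   # batched matmul over the leading block axis

def sim(X, Y):
    """Blockwise cosine similarity of Eq. (4)."""
    num = np.real(np.sum(np.conj(X) * Y, axis=(1, 2)))
    den = np.linalg.norm(X, axis=(1, 2)) * np.linalg.norm(Y, axis=(1, 2))
    return float(np.mean(num / den))

r1, r2 = random_unitary_symbol(), random_unitary_symbol()

# Order sensitivity: the code for r1 -> r2 is far from the code for r2 -> r1.
fwd, rev = bind(r1, r2), bind(r2, r1)
print(round(sim(fwd, fwd), 3))   # a path matches itself (1.0)
print(round(sim(fwd, rev), 3))   # reversed order scores near 0

# Unbinding: right-multiplying by r2's conjugate transpose recovers r1.
r2_star = np.conj(np.swapaxes(r2, 1, 2))
print(round(sim(bind(fwd, r2_star), r1), 3))   # exact for unitary blocks (1.0)
```

With exactly unitary blocks the unbinding $Z_{j}Y_{j}^{\ast}=X_{j}$ is lossless; it becomes approximate once normalization or bundling noise enters, which is why the text hedges with $\approx$.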
2.4 Query & Candidate Path Construction
We obtain a query plan $z_{q}$ via schema-based enumeration on the relation-schema graph (depth $\le L_{\max}$). In all main experiments reported in Section 3, this planning stage is purely symbolic: we do not invoke any LLM beyond the single final reasoning call in Section 2.6. The query hypervector $\mathbf{v}_{q}$ is then constructed from the selected plan $z_{q}$ using the plan-based encoding described above (Eq. (3) or, for text projection, Eq. (2)).
Given a query plan, candidate paths $\mathcal{Z}$ are instantiated from the KG either by matching plan templates to existing edges or by a constrained BFS with beam width $B$ , both of which yield symbolic paths. These paths are then deterministically encoded into hypervectors and scored by our HDC module (Sec. 2.5). An optional lightweight prompt-based refinement of schema plans is described in the appendix as an extension; it is not used in our main experiments and does not change the single-call nature of the system.
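The constrained-BFS route can be sketched as follows. The toy triples and the beam-capping rule are illustrative; a real system would prune the frontier by a cheap score rather than insertion order:

```python
from collections import defaultdict

# Toy KG (illustrative triples, not from the benchmark datasets).
triples = [
    ("SolarCity", "acquired_by", "Tesla"),
    ("SolarCity", "founded_by", "Elon Musk"),
    ("Elon Musk", "CEO_of", "SpaceX"),
    ("SolarCity", "operates_in", "Renewable Energy"),
    ("Renewable Energy", "related_to", "SolarEdge"),
]
out_edges = defaultdict(list)
for h, r, t in triples:
    out_edges[h].append((r, t))

def constrained_bfs(start, max_len, beam):
    """Enumerate relation paths from `start` up to length max_len, keeping at
    most `beam` frontier paths per depth."""
    frontier = [((start,), ())]          # (entity sequence, relation schema)
    found = []
    for _ in range(max_len):
        nxt = []
        for ents, rels in frontier:
            for r, t in out_edges[ents[-1]]:
                if t not in ents:        # avoid cycles
                    nxt.append((ents + (t,), rels + (r,)))
        nxt = nxt[:beam]                 # beam-width cap B
        found.extend(nxt)
        frontier = nxt
    return [rels for _, rels in found]

for schema in constrained_bfs("SolarCity", max_len=2, beam=8):
    print(" -> ".join(schema))
```

Every symbolic path produced this way is then encoded deterministically into a hypervector, so enumeration and scoring stay fully decoupled.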
2.5 HD Retrieval: Blockwise Similarity and Top-$K$
Let $\langle A,B\rangle_{F}:=\mathrm{tr}(A^{\ast}B)$ be the Frobenius inner product. Given two GHRR hypervectors $\mathbf{X}=[X_{j}]_{j=1}^{D}$ and $\mathbf{Y}=[Y_{j}]_{j=1}^{D}$ , we define the blockwise cosine similarity
$$
\mathrm{sim}(\mathbf{X},\mathbf{Y})=\frac{1}{D}\sum_{j=1}^{D}\,\frac{\Re\,\langle X_{j},\,Y_{j}\rangle_{F}}{\|X_{j}\|_{F}\,\|Y_{j}\|_{F}}. \tag{4}
$$
For each candidate $z\in\mathcal{Z}$ we compute $\mathrm{sim}(\mathbf{v}_{q},\mathbf{v}_{z})$ and (optionally) apply a calibrated score
$$
s(z)\;=\;\mathrm{sim}(\mathbf{v}_{q},\mathbf{v}_{z})\;+\;\alpha\,\mathrm{IDF}(z)\;-\;\beta\,\lambda^{|z|}, \tag{5}
$$
where $\mathrm{IDF}(z)$ is a simple inverse-frequency weight on relation schemas. Let $\text{schema}(z)$ denote the relation schema of path $z$ and $\mathrm{freq}(\text{schema}(z))$ be the number of training questions whose candidate sets contain at least one path with the same schema. With $N_{\text{train}}$ the total number of training questions, we define:
$$
\mathrm{IDF}(z)=\log\!\left(1+\frac{N_{\text{train}}}{1+\mathrm{freq}(\text{schema}(z))}\right). \tag{6}
$$
Thus, frequent schemas (large $\mathrm{freq}(\text{schema}(z))$ ) receive a smaller bonus, while rare schemas receive a larger one. All similarity scores and calibrated scores can be computed fully in parallel over candidates, with overall cost $\mathcal{O}(|\mathcal{Z}|d)$ .
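Equations (5) and (6) combine into a few lines of scoring code. The similarities, schema frequencies, and weights $\alpha,\beta,\lambda$ below are made-up illustrative numbers, not tuned values from the paper:

```python
import math

# Hypothetical candidates: (schema, sim(v_q, v_z), freq(schema), |z|).
N_train = 3000
alpha, beta, lam = 0.01, 0.05, 0.8   # illustrative calibration weights
candidates = [
    ("acquired_by",               0.62, 1200, 1),
    ("founded_by -> CEO_of",      0.58,  150, 2),
    ("operates_in -> related_to", 0.41,   30, 2),
]

def idf(freq):
    """Eq. (6): rarer schemas earn a larger bonus."""
    return math.log(1 + N_train / (1 + freq))

def score(sim_qz, freq, length):
    """Eq. (5): similarity plus IDF bonus minus a length penalty."""
    return sim_qz + alpha * idf(freq) - beta * lam ** length

ranked = sorted(candidates, key=lambda c: score(c[1], c[2], c[3]), reverse=True)
top_k = ranked[:2]   # Top-K pruning (K = 2 here)
for schema, s, f, l in top_k:
    print(f"{schema}: {score(s, f, l):.3f}")
```

Because `score` touches each candidate independently, the whole loop vectorizes trivially, which is what gives the $\mathcal{O}(|\mathcal{Z}|d)$ overall cost.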
2.6 One-shot Reasoning with Retrieved Paths
Putting the pieces together, PathHD turns a question into (i) a schema-level plan $z_{q}$ , (ii) a set of candidate paths ranked in hypervector space, and (iii) a single LLM call that adjudicates among the top-ranked candidates.
We linearize the Top-$K$ paths into concise natural-language statements and issue a single LLM call with a minimal, citation-style prompt (see Table 8 in Appendix C). The prompt lists the question and the numbered paths, and requires the model to return a short answer, the index(es) of supporting path(s), and a 1-2 sentence rationale. This one-shot format constrains reasoning to the provided evidence, resolves near-ties and direction errors, and keeps LLM usage minimal.
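The exact prompt lives in Table 8 of Appendix C; as a hedged sketch, one plausible linearization of the adjudication call looks like this (template wording and the `build_prompt` helper are ours):

```python
def build_prompt(question, top_paths):
    """Assemble a single citation-style adjudication prompt from the
    question and the verbalized Top-K paths (illustrative template)."""
    lines = [
        "Answer the question using ONLY the evidence paths below.",
        f"Question: {question}",
        "Evidence paths:",
    ]
    lines += [f"[{i + 1}] {p}" for i, p in enumerate(top_paths)]
    lines += [
        "Return: (a) a short answer, (b) the index(es) of the supporting",
        "path(s), and (c) a 1-2 sentence rationale.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    "Which company acquired SolarCity?",
    ["SolarCity --acquired_by--> Tesla",
     "SolarCity --founded_by--> Elon Musk --CEO_of--> SpaceX"],
)
print(prompt)
```

Numbering the paths is what makes the returned citations checkable: the cited index can be mapped back to a concrete KG path for auditing.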
2.7 Theoretical & Complexity Analysis
| Method | Candidate Path Gen. | Scoring | Reasoning |
| --- | --- | --- | --- |
| StructGPT [15] | ✓ | ✓ | ✓ |
| FiDeLiS [37] | ✓ | ✗ | ✓ |
| ToG [39] | ✓ | ✓ | ✓ |
| GoG [44] | ✓ | ✓ | ✓ |
| KG-Agent [16] | ✓ | ✓ | ✓ |
| RoG [25] | ✓ | ✗ | ✓ |
| PathHD | ✗ | ✗ | ✓ ($1$ call) |
Table 1: LLM usage across pipeline stages. A checkmark indicates that the method uses an LLM in that stage. Candidate Path Gen.: using an LLM to propose or expand relation paths; Scoring: using an LLM to score or rank candidates (non-LLM similarity or graph heuristics count as “no”); Reasoning: using an LLM to produce the final answer from the retrieved paths. PathHD uses a single LLM call only in the final reasoning stage.
We briefly characterize the behavior of random GHRR hypervectors and the computational cost of PathHD.
**Proposition 1 (Near-orthogonality and distractor bound)**
*Let $\{\mathbf{v}_{r}\}$ be i.i.d. GHRR hypervectors with zero-mean, unit Frobenius-norm blocks. For a query path $z_{q}$ and any distractor $z\neq z_{q}$ encoded via non-commutative binding, the cosine similarity $X=\mathrm{sim}(\mathbf{v}_{z_{q}},\mathbf{v}_{z})$ (Equation 4) satisfies, for any $\epsilon>0$,
$$
\Pr\!\left(|X|\geq\epsilon\right)\leq 2\exp\!\left(-c\,d\,\epsilon^{2}\right), \tag{7}
$$
for an absolute constant $c>0$ depending only on the sub-Gaussian proxy of entries.*
* Proof sketch*
Each block inner product $\langle X_{j},Y_{j}\rangle_{F}$ is a sum of products of independent sub-Gaussian variables (closed under products for the bounded/phase variables used by GHRR). After normalization, the average in Equation 4 is a mean-zero sub-Gaussian average over $d$ degrees of freedom, so a standard Bernstein/Hoeffding tail bound applies. Details are deferred to Appendix E. ∎
**Corollary 1 (Capacity with union bound)**
*Let $\mathcal{M}$ be a collection of $M$ distractor paths scored against a fixed query. With probability at least $1-\delta$ ,
$$
\max_{z\in\mathcal{M}}\mathrm{sim}(\mathbf{v}_{z_{q}},\mathbf{v}_{z})\leq\epsilon\quad\text{whenever}\quad d\;\geq\;\frac{1}{c\,\epsilon^{2}}\,\log\!\frac{2M}{\delta}. \tag{8}
$$*
Thus, the probability of a false match under random hypervectors decays exponentially with the dimension $d$ , and the required dimension scales as $d=\mathcal{O}(\epsilon^{-2}\log M)$ for a target error tolerance $\epsilon$ .
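The $d=\mathcal{O}(\epsilon^{-2}\log M)$ scaling can be eyeballed with a quick Monte Carlo, here using bipolar stand-ins for GHRR vectors (sub-Gaussian entries are all the bound requires):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200   # number of random distractors

def max_abs_cos(d):
    """Worst-case |cosine| between one query and M random bipolar vectors."""
    q = rng.choice([-1.0, 1.0], size=d)
    Z = rng.choice([-1.0, 1.0], size=(M, d))
    return float(np.max(np.abs(Z @ q)) / d)

# The worst distractor similarity shrinks roughly like sqrt(log(M) / d),
# consistent with the d = O(eps^-2 log M) capacity scaling above.
for d in (1_000, 4_000, 16_000):
    print(d, round(max_abs_cos(d), 3))
```

Quadrupling $d$ roughly halves the worst distractor similarity, matching the $\epsilon\propto 1/\sqrt{d}$ behavior implied by Equation 8.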
Complexity comparison with neural retrievers.
Let $N$ be the number of candidates, $d$ the embedding dimension, and $L$ the number of encoder layers used by a neural retriever. A typical neural encoder (e.g., a Transformer-based path encoder as in RoG) incurs $\mathcal{O}(NLd^{2})$ cost for encoding and scoring. In contrast, PathHD forms each path vector with $|z|\!-\!1$ block multiplications plus one similarity in Equation 4, i.e., $\mathcal{O}(|z|d)+\mathcal{O}(d)$ per candidate, giving a total of $\mathcal{O}(Nd)$ and an $\mathcal{O}(Ld)$-fold reduction in leading order.
Beyond the $\mathcal{O}(Nd)$ vs. $\mathcal{O}(NLd^{2})$ compute gap, end-to-end latency is dominated by the number of LLM calls. Table 1 contrasts pipeline stages across methods: unlike prior agents that query an LLM for candidate path generation and sometimes for scoring, PathHD defers a single LLM call to the final reasoning step. This design reduces both latency and API cost; the empirical results in Section 3.3 confirm the shorter response times.
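To make the leading-order gap concrete, a back-of-envelope count with illustrative sizes (not measurements from the paper):

```python
# Back-of-envelope FLOP comparison; all sizes are illustrative assumptions.
N, d, L = 200, 4096, 12   # candidates, hypervector dim, encoder layers
hops = 3                  # typical path length |z|

neural = N * L * d**2     # O(N L d^2): per-candidate transformer encoding
pathhd = N * hops * d     # O(N |z| d): blockwise binds + one similarity each

print(f"neural : {neural:.2e} FLOPs")
print(f"pathhd : {pathhd:.2e} FLOPs")
print(f"ratio  : {neural // pathhd}x")   # = L * d / hops
```

At these sizes the ratio is $Ld/|z| = 12\cdot 4096/3 = 16{,}384$, i.e., the hypervector route saves roughly four orders of magnitude in retrieval compute before any LLM call is made.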
3 Experiments
We evaluate PathHD against state-of-the-art baselines on reasoning accuracy, measure efficiency with a focus on end-to-end latency, and conduct module-wise ablations followed by illustrative case studies.
3.1 Datasets, Baselines, and Setup
We evaluate on three standard multi-hop KGQA benchmarks: WebQuestionsSP (WebQSP) [49], Complex WebQuestions (CWQ) [40], and GrailQA [9], all grounded in Freebase [2]. These datasets span increasing reasoning complexity (roughly 2-4 hops): WebQSP features simpler single-turn queries, CWQ adds compositional and constraint-based questions, and GrailQA stresses generalization across i.i.d., compositional, and zero-shot splits. Our study is therefore scoped to Freebase-style KGQA; extending PathHD to domain-specific KGs is left for future work.
We compare against four families of methods: embedding-based, retrieval-augmented, pure LLMs (no external KG), and LLM+KG hybrids. All results are reported on dev (IID) splits under a unified Freebase evaluation protocol using the official Hits@1 and F1 scripts, so that numbers are directly comparable across systems. Unless otherwise noted, PathHD uses the schema-based planner from Section 2.4 (no LLM calls during planning), the plan-based query hypervector in Equation 3, and a single LLM adjudication call as described in Section 2.6, with all LLMs and sentence encoders used off the shelf (no fine-tuning). Detailed dataset statistics, baseline lists, model choices, and additional training-free hyperparameters are provided in Appendices G, H, and I.
3.2 Reasoning Performance Comparison
| Type | Methods | WebQSP Hits@1 | WebQSP F1 | CWQ Hits@1 | CWQ F1 | GrailQA Overall (F1) | GrailQA IID (F1) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Embedding | KV-Mem [27] | $46.7$ | $34.5$ | $18.4$ | $15.7$ | $-$ | $-$ |
| | EmbedKGQA [35] | $66.6$ | $-$ | $45.9$ | $-$ | $-$ | $-$ |
| | NSM [10] | $68.7$ | $62.8$ | $47.6$ | $42.4$ | $-$ | $-$ |
| | TransferNet [36] | $71.4$ | $-$ | $48.6$ | $-$ | $-$ | $-$ |
| Retrieval | GraftNet [38] | $66.4$ | $60.4$ | $36.8$ | $32.7$ | $-$ | $-$ |
| | SR+NSM [50] | $68.9$ | $64.1$ | $50.2$ | $47.1$ | $-$ | $-$ |
| | SR+NSM+E2E [50] | $69.5$ | $64.1$ | $49.3$ | $46.3$ | $-$ | $-$ |
| | UniKGQA [17] | $77.2$ | $72.2$ | $51.2$ | $49.1$ | $-$ | $-$ |
| Pure LLMs | ChatGPT [30] | $67.4$ | $59.3$ | $47.5$ | $43.2$ | $25.3$ | $19.6$ |
| | Davinci-003 [30] | $70.8$ | $63.9$ | $51.4$ | $47.6$ | $30.1$ | $23.5$ |
| | GPT-4 [1] | $73.2$ | $62.3$ | $55.6$ | $49.9$ | $31.7$ | $25.0$ |
| LLMs + KG | StructGPT [15] | $72.6$ | $63.7$ | $54.3$ | $49.6$ | $54.6$ | $70.4$ |
| | RoG [25] | $\underline{85.7}$ | $70.8$ | $62.6$ | $56.2$ | $-$ | $-$ |
| | Think-on-Graph [39] | $81.8$ | $76.0$ | $68.5$ | $60.2$ | $-$ | $-$ |
| | GoG [44] | $84.4$ | $-$ | $\mathbf{75.2}$ | $-$ | $-$ | $-$ |
| | KG-Agent [16] | $83.3$ | $\mathbf{81.0}$ | $\underline{72.2}$ | $\mathbf{69.8}$ | $\underline{86.1}$ | $\underline{92.0}$ |
| | FiDeLiS [37] | $84.4$ | $78.3$ | $71.5$ | $64.3$ | $-$ | $-$ |
| | PathHD | $\mathbf{86.2}$ | $\underline{78.6}$ | $71.5$ | $\underline{65.8}$ | $\mathbf{86.7}$ | $\mathbf{92.4}$ |
Table 2: Comparison on Freebase-based KGQA. Our method PathHD follows exactly the same protocol. “–” indicates that the metric was not reported by the original papers under the Freebase+official-script setting. We bold the best and underline the second-best score for each metric/column.
We evaluate under a unified Freebase protocol with the official Hits@1 / F1 scripts on WebQSP, CWQ, and GrailQA (dev, IID); results are in Table 2. Baselines cover classic KGQA (embedding/retrieval), recent LLM+KG systems, and pure LLMs (no KG grounding). Our PathHD uses hyperdimensional scoring with GHRR, Top-$K$ pruning, and a single LLM adjudication step.
Key observations are as follows. Obs.❶ SOTA on WebQSP/GrailQA; competitive on CWQ. PathHD attains best WebQSP Hits@1 ( $86.2$ ) and best GrailQA F1 (Overall/IID $86.7/92.4$ ), while staying strong on CWQ (Hits@1 $71.5$ , F1 $65.8$ ), close to the top LLM+KG systems (e.g., GoG $75.2$ Hits@1; KG-Agent $69.8$ F1). Obs.❷ One-shot adjudication rivals multi-step agents. Compared to RoG ( $\sim$ 12 calls) and Think-on-Graph/GoG/KG-Agent ( $3$ – $8$ calls), PathHD matches or exceeds accuracy on WebQSP/GrailQA and remains competitive on CWQ with just one LLM call, which reduces error compounding and focuses the LLM on a high-quality shortlist. Obs.❸ Pure LLMs lag without KG grounding. Zero/few-shot GPT-4 or ChatGPT underperform LLM+KG systems, e.g., on CWQ GPT-4 Hits@1 $55.6$ vs. PathHD $71.5$ . Obs.❹ Classic embedding/retrieval trails modern LLM+KG. KV-Mem, NSM, SR+NSM rank subgraphs well but lack a flexible language component for composing multi-hop constraints, yielding consistently lower scores.
Candidate enumeration strategy. Before turning to efficiency (Section 3.3), we briefly clarify how candidate paths are enumerated, since this affects both accuracy and cost. In our current implementation, we use a deterministic BFS-style enumeration of relation paths, controlled by the maximum depth $L_{\max}$ and beam width $B$. This choice is (i) simple and efficient, (ii) guarantees coverage of all type-consistent paths up to length $L_{\max}$ under clear complexity bounds, and (iii) makes it easy to compare against prior KGQA baselines that also rely on BFS-like expansion. In practice, we choose $L_{\max}$ and $B$ to achieve high coverage of gold answer paths while keeping candidate set sizes comparable to RoG and KG-Agent. More sophisticated, adaptive enumeration strategies (e.g., letting the HDC scores or the LLM guide which relations to expand next) are an interesting extension, but orthogonal to our core contribution.
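The enumeration described above can be sketched as follows. The adjacency-list interface (`kg` as a dict mapping each entity to `(relation, neighbor)` edges) and the function names are illustrative assumptions; `l_max` and `beam` play the roles of $L_{\max}$ and $B$.

```python
from collections import deque

def enumerate_paths(kg, topic_entity, l_max=3, beam=50):
    """BFS-style enumeration of relation paths up to depth l_max,
    with a beam-limited fan-out per visited entity.

    kg: dict entity -> list of (relation, neighbor) edges (hypothetical
    interface). Returns relation paths as tuples of relation names.
    """
    paths = []
    frontier = deque([(topic_entity, ())])  # (current entity, relations so far)
    while frontier:
        entity, rels = frontier.popleft()
        if len(rels) >= l_max:
            continue  # depth bound reached
        for relation, neighbor in kg.get(entity, [])[:beam]:
            new_path = rels + (relation,)
            paths.append(new_path)
            frontier.append((neighbor, new_path))
    return paths
```

Every enumerated path is then encoded into a hypervector and scored against the query, so the cost of this stage is bounded by roughly $B^{L_{\max}}$ cheap vector operations rather than LLM calls.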
3.3 Efficiency and Cost Analysis
<details>
<summary>x3.png Details</summary>

Scatter plot: Hits@1 on WebQSP (%) vs. median per-query latency ($\log_{10}$ seconds), by method family (Embedding, Pure LLM, LLMs+KG). Embedding models (KV-Mem, NSM) have the lowest latency and lowest accuracy; multi-call LLMs+KG methods (UniKGQA, FiDeLiS, GoG, Think-on-Graph, KG-Agent, RoG) are the most accurate but slowest, with RoG at the highest latency; PathHD combines high Hits@1 with latency close to a single LLM call.
</details>
<details>
<summary>x4.png Details</summary>

Scatter plot: Hits@1 on CWQ (%) vs. median per-query latency ($\log_{10}$ seconds), by method family. KV-Mem and NSM are fastest but least accurate; FiDeLiS, GoG, KG-Agent, Think-on-Graph, and RoG reach high Hits@1 at substantially higher latency; PathHD attains comparable Hits@1 at single-call latency.
</details>
<details>
<summary>x5.png Details</summary>

Scatter plot: Hits@1 on GrailQA (%) vs. median per-query latency ($\log_{10}$ seconds), by method family. ChatGPT and GPT-4 (pure LLMs) are fast but far less accurate; KG-Agent is highly accurate but slowest; StructGPT is moderate on both axes; PathHD reaches the highest Hits@1 at low latency.
</details>
Figure 3: Visualization of performance and latency. The x-axis is Hits@1 (%), the y-axis is per-query latency in seconds (median, log scale). Bubble size indicates the average number of LLM calls; marker shape denotes the method family. PathHD gives strong accuracy with lower latency than multi-call LLMs+KG baselines.
We assess end-to-end cost via a Hits@1–latency bubble plot (Figure 3) and a lollipop latency chart (Figure 6). In Figure 3, $x$ = Hits@1, $y$ = median per-query latency (log-scale); bubble size = average #LLM calls; marker shape = method family. All latencies are measured under a common hardware and LLM-backend setup to enable fair relative comparison. Latencies in Figure 6 follow a shared protocol: in our implementation, each LLM call takes on the order of a few seconds, whereas non-LLM vector/graph operations typically complete within $0.3$–$0.8$ s. PathHD uses vector-space scoring with Top-$K$ pruning and a single LLM decision; RoG uses beam search ($B{=}3$, depth $≤$ dataset hops). A factor breakdown (#calls, depth $d$, beam $b$, tools) appears in Table 10 (Section I.1).
Key observations are: Obs.❶ Near-Pareto across datasets. With comparable accuracy to multi-call LLM+KG systems (Think-on-Graph/GoG/KG-Agent), PathHD achieves markedly lower latency due to its single-call design and compact post-pruning candidate set. Obs.❷ Latency is dominated by #LLM calls. Methods with 3–8 calls (agent loops) or $\approx d \times b$ calls (beam search) sit higher in Figure 3 and show longer whiskers in Figure 6; PathHD avoids intermediate planning/scoring overhead. Obs.❸ Moderate pruning improves cost–accuracy. Shrinking the pool before adjudication lowers latency without hurting Hits@1, especially on CWQ, where paths are longer. Obs.❹ Pure LLMs are fast but underpowered. Single-call GPT-4/ChatGPT has similar latency to our final decision yet notably lower accuracy, underscoring the importance of structured retrieval and path scoring.
3.4 Ablation Study
We analyze the contribution of each module in PathHD. The ablation covers three operations: (1) the path-composition operator, (2) the single-LLM adjudicator, and (3) Top-$K$ pruning.
| Operator | WebQSP | CWQ |
| --- | --- | --- |
| XOR / bipolar product | 83.9 | 68.8 |
| Element-wise product (Real-valued) | 84.4 | 69.2 |
| Comm. bind | 84.7 | 69.6 |
| FHRR | 84.9 | 70.0 |
| HRR | 85.1 | 70.2 |
| GHRR | 86.2 | 71.5 |
Table 3: Effect of the path–composition operator. GHRR yields the best performance.
Which path–composition operator works best?
We isolate relation binding by fixing retrieval, scoring, pruning, and the single LLM step, and only swapping the encoder's path-composition operator. We compare six options (definitions in Appendix K): (i) XOR/bipolar and (ii) real-valued element-wise products, both fully commutative; (iii) a stronger commutative mix of binary/bipolar binding; (iv) FHRR (phasors) and (v) HRR (circular convolution), efficient yet effectively commutative; and (vi) our block-diagonal GHRR with unitary blocks, which is non-commutative and order-preserving. Paths of length 1–4 use identical dimension and normalization. As shown in Table 3, commutative binds lag, HRR/FHRR give modest gains, and GHRR yields the best Hits@1 on WebQSP and CWQ by reliably separating founded_by $→$ CEO_of from its reverse.
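To illustrate why block-diagonal binding is order-sensitive, the sketch below composes two random block-unitary hypervectors in both orders and compares them with a blockwise cosine. The construction (random $2\times2$ unitary blocks via QR) and the parameter choices are our own illustrative assumptions, not the paper's exact encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_ghrr(num_blocks=256, block=2):
    """GHRR-style hypervector: a stack of small random unitary blocks
    (obtained here via QR of complex Gaussian matrices)."""
    mats = rng.normal(size=(num_blocks, block, block)) \
         + 1j * rng.normal(size=(num_blocks, block, block))
    return np.stack([np.linalg.qr(m)[0] for m in mats])

def bind(a, b):
    """Blockwise matrix product: order-sensitive, so bind(a, b) != bind(b, a)."""
    return a @ b

def blockwise_cosine(a, b):
    """Mean over blocks of the normalized real Frobenius inner product."""
    num = np.real(np.einsum('bij,bij->b', a, np.conj(b)))
    den = np.linalg.norm(a, axis=(1, 2)) * np.linalg.norm(b, axis=(1, 2))
    return float(np.mean(num / den))

r1, r2 = random_ghrr(), random_ghrr()       # e.g. founded_by, CEO_of
forward, backward = bind(r1, r2), bind(r2, r1)
print(blockwise_cosine(forward, forward))   # self-similarity, 1.0 up to rounding
print(blockwise_cosine(forward, backward))  # well below 1: order is preserved
```

With a commutative operator such as an element-wise product, the two orders would collapse to identical hypervectors and the similarity would be exactly 1.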
| Final step | WebQSP | CWQ |
| --- | --- | --- |
| Vector-only | 85.4 | 70.8 |
| Vector $→$ 1 $×$ LLM | 86.2 | 71.5 |
Table 4: Ablation on the final decision maker. Passing pruned candidates and scores to a single LLM for adjudication yields consistent gains over vector-only selection.
Do we need a final single LLM adjudicator?
We test whether a lightweight LLM judgment helps beyond pure vector scoring. Vector-only selects the top path by cosine similarity; Vector $→$ 1 $×$ LLM instead forwards the pruned top-$K$ paths (with scores and end entities) to a single LLM using a short fixed template (no tools or planning) to choose the answer without long chains of thought. As shown in Table 4, Vector $→$ 1 $×$ LLM consistently outperforms Vector-only on both datasets, especially when the top two paths are near-tied or a high-scoring path has a subtle type mismatch; a single adjudication pass resolves such cases at negligible extra cost.
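A minimal sketch of how the fixed-template adjudication input could be assembled is shown below. The template wording, function name, and candidate tuple format are hypothetical; the paper's exact prompt may differ.

```python
def build_adjudication_prompt(question, candidates):
    """Format the pruned Top-K paths into one fixed-template prompt.

    candidates: list of (relation_path, score, end_entities) tuples,
    e.g. ((\"performance.actor\", \"performance.film\"), 0.2479, (\"Valentine's Day\",)).
    """
    lines = [f"Question: {question}", "Candidate reasoning paths:"]
    for i, (path, score, entities) in enumerate(candidates, start=1):
        lines.append(
            f"{i}. {' -> '.join(path)} (score {score:.4f}; "
            f"reaches: {', '.join(entities)})"
        )
    lines.append(
        "Pick the single best path and answer the question, "
        "citing the path number you used."
    )
    return "\n".join(lines)
```

Because the shortlist is small and the template is fixed, the prompt stays short regardless of graph size, which is what keeps the adjudication step to a single cheap call.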
| Pruning | Hits@ $1$ (WebQSP) | Lat. | Hits@ $1$ (CWQ) | Lat. |
| --- | --- | --- | --- | --- |
| No-prune | 85.8 | 2.42s | 70.7 | 2.45s |
| $K{=}2$ | 86.0 | 1.98s | 71.2 | 2.00s |
| $K{=}3$ | 86.2 | 1.92s | 71.5 | 1.94s |
| $K{=}5$ | 86.1 | 2.05s | 71.4 | 2.06s |
Table 5: Impact of top- $K$ pruning before the final LLM. Small sets (K=2–3) retain or slightly improve accuracy while reducing latency. We adopt $K{=}3$ by default.
What is the effect of top- $K$ pruning before the final step?
Finally, we study how many candidates should be kept for the last decision. We vary the number of paths passed to the final LLM among $K\!∈\!\{2,3,5\}$ and also include a No-prune variant that sends all retrieved paths. Retrieval and scoring are fixed; latency is the median per query (lower is better). As shown in Table 5, $K{=}3$ achieves the best Hits@1 on both WebQSP and CWQ with the lowest latency, while $K{=}2$ is a close second and yields the largest latency drop. In contrast, No-prune maintains maximal recall but increases latency and often introduces near-duplicate or noisy paths that can blur the final decision. We therefore adopt $K{=}3$ as the default.
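The scoring-and-pruning step before adjudication reduces to a cosine Top-$K$ over hypervectors. The sketch below uses flat real vectors for brevity; PathHD's actual similarity is computed blockwise over GHRR blocks.

```python
import numpy as np

def top_k_paths(query_hv, path_hvs, k=3):
    """Cosine-score candidate path hypervectors against the query
    hypervector and keep the Top-K indices (highest similarity first).

    query_hv: shape (D,); path_hvs: shape (N, D).
    """
    q = query_hv / np.linalg.norm(query_hv)
    p = path_hvs / np.linalg.norm(path_hvs, axis=1, keepdims=True)
    scores = p @ q                         # one matrix-vector product
    keep = np.argsort(scores)[::-1][:k]    # indices of the K best paths
    return keep, scores[keep]
```

Since this is a single matrix-vector product plus a partial sort, pruning adds negligible cost relative to the LLM call it shortens.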
<details>
<summary>x6.png Details</summary>

Three line charts: F1 (%) vs. hypervector dimension (512, 1024, 2048, 3072, 4096, 6144, 8192) on WebQSP, CWQ, and GrailQA. On all three datasets, F1 rises sharply up to roughly 2048, plateaus (WebQSP ≈ 78.7 at 3072–4096, CWQ ≈ 65.9 at 6144, GrailQA ≈ 86.7 at 3072–4096), and then declines mildly at 8192.
</details>
Figure 4: Hypervector dimension study. Each panel reports F1 (%) of PathHD on WebQSP, CWQ, and GrailQA as a function of the hypervector dimension. Overall, performance rises from $512$ to the mid-range and then tapers off: WebQSP and GrailQA peak around 3k–4k, while CWQ prefers a slightly larger size (6k), after which F1 decreases mildly.
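One informal way to see why gains saturate with dimension: the cross-talk noise between unrelated random hypervectors shrinks roughly as $1/\sqrt{D}$, so most of the separability is already available at mid-range dimensions, while larger $D$ only adds memory and compute. The small simulation below uses bipolar vectors, our own simplifying choice rather than the paper's GHRR encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def crosstalk_std(dim, trials=2000):
    """Standard deviation of the cosine similarity between pairs of
    independent random bipolar hypervectors; decays as ~1/sqrt(dim)."""
    a = rng.choice([-1.0, 1.0], size=(trials, dim))
    b = rng.choice([-1.0, 1.0], size=(trials, dim))
    cosines = np.sum(a * b, axis=1) / dim  # each vector has norm sqrt(dim)
    return float(np.std(cosines))

for d in (512, 2048, 8192):
    print(d, crosstalk_std(d))  # noise floor shrinks as dimension grows
```

This does not explain the mild decline at very large $D$ observed in Figure 4, which we attribute to practical effects of the full pipeline rather than to the representations themselves.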
3.5 Case Study
To better understand how our model performs step-by-step reasoning, we present two representative cases from the WebQSP dataset in Table 6. These cases highlight the effects of candidate path pruning and the contribution of LLM-based adjudication in improving answer accuracy. Case 1: Top-$K$ pruning preserves paths aligned with both film.film.music and actor cues; the vector-only scorer already picks the correct path, and a single LLM adjudication confirms Valentine’s Day, illustrating that pruning reduces cost while retaining high-coverage candidates. Case 2: A vector-only top path (film.film.edited_by) misses the actor constraint and yields a false positive, but adjudication over the pruned set, now including performance.actor, corrects to The Perks of Being a Wallflower, showing that LLM adjudication resolves compositional constraints beyond static similarity.
| Case 1: which movies featured Taylor Swift and music by John Debney | |
| --- | --- |
| Top-4 candidates | 1) film.film.music (0.2567) <br> 2) person.nationality $→$ film.film.country (0.2524) <br> 3) performance.actor $→$ performance.film (0.2479) <br> 4) people.person.languages $→$ film.film.language (0.2430) |
| Top-$K$ after pruning ($K{=}3$) | film.film.music <br> person.nationality $→$ film.film.country <br> performance.actor $→$ performance.film |
| Vector-only (no LLM) | Pick film.film.music ✓ — directly targets the composer-to-film mapping; relevant for filtering by music. |
| 1$×$ LLM adjudication | Rationale: “To find films with both Taylor Swift and music by John Debney, use actor-to-film and music-to-film relations. The chosen path targets the latter directly.” |
| Final Answer / GT | Valentine’s Day (predicted) / Valentine’s Day ✓ |
| Case 2: in which movies does Logan Lerman act in that was edited by Mary Jo Markey | |
| Top-4 candidates | 1) film.film.edited_by (0.2548) <br> 2) person.nationality $→$ film.film.country (0.2527) <br> 3) performance.actor $→$ performance.film (0.2505) <br> 4) award.award_winner.awards_won $→$ award.award_honor.honored_for (0.2420) |
| Top-$K$ after pruning ($K{=}3$) | film.film.edited_by <br> person.nationality $→$ film.film.country <br> performance.actor $→$ performance.film |
| Vector-only (no LLM) | Pick film.film.edited_by ✗ — identifies edited films, but lacks the actor constraint; leads to false positives. |
| 1$×$ LLM adjudication | Rationale: “The question requires jointly filtering for actor and editor. While film.edited_by is relevant, combining it with performance.actor improves precision by ensuring Logan Lerman is in the cast.” |
| Final Answer / GT | Perks of Being a Wallflower (predicted) / Perks of Being a Wallflower ✓ |
Table 6: Case studies on multi-hop reasoning over WebQSP. Top- $K$ pruning is applied before invoking LLM, reducing cost while retaining plausible candidates.
Discussion. As PathHD operates in a single-call, fixed-candidate regime, its performance ultimately depends on (i) the Top-$K$ retrieved paths covering at least one valid reasoning chain and (ii) the adjudication LLM correctly ranking these candidates. In practice, we mitigate this by using a relatively generous $K$ (e.g., $K=3$) and beam widths that yield high coverage of gold paths (see Section K.2), but extreme cases can still be challenging. Note that any LLM reasoning system that first retrieves a Top-$K$ candidate set faces the same challenge.
4 Related Work
4.1 LLM-based Reasoning
Large language models (LLMs) are now widely adopted across research and industry, powering generation, retrieval, and decision-support systems at scale [5, 28, 22, 23]. LLM families such as GPT [33, 3], LLaMA [41], and PaLM [6] have demonstrated impressive capabilities in diverse reasoning tasks, ranging from natural language inference to multi-hop question answering [45]. A growing body of work focuses on enhancing the interpretability and reliability of LLM reasoning through symbolic path-based reasoning over structured knowledge sources [38, 4, 11]. For example, Wei et al. [43] proposed chain-of-thought prompting, which improves reasoning accuracy by encouraging explicit intermediate steps. Wang et al. [42] introduced self-consistency decoding, which aggregates multiple reasoning chains to improve robustness.
Knowledge graphs are widely deployed in real-world systems (e.g., web search, recommendation, biomedicine) and constitute an active research area for representation learning and reasoning [14, 52, 24, 51]. In the context of knowledge graphs, recent efforts have explored hybrid neural-symbolic approaches to combine the structural expressiveness of graph reasoning with the generative power of LLMs. Fan et al. [25] proposed Reasoning on Graphs (RoG), which first prompts LLMs to generate plausible symbolic relation paths and then retrieves and verifies these paths over knowledge graphs. Similarly, Khattab et al. [20] leveraged demonstration-based prompting to guide LLM reasoning grounded in external knowledge. Despite their interpretability benefits, these methods rely heavily on neural encoders for path matching, incurring substantial computational and memory overhead, which limits scalability to large KGs or real-time applications.
4.2 Hyperdimensional Computing (HDC)
HDC is an emerging computational paradigm inspired by the properties of high-dimensional representations in cognitive neuroscience [18, 19]. In HDC, information is represented as fixed-length high-dimensional vectors (hypervectors), and symbolic structures are manipulated through simple algebraic operations such as binding, bundling, and permutation [8]. These operations are inherently parallelizable and robust to noise, making HDC appealing for energy-efficient and low-latency computation.
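The three primitives can be illustrated with bipolar hypervectors. Note that this classic element-wise binding is commutative, unlike the GHRR binding used in PathHD; the dimension and names here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # illustrative hypervector dimension

def hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Element-wise product: associates two symbols (commutative here)."""
    return a * b

def bundle(*vs):
    """Coordinate-wise majority vote: superposes several symbols."""
    return np.sign(np.sum(vs, axis=0))

def permute(a, k=1):
    """Cyclic shift: a cheap way to mark sequence position."""
    return np.roll(a, k)

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

x, y = hv(), hv()
pair = bind(x, y)
assert cos(bind(pair, y), x) > 0.99  # unbinding with y recovers x exactly
assert abs(cos(pair, x)) < 0.1       # the bound pair resembles neither input
```

The key property is that all three operations are $O(D)$ and trivially parallel, which is what makes HDC retrieval cheap relative to a neural encoder forward pass.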
HDC has been successfully applied in domains such as classification [34], biosignal processing [29], natural language understanding [26], and graph analytics [12]. For instance, Imani et al. [12] demonstrated that HDC can encode and process graph-structured data efficiently, enabling scalable similarity search and inference. Recent studies have also explored neuro-symbolic integrations, where HDC complements neural networks to achieve interpretable yet computationally efficient models [13, 34]. However, the potential of HDC in large-scale reasoning over knowledge graphs, particularly when combined with LLMs, remains underexplored. Our work bridges this gap by leveraging HDC as a drop-in replacement for neural path matchers in LLM-based reasoning frameworks, thereby achieving both scalability and interpretability.
Existing KG-LLM reasoning frameworks typically rely on learned neural encoders or multi-call agent pipelines to score candidate paths or subgraphs, often with Transformers, GNNs, or repeated LLM calls. In contrast, our work keeps the retrieval module entirely encoder-free and training-free: PathHD replaces neural path scorers with HDC-based hypervector encodings and similarity, while remaining compatible with standard KG-LLM agents. Our goal is thus not to introduce new VSA theory, but to show that such carefully designed HDC representations can replace learned neural scorers in KG-LLM systems while preserving accuracy and substantially improving latency, memory footprint, and interpretability.
5 Conclusion
In this work, we presented PathHD, an encoder-free and interpretable retrieval mechanism for path-based reasoning over knowledge graphs with LLMs. PathHD replaces neural path scorers with Hyperdimensional Computing (HDC): relation paths are encoded into order-aware GHRR hypervectors, ranked via simple vector operations, and passed to a single LLM adjudication step that outputs answers together with cited supporting paths. This Plan $→$ Encode $→$ Retrieve $→$ Reason design removes the need for costly neural encoders in the retrieval module and shifts most computation into cheap, parallel hypervector operations. Experimental results on three standard Freebase-based KGQA benchmarks (WebQSP, CWQ, GrailQA) show that PathHD attains competitive Hits@1 and F1 while substantially reducing end-to-end latency and GPU memory usage, and produces faithful, path-grounded rationales that aid error analysis and controllability. Taken together, these findings indicate that carefully designed HDC representations are a practical substrate for efficient KG-LLM reasoning, offering a favorable accuracy-efficiency-interpretability trade-off. An important direction for future work is to apply PathHD to domain-specific graphs, such as UMLS and other biomedical or enterprise KGs, as well as to tasks beyond QA (e.g., fact checking or rule induction), and to explore how the HDC representations and retrieval pipeline can be adapted in these settings while retaining the same efficiency benefits.
Acknowledgements
This work was supported in part by the DARPA Young Faculty Award; the National Science Foundation (NSF) under Grants #2127780, #2319198, #2321840, #2312517, #2235472, and #2431561; the Semiconductor Research Corporation (SRC); the Office of Naval Research through the Young Investigator Program Award and Grants #N00014-21-1-2225 and #N00014-22-1-2067; and Army Research Office Grant #W911NF2410360. Additionally, support was provided by the Air Force Office of Scientific Research under Award #FA9550-22-1-0253, along with generous gifts from Xilinx and Cisco.
References
- [1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
- [2] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250, 2008.
- [3] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, pages 1877–1901, 2020.
- [4] Shulin Cao, Jiaxin Shi, Liangming Pan, Lunyiu Nie, Yutong Xiang, Lei Hou, Juanzi Li, Bin He, and Hanwang Zhang. KQA Pro: A dataset with explicit compositional programs for complex question answering over knowledge base. In Proceedings of the 60th annual meeting of the Association for Computational Linguistics (volume 1: long papers), pages 6101–6119, 2022.
- [5] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. A survey on evaluation of large language models. ACM transactions on intelligent systems and technology, 15(3):1–45, 2024.
- [6] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
- [7] E. Paxon Frady, Denis Kleyko, and Friedrich T. Sommer. Variable binding for sparse distributed representations: Theory and applications. Neural Computation, 33(9):2207–2248, 2021.
- [8] Ross W Gayler. Vector symbolic architectures answer jackendoff’s challenges for cognitive neuroscience. arXiv preprint cs/0412059, 2004.
- [9] Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. Beyond I.I.D.: three levels of generalization for question answering on knowledge bases. In Proceedings of the web conference 2021, pages 3477–3488, 2021.
- [10] Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. Improving multi-hop knowledge base question answering by learning intermediate supervision signals. In Proceedings of the 14th ACM international conference on web search and data mining, pages 553–561, 2021.
- [11] Haotian Hu, Alex Jie Yang, Sanhong Deng, Dongbo Wang, and Min Song. Cotel-d3x: a chain-of-thought enhanced large language model for drug–drug interaction triplet extraction. Expert Systems with Applications, 273:126953, 2025.
- [12] Mohsen Imani, Yeseong Kim, Sadegh Riazi, John Messerly, Patric Liu, Farinaz Koushanfar, and Tajana Rosing. A framework for collaborative learning in secure high-dimensional space. In 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), pages 435–446. IEEE, 2019.
- [13] Mohsen Imani, Justin Morris, Samuel Bosch, Helen Shu, Giovanni De Micheli, and Tajana Rosing. Adapthd: Adaptive efficient training for brain-inspired hyperdimensional computing. In 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS), pages 1–4. IEEE, 2019.
- [14] Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S Yu. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE transactions on neural networks and learning systems, 33(2):494–514, 2021.
- [15] Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. Structgpt: A general framework for large language model to reason over structured data. arXiv preprint arXiv:2305.09645, 2023.
- [16] Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Yang Song, Chen Zhu, Hengshu Zhu, and Ji-Rong Wen. Kg-agent: An efficient autonomous agent framework for complex reasoning over knowledge graph. arXiv preprint arXiv:2402.11163, 2024.
- [17] Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, and Ji-Rong Wen. Unikgqa: Unified retrieval and reasoning for solving multi-hop question answering over knowledge graph. arXiv preprint arXiv:2212.00959, 2022.
- [18] Pentti Kanerva. Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation, 1(2):139–159, 2009.
- [19] Pentti Kanerva et al. Fully distributed representation. PAT, 1(5):10000, 1997.
- [20] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp. arXiv preprint arXiv:2212.14024, 2022.
- [21] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp. In NeurIPS, 2020.
- [22] Yezi Liu, Hanning Chen, Wenjun Huang, Yang Ni, and Mohsen Imani. Lune: Efficient llm unlearning via lora fine-tuning with negative examples. In Socially Responsible and Trustworthy Foundation Models at NeurIPS 2025.
- [23] Yezi Liu, Hanning Chen, Wenjun Huang, Yang Ni, and Mohsen Imani. Recover-to-forget: Gradient reconstruction from lora for efficient llm unlearning. In Socially Responsible and Trustworthy Foundation Models at NeurIPS 2025.
- [24] Yezi Liu, Qinggang Zhang, Mengnan Du, Xiao Huang, and Xia Hu. Error detection on knowledge graphs with triple embedding. In 2023 31st European Signal Processing Conference (EUSIPCO), pages 1604–1608. IEEE, 2023.
- [25] Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, and Shirui Pan. Reasoning on graphs: Faithful and interpretable large language model reasoning. arXiv preprint arXiv:2310.01061, 2023.
- [26] Raghavender Maddali. Fusion of quantum-inspired ai and hyperdimensional computing for data engineering. Zenodo, 2023.
- [27] Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016.
- [28] Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. Large language models: A survey. arXiv preprint arXiv:2402.06196, 2024.
- [29] Ali Moin, Alex Zhou, Abbas Rahimi, Ankita Menon, Simone Benatti, George Alexandrov, Samuel Tamakloe, Joash Ting, Naoya Yamamoto, Yasser Khan, et al. A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nature Electronics, 4(1):54–63, 2021.
- [30] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730–27744, 2022.
- [31] Tony A Plate. Holographic reduced representations. IEEE Transactions on Neural Networks, 6(3):623–641, 1995.
- [32] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Omer Levy. Measuring and narrowing the compositionality gap in language models (Self-Ask). arXiv preprint arXiv:2210.03350, 2022.
- [33] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
- [34] Abbas Rahimi, Pentti Kanerva, José del R Millán, and Jan M Rabaey. Hyperdimensional computing for noninvasive brain–computer interfaces: Blind and one-shot classification of eeg error-related potentials. In 10th EAI International Conference on Bio-Inspired Information and Communications Technologies, page 19. European Alliance for Innovation (EAI), 2017.
- [35] Apoorv Saxena, Aditay Tripathi, and Partha Talukdar. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 4498–4507, 2020.
- [36] Jiaxin Shi, Shulin Cao, Lei Hou, Juanzi Li, and Hanwang Zhang. Transfernet: An effective and transparent framework for multi-hop question answering over relation graph. arXiv preprint arXiv:2104.07302, 2021.
- [37] Yuan Sui, Yufei He, Nian Liu, Xiaoxin He, Kun Wang, and Bryan Hooi. Fidelis: Faithful reasoning in large language model for knowledge graph question answering. arXiv preprint arXiv:2405.13873, 2024.
- [38] Haitian Sun, Tania Bedrax-Weiss, and William W Cohen. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4231–4242, 2018.
- [39] Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel M Ni, Heung-Yeung Shum, and Jian Guo. Think-on-graph: Deep and responsible reasoning of large language model on knowledge graph. arXiv preprint arXiv:2307.07697, 2023.
- [40] Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. arXiv preprint arXiv:1803.06643, 2018.
- [41] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
- [42] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [43] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H Chi, Quoc V Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [44] Yao Xu, Shizhu He, Jiabei Chen, Zihao Wang, Yangqiu Song, Hanghang Tong, Guang Liu, Kang Liu, and Jun Zhao. Generate-on-graph: Treat llm as both agent and kg in incomplete knowledge graph question answering. arXiv preprint arXiv:2404.14741, 2024.
- [45] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600, 2018.
- [46] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In ICLR, 2023.
- [47] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
- [48] Calvin Yeung, Zhuowen Zou, and Mohsen Imani. Generalized holographic reduced representations. arXiv preprint arXiv:2405.09689, 2024.
- [49] Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201–206, 2016.
- [50] Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, and Hong Chen. Subgraph retrieval enhanced model for multi-hop knowledge base question answering. arXiv preprint arXiv:2202.13296, 2022.
- [51] Qinggang Zhang, Junnan Dong, Keyu Duan, Xiao Huang, Yezi Liu, and Linchuan Xu. Contrastive knowledge graph error detection. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 2590–2599, 2022.
- [52] Xiaohan Zou. A survey on application of knowledge graph. In Journal of Physics: Conference Series, volume 1487, page 012016. IOP Publishing, 2020.
Appendix A Notation
| Notation | Definition |
| --- | --- |
| $\mathcal{G}=(\mathcal{V},\mathcal{E})$ | Knowledge graph with entity set $\mathcal{V}$ and edge set $\mathcal{E}$ . |
| $\mathcal{Z}$ | Set of relation schemas/path templates. |
| $q$ , $a$ | Input question and (predicted) answer. |
| $e$ , $r$ | An entity and a relation (schema edge), respectively. |
| $z=(r_{1},...,r_{\ell})$ | A relation path; $|z|=\ell$ denotes path length. |
| $\mathcal{Z}_{\text{cand}}$ | Candidate path set instantiated from $\mathcal{G}$ . |
| $N=|\mathcal{Z}_{\text{cand}}|$ | The number of candidate paths instantiated from the KG for a given query. |
| $L_{\max}$ , $B$ , $K$ | Max plan depth, BFS beam width, and number of retrieved paths kept after pruning. |
| $d$ , $D$ , $m$ | Hypervector dimension, # of GHRR blocks, and block size (unitary $m{×}m$ ); flattened $d=Dm^{2}$ . |
| $\mathbf{v}_{x}$ | Hypervector for symbol $x$ (entity/relation/path). |
| $\mathbf{v}_{q}$ , $\mathbf{v}_{z}$ | Query-plan hypervector and a candidate-path hypervector. |
| $\mathbf{H}=[A_{1};...;A_{D}]$ | A GHRR hypervector with unitary blocks $A_{j}∈\mathrm{U}(m)$ . |
| $A^{\ast}$ | Conjugate transpose (unitary inverse) of a block $A$ . |
| $\mathbin{\raisebox{0.77498pt}{\scalebox{0.9}{$\circledast$}}}$ | GHRR blockwise binding operator (matrix product per block). |
| $\langle A,B\rangle_{F}$ | Frobenius inner product $\mathrm{tr}(A^{\ast}B)$ ; $\|A\|_{F}$ is the Frobenius norm. |
| $\mathrm{sim}(·,·)$ | Blockwise cosine similarity used for HD retrieval. |
| $s(z)$ | Calibrated retrieval score; $\alpha,\beta,\lambda$ are calibration hyperparameters; $\mathrm{IDF}(z)$ is an inverse-frequency weight. |
| $\mathcal{M}$ , $M$ | Distractor set and its size $M=\lvert\mathcal{M}\rvert$ (used in capacity bounds). |
| $\epsilon,\delta$ | Tolerance and failure probability in the concentration/union bounds. |
| $c$ | Absolute constant in the sub-Gaussian tail bound. |
Table 7: Notation used throughout the paper.
Appendix B Algorithm
Input: question $q$ ; KG $\mathcal{G}$ ; relation schemas $\mathcal{Z}$ ; max depth $L_{\max}$ ; beam width $B$ ; calibration $(\alpha,\beta,\lambda)$ ; Top- $K$
Output: Top- $K$ reasoning paths $\mathcal{P}_{K}$ and their scores
1. **Plan (schema-level).** Construct a relation-schema graph over $\mathcal{Z}$ and run constrained BFS up to depth $L_{\max}$ with beam width $B$ to obtain a small set of type-consistent relation plans $\mathcal{Z}_{q}\subseteq\mathcal{Z}$ for $q$ .
2. **Encode query.** Pick a plan $z_{q}\in\mathcal{Z}_{q}$ and encode it by GHRR binding
$$
\mathbf{v}_{q}\leftarrow\mathrm{BindPath}(z_{q})=\mathop{\circledast}_{r\in z_{q}}\mathbf{v}_{r},
$$
followed by blockwise normalization. // plan-based query hypervector; no unbinding
3. **Instantiate candidates (entity-level).** Initialize $\mathcal{P}(q)\leftarrow\emptyset$ . For each $z\in\mathcal{Z}_{q}$ , instantiate concrete KG paths consistent with schema $z$ by matching its relation pattern to edges in $\mathcal{G}$ or by a constrained BFS on $\mathcal{G}$ (depth $\leq L_{\max}$ , beam width $B$ ), and add all instantiated paths to $\mathcal{P}(q)$ . Deduplicate paths in $\mathcal{P}(q)$ and enforce type consistency.
4. **Encode and score candidates.** For each $p\in\mathcal{P}(q)$ , let $z(p)=(r_{1},...,r_{\ell})$ be the relation sequence of path $p$ :
   - Encode: $\mathbf{v}_{p}\leftarrow\mathrm{BindPath}(z(p))=\mathop{\circledast}_{r\in z(p)}\mathbf{v}_{r}$ with blockwise normalization.
   - Score (blockwise cosine, as defined in the main text): $s_{\mathrm{cos}}(p)\leftarrow\mathrm{sim}(\mathbf{v}_{q},\mathbf{v}_{p})$ .
   - Calibrate (optional): $s(p)\leftarrow s_{\mathrm{cos}}(p)+\alpha\,\mathrm{IDF}(\mathrm{schema}(z(p)))-\beta\,\lambda^{|z(p)|}$ .
5. **Return** the Top- $K$ paths in $\mathcal{P}(q)$ ranked by $s(p)$ as $\mathcal{P}_{K}$ .
// All steps above are symbolic; no additional LLM calls beyond the final reasoning step.
Algorithm 1 HD-Retrieve: Hyperdimensional Top- $K$ Path Retrieval
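The core of HD-Retrieve — GHRR binding and blockwise cosine scoring — can be sketched in a few lines. This is an illustrative implementation under our own assumptions (QR-based sampling of unitary blocks, the helper names `random_ghrr`, `bind_path`, and `sim`, and the toy dimensions), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
D, m = 64, 4  # number of GHRR blocks and unitary block size (flattened d = D * m^2)

def random_ghrr():
    """Random GHRR hypervector: D unitary m x m blocks (QR of a complex Gaussian)."""
    blocks = []
    for _ in range(D):
        z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
        q, _ = np.linalg.qr(z)   # Q factor is unitary
        blocks.append(q)
    return np.stack(blocks)      # shape (D, m, m)

def bind_path(relation_hvs):
    """Order-sensitive binding: blockwise matrix product along the path."""
    out = relation_hvs[0]
    for h in relation_hvs[1:]:
        out = out @ h            # batched matmul over the D blocks
    return out

def sim(h1, h2):
    """Blockwise cosine: mean real part of normalized Frobenius inner products."""
    inner = np.einsum('dij,dij->d', h1.conj(), h2)
    norms = np.linalg.norm(h1, axis=(1, 2)) * np.linalg.norm(h2, axis=(1, 2))
    return float(np.mean(inner.real / norms))

r1, r2 = random_ghrr(), random_ghrr()
v_q = bind_path([r1, r2])
# A matching path scores ~1; reversing the hop order breaks the match, which is
# exactly the non-commutativity the binding operator is designed to provide.
match, swapped = sim(v_q, bind_path([r1, r2])), sim(v_q, bind_path([r2, r1]))
```

Because each block stays unitary under binding, the self-similarity is exactly 1, while an order-swapped path behaves like an unrelated hypervector.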
Appendix C Prompt Template for One-shot Reasoning
| System | You are a careful reasoner. Only use the provided KG reasoning paths as evidence. Cite the most relevant path(s) and answer concisely. |
| --- | --- |
| User | Question: “$QUESTION” Retrieved paths (Top- $K$ ): 1. $PATH_1 2. $PATH_2 … K. $PATH_K |
| Assistant (required format) | Answer: $SHORT_ANSWER Supporting path(s): [indexes from the list above] Rationale (1–2 sentences): why those paths imply the answer. |
Table 8: Prompt template for KG path–grounded QA.
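Rendering the template is mechanical; a minimal sketch follows, where the function name `render_user_prompt` and the example path strings are illustrative assumptions:

```python
SYSTEM = ("You are a careful reasoner. Only use the provided KG reasoning paths "
          "as evidence. Cite the most relevant path(s) and answer concisely.")

def render_user_prompt(question, paths):
    """Fill the user turn of the Table 8 template with verbalized Top-K paths."""
    listed = "\n".join(f"{i}. {p}" for i, p in enumerate(paths, 1))
    return f'Question: "{question}"\nRetrieved paths (Top-{len(paths)}):\n{listed}'

msg = render_user_prompt("where was Obama born?",
                         ["Obama --born_in--> Honolulu",
                          "Obama --nationality--> USA"])
```

The assistant reply is then parsed against the required format (`Answer: …`, `Supporting path(s): […]`) to recover the answer and its citeable evidence.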
Appendix D Additional Theoretical Support
D.1 Near-Orthogonality and Capacity in a Toy Rademacher Model
To build intuition for why high-dimensional hypervectors are effective for retrieval, we first consider a simplified, classical VSA setting: real Rademacher hypervectors with element-wise binding. Our actual method uses complex GHRR hypervectors with blockwise binding (Section 2.2, Proposition 1), but the same sub-Gaussian concentration arguments apply.
We justify the use of high-dimensional hypervectors in PathHD by showing that (i) random hypervectors are nearly orthogonal with high probability, and (ii) this property is preserved under binding, yielding exponential concentration that enables accurate retrieval at scale.
Setup.
Let each entity/relation be encoded as a Rademacher hypervector $\mathbf{x}∈\{-1,+1\}^{d}$ with i.i.d. entries. For two independent hypervectors $\mathbf{x},\mathbf{y}$ , define cosine similarity $\cos(\mathbf{x},\mathbf{y})=\frac{\langle\mathbf{x},\mathbf{y}\rangle}{\|\mathbf{x}\|\,\|\mathbf{y}\|}$ . Since $\|\mathbf{x}\|=\|\mathbf{y}\|=\sqrt{d}$ , we have $\cos(\mathbf{x},\mathbf{y})=\frac{1}{d}\sum_{k=1}^{d}x_{k}y_{k}$ .
**Proposition 2 (Near-orthogonality of random hypervectors)**
*For any $\epsilon∈(0,1)$ ,
$$
\Pr\!\left(\big|\cos(\mathbf{x},\mathbf{y})\big|>\epsilon\right)\;\leq\;2\exp\!\left(-\tfrac{1}{2}\epsilon^{2}d\right).
$$*
*Proof.*
Each product $Z_{k}=x_{k}y_{k}$ is i.i.d. Rademacher with $\mathbb{E}[Z_{k}]=0$ and $|Z_{k}|≤ 1$ . By Hoeffding’s inequality, $\Pr\!\left(\left|\sum_{k=1}^{d}Z_{k}\right|>\epsilon d\right)≤ 2\exp(-\epsilon^{2}d/2)$ . Divide both sides by $d$ to obtain the claim. ∎
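Proposition 2 is easy to check numerically. A quick plain-Python sketch with Rademacher vectors (the dimension and seed are arbitrary choices):

```python
import math
import random

random.seed(7)
d = 10_000

def rademacher():
    return [random.choice((-1, 1)) for _ in range(d)]

def cosine(x, y):
    # ||x|| = ||y|| = sqrt(d), so the cosine reduces to the mean of x_k * y_k
    return sum(a * b for a, b in zip(x, y)) / d

x, y = rademacher(), rademacher()
eps = 0.05
# Hoeffding bound from Proposition 2: P(|cos| > eps) <= 2 exp(-eps^2 d / 2)
bound = 2 * math.exp(-eps ** 2 * d / 2)   # already tiny at d = 10,000
```

At $d=10{,}000$ the empirical cosine of two independent hypervectors is within a few hundredths of zero, consistent with the exponential bound.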
**Lemma 1 (Closure under binding)**
*Let $\mathbf{r}_{1},...,\mathbf{r}_{n}$ be independent Rademacher hypervectors and define binding (element-wise product) $\mathbf{p}=\mathbf{r}_{1}\odot\cdots\odot\mathbf{r}_{n}$ . Then $\mathbf{p}$ is also a Rademacher hypervector. Moreover, if $\mathbf{s}$ is independent of at least one $\mathbf{r}_{i}$ used in $\mathbf{p}$ , then $\mathbf{p}$ and $\mathbf{s}$ behave as independent Rademacher hypervectors and $\mathbb{E}[\cos(\mathbf{p},\mathbf{s})]=0$ .*
*Proof.*
Each coordinate $p_{k}=\prod_{i=1}^{n}r_{i,k}$ is a product of independent Rademacher variables, hence Rademacher. If $\mathbf{s}$ is independent of some $r_{j}$ , then $p_{k}s_{k}$ has zero mean and remains bounded, so the same Hoeffding-type bound as in Proposition 2 applies. ∎
**Theorem 1 (Separation and error bound for hypervector retrieval)**
*Let the query hypervector be $\mathbf{q}=\mathbf{r}_{1}\odot\cdots\odot\mathbf{r}_{n}$ and consider a candidate set containing the true path $\mathbf{p}^{\star}=\mathbf{q}$ and $M$ distractors $\{\mathbf{p}_{i}\}_{i=1}^{M}$ , where each distractor differs from $\mathbf{q}$ in at least one relation (thus satisfies Lemma 1). Then for any $\epsilon∈(0,1)$ and $\delta∈(0,1)$ , if
$$
d\;\geq\;\frac{2}{\epsilon^{2}}\,\log\!\Big(\frac{2M}{\delta}\Big),
$$
we have, with probability at least $1-\delta$ ,
$$
\cos(\mathbf{q},\mathbf{p}^{\star})=1\quad\text{and}\quad\max_{1\leq i\leq M}\big|\cos(\mathbf{q},\mathbf{p}_{i})\big|\leq\epsilon.
$$*
*Proof.*
By construction, $\mathbf{p}^{\star}=\mathbf{q}$ , hence cosine $=1$ . For each distractor $\mathbf{p}_{i}$ , Lemma 1 implies that $\mathbf{q}$ and $\mathbf{p}_{i}$ behave as independent Rademacher hypervectors; applying Proposition 2, $\Pr(|\cos(\mathbf{q},\mathbf{p}_{i})|>\epsilon)≤ 2e^{-\epsilon^{2}d/2}$ . A union bound over $M$ distractors yields $\Pr(\max_{i}|\cos(\mathbf{q},\mathbf{p}_{i})|>\epsilon)≤ 2Me^{-\epsilon^{2}d/2}≤\delta$ under the stated condition on $d$ . ∎
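The dimension condition in Theorem 1 is directly computable, and the retrieval guarantee can be spot-checked on a small simulated candidate set. The sizes, seed, and helper names below are illustrative assumptions:

```python
import math
import random

def min_dim(eps, delta, M):
    """Smallest d with d >= (2 / eps^2) * log(2M / delta), as in Theorem 1."""
    return math.ceil((2 / eps ** 2) * math.log(2 * M / delta))

random.seed(0)
eps, delta, M = 0.1, 0.05, 200
d = min_dim(eps, delta, M)               # under 2,000 dimensions suffice here

def rademacher():
    return [random.choice((-1, 1)) for _ in range(d)]

def bind(u, v):                           # element-wise (Hadamard) binding
    return [a * b for a, b in zip(u, v)]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / d

r1, r2, r3 = rademacher(), rademacher(), rademacher()
q = bind(bind(r1, r2), r3)                # query path r1 -> r2 -> r3
# Each distractor swaps the last relation for a fresh one, so Lemma 1 applies.
distractors = [bind(bind(r1, r2), rademacher()) for _ in range(M)]
```

With probability at least $1-\delta$ the true path scores exactly 1 while every distractor stays within $\epsilon$ of zero, matching the theorem's separation claim.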
Appendix E Additional Proofs and Tail Bounds
*Details for Proposition 1.*
Here, we sketch the argument for the GHRR near-orthogonality bound used in the main text. We view each GHRR block as a unitary matrix with i.i.d. phase (or signed) entries, so blockwise products preserve the unit norm and keep coordinates sub-Gaussian. Let $X=\frac{1}{d}\sum_{j=1}^{d}\xi_{j}$ with $\xi_{j}$ i.i.d., mean zero, and $\psi_{2}$ -norm bounded. Applying Hoeffding/Bernstein, $\Pr(|X|≥\epsilon)≤ 2\exp(-cd\epsilon^{2})$ for some absolute constant $c>0$ , which yields the stated result after $\ell_{2}$ normalization. Unitary blocks ensure no variance blow-up under binding depth; see also [31, 18] for stability of holographic codes. ∎
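The "no variance blow-up" property of unitary blocks can be sanity-checked directly: a product of unitaries is unitary, so the Frobenius norm stays $\sqrt{m}$ at any binding depth. The QR-with-phase-correction sampler below is an illustrative choice, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4

def random_unitary():
    """Approximately Haar-random unitary via QR with a phase correction."""
    z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

A = np.eye(m, dtype=complex)
for _ in range(50):                       # bind 50 relations deep
    A = A @ random_unitary()
# ||A||_F = sqrt(m) regardless of binding depth: no variance blow-up.
```

This is the mechanism behind the stability of deep path bindings: unlike unnormalized real-valued binding, depth never inflates or shrinks block norms.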
Appendix F Optional Prompt-Based Schema Refinement
As described in Section 2.4, all results in Section 3 use only schema-based enumeration of relation schemas, without any additional LLM calls beyond the final reasoning step. For completeness, we describe here an optional extension that refines schema plans with a lightweight prompt.
Given a small set of candidate relation schemas $\{z^{(1)},...,z^{(M)}\}$ obtained from enumeration, we first verbalize each schema into a short natural-language description (e.g., by mapping each relation type $r$ to a phrase and concatenating them). We then issue a single prompt of the form:
Given the question $q$ and the following candidate relation patterns: (1) [schema 1], (2) [schema 2], …, which $K$ patterns are most relevant for answering $q$ ? Please output only the indices.
The LLM outputs a small subset of indices, which we use to select the top- $K$ schemas $\{z^{(i_{1})},...,z^{(i_{K})}\}$ . These refined schemas are then instantiated into concrete KG paths and encoded into hypervectors exactly as in the main method.
We emphasize that this refinement is not used in any of our reported experiments, and that it can be implemented with at most one additional short LLM call per query. The main system studied in this paper, therefore, remains a single-call KG-LLM pipeline in all empirical results.
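A minimal sketch of this optional call is below; the prompt wording follows the text above, while the reply parser (and its tolerance for formats like `"1, 3"` or `"(1) (3)"`) is our assumption:

```python
def refinement_prompt(question, schemas, k):
    """One short prompt asking the LLM to pick the k most relevant schemas."""
    listed = ", ".join(f"({i}) {s}" for i, s in enumerate(schemas, 1))
    return (f"Given the question {question!r} and the following candidate "
            f"relation patterns: {listed}, which {k} patterns are most relevant "
            f"for answering the question? Please output only the indices.")

def parse_indices(reply, n, k):
    """Keep at most k valid 1-based indices from the LLM's reply."""
    tokens = reply.replace(",", " ").replace("(", " ").replace(")", " ").split()
    picked = [int(t) for t in tokens if t.isdigit() and 1 <= int(t) <= n]
    return picked[:k]
```

The selected schemas are then instantiated and encoded exactly as in Algorithm 1, so the extension changes only which plans enter retrieval, not how they are scored.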
Appendix G Dataset Introduction
We provide detailed descriptions of the three benchmark datasets used in our experiments:
WebQuestionsSP (WebQSP) [49] consists of $4{,}737$ questions, where each question is manually annotated with a topic entity and a SPARQL query over Freebase. The answer entities are within a maximum of $2$ hops from the topic entity. Following prior work [38], we use the standard train/validation/test splits released by GraftNet and the same Freebase subgraph for fair comparison.
Complex WebQuestions–SP (CWQ-SP) [40] is the Freebase/SPARQL–annotated variant of CWQ, aligning each question to a topic entity and an executable SPARQL query over a cleaned Freebase subgraph. Questions are created by compositional expansions of WebQSP (adding constraints, joins, and longer paths), and typically require up to $4$ -hop reasoning. We use the standard train/dev/test split released with CWQ-SP for fair comparison.
GrailQA [9] is a large-scale KGQA benchmark with $64{,}331$ questions. It focuses on evaluating generalization in multi-hop reasoning across three distinct settings: i.i.d., compositional, and zero-shot. Each question is annotated with a corresponding logical form and answer, and the underlying KG is a cleaned subset of Freebase. We follow the official split provided by the authors for fair comparison. In our experiments, we evaluate on the official dev set. The dev set is the authors’ held-out split from the same cleaned Freebase graph and mirrors the three generalization settings; it is commonly used for ablations and model selection when the test labels are held out.
We follow the unified Freebase protocol [2], which contains approximately $88$ million entities, $20$ thousand relations, and $126$ million triples, and evaluate with the official Hits@1/F1 scripts. For GrailQA, numbers in the main results are reported on the dev split (and additionally on its IID subset); many recent works adopt dev evaluation due to test server restrictions. WebQSP has no official dev split under this setting. Additional statistics, including the number of reasoning hops and answer entities, are shown in Table 9.
| Dataset | Train | Dev | Test | Typical hops | KG |
| --- | --- | --- | --- | --- | --- |
| WebQSP [49] | 3,098 | – | 1,639 | 1–2 | Freebase |
| CWQ [40] | 27,734 | 3,480 | 3,475 | 2–4 | Freebase |
| GrailQA [9] | 44,337 | 6,763 | 13,231 | 1–4 | Freebase |
Table 9: Statistics of Freebase-based KGQA datasets used in our experiments.
Appendix H Detailed Baseline Descriptions
We categorize the baseline methods into four groups and describe each group below.
H.1 Embedding-based methods
- KV-Mem [27] uses a key-value memory architecture to store knowledge triples and performs multi-hop reasoning through iterative memory operations.
- EmbedKGQA [35] formulates KGQA as a link-prediction task and ranks entity embeddings using a question encoder.
- NSM [10] adopts a sequential program execution framework over KG relations, learning to construct and execute reasoning paths.
- TransferNet [36] builds on GraftNet by incorporating both relational and text-based features, enabling interpretable step-wise reasoning over entity graphs.
H.2 Retrieval-augmented methods
- GraftNet [38] retrieves question-relevant subgraphs and applies GNNs for reasoning over linked entities.
- SR+NSM [50] retrieves relation-constrained subgraphs and runs NSM over them to generate answers.
- SR+NSM+E2E [50] further optimizes SR+NSM via end-to-end training of the retrieval and reasoning modules.
- UniKGQA [17] unifies entity retrieval and graph reasoning into a single LLM-in-the-loop architecture, achieving strong performance with reduced pipeline complexity.
H.3 Pure LLMs
- ChatGPT [30], Davinci-003 [30], and GPT-4 [1] serve as closed-book baselines using few-shot or zero-shot prompting.
- StructGPT [15] generates structured reasoning paths in natural language form, then executes them step by step.
- RoG [25] reasons over graph-based paths with alignment to LLM beliefs.
- Think-on-Graph [39] prompts the LLM to search symbolic reasoning paths over a KG and use them for multi-step inference.
H.4 LLMs + KG methods
- GoG [44] adopts a plan-then-retrieve paradigm, where an LLM generates reasoning plans and a KG subgraph is retrieved accordingly.
- KG-Agent [16] turns the KGQA task into an agent-style decision process using graph environment feedback.
- FiDeLiS [37] fuses symbolic subgraph paths with LLM-generated evidence, filtering hallucinated reasoning chains.
- PathHD (ours) proposes a vector-symbolic integration pipeline where top- $K$ relation paths are selected by vector matching and adjudicated by an LLM, combining symbolic controllability with neural flexibility.
Appendix I Detailed Experimental Setups
We follow a unified evaluation protocol: all methods are evaluated on the Freebase KG with the official Hits@1 / F1 scripts for WebQSP, CWQ, and GrailQA (dev, IID), so that the numbers are directly comparable across systems. Whenever possible, we adopt the official results reported by prior work under the same setting. Concretely, we take numbers for KV-Mem [27], GraftNet [38], EmbedKGQA [35], NSM [10], TransferNet [36], SR+NSM and its end-to-end variant (SR+NSM+E2E) [50], UniKGQA [17], RoG [25], StructGPT [15], Think-on-Graph [39], GoG [44], and FiDeLiS [37] from their papers or consolidated tables under equivalent Freebase protocols. We further include a pure-LLM category (ChatGPT, Davinci-003, GPT-4) using the unified results reported by KG-Agent [16]; note that its GrailQA scores are on the dev split. For KG-Agent itself, we use the official numbers reported in its paper [16].
For PathHD, all LLMs and sentence encoders are used off the shelf with no fine-tuning. The unitary block size $m$ is chosen from a small range motivated by the VSA literature; following GHRR [48], we fix $m=4$ and primarily tune the overall dimensionality $d$ . Prior work on GHRR [48] shows that, for a fixed total dimension $d$ , moderate changes in $m$ trade off non-commutativity and saturation behaviour, but do not lead to instability. In our experiments, we therefore treat $d$ as the main tuning parameter (set on the dev split), while fixing $m$ across all runs. Additional hyperparameters for candidate enumeration and calibration are detailed in Sections I.1 and I.2.
I.1 Additional Analytic Efficiency
| Method | # LLM calls / query | Planning depth | Retrieval fanout/beam | Executor/Tools |
| --- | --- | --- | --- | --- |
| KV-Mem [27] | $0$ | multi-hop (learned) | moderate | Yes (neural mem) |
| EmbedKGQA [35] | $0$ | multi-hop (seq) | moderate | No |
| NSM [10] | $0$ | multi-hop (neural) | moderate | Yes (neural executor) |
| TransferNet [36] | $0$ | multi-hop | moderate | No |
| GraftNet [38] | $0$ | multi-hop | graph fanout | No |
| SR+NSM [50] | $0$ | multi-hop | subgraph (beam) | Yes (neural exec) |
| SR+NSM+E2E [50] | $0$ | multi-hop | subgraph (beam) | Yes (end-to-end) |
| ChatGPT [30] | $1$ | $0$ | n/a | No |
| Davinci-003 [30] | $1$ | $0$ | n/a | No |
| GPT-4 [1] | $1$ | $0$ | n/a | No |
| UniKGQA [17] | $1\text{--}2$ | shallow | small/merged | No (unified model) |
| StructGPT [15] | $1\text{--}2$ | $1$ | n/a | Yes (tool use) |
| RoG [25] | $≈ d\!×\!b$ | $d$ | $b$ (per step) | No (LLM scoring) |
| Think-on-Graph [39] | $3\text{--}6$ | multi | small/beam | Yes (plan & react) |
| GoG [44] | $3\text{--}5$ | multi | small/iterative | Yes (generate-retrieve loop) |
| KG-Agent [16] | $3\text{--}8$ | multi | small | Yes (agent loop) |
| FiDeLiS [37] | $1\text{--}3$ | shallow | small | Optional (verifier) |
| PathHD (ours) | $\mathbf{1}$ (final only) | $\mathbf{0}$ | vector ops only | No (vector ops) |
Table 10: Full analytical comparison (no implementation). Ranges reflect algorithm design; $d$ and $b$ denote planning depth and beam/fanout as specified in RoG, which uses beam-search with $B=3$ and path length bounded by dataset hops (WebQSP $≤ 2$ , CWQ $≤ 4$ ).
I.2 More Hyperparameter Tuning Details
We sweep $\alpha,\beta∈\{0,0.1,0.2,...,0.5\}$ and $\lambda∈\{0.6,0.7,0.8,0.9\}$ and pick the best-performing triple on the validation Hits@ $1$ for each dataset.
| Dataset | $\mathbf{\alpha}$ | $\mathbf{\beta}$ | $\lambda$ |
| --- | --- | --- | --- |
| WebQSP | 0.2 | 0.1 | 0.8 |
| CWQ | 0.3 | 0.1 | 0.8 |
| GrailQA | 0.2 | 0.2 | 0.8 |
Table 11: Calibration hyperparameters $(\alpha,\beta,\lambda)$ used for each dataset.
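The sweep described above is a plain grid search on dev Hits@1. A sketch, where `dev_hits_at_1` stands in for the dataset-specific evaluation routine (an assumption, since the actual evaluation harness is not shown):

```python
import itertools

def calibrated_score(cos_sim, idf, path_len, alpha, beta, lam):
    """s(p) = s_cos(p) + alpha * IDF(schema(z(p))) - beta * lam^{|z(p)|}."""
    return cos_sim + alpha * idf - beta * lam ** path_len

def sweep(dev_hits_at_1):
    """Return the (alpha, beta, lam) triple maximizing dev Hits@1."""
    alphas = betas = [i / 10 for i in range(6)]   # {0.0, 0.1, ..., 0.5}
    lams = [0.6, 0.7, 0.8, 0.9]
    return max(itertools.product(alphas, betas, lams),
               key=lambda t: dev_hits_at_1(*t))
```

At 6 × 6 × 4 = 144 configurations per dataset, the sweep is cheap because scoring is encoder-free: re-ranking cached candidate paths requires no additional LLM calls.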
Appendix J Additional Experiments
J.1 Scoring Metric
[Figure 5 data: grouped bar charts of F1 (%) on WebQSP, CWQ, and GrailQA for five scoring strategies; approximate values read from the chart:]
| Scoring strategy | WebQSP | CWQ | GrailQA |
| --- | --- | --- | --- |
| Blockwise Cosine | 78.6 | 65.8 | 86.7 |
| Global Cosine | 78.0 | 65.0 | 86.1 |
| Dot Product | 77.8 | 64.7 | 85.8 |
| L2 Distance | 77.9 | 64.8 | 85.9 |
| Max-block Cosine | 78.2 | 65.3 | 86.3 |
Figure 5: Scoring measurement ablation. We evaluate F1 (%) on WebQSP, CWQ, and GrailQA using different scoring strategies in our model. PathHD achieves the best or competitive results when using blockwise cosine similarity, highlighting its effectiveness in capturing fine-grained matching signals across vector blocks.
J.2 Text-projection Variant
In the text-projection variant, the query hypervector is obtained by projecting a sentence embedding of $q$ with a fixed random matrix $P$ rather than by plan-based binding. We use SBERT as the sentence encoder, so $d_{t}$ is fixed to the encoder hidden size $768$ , and $P$ is sampled once from $\mathcal{N}(0,1/d_{t})$ and kept fixed for all experiments.
| Final step | WebQSP | CWQ |
| --- | --- | --- |
| PathHD (text-projection query) | 83.4 | 69.8 |
| PathHD (plan-based, default) | 86.2 | 71.5 |
Table 12: Comparison of query encoding variants in PathHD. We report Hits@1 on WebQSP and CWQ for the default plan-based encoding and the text-projection variant.
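As a sketch of the text-projection variant, a fixed Gaussian matrix $P$ maps the $d_t$-dimensional SBERT embedding into the hyperdimensional retrieval space. The snippet below uses an illustrative target dimension and a random stand-in for the SBERT output to stay self-contained:

```python
import numpy as np

d_t = 768    # SBERT hidden size (fixed by the encoder)
d_hd = 4096  # hyperdimensional space size (illustrative choice)

# P is sampled once from N(0, 1/d_t) and then frozen for all queries,
# so projected similarities approximately preserve inner products.
rng = np.random.default_rng(42)
P = rng.normal(loc=0.0, scale=np.sqrt(1.0 / d_t), size=(d_hd, d_t))

def project_query(sbert_embedding):
    """Map a d_t-dim sentence embedding into the HD retrieval space."""
    h = P @ sbert_embedding
    return h / np.linalg.norm(h)  # normalize for cosine scoring

# Stand-in for an SBERT encoding of the question text.
e = rng.standard_normal(d_t)
q = project_query(e)
```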
J.3 Additional Visualization
<details>
<summary>x8.png Details</summary>

### Visual Description
## Horizontal Bar Chart: CWQ Latency Comparison (with PathHD pruning)
### Overview
The image is a horizontal bar chart comparing the per-query latency of different methods on the CWQ dataset. The x-axis represents the per-query latency in seconds (log scale), and the y-axis lists the different methods being compared. The chart displays the median latency with a p90 whisker, indicating the 90th percentile latency.
### Components/Axes
* **Title:** CWQ Latency comparison (with PathHD pruning)
* **X-axis:**
* Label: Per-query latency (s) - median with p90 whisker
* Scale: Logarithmic (base 10)
* Ticks: 10<sup>-1</sup>, 10<sup>0</sup>, 10<sup>1</sup>
* **Y-axis:**
* Labels (from top to bottom): KV-Mem, NSM, ChatGPT (1 call), GPT-4 (1 call), UniKGQA (1-2 calls), StructGPT (1-2 calls), Think-on-Graph (3-6), GoG (3-5), KG-Agent (3-8), RoG (B=3, D≤4 → 12), PathHD (ours, 1 call)
* **Data Representation:** Each method is represented by a horizontal bar. The start of the bar indicates the median latency, and the whisker extends to the 90th percentile latency. A colored dot marks the median latency.
### Detailed Analysis
Here's a breakdown of the latency for each method, including the median and approximate p90 whisker endpoint. Note that the x-axis is logarithmic.
* **KV-Mem:**
* Median: Approximately 0.6 seconds (orange dot)
* P90 whisker endpoint: Approximately 1.2 seconds
* **NSM:**
* Median: Approximately 0.4 seconds (orange dot)
* P90 whisker endpoint: Approximately 0.8 seconds
* **ChatGPT (1 call):**
* Median: Approximately 1.2 seconds (green dot)
* P90 whisker endpoint: Approximately 2.5 seconds
* **GPT-4 (1 call):**
* Median: Approximately 2.5 seconds (red dot)
* P90 whisker endpoint: Approximately 4 seconds
* **UniKGQA (1-2 calls):**
* Median: Approximately 1.7 seconds (purple dot)
* P90 whisker endpoint: Approximately 3 seconds
* **StructGPT (1-2 calls):**
* Median: Approximately 1.7 seconds (purple dot)
* P90 whisker endpoint: Approximately 3 seconds
* **Think-on-Graph (3-6):**
* Median: Approximately 3.5 seconds (pink dot)
* P90 whisker endpoint: Approximately 6 seconds
* **GoG (3-5):**
* Median: Approximately 2.7 seconds (brown dot)
* P90 whisker endpoint: Approximately 5 seconds
* **KG-Agent (3-8):**
* Median: Approximately 4.5 seconds (yellow dot)
* P90 whisker endpoint: Approximately 7 seconds
* **RoG (B=3, D≤4 → 12):**
* Median: Approximately 12 seconds (teal dot)
* P90 whisker endpoint: Approximately 20 seconds
* **PathHD (ours, 1 call):**
* Median: Approximately 0.8 seconds (blue dot)
* P90 whisker endpoint: Approximately 1.5 seconds
### Key Observations
* The latency varies significantly across different methods.
* KV-Mem, NSM, and PathHD exhibit the lowest median latencies.
* RoG has the highest median latency.
* The p90 whisker indicates the variability in latency for each method.
### Interpretation
The chart compares per-query latency on the CWQ dataset. KV-Mem, NSM, and PathHD are the most latency-efficient methods, while RoG is the slowest; longer p90 whiskers indicate greater variability across queries. The "(with PathHD pruning)" in the title refers to PathHD's Top-$K$ pruning of candidate paths before its single LLM call.
The assumptions at the bottom provide context for interpreting the results:
* Each LLM call median ≈2.2s, p90 ≈3.4s; non-LLM ops 0.3-0.8s.
* RoG uses beam B=3, depth D≤4 (≈12 calls). PathHD uses vector scoring + top-K pruning; here PRUNE\_FACTOR=0.85, TAIL\_SHRINK=0.9.
</details>
Figure 6: CWQ latency comparison (lollipop). Dots indicate median per-query latency; right whiskers show the 90th percentile (p90). The x-axis is log-scaled. Values are estimated under a unified setup: per-LLM-call median $≈$ 2.2 s and p90 $≈$ 3.4 s; non-LLM operations add 0.3-0.8 s. RoG follows beam width $B{=}3$ with depth bounded by dataset hops ( $D{≤}4$ , $≈$ 12 calls), whereas PathHD uses a single LLM call plus vector operations for scoring.
J.4 Effect of Backbone Models
Performance across different LLM backbones is shown in Table 13.
Table 13: Performance across different LLM backbones. Each block fixes the backbone and varies the reasoning framework: a pure LLM control (CoT), our single-call PathHD, and 1-2 multi-step LLM+KG baselines. Metrics follow the unified Freebase setup.
| Backbone | Method | WebQSP (Hits@1 / F1) | CWQ (Hits@1 / F1) | GrailQA (Overall / IID F1) | #Calls (/query) |
| --- | --- | --- | --- | --- | --- |
| GPT-4 (API) | CoT [43] | 73.2 / 62.3 | 55.6 / 49.9 | 31.7 / 25.0 | 1 |
| | RoG [25] | 85.7 / 70.8 | 62.6 / 56.2 | – / – | $≈ 12$ |
| | KG-Agent [16] | 83.3 / 81.0 | 72.2 / 69.8 | 86.1 / 92.0 | 3–8 |
| | PathHD (single-call) | 86.2 / 78.6 | 71.5 / 65.8 | 86.7 / 92.4 | 1 |
| GPT-3.5 / ChatGPT | CoT [43] | 67.4 / 59.3 | 47.5 / 43.2 | 25.3 / 19.6 | 1 |
| | StructGPT [15] | 72.6 / 63.7 | 54.3 / 49.6 | 54.6 / 70.4 | 1–2 |
| | RoG [25] | 85.0 / 70.2 | 61.8 / 55.5 | – / – | $≈ 12$ |
| | PathHD (single-call) | 85.6 / 78.0 | 70.8 / 65.1 | 85.9 / 91.7 | 1 |
| Llama-3-8B-Instruct (open) | CoT (prompt-only) | 62.0 / 55.0 | 43.0 / 40.0 | 20.0 / 16.0 | 1 |
| | ReAct-Lite (retrieval+CoT) | 70.5 / 62.0 | 52.0 / 47.5 | 48.0 / 62.0 | 3–5 |
| | BM25+LLM-Verifier (1$×$) | 74.5 / 66.0 | 55.0 / 50.0 | 52.0 / 66.0 | 1 |
| | PathHD (single-call) | 84.8 / 77.2 | 69.8 / 64.2 | 84.9 / 90.9 | 1 |
J.5 Prompt Sensitivity of the LLM Adjudicator
Since PathHD relies on a single LLM call to adjudicate among the Top- $K$ candidate paths, it is natural to ask how sensitive the system is to the exact phrasing of this adjudication prompt. To investigate this, we compare our default adjudication prompt (Prompt A) with a slightly rephrased variant (Prompt B) that uses different wording but conveys the same task description. For example, Prompt A asks the model to “select the most plausible reasoning path and answer the question based on it”, whereas Prompt B paraphrases this as “choose the best supporting path and use it to answer the question”.
| Prompt | WebQSP | CWQ | GrailQA (Overall / IID) |
| --- | --- | --- | --- |
| Prompt A (default) | $\mathbf{86.2}$ / $\mathbf{78.6}$ | $\mathbf{71.5}$ / $\mathbf{65.8}$ | $\mathbf{86.7}$ / $\mathbf{92.4}$ |
| Prompt B (paraphrased) | 85.7 / 78.3 | 70.9 / 63.4 | 85.2 / 90.8 |
Table 14: Prompt sensitivity of the LLM adjudicator. We compare the default adjudication prompt (Prompt A) with a paraphrased variant (Prompt B). Numbers are Hits@1 and F1 for WebQSP and CWQ, and Overall / IID F1 for GrailQA.
Table 14 reports Hits@1 and F1 on the three datasets under these two prompts. We observe that while minor prompt changes can occasionally flip individual predictions, the overall performance remains very close for all datasets, and the qualitative behavior of the path-grounded rationales is also stable. This suggests that, in our setting, PathHD is reasonably robust to small, natural variations in the adjudication prompt.
Appendix K Detailed Introduction of the Modules
K.1 Binding Operations
Below, we summarize the binding operators considered in our system and ablations. All bindings produce a composed hypervector $\mathbf{s}$ from two inputs $\mathbf{x}$ and $\mathbf{y}$ of the same dimensionality.
(1) XOR / Bipolar Product (commutative).
For binary hypervectors $\mathbf{x},\mathbf{y}∈\{0,1\}^{d}$ ,
$$
\mathbf{s}=\mathbf{x}\oplus\mathbf{y},\qquad s_{i}=(x_{i}+y_{i})\bmod 2.
$$
Under the bipolar code $\{-1,+1\}$ , XOR is equivalent to element-wise multiplication:
$$
s_{i}=x_{i}\cdot y_{i},\qquad x_{i},y_{i}\in\{-1,+1\}.
$$
This is the classical commutative bind baseline used in our ablation.
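A minimal numpy illustration of this baseline (the dimension is arbitrary); note that XOR is self-inverse, so binding and unbinding use the same operation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024
x = rng.integers(0, 2, size=d)  # binary hypervector
y = rng.integers(0, 2, size=d)

s = (x + y) % 2                 # XOR binding: s_i = (x_i + y_i) mod 2

# Bipolar view: map {0,1} -> {+1,-1}; XOR becomes element-wise product.
xb, yb = 1 - 2 * x, 1 - 2 * y
sb = xb * yb
```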
(2) Real-valued Element-wise Product (commutative).
For real vectors $\mathbf{x},\mathbf{y}∈\mathbb{R}^{d}$ ,
$$
\mathbf{s}=\mathbf{x}\odot\mathbf{y},\qquad s_{i}=x_{i}\,y_{i}.
$$
Unbinding is approximated by element-wise division (with a small $\epsilon$ for stability): $x_{i}≈ s_{i}/(y_{i}+\epsilon)$ . This is another variant of the commutative bind.
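The same operator in code, with the $\epsilon$-stabilized approximate unbind (dimension and $\epsilon$ are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps = 1024, 1e-8
x = rng.standard_normal(d)
y = rng.standard_normal(d)

s = x * y              # element-wise (Hadamard) binding
x_hat = s / (y + eps)  # approximate unbinding; eps guards near-zero y_i
```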
(3) HRR: Circular Convolution (commutative).
For $\mathbf{x},\mathbf{y}∈\mathbb{R}^{d}$ ,
$$
\mathbf{s}=\mathbf{x}\circledast\mathbf{y},\qquad s_{k}=\sum_{i=0}^{d-1}x_{i}\,y_{(k-i)\bmod d}.
$$
Approximate unbinding uses circular correlation:
$$
\mathbf{x}\approx\mathbf{s}\circledast^{-1}\mathbf{y},\qquad x_{i}\approx\sum_{k=0}^{d-1}s_{k}\,y_{(k-i)\bmod d}.
$$
This is the Circ. conv condition in our ablation.
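This binding can be computed in $O(d \log d)$ via the FFT. The sketch below (dimension illustrative) shows that for random real codes the unbinding is only approximate, which is why HRR systems typically clean up the result with a nearest-neighbor lookup against the known codebook:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2048
x = rng.standard_normal(d) / np.sqrt(d)  # near-unit-norm random codes
y = rng.standard_normal(d) / np.sqrt(d)

def circ_conv(a, b):
    """Circular convolution (HRR binding), via the convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(s, b):
    """Circular correlation: approximate unbinding of b from s."""
    return np.real(np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(b))))

s = circ_conv(x, y)
x_hat = circ_corr(s, y)  # noisy reconstruction of x
cos = x_hat @ x / (np.linalg.norm(x_hat) * np.linalg.norm(x))
```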
(4) FHRR / Complex Phasor Product (commutative).
Let $\mathbf{x},\mathbf{y}∈\mathbb{C}^{d}$ with unit-modulus components $x_{i}=e^{i\phi_{i}}$ , $y_{i}=e^{i\psi_{i}}$ . Binding is element-wise complex multiplication
$$
\mathbf{s}=\mathbf{x}\odot\mathbf{y},\qquad s_{i}=x_{i}y_{i}=e^{i(\phi_{i}+\psi_{i})},
$$
and unbinding is conjugation: $\mathbf{x}≈\mathbf{s}\odot\mathbf{y}^{\ast}$ . FHRR is often used as a complex analogue of HRR.
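A numpy sketch of phasor binding (dimension illustrative); unlike HRR with random real codes, unbinding by conjugation is exact for a pure, noise-free binding:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024
phi = rng.uniform(0, 2 * np.pi, d)
psi = rng.uniform(0, 2 * np.pi, d)
x = np.exp(1j * phi)    # unit-modulus phasor hypervectors
y = np.exp(1j * psi)

s = x * y               # binding: phases add, s_i = exp(i(phi_i + psi_i))
x_hat = s * np.conj(y)  # unbinding by conjugation subtracts the phase back
```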
(5) Block-diagonal GHRR (non-commutative, ours).
We use Generalized HRR with block-unitary components. A hypervector is a block vector $\mathbf{H}=[A_{1};...;A_{D}]$ , $A_{j}∈\mathrm{U}(m)$ (so total dimension $d=Dm^{2}$ when flattened). Given $\mathbf{X}=[X_{1};...;X_{D}]$ and $\mathbf{Y}=[Y_{1};...;Y_{D}]$ , binding is the block-wise product
$$
\mathbf{Z}=\mathbf{X}\,\mathbin{\raisebox{0.86108pt}{\scalebox{0.9}{$\circledast$}}}\,\mathbf{Y},\qquad Z_{j}=X_{j}Y_{j}\;\;(j=1,\dots,D).
$$
Since matrix multiplication is generally non-commutative ( $X_{j}Y_{j}≠ Y_{j}X_{j}$ ), GHRR preserves order/direction of paths. Unbinding exploits unitarity:
$$
X_{j}\approx Z_{j}Y_{j}^{\ast},\qquad Y_{j}\approx X_{j}^{\ast}Z_{j}.
$$
This Block-diag (GHRR) operator is our default choice and achieves the best performance in the operation study (Table 3), compared to Comm. bind and Circ. conv.
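The following sketch (block count $D$ and block size $m$ are illustrative) samples random unitary blocks via QR decomposition and checks the two properties that matter here: unbinding is exact because the blocks are unitary, while the bind itself is order-sensitive because the blocks do not commute:

```python
import numpy as np

rng = np.random.default_rng(0)
D, m = 8, 4  # D blocks of m x m unitary matrices (flattened dim D*m^2)

def rand_unitary_blocks():
    """Sample D random unitary blocks via QR of complex Gaussian matrices."""
    blocks = []
    for _ in range(D):
        A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
        Q, _ = np.linalg.qr(A)  # Q is unitary: Q Q^* = I
        blocks.append(Q)
    return np.stack(blocks)     # shape (D, m, m)

X = rand_unitary_blocks()
Y = rand_unitary_blocks()

Z = X @ Y                       # block-wise matrix product (GHRR binding)
Y_conj_T = np.conj(np.transpose(Y, (0, 2, 1)))
X_rec = Z @ Y_conj_T            # unbind: Z_j Y_j^* recovers X_j exactly
```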
K.2 Additional Case Study
Table 15: Case studies for PathHD with an illustrative display of candidates. For each query, we list the three highest-scoring relation paths (Top-3) for readability, then prune to $K=2$ before the vector-only choice and a single-LLM adjudication.
| Case 3: where are the gobi desert located on a map | |
| --- | --- |
| Top-3 candidates | 1) location.location.containedby (0.3410) |
| | 2) location.location.partially_containedby (0.3335) |
| | 3) location.location.contains (0.3255) |
| Top- $K$ after pruning (K=2) | containedby |
| | partially_containedby |
| Vector-only (no LLM) | Pick containedby ✓ — returns parent region; predicts Asia. |
| 1 $×$ LLM adjudication | Rationale: “Gobi Desert lies across Mongolia and China, which are contained by the continent of Asia; ‘contains’ would flip direction.” |
| Final Answer / GT | Asia (predict) / Asia ✓ |
| Case 4: in which continent is germany | |
| Top-3 candidates | 1) location.location.containedby (0.3405) |
| | 2) base.locations.countries.continent (0.3325) |
| | 3) location.location.contains (0.3270) |
| Top- $K$ after pruning (K=2) | containedby |
| | countries.continent |
| Vector-only (no LLM) | Pick containedby ✗ — tends to surface EU or administrative parents, hurting precision. |
| 1 $×$ LLM adjudication | Rationale: “The target is a country $→$ continent query; use countries.continent to directly map Germany to Europe.” |
| Final Answer / GT | Europe (predict) / Europe ✓ |
| Case 5: what is the hometown of the person who said “Forgive your enemies, but never forget their names?” | |
| Top-3 candidates | 1) quotation.author $→$ person.place_of_birth (0.3380) |
| | 2) family.members $→$ person.place_of_birth (0.3310) |
| | 3) quotation.author $→$ location.people_born_here (0.3310) |
| Top- $K$ after pruning (K=2) | quotation.author $→$ place_of_birth |
| | family.members $→$ place_of_birth |
| Vector-only (no LLM) | Pick quotation.author $→$ place_of_birth ✓ — direct trace from quote to person to birthplace. |
| 1 $×$ LLM adjudication | Rationale: “The quote’s author is key; once identified, linking to their birthplace via a person-level relation gives the hometown.” |
| Final Answer / GT | Brooklyn (predict) / Brooklyn ✓ |
| Case 6: what is the name of the capital of Australia where the film “The Squatter’s Daughter” was made | |
| Top-3 candidates | 1) film.film_location.featured_in_films (0.3360) |
| | 2) notable_types $→$ newspaper_circulation_area.newspapers $→$ newspapers (0.3330) |
| | 3) film_location.featured_in_films $→$ bibs_location.country (0.3310) |
| Top- $K$ after pruning (K=2) | film.film_location.featured_in_films |
| | notable_types $→$ newspaper_circulation_area.newspapers |
| Vector-only (no LLM) | Pick film.film_location.featured_in_films ✓ — retrieves filming location; indirectly infers capital via metadata. |
| 1 $×$ LLM adjudication | Rationale: “The film’s production location helps localize the city. Although not all locations are capitals, this film was made in Australia, where identifying the filming city leads to the capital.” |
| Final Answer / GT | Canberra (predict) / Canberra ✓ |
Table 15 presents additional WebQSP case studies for PathHD. Unlike the main paper’s case table (Top-4 candidates with pruning to $K{=}3$ ), this appendix visualizes the Top-3 highest-scoring relation paths for readability and then prunes to $K{=}2$ before a single-LLM adjudication.
Across the four examples (Cases 3-6), pruning to $K{=}2$ often retains the correct path and achieves strong final answers after LLM adjudication. However, we also observe a typical failure mode of the vector-only selector under $K{=}2$ : when multiple plausible paths exist (e.g., country vs. continent, or actor vs. editor constraints), the vector-only choice can become brittle and select a high-scoring but underconstrained path, after which the LLM must recover the correct answer using the remaining candidate (see Case 4). In contrast, the main-paper setting with $K{=}3$ keeps one more candidate, which more reliably preserves a constraint-satisfying path (e.g., explicitly encoding actor or continent relations). This extra coverage reduces reliance on the LLM to repair mistakes and improves robustness under compositional queries.
While $K{=}2$ is cheaper and can work well in many instances, $K{=}3$ offers a better coverage-precision trade-off on average: it mitigates pruning errors in compositional cases and lowers the risk of discarding the key constraint path. This aligns with our main experimental choice of $K{=}3$ , which we use for all reported metrics in the paper.
Case-study note.
For the qualitative case studies only, we manually verified the final entity answers using publicly available sources (e.g., film credits and encyclopedia entries). This light-weight human verification was used solely to present readable examples; it does not affect any quantitative metric. All reported metrics (e.g., Hits@1 and F1) are computed from dataset-provided supervision and ground-truth paths without human annotation.