## Screenshot: Agent Comparison for Multi-Hop Question Answering
### Overview
The image compares two AI agent implementations (Agent-as-tool-Base vs. Agent-as-tool-Instruct) attempting to answer the question: "Where did Edward Dunn (Bishop)'s father graduate from?" The expected answer is **Corpus Christi College, Cambridge**. The comparison shows the agents' reasoning processes, tool calls, and final answers through a structured dialogue format with color-coded annotations.
---
### Components/Axes
1. **Left Panel (Agent-as-tool-Base)**:
- **Structure**: Sequential thought process with tool calls and observations.
- **Annotations**: Orange boxes highlight key decision points (e.g., "Planner try to ask 2 hops simultaneously").
- **Final Answer**: Incorrectly identifies **Marlborough and Pembroke College, Cambridge**.
2. **Right Panel (Agent-as-tool-Instruct)**:
- **Structure**: Step-by-step reasoning with explicit tool calls and observations.
- **Annotations**: Orange boxes emphasize strategic decisions (e.g., "Learned to Ask the prerequisite question before the next hop").
- **Final Answer**: Correctly identifies **Corpus Christi College, Cambridge**.
---
### Detailed Analysis
#### Left Panel (Agent-as-tool-Base)
1. **Initial Query**: Asks for information on Edward Dunn's father's education.
2. **First Observation**: Fails to find details about the father's education and instead returns information about Edward Dunn's own education.
3. **Second Attempt**: Tries to ask two hops simultaneously but receives no useful data.
4. **Final Answer**: Incorrectly cites Marlborough and Pembroke College, Cambridge (Edward Dunn's own education, not his father's).
#### Right Panel (Agent-as-tool-Instruct)
1. **Initial Query**: Asks for information on Edward Dunn's father.
2. **First Observation**: Identifies Andrew Hunter Dunn as the father and notes his role as Bishop of Quebec.
3. **Second Attempt**: Explicitly asks where Andrew Hunter Dunn graduated from.
4. **Final Answer**: Correctly cites Corpus Christi College, Cambridge, using precise tool calls.
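The Instruct agent's sequential strategy can be sketched as follows. This is a minimal, hypothetical Python illustration: `search_tool`, `KNOWLEDGE_BASE`, and `answer_two_hop` are stand-ins invented here, not the actual tool interface shown in the screenshot.

```python
# Hypothetical single-hop retrieval tool backed by a stub knowledge base.
# In the screenshot this would be the agent's real search/QA tool.
KNOWLEDGE_BASE = {
    "Who is the father of Edward Dunn (Bishop)?": "Andrew Hunter Dunn",
    "Where did Andrew Hunter Dunn graduate from?": "Corpus Christi College, Cambridge",
}

def search_tool(question: str) -> str:
    """Answers one specific, single-hop question; compound queries miss."""
    return KNOWLEDGE_BASE.get(question, "No useful data found.")

def answer_two_hop(entity: str, first_relation: str, second_relation: str) -> str:
    # Hop 1: ask the prerequisite question first (identify the father).
    intermediate = search_tool(f"Who is the {first_relation} of {entity}?")
    # Hop 2: only after resolving the intermediate entity, ask the target question.
    return search_tool(f"Where did {intermediate} {second_relation}?")

print(answer_two_hop("Edward Dunn (Bishop)", "father", "graduate from"))
# → Corpus Christi College, Cambridge
```

The key design choice mirrors the annotation in the right panel: the second tool call is constructed only after the first call's result is in hand, so each query stays within the tool's single-hop capability.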
---
### Key Observations
1. **Agent-as-tool-Base**:
- Struggles with multi-hop reasoning, attempting parallel queries that yield no results.
- Final answer is incorrect because it relies on data about Edward Dunn's own education rather than his father's.
2. **Agent-as-tool-Instruct**:
- Uses sequential, targeted tool calls to isolate the correct information.
- Final answer matches the expected result, demonstrating effective structured reasoning.
---
### Interpretation
The comparison highlights the importance of **structured, sequential reasoning** in multi-hop question answering. The Agent-as-tool-Instruct successfully isolates the correct information by:
1. **Asking prerequisite questions** before proceeding (e.g., identifying the father first).
2. **Explicitly targeting** the father's educational background after establishing his identity.
3. **Leveraging precise tool calls** to extract specific data points.
In contrast, the Agent-as-tool-Base's attempt to handle multiple hops simultaneously leads to confusion and incorrect conclusions. This underscores the value of **stepwise, hypothesis-driven reasoning** in complex information retrieval tasks.
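The Base agent's failure mode can be reproduced against the same kind of hypothetical single-hop tool: a compound query that folds both hops into one question matches no stored fact, while the stepwise queries each succeed. The fact keys and `lookup` helper below are illustrative assumptions, not the real system.

```python
from typing import Optional

# Hypothetical store keyed on single-hop facts only (an assumption for illustration).
SINGLE_HOP_FACTS = {
    "father of Edward Dunn (Bishop)": "Andrew Hunter Dunn",
    "alma mater of Andrew Hunter Dunn": "Corpus Christi College, Cambridge",
}

def lookup(query: str) -> Optional[str]:
    """Returns a fact for an exact single-hop query, or None for anything else."""
    return SINGLE_HOP_FACTS.get(query)

# The Base agent's compound, two-hops-at-once query matches no single-hop key:
assert lookup("alma mater of father of Edward Dunn (Bishop)") is None

# The Instruct agent's stepwise queries each hit:
father = lookup("father of Edward Dunn (Bishop)")
print(lookup(f"alma mater of {father}"))
# → Corpus Christi College, Cambridge
```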