2505.24157v3
# Experience-based Knowledge Correction for Robust Planning in Minecraft
footnotetext: Corresponding author: Jungseul Ok <jungseul@postech.ac.kr>
Abstract
Large Language Model (LLM)-based planning has advanced embodied agents in long-horizon environments such as Minecraft, where acquiring latent knowledge of goal (or item) dependencies and feasible actions is critical. However, LLMs often begin with flawed priors and fail to correct them through prompting, even with feedback. We present XENON (eXpErience-based kNOwledge correctioN), an agent that algorithmically revises knowledge from experience, enabling robustness to flawed priors and sparse binary feedback. XENON integrates two mechanisms: Adaptive Dependency Graph, which corrects item dependencies using past successes, and Failure-aware Action Memory, which corrects action knowledge using past failures. Together, these components allow XENON to acquire complex dependencies despite limited guidance. Experiments across multiple Minecraft benchmarks show that XENON outperforms prior agents in both knowledge learning and long-horizon planning. Remarkably, with only a 7B open-weight LLM, XENON surpasses agents that rely on much larger proprietary models. Project page: https://sjlee-me.github.io/XENON
1 Introduction
Large Language Model (LLM)-based planning has advanced the development of embodied AI agents that tackle long-horizon goals in complex, real-world-like environments (Szot et al., 2021; Fan et al., 2022). Among such environments, Minecraft has emerged as a representative testbed for evaluating planning capability (Wang et al., 2023b; c; Zhu et al., 2023; Yuan et al., 2023; Feng et al., 2024; Li et al., 2024b). Success in these environments often depends on the agent acquiring planning knowledge, including the dependencies among goal items and the valid actions needed to obtain them. For instance, to obtain an iron nugget, an agent should first possess an iron ingot, which can only be obtained via the smelt action.
However, LLMs often begin with flawed priors about these dependencies and actions. This issue is critical, since a lack of knowledge for a single goal can invalidate all subsequent plans that depend on it (Guss et al., 2019; Lin et al., 2021; Mao et al., 2022). We find several failure cases stemming from these flawed priors, a problem that is particularly pronounced for the lightweight LLMs suitable for practical embodied agents. First, an LLM often fails to predict planning knowledge accurately enough to generate a successful plan (Figure 1b), resulting in a complete halt in progress toward more challenging goals. Second, an LLM cannot robustly correct its flawed knowledge, even when prompted to self-correct with failure feedback (Shinn et al., 2023; Chen et al., 2024), often repeating the same errors (Figures 1c and 1d). To improve self-correction, one can employ more advanced techniques that leverage detailed reasons for failure (Zhang et al., 2024; Wang et al., 2023a). Nevertheless, LLMs often stubbornly adhere to their erroneous parametric knowledge (i.e., knowledge implicitly stored in model parameters), as evidenced by Stechly et al. (2024) and Du et al. (2024).
Figure 1: An LLM exhibits flawed planning knowledge and fails at self-correction. (b) The dependency graph predicted by Qwen2.5-VL-7B (Bai et al., 2025) contains multiple errors (e.g., missed dependencies, hallucinated items) compared to (a) the ground truth. (c, d) The LLM fails to correct its flawed knowledge about dependencies and actions from failure feedback, often repeating the same errors. See Appendix B for the full prompts and the LLM's self-correction examples.
In response, we propose XENON (eXpErience-based kNOwledge correctioN), an agent that robustly learns planning knowledge from only binary success/failure feedback. To this end, instead of relying on an LLM for correction, XENON algorithmically and directly revises its external knowledge memory using its own experience, which in turn guides its planning. XENON learns this planning knowledge through two synergistic components. The first component, Adaptive Dependency Graph (ADG), revises flawed dependency knowledge by leveraging successful experiences to propose plausible new required items. The second component, Failure-aware Action Memory (FAM), builds and corrects its action knowledge by exploring actions upon failures. In the challenging yet practical setting of using only binary feedback, FAM enables XENON to disambiguate the cause of a failure, distinguishing between flawed dependency knowledge and invalid actions, which in turn triggers a revision in ADG for the former.
Extensive experiments in three Minecraft testbeds show that XENON excels at both knowledge acquisition and planning. XENON outperforms prior agents in learning knowledge, showing unique robustness to LLM hallucinations and modified ground-truth environmental rules. Furthermore, with only a 7B LLM, XENON significantly outperforms prior agents that rely on much larger proprietary models like GPT-4 in solving diverse long-horizon goals. These results suggest that robust algorithmic knowledge management can be a promising direction for developing practical embodied agents with lightweight LLMs (Belcak et al., 2025).
Our contributions are as follows. First, we propose XENON, an LLM-based agent that robustly learns planning knowledge from experience via algorithmic knowledge correction, instead of relying on the LLM to self-correct its own knowledge. We realize this idea through two synergistic mechanisms that explicitly store planning knowledge and correct it: Adaptive Dependency Graph (ADG) for correcting dependency knowledge based on successes, and Failure-aware Action Memory (FAM) for correcting action knowledge and disambiguating failure causes. Second, extensive experiments demonstrate that XENON significantly outperforms prior state-of-the-art agents in both knowledge learning and long-horizon goal planning in Minecraft.
2 Related work
2.1 LLM-based planning in Minecraft
Prior work has often addressed LLMs' flawed planning knowledge in Minecraft using impractical methods, typically by directly injecting knowledge through LLM fine-tuning (Zhao et al., 2023; Feng et al., 2024; Liu et al., 2025; Qin et al., 2024) or by relying on curated expert data (Wang et al., 2023c; Zhu et al., 2023; Wang et al., 2023a).
Another line of work attempts to learn planning knowledge via interaction, by storing the experience of obtaining goal items in an external knowledge memory. However, these approaches are often limited by unrealistic assumptions or lack robust mechanisms to correct the LLM's flawed prior knowledge. For example, ADAM and Optimus-1 artificially simplify the challenge of predicting and learning dependencies via shortcuts like pre-supplied items, while also relying on expert data such as a learning curriculum (Yu and Lu, 2024) or the Minecraft wiki (Li et al., 2024b). They also lack a robust way to correct wrong action choices in a plan: ADAM has none, and Optimus-1 relies on unreliable LLM self-correction. The most similar work to ours, DECKARD (Nottingham et al., 2023), uses an LLM to predict item dependencies but does not revise its predictions for items that repeatedly fail; moreover, when a plan fails, it cannot disambiguate whether the failure is due to incorrect dependencies or incorrect actions. In contrast, our work tackles the more practical challenge of learning planning knowledge and correcting flawed priors from only binary success/failure feedback.
2.2 LLM-based self-correction
LLM self-correction, i.e., having an LLM correct its own outputs, is a promising approach to overcome the limitations of flawed parametric knowledge. However, for complex tasks like planning, LLMs struggle to identify and correct their own errors without external feedback (Huang et al., 2024; Tyen et al., 2024). To improve self-correction, prior works fine-tune LLMs (Yang et al., 2025) or prompt LLMs to correct themselves using environmental feedback (Shinn et al., 2023) and tool-execution results (Gou et al., 2024). While we also use binary success/failure feedback, we directly correct the agent's knowledge in external memory by leveraging experience, rather than fine-tuning the LLM or prompting it to self-correct.
3 Preliminaries
We aim to develop an agent capable of solving long-horizon goals by learning planning knowledge from experience. As a representative environment that necessitates accurate planning knowledge, we consider Minecraft as our testbed. Minecraft is characterized by strict dependencies among game items (Guss et al., 2019; Fan et al., 2022), which can be formally represented as a directed acyclic graph $\mathcal{G}^{*}=(\mathcal{V}^{*},\mathcal{E}^{*})$, where $\mathcal{V}^{*}$ is the set of all items and each edge $(u,q,v)\in\mathcal{E}^{*}$ indicates that $q$ units of item $u$ are required to obtain item $v$. In our actual implementation, each edge also stores the resulting item quantity, but we omit it from the notation for simplicity of presentation, since most edges have a resulting item quantity of 1 and this multiplicity is not essential for learning item dependencies. A goal is to obtain an item $g\in\mathcal{V}^{*}$. To obtain $g$, an agent must possess all of its prerequisites as defined by $\mathcal{G}^{*}$ in its inventory, and perform the valid high-level action in $\mathcal{A}=\{\text{``mine''},\text{``craft''},\text{``smelt''}\}$.
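To make the formalism concrete, the dependency DAG and requirement sets can be sketched as follows; the recipes shown are illustrative Minecraft examples chosen by us, not the paper's data.

```python
# A minimal sketch of the dependency DAG G* = (V*, E*): each item maps
# to its requirement set, i.e., the (required item u, quantity q) pairs
# on its incoming edges. Recipes here are illustrative examples.
DEPENDENCIES = {
    "iron_nugget": [("iron_ingot", 1)],  # e.g., obtained via "craft"
    "iron_ingot": [("iron_ore", 1)],     # e.g., obtained via "smelt"
    "iron_ore": [],                      # basic item: no requirements
}
ACTIONS = {"mine", "craft", "smelt"}     # high-level action set A

def requirement_set(graph, item):
    """R(v, G): the set of incoming edges of `item` in `graph`."""
    return set(graph.get(item, []))
```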
Framework: Hierarchical agent with graph-augmented planning
We employ a hierarchical agent with an LLM planner and a low-level controller, adopting a graph-augmented planning strategy (Li et al., 2024b; Nottingham et al., 2023). In this strategy, the agent maintains its knowledge graph $\hat{\mathcal{G}}$ and plans with it to decompose a goal $g$ into subgoals in two stages. First, the agent identifies prerequisite items it does not possess by traversing $\hat{\mathcal{G}}$ backward from $g$ to nodes with no incoming edges (i.e., basic items with no known requirements), and aggregates them into a list of (quantity, item) tuples, $((q_{1},u_{1}),...,(q_{L_{g}},u_{L_{g}})=(1,g))$. Second, the planner LLM converts this list into executable language subgoals $\{(a_{l},q_{l},u_{l})\}_{l=1}^{L_{g}}$, taking each $u_{l}$ as input and outputting a high-level action $a_{l}$ to obtain $u_{l}$. The controller then executes each subgoal, i.e., it takes each language subgoal as input and outputs a sequence of low-level actions in the environment to achieve it. After each subgoal execution, the agent receives only binary success/failure feedback.
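The first stage above, backward traversal of the learned graph into a (quantity, item) list, can be sketched as follows. Function and variable names are ours, and the sketch simplifies inventory accounting (it does not model items consumed by one branch being unavailable to another).

```python
from collections import defaultdict

def missing_prerequisites(graph, inventory, goal):
    """Traverse `graph` backward from `goal`, aggregating the quantities
    of items the agent still needs, prerequisites first.

    `graph` maps each item to its requirement set, a list of
    (required_item, quantity) pairs (basic items map to an empty list).
    `inventory` maps item -> count on hand. Returns a list of
    (quantity, item) tuples ending with (1, goal).
    """
    needed = defaultdict(int)   # item -> total quantity still required
    order = []                  # post-order: prerequisites before goal

    def visit(item, qty):
        shortfall = max(qty - inventory.get(item, 0), 0)
        if shortfall == 0:
            return
        # Recurse into requirements before recording this item.
        for req_item, req_qty in graph.get(item, []):
            visit(req_item, req_qty * shortfall)
        if item not in needed:
            order.append(item)
        needed[item] += shortfall

    visit(goal, 1)
    return [(needed[i], i) for i in order]
```

For example, with `graph = {"stick": [("planks", 2)], "planks": [("log", 1)], "log": []}` and an empty inventory, the goal "stick" yields logs and planks before the stick itself.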
Problem formulation: Dependency and action learning
To plan correctly, the agent must acquire knowledge of the true dependency graph $\mathcal{G}^{*}$. However, $\mathcal{G}^{*}$ is latent, making it necessary for the agent to learn this structure from experience. We model this as revising a learned graph, $\hat{\mathcal{G}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})$, where $\hat{\mathcal{V}}$ contains known items and $\hat{\mathcal{E}}$ represents the agent's current belief about item dependencies. Following Nottingham et al. (2023), whenever the agent obtains a new item $v$, it identifies the experienced requirement set $\mathcal{R}_{\text{exp}}(v)$, the set of (item, quantity) pairs consumed during this item acquisition. The agent then updates $\hat{\mathcal{G}}$ by replacing all existing incoming edges to $v$ with the newly observed $\mathcal{R}_{\text{exp}}(v)$. The detailed update procedure is in Appendix C.
We aim to maximize the accuracy of the learned graph $\hat{\mathcal{G}}$ against the true graph $\mathcal{G}^{*}$. We define this accuracy $N_{\text{true}}(\hat{\mathcal{G}})$ as the number of items whose incoming edges are identical in $\hat{\mathcal{G}}$ and $\mathcal{G}^{*}$, i.e.,
$$
N_{\text{true}}(\hat{\mathcal{G}})\coloneqq\sum_{v\in\mathcal{V}^{*}}\mathbb{I}\big(\mathcal{R}(v,\hat{\mathcal{G}})=\mathcal{R}(v,\mathcal{G}^{*})\big)\ , \tag{1}
$$
where the dependency set, $\mathcal{R}(v,\mathcal{G})$ , denotes the set of all incoming edges to the item $v$ in the graph $\mathcal{G}$ .
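Eq. (1) amounts to counting exact matches of incoming-edge sets; a minimal sketch, assuming the same dictionary-of-requirement-sets representation used above:

```python
def n_true(learned_graph, true_graph):
    """Eq. (1): the number of items whose full incoming-edge sets match
    between the learned graph and the true graph. Both graphs map each
    item to its requirement set of (required_item, quantity) pairs;
    items missing from the learned graph count as having no edges.
    """
    return sum(
        1
        for item in true_graph
        if set(learned_graph.get(item, ())) == set(true_graph[item])
    )
```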
4 Methods
XENON is an LLM-based agent with two core components: Adaptive Dependency Graph (ADG) and Failure-aware Action Memory (FAM), as shown in Figure 2. ADG manages dependency knowledge, while FAM manages action knowledge. The agent learns this knowledge in a loop that starts by selecting an unobtained item as an exploratory goal (detailed in Appendix G). Once an item goal $g$ is selected, the agent traverses ADG, its learned dependency graph $\hat{\mathcal{G}}$, to construct $((q_{1},u_{1}),...,(q_{L_{g}},u_{L_{g}})=(1,g))$. For each $u_{l}$ in this list, FAM either reuses a previously successful action for $u_{l}$ or, if none exists, the planner LLM selects a high-level action $a_{l}\in\mathcal{A}$ given $u_{l}$ and action histories from FAM. The resulting actions form language subgoals $\{(a_{l},q_{l},u_{l})\}_{l=1}^{L_{g}}$. The controller then takes each subgoal as input, executes a sequence of low-level actions to achieve it, and returns binary success/failure feedback, which is used to update both ADG and FAM. The full procedure is outlined in Algorithm 1 in Appendix D. We next detail each component, beginning with ADG.
Figure 2: Overview. XENON updates Adaptive Dependency Graph and Failure-aware Action Memory with environmental experiences.
4.1 Adaptive Dependency Graph (ADG)
Dependency graph initialization
To make the most of the LLM's prior knowledge, albeit incomplete, we initialize the learned dependency graph $\hat{\mathcal{G}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})$ using an LLM. We follow the initialization process of DECKARD (Nottingham et al., 2023), which consists of two steps. First, $\hat{\mathcal{V}}$ is initialized to $\mathcal{V}_{0}$, the set of goal items whose dependencies must be learned, and $\hat{\mathcal{E}}$ to $\emptyset$. Second, for each item $v$ in $\hat{\mathcal{V}}$, the LLM is prompted to predict its requirement set (i.e., the incoming edges of $v$), and these predictions are aggregated to construct the initial graph.
However, those LLM-predicted requirement sets often include items not present in the initial set $\mathcal{V}_{0}$, a phenomenon overlooked by DECKARD. Since $\mathcal{V}_{0}$ may be an incomplete subset of all possible game items $\mathcal{V}^{*}$, we cannot determine whether such items are genuine required items or hallucinated items that do not exist in the environment. To address this, we provisionally accept all LLM-predicted requirement sets. We iteratively expand the graph by adding any newly mentioned item to $\hat{\mathcal{V}}$ and, in turn, querying the LLM for its own requirement set. This expansion continues until a requirement set has been predicted for every item in $\hat{\mathcal{V}}$. Since we assume that the true graph $\mathcal{G}^{*}$ is a DAG, we algorithmically prevent cycles in $\hat{\mathcal{G}}$; see Section E.2 for the cycle-check procedure. The quality of this initial LLM-predicted graph is analyzed in detail in Appendix K.1.
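The expansion loop with cycle prevention can be sketched as follows. Here `predict_requirements(item)` stands in for the LLM call and returns a list of (required_item, quantity) pairs; both it and the simple reachability check are our assumptions, not the paper's Section E.2 procedure.

```python
def requires_transitively(graph, start, target):
    """True if `target` appears among `start`'s transitive requirements."""
    stack, seen = [start], set()
    while stack:
        for req, _ in graph.get(stack.pop(), []):
            if req == target:
                return True
            if req not in seen:
                seen.add(req)
                stack.append(req)
    return False

def initialize_graph(initial_items, predict_requirements):
    """Provisionally accept predicted requirement sets, expanding the
    graph until every known item has one, while rejecting any edge
    that would close a cycle."""
    graph = {}                       # item -> list of (req_item, qty)
    frontier = list(initial_items)   # items still awaiting a prediction
    while frontier:
        item = frontier.pop()
        if item in graph:
            continue
        accepted = []
        for req_item, qty in predict_requirements(item):
            # Skip edges that would create a cycle: req_item must not
            # itself (transitively) require `item`.
            if requires_transitively(graph, req_item, item):
                continue
            accepted.append((req_item, qty))
            if req_item not in graph:
                frontier.append(req_item)  # query its requirements too
        graph[item] = accepted
    return graph
```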
Dependency graph revision
Correcting the agent's flawed dependency knowledge involves two challenges: (1) detecting and handling hallucinated items introduced during graph initialization, and (2) proposing a new requirement set. Simply prompting an LLM for corrections is ineffective, as it often predicts a new, equally flawed requirement set, as shown in Figures 1c and 1d. Therefore, we revise $\hat{\mathcal{G}}$ algorithmically using the agent's experiences, without relying on the LLM.
To implement this, we introduce a dependency revision procedure called RevisionByAnalogy and a revision count $C(v)$ for each item $v\in\hat{\mathcal{V}}$. The procedure takes as inputs the item $v$ whose dependency needs to be revised, its revision count $C(v)$, and the current graph $\hat{\mathcal{G}}$, and outputs a revised graph by leveraging the required items of previously obtained items. When a revision for an item $v$ is triggered by FAM (Section 4.2), the procedure first discards $v$'s existing requirement set (i.e., $\mathcal{R}(v,\hat{\mathcal{G}})\leftarrow\emptyset$) and increments the revision count $C(v)$. Based on whether $C(v)$ exceeds a hyperparameter $c_{0}$, RevisionByAnalogy proceeds with one of the following two cases:
- Case 1: Handling potentially hallucinated items ($C(v)>c_{0}$). If an item $v$ remains unobtainable after excessive revisions, the procedure flags it as inadmissible to signify that it may be a hallucinated item. This reveals a critical problem: if $v$ is indeed a hallucinated item, any of its descendants in $\hat{\mathcal{G}}$ become permanently unobtainable. To enable XENON to try these descendant items through alternative paths, we recursively call RevisionByAnalogy for all of $v$'s descendants in $\hat{\mathcal{G}}$, removing their dependency on the inadmissible item $v$ (Figure 4a, Case 1). Finally, to account for cases where $v$ may be a genuine item that is simply difficult to obtain, its requirement set $\mathcal{R}(v,\hat{\mathcal{G}})$ is reset to a general set of all resource items (i.e., items previously consumed for crafting other items), each with quantity given by the hyperparameter $\alpha_{i}$.
- Case 2: Plausible revision for less-tried items ($C(v)\leq c_{0}$). The item $v$'s requirement set, $\mathcal{R}(v,\hat{\mathcal{G}})$, is revised to determine both a plausible set of new items and their quantities. First, for plausible required items, we use the idea that similar goals often share similar preconditions (Yoon et al., 2024): we set the new required items by referencing the required items of the top-$K$ most similar, successfully obtained items (Figure 4a, Case 2). We compute item similarity as the cosine similarity between the Sentence-BERT (Reimers and Gurevych, 2019) embeddings of item names. Second, to determine quantities, the agent must balance a trade-off between amounts sufficient to avoid failures and an imperfect controller's difficulty in acquiring them. Therefore, the quantities of the new required items are scaled gradually with the revision count, as $\alpha_{s}C(v)$.
Here, the hyperparameter $c_{0}$ is the revision-count threshold for flagging an item as inadmissible. $\alpha_{i}$ and $\alpha_{s}$ control the quantity of each required item for inadmissible items (Case 1) and less-tried items (Case 2), respectively, to maintain robustness when dealing with an imperfect controller. $K$ determines the number of similar, successfully obtained items to reference in Case 2. Detailed pseudocode of RevisionByAnalogy is in Algorithm 3, Section E.3.
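The two cases can be sketched as follows. The default hyperparameter values, the `difflib` string similarity (a stand-in for Sentence-BERT cosine similarity over item-name embeddings), and the simplified Case-1 handling (direct edge removal rather than recursive calls on descendants) are our illustrative assumptions.

```python
import difflib

def revision_by_analogy(graph, v, counts, obtained,
                        c0=3, alpha_i=1, alpha_s=2, k=3):
    """Sketch of RevisionByAnalogy. `graph` maps item -> list of
    (required_item, quantity); `counts` maps item -> revision count
    C(item); `obtained` is the set of successfully obtained items.
    """
    graph[v] = []                      # discard the flawed requirement set
    counts[v] = counts.get(v, 0) + 1   # increment C(v)
    if counts[v] > c0:
        # Case 1: flag v as inadmissible, detach it from its
        # descendants, and fall back to a generic requirement set of
        # known resource items, each with quantity alpha_i.
        for child in graph:
            graph[child] = [(u, q) for u, q in graph[child] if u != v]
        resources = sorted({u for reqs in graph.values() for u, _ in reqs})
        graph[v] = [(u, alpha_i) for u in resources]
        return "inadmissible"
    # Case 2: borrow required items from the top-k most similar
    # obtained items, with quantities scaled as alpha_s * C(v).
    ranked = sorted(
        obtained,
        key=lambda o: difflib.SequenceMatcher(None, o, v).ratio(),
        reverse=True,
    )[:k]
    new_items = sorted({u for o in ranked for u, _ in graph.get(o, [])})
    graph[v] = [(u, alpha_s * counts[v]) for u in new_items]
    return "revised"
```

For instance, revising an unobtained "birch_planks" whose only similar obtained item is "oak_planks" (requiring one "oak_log") borrows that requirement with a scaled quantity.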
<details>
<summary>x6.png Details</summary>

### Visual Description
## Diagram: Dependency and Action Correction Framework
### Overview
The image depicts a two-part technical framework for correcting dependencies and actions in a system, likely related to game mechanics or automated processes. It combines dependency resolution (ADG) and failure analysis (FAM) with visual workflows and decision logic.
---
### Components/Axes
#### Part (a): Dependency Correction for ADG
- **Case 1 (ADG)**:
- **Components**:
- "Descendant (Leaf)" (green box with leaf icon)
- "Descendant" (green box with tool icon)
- "Hallucinated item" (red box with bug icon)
- **Flow**:
- Arrows indicate recursive calls to `RevisionByAnalogy`.
- Hallucinated item triggers dependency correction.
- **Case 2 (ADG)**:
- **Components**:
- "Search similar, obtained items" (green checkmarks)
- "Replace the wrong dependency" (red X and green checkmarks)
- **Flow**:
- Invalid dependencies (red X) are replaced with valid ones (green checkmarks).
#### Part (b): Action Correction for FAM
- **Components**:
- **Prompt**: "Select an action for: mine, craft, smelt..."
- **Failure Analysis (FAM)**:
- Failure counts:
- "mine": 2 (red highlight)
- "craft": 1
- "smelt": 0
- **Subgoal**: "craft" (highlighted in yellow).
- **Flow**:
- Invalid actions (e.g., "mine") are removed.
- System suggests trying under-explored actions (e.g., "craft").
---
### Detailed Analysis
#### Part (a): Dependency Correction
- **Case 1**:
- Recursive dependency resolution (`RevisionByAnalogy`) addresses hallucinated items.
- Hallucinated items (red) are flagged for correction.
- **Case 2**:
- Similar items are searched (green checkmarks) to replace invalid dependencies.
- Visual contrast between red X (invalid) and green checkmarks (valid).
#### Part (b): Action Correction
- **Failure Analysis (FAM)**:
- "mine" has the highest failure count (2), marked as invalid.
- "craft" is prioritized as the subgoal (yellow highlight).
- **Action Selection**:
- System iteratively selects actions (mine, craft, smelt) and removes invalid ones.
- Final step: "Try under-explored action" (craft).
---
### Key Observations
1. **Dependency Correction**:
- Hallucinated items and invalid dependencies are resolved through recursive analysis and replacement.
2. **Action Correction**:
- Failure counts directly influence action prioritization (e.g., "mine" is deprioritized).
- Subgoal alignment ("craft") suggests adaptive decision-making.
---
### Interpretation
The framework combines **dependency resolution** (ADG) and **failure-driven action correction** (FAM) to optimize system behavior.
- **ADG** focuses on structural integrity (e.g., fixing invalid item dependencies).
- **FAM** uses failure metrics to guide action selection, favoring under-explored or high-priority subgoals.
- The use of color coding (red for errors, green for valid steps) and hierarchical flowcharts emphasizes a systematic, reactive approach to errors.
- The subgoal "craft" being prioritized despite lower failure counts suggests a strategic bias toward resource generation or crafting in the system's objectives.
This framework likely applies to scenarios requiring robust error handling, such as game AI, automated workflows, or dependency management systems.
</details>
Figure 4: XENON's algorithmic knowledge correction. (a) Dependency Correction via RevisionByAnalogy. Case 1: For an inadmissible item (e.g., a hallucinated item), its descendants are recursively revised to remove the flawed dependency. Case 2: A flawed requirement set is revised by referencing similar, obtained items. (b) Action Correction via FAM. FAM prunes invalid actions from the LLM's prompt based on failures, guiding it to select an under-explored action.
4.2 Failure-aware Action Memory (FAM)
FAM is designed to address two challenges of learning only from binary success/failure feedback: (1) discovering valid high-level actions for each item, and (2) disambiguating the cause of persistent failures between invalid actions and flawed dependency knowledge. This section first describes FAM's core mechanism, and then details how it addresses each of these challenges in turn.
Core mechanism: empirical action classification
FAM classifies actions as either empirically valid or empirically invalid for each item, based on the history of past subgoal outcomes. Specifically, for each item $v \in \hat{\mathcal{V}}$ and action $a \in \mathcal{A}$, FAM maintains the number of successful and failed outcomes, denoted as $S(a,v)$ and $F(a,v)$ respectively. Based on these counts, an action $a$ is classified as empirically invalid for $v$ if it has failed repeatedly (i.e., $F(a,v) \geq S(a,v)+x_{0}$); otherwise, it is classified as empirically valid if it has succeeded at least once (i.e., $S(a,v)>0$ and $S(a,v)>F(a,v)-x_{0}$). The hyperparameter $x_{0}$ controls the tolerance of this classification, accounting for the possibility that an imperfect controller may fail even when executing a genuinely valid action.
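The classification rule above can be sketched as follows; this is a minimal illustration with hypothetical class and method names, not the paper's implementation, and the default value of $x_{0}$ is an assumption.

```python
# Sketch of FAM's empirical action classification (hypothetical names).
# For each (action, item) pair, success/failure counts decide the label:
#   invalid if F >= S + x0; valid if S > 0 and S > F - x0; otherwise unknown.
from collections import defaultdict

class FAM:
    def __init__(self, x0=2):  # x0: tolerance hyperparameter (value assumed)
        self.x0 = x0
        self.S = defaultdict(int)  # S[(action, item)]: success count
        self.F = defaultdict(int)  # F[(action, item)]: failure count

    def record(self, action, item, success):
        if success:
            self.S[(action, item)] += 1
        else:
            self.F[(action, item)] += 1

    def classify(self, action, item):
        s, f = self.S[(action, item)], self.F[(action, item)]
        if f >= s + self.x0:
            return "invalid"
        if s > 0 and s > f - self.x0:
            return "valid"
        return "unknown"  # not enough evidence yet
```

The tolerance $x_{0}$ shifts the decision boundary: larger values demand more failures before an action is written off, which matters when the low-level controller itself is unreliable.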
Addressing challenge 1: discovering valid actions
FAM helps XENON discover valid actions by avoiding repeatedly failed actions when constructing a subgoal $sg_{l}=(a_{l},q_{l},u_{l})$. Only when FAM has no empirically valid action for $u_{l}$ does XENON query the LLM to select an under-explored action for $sg_{l}$. To accelerate this search, we query the LLM with (i) the current subgoal item $u_{l}$, (ii) the empirically valid actions of the top-$K$ similar items successfully obtained and stored in FAM (using Sentence-BERT similarity as in Section 4.1), and (iii) the candidate actions for $u_{l}$ that remain after removing all empirically invalid actions from $\mathcal{A}$ (Figure 4b). We prune action candidates rather than include the full failure history because LLMs struggle to utilize long prompts effectively (Li et al., 2024a; Liu et al., 2024). If FAM already has an empirically valid action, XENON reuses it to construct $sg_{l}$ without querying the LLM. Detailed procedures and prompts are in Appendix F.
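A minimal sketch of this selection logic, with hypothetical names and a stub in place of the LLM query; the real system ranks similar items with Sentence-BERT, which is omitted here.

```python
# Sketch of FAM-guided action selection for one subgoal item (names hypothetical).
# `history` maps action -> (successes, failures); x0 is the tolerance hyperparameter.

ACTIONS = ("mine", "craft", "smelt")  # high-level action set A (assumed)

def label(s, f, x0):
    if f >= s + x0:
        return "invalid"
    return "valid" if (s > 0 and s > f - x0) else "unknown"

def select_action(history, similar_valid, query_llm, x0=2):
    # Reuse an empirically valid action without querying the LLM.
    for a in ACTIONS:
        s, f = history.get(a, (0, 0))
        if label(s, f, x0) == "valid":
            return a
    # Otherwise prompt the LLM with pruned candidates (empirically invalid
    # actions removed) plus valid actions of similar, obtained items.
    candidates = [a for a in ACTIONS
                  if label(*history.get(a, (0, 0)), x0) != "invalid"]
    return query_llm(candidates, similar_valid)
```

The pruning step is the point: the LLM never sees actions that have already failed past the tolerance, so its search is steered toward the under-explored remainder.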
Addressing challenge 2: disambiguating failure causes
By ensuring systematic action exploration, FAM allows XENON to determine that persistent subgoal failures stem from flawed dependency knowledge rather than from the actions themselves. Specifically, once FAM classifies all actions in $\mathcal{A}$ for an item as empirically invalid, XENON concludes that the error lies within the ADG and triggers its revision. Subsequently, XENON resets the item's history in FAM to allow a fresh exploration of actions under the revised ADG.
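The trigger condition can be expressed in a few lines; again a hedged sketch with hypothetical names, assuming the same success/failure-count representation as above.

```python
# Sketch of failure-cause disambiguation (hypothetical names): once every
# action is empirically invalid for an item, blame the dependency graph (ADG),
# trigger its revision, and reset the item's FAM history for fresh exploration.

ACTIONS = ("mine", "craft", "smelt")

def is_invalid(s, f, x0=2):
    return f >= s + x0

def maybe_revise_adg(history, revise_adg, item, x0=2):
    if all(is_invalid(*history.get(a, (0, 0)), x0) for a in ACTIONS):
        revise_adg(item)   # error must lie in the dependency knowledge
        history.clear()    # re-explore actions under the revised ADG
        return True
    return False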
4.3 Additional technique: context-aware reprompting (CRe) for controller
In real-world-like environments, an imperfect controller can stall (e.g., in deep water). To address this, XENON employs context-aware reprompting (CRe), where an LLM uses the current image observation and the controller's language subgoal to decide whether to replace the subgoal and, if so, to propose a new temporary subgoal that escapes the stalled state (e.g., "get out of the water"). Our CRe is adapted from Optimus-1 (Li et al., 2024b) to suit smaller LLMs, with two differences: (1) a two-stage reasoning process that first captions the observation and then makes a text-only decision on whether to replace the subgoal, and (2) a conditional trigger that activates only when the subgoal for item acquisition makes no progress, rather than at fixed intervals. See Appendix H for details.
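The two differences above can be sketched together as one step of the reprompting loop; function and parameter names are hypothetical, and both model calls are represented as injected callables rather than real APIs.

```python
# Sketch of CRe's conditional trigger and two-stage reasoning (names hypothetical).
# Stage 1 captions the image observation; stage 2 makes a text-only decision,
# and the whole check fires only when the item-acquisition subgoal stalls.

def cre_step(inventory_count, last_count, subgoal, caption_fn, decide_fn, obs):
    if inventory_count > last_count:       # progress was made: keep the subgoal
        return subgoal
    caption = caption_fn(obs)              # stage 1: vision -> text caption
    new_sg = decide_fn(caption, subgoal)   # stage 2: text-only replace/keep decision
    return new_sg or subgoal               # falsy result means keep the subgoal
```

Gating on progress rather than a fixed interval keeps the vision-language calls rare, which matters when the planner is a small model.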
5 Experiments
5.1 Setups
Environments
We conduct experiments in three Minecraft environments, which we separate into two categories based on their controller capacity. First, as realistic, visually-rich embodied AI environments, we use MineRL (Guss et al., 2019) and Mineflayer (PrismarineJS, 2023) with imperfect low-level controllers: STEVE-1 (Lifshitz et al., 2023) in MineRL and hand-crafted code (Yu and Lu, 2024) in Mineflayer. Second, we use MC-TextWorld (Zheng et al., 2025) as a controlled testbed with a perfect controller. Each experiment in this environment is repeated over 15 runs; in our results, we report the mean and standard deviation, omitting the latter when it is negligible. In all environments, the agent starts with an empty inventory. Further details on environments are provided in Appendix J. Additional experiments in a household task planning domain other than Minecraft are reported in Appendix A, where XENON also exhibits robust performance.
Table 1: Comparison of knowledge correction mechanisms across agents. ✓: Our proposed mechanism (XENON), $\triangle$: LLM self-correction, ✗: No correction, –: Not applicable.
| Agent | Dependency Correction | Action Correction |
| --- | --- | --- |
| XENON | ✓ | ✓ |
| SC | $\triangle$ | $\triangle$ |
| DECKARD | ✗ | ✗ |
| ADAM | – | ✗ |
| RAND | ✗ | – |
Evaluation metrics
For both dependency learning and planning evaluations, we utilize the 67 goals from 7 groups proposed in the long-horizon task benchmark (Li et al., 2024b). To evaluate dependency learning with an intuitive performance score between 0 and 1, we report $N_{\text{true}}(\hat{\mathcal{G}})/67$, where $N_{\text{true}}(\hat{\mathcal{G}})$ is defined in Equation 1. We refer to this normalized score as Experienced Graph Accuracy (EGA). To evaluate planning performance, we follow the benchmark setting (Li et al., 2024b): at the beginning of each episode, a goal item is specified externally for the agent, and we measure the average success rate (SR) of obtaining this goal item in MineRL. See Table 10 for the full list of goals.
Implementation details
For the planner, we use Qwen2.5-VL-7B (Bai et al., 2025). The learned dependency graph is initialized with human-written plans for three goals ("craft an iron sword
<details>
<summary>x7.png Details</summary>

### Visual Description
Icon/Small Image (20x20)
</details>
", "craft a golden sword
<details>
<summary>x8.png Details</summary>

### Visual Description
Icon/Small Image (20x20)
</details>
", "mine a diamond
<details>
<summary>x9.png Details</summary>

### Visual Description
Icon/Small Image (20x20)
</details>
"), providing minimal knowledge; the agent must learn dependencies for over 80% of goal items through experience. We employ CRe only for long-horizon goal planning in MineRL. All hyperparameters are kept consistent across experiments. Further details on hyperparameters and human-written plans are in Appendix I.
Baselines
As no prior work learns dependencies in our exact setting, we adapt four baselines, whose knowledge correction mechanisms are summarized in Table 1. For dependency knowledge, (1) LLM Self-Correction (SC) starts with an LLM-predicted dependency graph and prompts the LLM to revise it upon failures; (2) DECKARD (Nottingham et al., 2023) also relies on an LLM-predicted graph but has no correction mechanism; (3) ADAM (Yu and Lu, 2024) assumes that any goal item requires all previously used resource items, each in a sufficient quantity; and (4) RAND, the simplest baseline, uses a static graph similar to DECKARD. Regarding action knowledge, all baselines except for RAND store successful actions. However, only the SC baseline attempts to correct its flawed knowledge upon failures. SC prompts the LLM to revise both its dependency and action knowledge using previous LLM predictions and interaction trajectories, as done in many self-correction methods (Shinn et al., 2023; Stechly et al., 2024). See Appendix B for the prompts of SC and Section J.1 for detailed descriptions of these baselines. To evaluate planning on diverse long-horizon goals, we further compare XENON with recent planning agents that are provided with oracle dependencies: DEPS (Wang et al., 2023b), Jarvis-1 (Wang et al., 2023c), Optimus-1 (Li et al., 2024b), and Optimus-2 (Li et al., 2025b).
5.2 Robust dependency learning against flawed prior knowledge
<details>
<summary>x10.png Details</summary>

### Visual Description
## Line Chart: Algorithm Performance Over Episodes
### Overview
The image is a line chart comparing the performance of five algorithms (XENON, SC, DECKARD, ADAM, RAND) across 400 episodes, measured by EGA (Experienced Graph Accuracy). The y-axis ranges from 0.0 to 1.0, and the x-axis spans 0 to 400 episodes. The legend is positioned on the right, with distinct colors for each algorithm.
### Components/Axes
- **X-Axis**: Labeled "Episode," with increments at 0, 100, 200, 300, and 400.
- **Y-Axis**: Labeled "EGA," scaled from 0.0 to 1.0 in 0.2 increments.
- **Legend**: Located on the right, with the following mappings:
- **Blue**: XENON
- **Pink**: SC
- **Green**: DECKARD
- **Orange**: ADAM
- **Gray**: RAND
### Detailed Analysis
1. **XENON (Blue)**:
- Starts at ~0.15 EGA at 0 episodes.
- Increases steadily, reaching ~0.65 EGA by 400 episodes.
- Slope: Consistent upward trend with no plateaus.
2. **SC (Pink)**:
- Begins at ~0.15 EGA, rising to ~0.38 EGA by 100 episodes.
- Plateaus between 100 and 400 episodes (~0.38â0.42 EGA).
3. **DECKARD (Green)**:
- Starts at ~0.15 EGA, peaks at ~0.42 EGA by 100 episodes.
- Dips slightly (~0.38 EGA) by 200 episodes, then stabilizes.
4. **ADAM (Orange)**:
- Remains flat at ~0.15 EGA across all episodes.
5. **RAND (Gray)**:
- Starts at ~0.15 EGA, slightly increases to ~0.17 EGA by 400 episodes.
- Minimal upward trend compared to others.
### Key Observations
- **XENON** demonstrates the highest and most consistent growth in EGA.
- **SC** and **DECKARD** show mid-range performance, with SC plateauing earlier than DECKARD.
- **ADAM** and **RAND** exhibit negligible improvement, with RAND slightly outperforming ADAM by ~0.02 EGA at 400 episodes.
- **Crossing Point**: SC and DECKARD lines intersect near 100 episodes, with DECKARD briefly outperforming SC before both plateau.
### Interpretation
The data suggests **XENON** is the most effective algorithm for maximizing EGA over time, with a clear linear improvement. **SC** and **DECKARD** may employ different strategies, as evidenced by their divergent trajectories (SC's early plateau vs. DECKARD's delayed dip). **ADAM** and **RAND** likely represent baseline or random performance, with RAND showing marginally better results than ADAM. The lack of improvement in ADAM and RAND implies they may not adapt to increasing episode complexity. The crossing of SC and DECKARD lines highlights potential trade-offs in their design, warranting further investigation into their underlying mechanisms.
</details>
(a) MineRL
<details>
<summary>x11.png Details</summary>

### Visual Description
## Line Chart: EGA Performance Across Episodes
### Overview
The image is a line chart depicting the performance of five distinct entities (labeled as Blue Line, Orange Line, Green Line, Pink Line, and Dark Blue Line) across episodes. The y-axis represents "EGA" (a metric ranging from 0.0 to 1.0), while the x-axis represents "Episode" (from 000 to 400). Each line shows a unique trend, with some lines plateauing after an initial increase.
### Components/Axes
- **Y-axis (EGA)**: Labeled "EGA" with a scale from 0.0 to 1.0 in increments of 0.2.
- **X-axis (Episode)**: Labeled "Episode" with a scale from 000 to 400 in increments of 100.
- **Legend**: Positioned on the right side of the chart, with five entries:
- **Blue Line**: Blue color, circular markers.
- **Orange Line**: Orange color, square markers.
- **Green Line**: Green color, diamond markers.
- **Pink Line**: Pink color, triangle markers.
- **Dark Blue Line**: Dark blue color, square markers.
### Detailed Analysis
1. **Blue Line**:
- Starts at 0.0 (Episode 000).
- Rises sharply to approximately 0.9 by Episode 100.
- Plateaus at ~0.9 from Episode 100 to 400.
- **Trend**: Steep upward slope followed by a flat line.
2. **Orange Line**:
- Starts at 0.0 (Episode 000).
- Rises to ~0.65 by Episode 100.
- Remains flat at ~0.65 from Episode 100 to 400.
- **Trend**: Moderate upward slope followed by a plateau.
3. **Green Line**:
- Starts at 0.0 (Episode 000).
- Gradually increases to ~0.45 by Episode 300.
- Plateaus at ~0.45 from Episode 300 to 400.
- **Trend**: Slow upward slope followed by a plateau.
4. **Pink Line**:
- Starts at 0.0 (Episode 000).
- Rises to ~0.4 by Episode 300.
- Plateaus at ~0.4 from Episode 300 to 400.
- **Trend**: Gradual upward slope followed by a plateau.
5. **Dark Blue Line**:
- Starts at 0.0 (Episode 000).
- Increases to ~0.2 by Episode 300.
- Plateaus at ~0.2 from Episode 300 to 400.
- **Trend**: Slow upward slope followed by a plateau.
### Key Observations
- **Blue Line** consistently achieves the highest EGA, reaching ~0.9 and maintaining it.
- **Orange Line** is the second-highest, peaking at ~0.65.
- **Green Line** and **Pink Line** show similar trends but with lower EGA values (~0.45 and ~0.4, respectively).
- **Dark Blue Line** has the lowest EGA, peaking at ~0.2.
- All lines plateau after their initial rise, suggesting stabilization of EGA over time.
### Interpretation
The chart demonstrates that the **Blue Line** (likely representing a specific entity or strategy) is the most effective, achieving the highest EGA and maintaining it across episodes. The **Orange Line** follows as the second-most effective, while the **Green**, **Pink**, and **Dark Blue Lines** show progressively lower performance. The plateauing trends indicate that EGA stabilizes after a certain number of episodes, suggesting diminishing returns or convergence in performance. The sharp rise of the Blue Line implies a rapid improvement in effectiveness early on, which is not observed in the other lines. This could reflect differences in initial conditions, strategies, or inherent capabilities of the entities being measured.
</details>
(b) Mineflayer
Figure 5: Robustness against flawed prior knowledge. EGA over 400 episodes in (a) MineRL and (b) Mineflayer. XENON consistently outperforms the baselines.
Table 2: Robustness to LLM hallucinations. The number of correctly learned dependencies of items that are descendants of a hallucinated item in the initial LLM-predicted dependency graph (out of 12).
| Agent | Learned descendants of hallucinated items |
| --- | --- |
| XENON | 0.33 |
| SC | 0 |
| ADAM | 0 |
| DECKARD | 0 |
| RAND | 0 |
XENON demonstrates robust dependency learning from flawed prior knowledge, consistently outperforming baselines with an EGA of approximately 0.6 in MineRL and 0.9 in Mineflayer (Figure 5), despite the challenging setting with imperfect controllers. This superior performance is driven by its algorithmic correction mechanism, RevisionByAnalogy, which corrects flawed dependency knowledge while also accommodating imperfect controllers by gradually scaling required item quantities. The robustness of this algorithmic correction is particularly evident in two key analyses of the learned graph for each agent from the MineRL experiments. First, as shown in Table 2, XENON is uniquely robust to LLM hallucinations, learning dependencies for descendant items of non-existent, hallucinated items in the initial LLM-predicted graph. Second, XENON outperforms the baselines in learning dependencies for items that are unobtainable by the initial graph, as shown in Table 13.
Our results demonstrate the unreliability of relying on LLM self-correction or blindly trusting an LLM's flawed knowledge; in practice, SC achieves the same EGA as DECKARD, with both plateauing around 0.4 in both environments.
We observe that controller capacity strongly impacts dependency learning. This is evident in ADAM, whose EGA differs markedly between MineRL (≈0.1), which has a limited controller, and Mineflayer (≈0.6), which has a more competent controller. While ADAM unrealistically assumes a controller can gather large quantities of all resource items before attempting a new item, MineRL's controller STEVE-1 (Lifshitz et al., 2023) cannot execute this demanding strategy, causing ADAM's EGA to fall below even the simplest baseline, RAND. Controller capacity also accounts for XENON's lower EGA in MineRL. For instance, XENON learns none of the dependencies of the Redstone group items, as STEVE-1 cannot execute XENON's strategy for inadmissible items (Section 4.1). In contrast, the more capable Mineflayer controller executes this strategy successfully, allowing XENON to learn the correct dependencies for 5 of 6 Redstone items. This difference highlights the critical role of controllers for dependency learning, as detailed in our analysis in Section K.3.
5.3 Effective planning to solve diverse goals
Table 3: Performance on long-horizon task benchmark. Average success rate of each group on the long-horizon task benchmark (Li et al., 2024b) in MineRL. Oracle indicates that the true dependency graph is known in advance; Learned indicates that the graph is learned from experience across 400 episodes. For fair comparison across LLMs, we include Optimus-1†, our reproduction of Optimus-1 using Qwen2.5-VL-7B. Due to resource limits, results for DEPS, Jarvis-1, Optimus-1, and Optimus-2 are cited directly from Li et al. (2025b). See Section K.12 for the success rate on each goal.
| Method | Dependency | Planner LLM | Overall | Wood | Stone | Iron | Diamond | Gold | Armor | Redstone |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DEPS | - | Codex | 0.22 | 0.77 | 0.48 | 0.16 | 0.01 | 0.00 | 0.10 | 0.00 |
| Jarvis-1 | Oracle | GPT-4 | 0.38 | 0.93 | 0.89 | 0.36 | 0.08 | 0.07 | 0.15 | 0.16 |
| Optimus-1 | Oracle | GPT-4V | 0.43 | 0.98 | 0.92 | 0.46 | 0.11 | 0.08 | 0.19 | 0.25 |
| Optimus-2 | Oracle | GPT-4V | 0.45 | 0.99 | 0.93 | 0.53 | 0.13 | 0.09 | 0.21 | 0.28 |
| Optimus-1† | Oracle | Qwen2.5-VL-7B | 0.34 | 0.92 | 0.80 | 0.22 | 0.10 | 0.09 | 0.17 | 0.04 |
| XENON | Oracle | Qwen2.5-VL-7B | 0.79 | 0.95 | 0.93 | 0.83 | 0.75 | 0.73 | 0.61 | 0.75 |
| XENON | Learned | Qwen2.5-VL-7B | 0.54 | 0.85 | 0.81 | 0.46 | 0.64 | 0.74 | 0.28 | 0.00 |
As shown in Table 3, XENON significantly outperforms baselines in solving diverse long-horizon goals despite using the lightweight Qwen2.5-VL-7B LLM (Bai et al., 2025), while the baselines rely on large proprietary models such as Codex (Chen et al., 2021), GPT-4 (OpenAI, 2024), and GPT-4V (OpenAI, 2023). Remarkably, even with its learned dependency knowledge (Section 5.2), XENON surpasses baselines equipped with oracle knowledge on challenging late-game goals, achieving high SRs for item groups like Gold (0.74) and Diamond (0.64).
XENON's superiority stems from two key factors. First, its FAM provides systematic, fine-grained action correction for each goal. Second, it reduces reliance on the LLM for planning in two ways: it shortens prompts and outputs by requiring the LLM to predict one action per subgoal item, and it bypasses the LLM entirely by reusing successful actions from FAM. In contrast, the baselines lack a systematic, fine-grained action correction mechanism and instead make LLMs generate long plans from lengthy prompts, a strategy known to be ineffective for LLMs (Wu et al., 2024; Li et al., 2024a). This challenge is exemplified by Optimus-1†. Despite using a knowledge graph for planning like XENON, its long-context generation strategy causes the LLM to predict incorrect actions or omit items explicitly provided in its prompt, as detailed in Section K.5.
We find that accurate knowledge is critical for long-horizon planning, as its absence can make even a capable agent ineffective. The Redstone group from Table 3 provides an example: while XENON with oracle knowledge succeeds (0.75 SR), XENON with learned knowledge fails entirely (0.00 SR), because it failed to learn the dependencies for Redstone goals due to the controller's limited capacity in MineRL (Section 5.2). This finding is further supported by our comprehensive ablation study, which confirms that accurate dependency knowledge is most critical for success across all goals (see Table 17 in Section K.7).
5.4 Robust dependency learning against knowledge conflicts
<details>
<summary>x19.png Details</summary>

### Visual Description
## Legend: Data Series Identifiers
### Overview
The image displays a horizontal legend bar containing five distinct data series identifiers, each represented by a unique color and symbol. The legend is enclosed in a light gray rectangular border with a white background. Entries are arranged left-to-right in the following order: XENON, SC, ADAM, DECKARD, RAND.
### Components/Axes
- **Legend Structure**:
- **XENON**: blue circle with a horizontal line through its center
- **SC**: pink diamond with a horizontal line through its center
- **ADAM**: orange hexagon with a horizontal line through its center
- **DECKARD**: green square with a horizontal line through its center
- **RAND**: gray plus sign with a horizontal line through its center
### Detailed Analysis
- **Color-Symbol Associations**:
- All symbols include a horizontal line through their center, suggesting a standardized design for data series representation
- Colors are distinct and high-contrast: blue (#0000FF), pink (#FFC0CB), orange (#FFA500), green (#00FF00), gray (#808080)
- Symbol shapes follow common data visualization conventions (circle, diamond, hexagon, square, plus)
- **Spatial Grounding**:
- Legend occupies central horizontal position in the image
- Each entry maintains equal horizontal spacing
- Symbols are left-aligned with their corresponding labels
### Key Observations
1. The legend uses five unique combinations of color and shape to differentiate data series
2. All symbols incorporate a horizontal line through their center, possibly indicating a shared visual property across data series
3. The order of entries (XENON → RAND) suggests a potential categorical or alphabetical sequence
### Interpretation
This legend serves as a key for interpreting data series in a technical visualization context. The standardized use of color-symbol pairs with horizontal lines suggests:
- A system designed for accessibility (clear visual differentiation)
- Potential application in scientific or analytical dashboards
- The "RAND" entry (gray plus) may represent random or uncontrolled variables
- The "DECKARD" green square could indicate a primary or reference data series
The absence of numerical values or axes confirms this is purely a categorical legend rather than a data visualization. The design prioritizes unambiguous identification of data series through distinct visual encoding.
</details>
<details>
<summary>x20.png Details</summary>

### Visual Description
## Line Chart: EGA Performance Across Perturbations
### Overview
The chart displays four data series representing EGA (Experienced Graph Accuracy) performance across different perturbation scenarios. The x-axis represents perturbation states (0-3 required items with action 0), while the y-axis shows EGA values from 0.0 to 1.0. The "Unperturbed" condition maintains perfect performance, while perturbed conditions show varying degrees of degradation.
### Components/Axes
- **X-axis**: Perturbed states labeled as (0,0), (1,0), (2,0), (3,0)
- **Y-axis**: EGA values (0.0-1.0)
- **Legend**:
- Blue circles: Unperturbed (1.0)
- Orange crosses: Item 1 perturbation
- Green squares: Item 2 perturbation
- Pink diamonds: Item 3 perturbation
### Detailed Analysis
1. **Unperturbed (Blue)**:
- Maintains perfect 1.0 EGA across all states
- No variation observed (flat line)
2. **Item 1 (Orange)**:
- Starts at ~0.65 at (0,0)
- Drops sharply to ~0.4 at (1,0)
- Fluctuates between 0.35-0.4 at (2,0) and (3,0)
3. **Item 2 (Green)**:
- Begins at ~0.5 at (0,0)
- Declines to ~0.35 at (1,0)
- Stabilizes around 0.35-0.4 at higher perturbation states
4. **Item 3 (Pink)**:
- Initial value ~0.6 at (0,0)
- Drops to ~0.4 at (1,0)
- Shows slight recovery to ~0.45 at (2,0) and (3,0)
### Key Observations
- **Unperturbed dominance**: Perfect performance maintained regardless of perturbation states
- **Item 1 sensitivity**: Most significant initial performance drop (23% decrease from 0.65 to 0.4)
- **Partial recovery pattern**: Items 2 and 3 show modest improvement at higher perturbation states
- **Consistent degradation**: All perturbed conditions show EGA <0.5 compared to unperturbed
### Interpretation
The data demonstrates that perturbations (required items) negatively impact EGA performance, with Item 1 causing the most severe initial degradation. While Items 2 and 3 show partial recovery at higher perturbation states, none reach the unperturbed performance level. This suggests:
1. **Threshold effects**: Initial perturbations create performance cliffs
2. **Adaptation potential**: Slight recovery in Items 2/3 may indicate system resilience
3. **Item-specific impacts**: Different perturbation types create distinct performance trajectories
4. **Operational implications**: System design must account for perturbation tolerance thresholds
The consistent performance gap between unperturbed and perturbed conditions highlights the critical importance of maintaining system integrity under real-world operational constraints.
</details>
(a) Perturbed True Required Items
<details>
<summary>x21.png Details</summary>

### Visual Description
## Line Graph: EGA Trends Across Perturbed States
### Overview
The image depicts a line graph comparing four data series labeled "EGA" across four perturbation states: (0,0), (0,1), (0,2), and (0,3). The y-axis represents EGA values (0.0â1.0), while the x-axis categorizes perturbations by "required items" and "action." Four distinct lines with unique markers and colors illustrate trends, with one line remaining constant and others showing declines.
### Components/Axes
- **X-Axis**: Labeled "Perturbed (required items, action)" with four categories:
- (0,0)
- (0,1)
- (0,2)
- (0,3)
- **Y-Axis**: Labeled "EGA" with a scale from 0.0 to 1.0 in increments of 0.2.
- **Legend**: Positioned on the right, associating colors and markers with labels:
- **Blue circles**: "EGA (no perturbation)"
- **Pink diamonds**: "EGA (perturbation: 1 item)"
- **Green squares**: "EGA (perturbation: 2 items)"
- **Orange crosses**: "EGA (perturbation: 3 items)"
### Detailed Analysis
1. **Blue Circles ("EGA (no perturbation)")**:
- Constant at **1.0** across all x-axis categories.
- Spatial grounding: Topmost line, horizontal trajectory.
2. **Pink Diamonds ("EGA (perturbation: 1 item)")**:
- Starts at **0.6** at (0,0), declines to **0.4** at (0,1), **0.3** at (0,2), and **0.2** at (0,3).
- Trend: Steady linear decrease.
3. **Green Squares ("EGA (perturbation: 2 items)")**:
- Starts at **0.5** at (0,0), declines to **0.4** at (0,1), **0.3** at (0,2), and **0.2** at (0,3).
- Trend: Gradual linear decrease, mirroring the pink line but with a lower baseline.
4. **Orange Crosses ("EGA (perturbation: 3 items)")**:
- Starts at **0.7** at (0,0), drops sharply to **0.15** at (0,1), then plateaus at **0.15** for (0,2) and (0,3).
- Trend: Abrupt decline followed by stabilization.
### Key Observations
- The **blue line** remains unaffected by perturbations, maintaining maximum EGA (1.0).
- **Orange crosses** exhibit the most drastic drop (0.7 → 0.15) between (0,0) and (0,1), suggesting a critical threshold for perturbations.
- **Pink and green lines** show proportional declines, with green consistently trailing pink by ~0.1 at each step.
- All non-blue lines converge at **0.2** by (0,3), indicating a shared lower bound under high perturbation.
### Interpretation
The data suggests that EGA is highly sensitive to perturbations in "required items" and "action." The **blue line's constancy** implies a baseline EGA unaffected by perturbations, possibly representing an ideal or control state. The **orange line's sharp decline** highlights a critical vulnerability when perturbations reach three items/actions, where EGA collapses by 80%. The gradual declines in pink and green lines indicate diminishing returns as perturbations increase, with green (2-item perturbations) consistently underperforming pink (1-item). This could reflect a hierarchical impact of perturbations on EGA, where higher perturbation levels disproportionately degrade performance. The convergence at 0.2 by (0,3) suggests a systemic limit to EGA under extreme conditions.
</details>
(b) Perturbed True Actions
<details>
<summary>x22.png Details</summary>

### Visual Description
## Line Graph: EGA Performance Across Perturbations
### Overview
The image depicts a line graph comparing the performance of four data series (labeled in the legend) across four perturbation points: (0,0), (1,1), (2,2), and (3,3). The y-axis measures "EGA" (Experienced Graph Accuracy) on a scale from 0.0 to 1.0, while the x-axis represents perturbation magnitude as a tuple of "required items" and "action" values. All lines show distinct trends, with one remaining constant and others declining sharply.
---
### Components/Axes
- **X-Axis**: Labeled "Perturbed (required items, action)" with discrete points at (0,0), (1,1), (2,2), and (3,3).
- **Y-Axis**: Labeled "EGA" with a linear scale from 0.0 to 1.0.
- **Legend**: Located on the right, associating:
- Blue circles â "Series A"
- Pink diamonds â "Series B"
- Green squares â "Series C"
- Orange crosses â "Series D"
---
### Detailed Analysis
1. **Series A (Blue Circles)**:
- **Trend**: Flat line at 1.0 across all x-values.
- **Data Points**:
- (0,0): 1.0
- (1,1): 1.0
- (2,2): 1.0
- (3,3): 1.0
2. **Series B (Pink Diamonds)**:
- **Trend**: Steady decline from 0.6 to 0.15.
- **Data Points**:
- (0,0): 0.6
- (1,1): 0.4
- (2,2): 0.25
- (3,3): 0.15
3. **Series C (Green Squares)**:
- **Trend**: Gradual decline from 0.5 to 0.12.
- **Data Points**:
- (0,0): 0.5
- (1,1): 0.35
- (2,2): 0.2
- (3,3): 0.12
4. **Series D (Orange Crosses)**:
- **Trend**: Sharpest decline from 0.7 to 0.05.
- **Data Points**:
- (0,0): 0.7
- (1,1): 0.1
- (2,2): 0.08
- (3,3): 0.05
---
### Key Observations
- **Series A** remains constant at 1.0, suggesting it is unaffected by perturbations.
- **Series D** exhibits the most significant drop, losing 93% of its EGA value from (0,0) to (3,3).
- **Series B** and **C** show intermediate declines, with Series B dropping 75% and Series C dropping 76%.
- All declining series converge near 0.1â0.2 at (3,3), indicating a threshold of minimal EGA under high perturbation.
---
### Interpretation
The graph demonstrates that EGA performance degrades as perturbation magnitude increases, with varying resilience across series. Series Aâs stability implies it represents a baseline or control condition (e.g., unperturbed system). Series Dâs rapid decline suggests it is highly sensitive to perturbations, possibly modeling a fragile system. The convergence of declining series at low EGA values hints at a critical threshold where perturbations render goal achievement nearly impossible. This could reflect real-world scenarios where resource allocation (required items) and action complexity jointly limit effectiveness.
**Notable Anomaly**: Series Dâs abrupt drop at (1,1) (from 0.7 to 0.1) suggests a nonlinear response to initial perturbations, warranting further investigation into its underlying mechanics.
</details>
(c) Perturbed Both Rules
Figure 6: Robustness against knowledge conflicts. EGA after 3,000 environment steps in MC-TextWorld under different perturbations of the ground-truth rules. The plots show performance with increasing intensities of perturbation applied to: (a) requirements only, (b) actions only, and (c) both (see Table 4).
Table 4: Effect of ground-truth perturbations on prior knowledge.
| Perturbation Intensity | Goal items still obtainable via prior knowledge |
| --- | --- |
| 0 | 16 (no perturbation) |
| 1 | 14 (12% fewer) |
| 2 | 11 (31% fewer) |
| 3 | 9 (44% fewer) |
To isolate dependency learning from controller capacity, we shift to the MC-TextWorld environment with a perfect controller. In this setting, we test each agent's robustness to conflicts with its prior knowledge (derived from the LLM's initial predictions and human-written plans) by introducing arbitrary perturbations to the ground-truth required items and actions. These perturbations are applied at an intensity level; a higher intensity affects a greater number of items, as shown in Table 4. The intensity is denoted by a tuple (r, a) for required items and actions, respectively, where (0,0) represents the vanilla setting with no perturbations. See Figure 21 for the detailed perturbation process.
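As a rough sketch of what such a perturbation might look like, intensity (r, a) can be read as rewriting the required items of r goal items and the valid action of a goal items. The function, rule format, and item names below are our illustrative assumptions, not the paper's implementation (its exact procedure is given in Figure 21):

```python
import random

def perturb_rules(rules, r, a, item_pool, action_pool, seed=0):
    """Perturb ground-truth crafting rules at intensity (r, a):
    rewrite the required items of r randomly chosen goal items and
    the valid action of a randomly chosen goal items.
    (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    out = {g: {"requires": list(v["requires"]), "action": v["action"]}
           for g, v in rules.items()}
    for g in rng.sample(sorted(out), r):        # perturb required items
        out[g]["requires"] = [rng.choice(item_pool)]
    for g in rng.sample(sorted(out), a):        # perturb valid actions
        out[g]["action"] = rng.choice(action_pool)
    return out

# Hypothetical mini rule set in the spirit of Minecraft dependencies.
rules = {
    "plank": {"requires": ["log"], "action": "craft"},
    "stick": {"requires": ["plank"], "action": "craft"},
    "cobblestone": {"requires": ["wooden_pickaxe"], "action": "mine"},
}
perturbed = perturb_rules(rules, r=1, a=1,
                          item_pool=["log", "plank", "iron_ingot"],
                          action_pool=["craft", "mine", "smelt"])
```

Under this reading, an agent's prior knowledge (matching the unperturbed rules) becomes wrong for the rewritten items, so success requires correcting the conflict from experience rather than trusting the prior.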
Figure 6 shows XENON's robustness to knowledge conflicts: it maintains a near-perfect EGA ($\approx$ 0.97). In contrast, the performance of all baselines degrades as perturbation intensity increases across all three perturbation scenarios (required items, actions, or both). We find that prompting an LLM to self-correct is ineffective when the ground truth conflicts with its parametric knowledge: SC shows no significant advantage over DECKARD, which lacks a correction mechanism. ADAM is vulnerable to action perturbations; its strategy of gathering all resource items before attempting a new item fails when the valid actions for those resources are perturbed, effectively halting its learning.
5.5 Ablation studies on knowledge correction mechanisms
Table 5: Ablation study of knowledge correction mechanisms. ○: XENON; △: LLM self-correction; ✗: no correction. All entries denote the EGA after 3,000 environment steps. Columns denote the perturbation setting (r, a). For LLM self-correction, we use the same prompt as the SC baseline (see Appendix B).
| Dependency Correction | Action Correction | (0,0) | (3,0) | (0,3) | (3,3) |
| --- | --- | --- | --- | --- | --- |
| ○ | ○ | 0.97 | 0.97 | 0.97 | 0.97 |
| ○ | △ | 0.93 | 0.93 | 0.12 | 0.12 |
| ○ | ✗ | 0.84 | 0.84 | 0.12 | 0.12 |
| △ | ○ | 0.57 | 0.30 | 0.57 | 0.29 |
| ✗ | ○ | 0.53 | 0.13 | 0.53 | 0.13 |
| ✗ | ✗ | 0.46 | 0.13 | 0.19 | 0.11 |
To analyze XENON's knowledge correction mechanisms for dependencies and actions, we conduct ablation studies in MC-TextWorld (Table 5). While dependency correction is generally more important for overall performance, action correction becomes vital under action perturbations. In contrast, LLM self-correction is ineffective in complex scenarios: it offers minimal gains for dependency correction even in the vanilla setting and fails entirely for perturbed actions. Its effectiveness is limited to simpler cases, such as action correction in the vanilla setting. These results demonstrate that our algorithmic knowledge correction enables robust learning from experience, overcoming the limitations of both LLM self-correction and flawed initial knowledge.
5.6 Ablation studies on hyperparameters
(a) $c_{0}$
(b) $\alpha_{i}$
(c) $\alpha_{s}$
(d) $x_{0}$
Figure 7: Hyperparameter ablation study in MC-TextWorld. EGA over 3,000 environment steps under different hyperparameters. The plots show EGA when varying: (a) $c_{0}$ (revision count threshold for inadmissible items), (b) $\alpha_{i}$ (required items quantities for inadmissible items), (c) $\alpha_{s}$ (required items quantities for less-tried items), and (d) $x_{0}$ (invalid action threshold). Each study varies one hyperparameter while keeping the others fixed to their default values ( $c_{0}=3$ , $\alpha_{i}=8$ , $\alpha_{s}=2$ , $x_{0}=2$ ).
(e) $c_{0}$
(f) $\alpha_{i}$
(g) $\alpha_{s}$
(h) $x_{0}$
Figure 8: Hyperparameter ablation study in MineRL. EGA over 400 episodes under different hyperparameters. The plots show EGA when varying: (e) $c_{0}$ (revision count threshold for inadmissible items), (f) $\alpha_{i}$ (required items quantities for inadmissible items), (g) $\alpha_{s}$ (required items quantities for less-tried items), and (h) $x_{0}$ (invalid action threshold). Each study varies one hyperparameter while keeping the others fixed to their default values ( $c_{0}=3$ , $\alpha_{i}=8$ , $\alpha_{s}=2$ , $x_{0}=2$ ).
To validate XENON's stability with respect to its hyperparameters, we conduct comprehensive ablation studies in both MC-TextWorld and MineRL. In these studies, we vary one hyperparameter at a time while keeping the others fixed to their default values ( $c_{0}=3$ , $\alpha_{i}=8$ , $\alpha_{s}=2$ , $x_{0}=2$ ).
Our results (Figures 7 and 8) show that although XENON is generally stable across hyperparameters, an effective learning strategy should account for controller capacity when the controller is imperfect. In MC-TextWorld (Figure 7), XENON maintains near-perfect EGA across a wide range of all tested hyperparameters, confirming its stability when a perfect controller is used. In MineRL (Figure 8), with an imperfect controller, the results demonstrate two findings. First, although hyperparameters influence performance, XENON remains robust: the EGA after 400 episodes stays near or above 0.5 for all tested values, outperforming baselines that plateau around or below 0.4 (Figure 5(a)). Second, controller capacity should be considered when designing dependency and action learning strategies. For example, the ablation on $\alpha_{s}$ (Figure 8(g)) shows that while gathering a sufficient quantity of items is necessary ( $\alpha_{s}=1$ underperforms), overburdening the controller with excessive items ( $\alpha_{s}=4$ ) also degrades performance. Similarly, the ablation on $x_{0}$ (Figure 8(h)) shows the need to balance tolerating controller failures against wasting time on invalid actions.
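To make the roles of $x_{0}$ and $c_{0}$ concrete, the following is a minimal hypothetical sketch (the counter names and update rules are our assumptions, not XENON's actual algorithm) of how an invalid-action threshold could tolerate a few controller failures before revising an action, and how a revision-count threshold could eventually mark an item inadmissible:

```python
from collections import defaultdict

X0 = 2  # invalid-action threshold (paper default x_0 = 2)
C0 = 3  # revision-count threshold for inadmissible items (paper default c_0 = 3)

invalid_count = defaultdict(int)   # failed attempts per (item, action) pair
revision_count = defaultdict(int)  # how often each item's action was revised

def record_failure(item, action):
    """React to one failed attempt at obtaining `item` via `action`."""
    invalid_count[(item, action)] += 1
    if invalid_count[(item, action)] < X0:
        return "retry"              # tolerate occasional controller failures
    revision_count[item] += 1       # action deemed invalid: pick a new one
    if revision_count[item] >= C0:
        return "mark_inadmissible"  # too many revisions: distrust the prior
    return "revise_action"
```

Under this reading, raising $x_{0}$ wastes more steps on genuinely invalid actions, while lowering it discards actions the controller merely failed to execute, mirroring the trade-off discussed above for the $x_{0}$ ablation.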
We provide additional ablations in the Appendix on dependency and action learning: initializing the dependency graph from an external source mismatched to the environment (Figure 23), scaling to more goals and actions (Figure 24), and using a smaller 4B planner LLM (Figure 26), as well as an ablation of action selection methods for subgoal construction (Figure 25).
6 Conclusion
We address the challenge of robust planning via experience-based algorithmic knowledge correction. With XENON, we show that directly revising external knowledge through experience enables an LLM-based agent to overcome flawed priors and sparse feedback, surpassing the limits of LLM self-correction. Experiments across diverse Minecraft benchmarks demonstrate that this approach not only strengthens knowledge acquisition and long-horizon planning, but also enables an agent with a lightweight 7B open-weight LLM to outperform prior methods that rely on much larger proprietary models. Our work delivers a key lesson for building robust LLM-based embodied agents: LLM priors should be treated with skepticism and continuously managed and corrected algorithmically.
Limitations
Despite its contributions, XENON faces a limitation: its performance depends on the underlying controller. In MineRL, the STEVE-1 controller (Lifshitz et al., 2023) struggles with spatial exploration tasks, creating a performance gap relative to more competent controllers such as Mineflayer. Future work could involve jointly training the planner and controller, potentially using hierarchical reinforcement learning.
Acknowledgments
This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) and IITP-ITRC (Information Technology Research Center) grant funded by the Korea government (MSIT) (No. RS-2019-II191906, Artificial Intelligence Graduate School Program (POSTECH); IITP-2026-RS-2024-00437866; RS-2024-00509258, Global AI Frontier Lab), by a grant from the Korea Institute for Advancement of Technology (KIAT), funded by the Ministry of Trade, Industry and Energy (MOTIE), Republic of Korea (RS-2025-00564342), and by Seoul R&BD Program (SP240008) through the Seoul Business Agency (SBA) funded by The Seoul Metropolitan Government.
References
- S. Bai, K. Chen, X. Liu, J. Wang, W. Ge, S. Song, K. Dang, P. Wang, S. Wang, J. Tang, H. Zhong, Y. Zhu, M. Yang, Z. Li, J. Wan, P. Wang, W. Ding, Z. Fu, Y. Xu, J. Ye, X. Zhang, T. Xie, Z. Cheng, H. Zhang, Z. Yang, H. Xu, and J. Lin (2025) Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923. Cited by: Figure 26, §K.1, §K.11, Figure 1, §5.1, §5.3.
- B. Baker, I. Akkaya, P. Zhokhov, J. Huizinga, J. Tang, A. Ecoffet, B. Houghton, R. Sampedro, and J. Clune (2022) Video pretraining (vpt): learning to act by watching unlabeled online videos. External Links: 2206.11795, Link Cited by: §J.2.1.
- P. Belcak, G. Heinrich, S. Diao, Y. Fu, X. Dong, S. Muralidharan, Y. C. Lin, and P. Molchanov (2025) Small language models are the future of agentic ai. External Links: 2506.02153, Link, Document Cited by: §1.
- M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba (2021) Evaluating large language models trained on code. External Links: 2107.03374 Cited by: §5.3.
- M. Chen, Y. Li, Y. Yang, S. Yu, B. Lin, and X. He (2024) AutoManual: constructing instruction manuals by llm agents via interactive environmental learning. External Links: 2405.16247 Cited by: §E.1, §1.
- M. Côté, Á. Kádár, X. Yuan, B. Kybartas, T. Barnes, E. Fine, J. Moore, R. Y. Tao, M. Hausknecht, L. E. Asri, M. Adada, W. Tay, and A. Trischler (2018) TextWorld: a learning environment for text-based games. CoRR abs/1806.11532. Cited by: Appendix A.
- K. Du, V. Snæbjarnarson, N. Stoehr, J. White, A. Schein, and R. Cotterell (2024) Context versus prior knowledge in language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), L. Ku, A. Martins, and V. Srikumar (Eds.), Bangkok, Thailand, pp. 13211–13235. External Links: Link, Document Cited by: §1.
- L. Fan, G. Wang, Y. Jiang, A. Mandlekar, Y. Yang, H. Zhu, A. Tang, D. Huang, Y. Zhu, and A. Anandkumar (2022) MineDojo: building open-ended embodied agents with internet-scale knowledge. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, External Links: Link Cited by: §1, §3.
- Y. Feng, Y. Wang, J. Liu, S. Zheng, and Z. Lu (2024) LLaMA-rider: spurring large language models to explore the open world. In Findings of the Association for Computational Linguistics: NAACL 2024, K. Duh, H. Gomez, and S. Bethard (Eds.), Mexico City, Mexico, pp. 4705–4724. External Links: Link, Document Cited by: §1, §2.1.
- Z. Gou, Z. Shao, Y. Gong, Y. Shen, Y. Yang, N. Duan, and W. Chen (2024) CRITIC: large language models can self-correct with tool-interactive critiquing. External Links: 2305.11738, Link Cited by: §2.2.
- W. H. Guss, B. Houghton, N. Topin, P. Wang, C. Codel, M. Veloso, and R. Salakhutdinov (2019) MineRL: a large-scale dataset of minecraft demonstrations. External Links: 1907.13440, Link Cited by: §J.2.2, §J.2.5, §1, §3, §5.1.
- J. Huang, X. Chen, S. Mishra, H. S. Zheng, A. W. Yu, X. Song, and D. Zhou (2024) Large language models cannot self-correct reasoning yet. External Links: 2310.01798, Link Cited by: §2.2.
- J. Li, Q. Wang, Y. Wang, X. Jin, Y. Li, W. Zeng, and X. Yang (2025a) Open-world reinforcement learning over long short-term imagination. In ICLR, Cited by: §J.2.1.
- T. Li, G. Zhang, Q. D. Do, X. Yue, and W. Chen (2024a) Long-context llms struggle with long in-context learning. External Links: 2404.02060 Cited by: §4.2, §5.3.
- Z. Li, Y. Xie, R. Shao, G. Chen, D. Jiang, and L. Nie (2024b) Optimus-1: hybrid multimodal memory empowered agents excel in long-horizon tasks. Advances in Neural Information Processing Systems 37, pp. 49881–49913. Cited by: §J.2.2, §J.2.3, §J.2.5, Appendix H, §1, §2.1, §3, §4.3, §5.1, §5.1, Table 3.
- Z. Li, Y. Xie, R. Shao, G. Chen, D. Jiang, and L. Nie (2025b) Optimus-2: multimodal minecraft agent with goal-observation-action conditioned policy. In 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §5.1, Table 3.
- S. Lifshitz, K. Paster, H. Chan, J. Ba, and S. McIlraith (2023) STEVE-1: a generative model for text-to-behavior in minecraft. External Links: 2306.00937 Cited by: §5.1, §5.2, §6.
- Z. Lin, J. Li, J. Shi, D. Ye, Q. Fu, and W. Yang (2021) Juewu-mc: playing minecraft with sample-efficient hierarchical reinforcement learning. arXiv preprint arXiv:2112.04907. Cited by: §J.2.1, §1.
- N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang (2024) Lost in the middle: how language models use long contexts. Transactions of the Association for Computational Linguistics 12, pp. 157–173. External Links: Link, Document Cited by: §4.2.
- S. Liu, Y. Li, K. Zhang, Z. Cui, W. Fang, Y. Zheng, T. Zheng, and M. Song (2025) Odyssey: empowering minecraft agents with open-world skills. In International Joint Conference on Artificial Intelligence, Cited by: §2.1.
- H. Mao, C. Wang, X. Hao, Y. Mao, Y. Lu, C. Wu, J. Hao, D. Li, and P. Tang (2022) Seihai: a sample-efficient hierarchical ai for the minerl competition. In Distributed Artificial Intelligence: Third International Conference, DAI 2021, Shanghai, China, December 17–18, 2021, Proceedings 3, pp. 38–51. Cited by: §J.2.1, §1.
- Microsoft, :, A. Abouelenin, A. Ashfaq, A. Atkinson, H. Awadalla, N. Bach, J. Bao, A. Benhaim, M. Cai, V. Chaudhary, C. Chen, D. Chen, D. Chen, J. Chen, W. Chen, Y. Chen, Y. Chen, Q. Dai, X. Dai, R. Fan, M. Gao, M. Gao, A. Garg, A. Goswami, J. Hao, A. Hendy, Y. Hu, X. Jin, M. Khademi, D. Kim, Y. J. Kim, G. Lee, J. Li, Y. Li, C. Liang, X. Lin, Z. Lin, M. Liu, Y. Liu, G. Lopez, C. Luo, P. Madan, V. Mazalov, A. Mitra, A. Mousavi, A. Nguyen, J. Pan, D. Perez-Becker, J. Platin, T. Portet, K. Qiu, B. Ren, L. Ren, S. Roy, N. Shang, Y. Shen, S. Singhal, S. Som, X. Song, T. Sych, P. Vaddamanu, S. Wang, Y. Wang, Z. Wang, H. Wu, H. Xu, W. Xu, Y. Yang, Z. Yang, D. Yu, I. Zabir, J. Zhang, L. L. Zhang, Y. Zhang, and X. Zhou (2025) Phi-4-mini technical report: compact yet powerful multimodal language models via mixture-of-loras. External Links: 2503.01743, Link Cited by: Figure 26, §K.11.
- K. Nottingham, P. Ammanabrolu, A. Suhr, Y. Choi, H. Hajishirzi, S. Singh, and R. Fox (2023) Do embodied agents dream of pixelated sheep? embodied decision making using language guided world modelling. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. Cited by: §J.1, Table 8, Appendix C, Appendix G, §2.1, §3, §3, §4.1, §5.1.
- OpenAI (2023) Gpt-4v(ision) system card. External Links: Link Cited by: §5.3.
- OpenAI (2024) GPT-4 technical report. External Links: 2303.08774, Link Cited by: §5.3.
- PrismarineJS (2023) Prismarinejs/mineflayer. Note: https://github.com/PrismarineJS/mineflayer External Links: Link Cited by: §J.3, §5.1.
- Y. Qin, E. Zhou, Q. Liu, Z. Yin, L. Sheng, R. Zhang, Y. Qiao, and J. Shao (2024) Mp5: a multi-modal open-ended embodied system in minecraft via active perception. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16307â16316. Cited by: §2.1.
- N. Reimers and I. Gurevych (2019) Sentence-bert: sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, External Links: Link Cited by: Appendix I, 2nd item.
- N. Shinn, F. Cassano, E. Berman, A. Gopinath, K. Narasimhan, and S. Yao (2023) Reflexion: language agents with verbal reinforcement learning. External Links: 2303.11366 Cited by: §J.1, §1, §2.2, §5.1.
- K. Stechly, K. Valmeekam, and S. Kambhampati (2024) On the self-verification limitations of large language models on reasoning and planning tasks. External Links: 2402.08115, Link Cited by: §J.1, §1, §5.1.
- A. Szot, A. Clegg, E. Undersander, E. Wijmans, Y. Zhao, J. Turner, N. Maestre, M. Mukadam, D. Chaplot, O. Maksymets, A. Gokaslan, V. Vondrus, S. Dharur, F. Meier, W. Galuba, A. Chang, Z. Kira, V. Koltun, J. Malik, M. Savva, and D. Batra (2021) Habitat 2.0: training home assistants to rearrange their habitat. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
- G. Tyen, H. Mansoor, V. Carbune, P. Chen, and T. Mak (2024) LLMs cannot find reasoning errors, but can correct them given the error location. In Findings of the Association for Computational Linguistics: ACL 2024, L. Ku, A. Martins, and V. Srikumar (Eds.), Bangkok, Thailand, pp. 13894–13908. External Links: Link, Document Cited by: §2.2.
- G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar (2023a) Voyager: an open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291. Cited by: §1, §2.1.
- Z. Wang, S. Cai, G. Chen, A. Liu, X. Ma, and Y. Liang (2023b) Describe, explain, plan and select: interactive planning with llms enables open-world multi-task agents. Advances in Neural Information Processing Systems 36, pp. 34153–34189. Cited by: §J.2.5, §1, §5.1.
- Z. Wang, S. Cai, A. Liu, Y. Jin, J. Hou, B. Zhang, H. Lin, Z. He, Z. Zheng, Y. Yang, X. Ma, and Y. Liang (2023c) JARVIS-1: open-world multi-task agents with memory-augmented multimodal language models. arXiv preprint arXiv:2311.05997. Cited by: §1, §2.1, §5.1.
- Y. Wu, M. S. Hee, Z. Hu, and R. K. Lee (2024) LongGenBench: benchmarking long-form generation in long context llms. External Links: 2409.02076, Link Cited by: §5.3.
- L. Yang, Z. Yu, T. Zhang, M. Xu, J. E. Gonzalez, B. Cui, and S. Yan (2025) SuperCorrect: supervising and correcting language models with error-driven insights. In International Conference on Learning Representations, Cited by: §2.2.
- Y. Yoon, G. Lee, S. Ahn, and J. Ok (2024) Breadth-first exploration on adaptive grid for reinforcement learning. In Forty-first International Conference on Machine Learning, Cited by: 2nd item.
- S. Yu and C. Lu (2024) ADAM: an embodied causal agent in open-world environments. arXiv preprint arXiv:2410.22194. Cited by: §J.1, §J.3.1, Table 8, §K.1, §2.1, §5.1, §5.1.
- H. Yuan, C. Zhang, H. Wang, F. Xie, P. Cai, H. Dong, and Z. Lu (2023) Plan4MC: skill reinforcement learning and planning for open-world Minecraft tasks. arXiv preprint arXiv:2303.16563. Cited by: §1.
- Y. Zhang, M. Khalifa, L. Logeswaran, J. Kim, M. Lee, H. Lee, and L. Wang (2024) Small language models need strong verifiers to self-correct reasoning. In Findings of the Association for Computational Linguistics: ACL 2024, L. Ku, A. Martins, and V. Srikumar (Eds.), Bangkok, Thailand, pp. 15637–15653. External Links: Link, Document Cited by: §1.
- A. Zhao, D. Huang, Q. Xu, M. Lin, Y. Liu, and G. Huang (2024) ExpeL: llm agents are experiential learners. In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, M. J. Wooldridge, J. G. Dy, and S. Natarajan (Eds.), pp. 19632â19642. External Links: Link, Document Cited by: §E.1.
- Z. Zhao, W. Chai, X. Wang, B. Li, S. Hao, S. Cao, T. Ye, J. Hwang, and G. Wang (2023) See and think: embodied agent in virtual environment. arXiv preprint arXiv:2311.15209. Cited by: §2.1.
- X. Zheng, H. Lin, K. He, Z. Wang, Z. Zheng, and Y. Liang (2025) MCU: an evaluation framework for open-ended game agents. External Links: 2310.08367, Link Cited by: §J.4, §5.1.
- X. Zhu, Y. Chen, H. Tian, C. Tao, W. Su, C. Yang, G. Huang, B. Li, L. Lu, X. Wang, Y. Qiao, Z. Zhang, and J. Dai (2023) Ghost in the minecraft: generally capable agents for open-world environments via large language models with text-based knowledge and memory. arXiv preprint arXiv:2305.17144. Cited by: §1, §2.1.
This appendix is organized as follows:
- Appendix A: Experiments in a domain other than Minecraft (Microsoft TextWorld Cooking).
- Appendix B: Prompts and qualitative results of LLM self-correction in our experiments.
- Appendix C: Detailed procedure for experienced requirement set determination and dependency graph updates, as discussed in Section 3.
- Appendix D: Full procedure of XENON (Algorithm 1).
- Appendix E: Detailed pseudocode and the prompt for ADG in Section 4.1.
- Appendix F: Detailed pseudocode and the prompt for step-by-step planning using FAM in Section 4.2.
- Appendix H: Detailed descriptions and the prompt for CRe in Section 4.3.
- Appendix I: Detailed descriptions of implementation, human-written plans, and hyperparameters.
- Appendix J: Detailed descriptions of the baselines and experimental environments in Section 5.
- Appendix K: Analysis of experimental results and additional experimental results.
- Appendix L: Description of LLM usage.
Appendix A Additional experiments in another domain
To assess generalization beyond Minecraft, we evaluate XENON on the Microsoft TextWorld Cooking environment (Côté et al., 2018), a text-based household task planning benchmark. We demonstrate that XENON can correct an LLM's flawed knowledge of preconditions (e.g., required tools) and valid actions for plans using ADG and FAM in this domain as well. We note that XENON is applied with minimal modification: FAM is applied without modification, while ADG is adapted from its original design, which supports multiple incoming edges (preconditions) per node, to one that allows only a single incoming edge, as this domain requires only a single precondition per node.
A.1 Experiment Setup
Environment Rules
The goal is to prepare and eat a meal by reading a cookbook, which provides a plan as a list of (action, ingredient) pairs, e.g., ('fry', 'pepper'). We note that an agent cannot succeed by naively following this plan, because it must solve two key challenges: (1) it must discover the valid tool required for each cookbook action, and (2) it must discover the valid, executable action for each cookbook action, as some cookbook actions are not directly accepted by the environment (i.e., they are not in its action space).
Specifically, to complete a cookbook's (action, ingredient) pair, an agent must construct a subgoal of the form (executable action, ingredient, tool), where the executable action and tool must be valid for the cookbook action. For example, the cookbook's ('fry', 'pepper') pair requires the agent to construct the subgoal ('cook', 'pepper', 'stove'). The executable action space is { 'chop', 'close', 'cook', 'dice', 'drop', 'eat', 'examine', 'slice', 'prepare' }, and the available tools are { 'knife', 'oven', 'stove', 'fridge', 'table', 'counter' }.
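The subgoal format above can be sketched as follows. This is a minimal illustration only: the function and variable names are ours, and the validity check shown here simply enforces membership in the environment's action and tool spaces (which of the valid pairings the agent must discover is exactly what XENON learns).

```python
# Sketch of subgoal construction for TextWorld Cooking (illustrative;
# function names are ours, not the environment's API).
EXECUTABLE_ACTIONS = {"chop", "close", "cook", "dice", "drop",
                      "eat", "examine", "slice", "prepare"}
TOOLS = {"knife", "oven", "stove", "fridge", "table", "counter"}

def make_subgoal(cookbook_action: str, ingredient: str,
                 predicted_action: str, predicted_tool: str):
    """Build an (executable_action, ingredient, tool) subgoal, checking
    that the LLM's predictions lie in the environment's spaces."""
    if predicted_action not in EXECUTABLE_ACTIONS:
        raise ValueError(f"{predicted_action!r} is not an executable action")
    if predicted_tool not in TOOLS:
        raise ValueError(f"{predicted_tool!r} is not a known tool")
    return (predicted_action, ingredient, predicted_tool)

# The cookbook pair ('fry', 'pepper') maps to the subgoal below.
subgoal = make_subgoal("fry", "pepper", "cook", "stove")
```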
Baselines and Evaluation
All agents use an LLM (Qwen2.5-VL-7B) to make subgoals. The tool for each cookbook action is predicted by the LLM from the available tools before an episode begins. At each timestep during the episode, given a cookbook action, the LLM predicts an executable action from the executable action space, constructing a subgoal from this predicted executable action, the input ingredient, and the predicted tool.
To isolate the challenge of planning knowledge correction, we assume a competent controller gathers all ingredients and tools; thus, an agent starts each episode with all necessary ingredients and tools. An episode (max 50 timesteps) is successful if the agent completes the plan.
A.2 Results
Table 6: Success rates in the TextWorld Cooking environment, comparing XENON against the SC (LLM self-correction) and DECKARD baselines from Section 5.1. We report the mean $\pm$ standard deviation over 3 independent runs, where each run consists of 100 episodes.
| | DECKARD | SC | XENON |
| --- | --- | --- | --- |
| Success Rate | $0.09 \pm 0.02$ | $0.75 \pm 0.04$ | $1.00 \pm 0.00$ |
Table 6 shows that XENON achieves a perfect success rate ( $1.00 \pm 0.00$ ), significantly outperforming both SC ( $0.75 \pm 0.04$ ) and DECKARD ( $0.09 \pm 0.02$ ). These results demonstrate that XENON's core mechanisms (ADG and FAM) generalize, effectively correcting flawed planning knowledge in a domain that requires the agent to discover valid symbolic actions and preconditions. Notably, the SC baseline fails to achieve high performance even in TextWorld Cooking, which is simpler than Minecraft. This reinforces our claim that relying on LLM self-correction is less reliable than XENON's experience-based algorithmic knowledge correction.
Appendix B Prompts and qualitative results of LLM self-correction
B.1 Dependency correction
Figure 9 shows the prompt used for dependency correction.
You are a professional game analyst. For a given <item_name>, you need to make <required_items> to get the item.
If you make <required_items> well, I will give you 1$.

I will give you recent transitions.
% Recent failed trajectories are given
[Failed example]
<item_name>: {item_name}
<hypothesized_required_items>: {original_prediction}
<inventory>: {inventory}
<plan>: {failed_subgoal}
<success>: false

I will give you learned items similar to <item_name>, and their validated required items, just for reference.
% K similar experienced items and their requirements are given
[Success Example]
<item_name>: {experienced_item}
<required_items>: {experienced_requirements}

% Make a new predicted requirement set
[Your turn]
Here is <item_name>, you MUST output <required_items> to obtain the item in JSON format. Remember <required_items> MUST be in JSON format.

<item_name>: {item_name}
<required_items>:
Figure 9: Prompt used for LLM self-correction about dependencies.
We provide examples of actual prompts and LLM outputs in Figures 10 and 11.
You are a professional game analyst. For a given <item_name>, you need to make <required_items> to get the item.
If you make <required_items> well, I will give you 1$.

I will give you recent transitions.

[Failed example]
<item_name>: iron_nugget
<hypothesized_required_items>: {'iron_ore': 1, 'crafting_table': 1}
<inventory>: {'crafting_table': 1, 'wooden_sword': 1, 'wooden_pickaxe': 1, 'torch': 4, 'furnace': 1, 'stone_pickaxe': 1, 'iron_axe': 1, 'iron_shovel': 1, 'stick': 2, 'iron_pickaxe': 1, 'diamond': 3, 'iron_ingot': 2, 'iron_ore': 2, 'gold_ore': 1, 'coal': 1}
<plan>: dig down and mine iron_nugget
<success>: false

I will give you learned items similar to <item_name>, and their validated required items, just for reference.
[Success Example]
<item_name>: iron_ingot
<required_items>: {'recipe': {'furnace': 1, 'iron_ore': 1, 'coals': 1}}
[Success Example]
<item_name>: iron_pickaxe
<required_items>: {'recipe': {'stick': 2, 'iron_ingot': 3, 'crafting_table': 1}}
[Success Example]
<item_name>: iron_shovel
<required_items>: {'recipe': {'stick': 2, 'iron_ingot': 1, 'crafting_table': 1}}

[Your turn]
Here is <item_name>, you MUST output <required_items> to obtain the item in JSON format. Remember <required_items> MUST be in JSON format.

<item_name>: iron_nugget
<required_items>:
% LLM output: {'recipe': {'iron_ore': 1, 'crafting_table': 1}}
Figure 10: Example of dependency self-correction for iron_nugget.
You are a professional game analyst. For a given <item_name>, you need to make <required_items> to get the item.
If you make <required_items> well, I will give you 1$.

I will give you recent transitions.

[Failed example]
<item_name>: charcoal
<hypothesized_required_items>: {'oak_log': 8}
<inventory>: {'dirt': 1, 'oak_log': 2, 'crafting_table': 1, 'wooden_hoe': 1, 'wooden_pickaxe': 1, 'torch': 4, 'stone_axe': 1, 'furnace': 1, 'stone_pickaxe': 1, 'stick': 2, 'iron_pickaxe': 1, 'diamond': 1, 'iron_ingot': 3, 'iron_ore': 2, 'coal': 2}
<action>: craft charcoal
<success>: false

I will give you learned items similar to <item_name>, and their validated required items, just for reference.
[Success Example]
<item_name>: coals
<required_items>: {'recipe': {'wooden_pickaxe': 1}}
[Success Example]
<item_name>: furnace
<required_items>: {'recipe': {'cobblestone': 8, 'crafting_table': 1}}
[Success Example]
<item_name>: diamond
<required_items>: {'recipe': {'iron_pickaxe': 1}}

[Your turn]
Here is <item_name>, you MUST output <required_items> to achieve charcoal in JSON format. Remember <required_items> MUST be in JSON format.

<item_name>: charcoal
<required_items>:
% LLM output: {'recipe': {'oak_log': 8}}
Figure 11: Example of dependency self-correction for charcoal.
B.2 Action correction
Figure 12 shows the prompt used for self-reflection on failed actions.
% LLM self-reflection to analyze failure reasons
You are a professional game analyst.
For a given <item_name> and <inventory>, you need to analyze why <plan> failed to get the item.
I will give you examples of analysis as follows.

[Example]
<item_name>: wooden_pickaxe
<inventory>: {'stick': 4, 'planks': 4, 'crafting_table': 1}
<plan>: smelt wooden_pickaxe
<failure_analysis>
{"analysis": "You failed because you cannot smelt a wooden_pickaxe. You should craft it instead."}

[Example]
<item_name>: stone_pickaxe
<inventory>: {'stick': 4, 'planks': 4, 'crafting_table': 1}
<plan>: craft stone_pickaxe
<failure_analysis>
{"analysis": "You failed because you do not have enough cobblestones."}

[Your turn]
Here is <item_name>, <inventory> and <plan>, you MUST output <failure_analysis> concisely in JSON format.

<item_name>: {item_name}
<inventory>: {inventory}
<plan>: {plan}
<failure_analysis>

% Then, using the self-reflection results, the LLM self-corrects its actions.
For an item name, you need to make a plan, by selecting one among provided options.
I will give you examples of which plans are needed to achieve an item, just for reference.
[Example]
<item name>
{similar_item}
<task planning>
{successful_plan}

Here are some analyses on previous failed plans for this item.
[Analysis]
{'item_name': {item}, 'inventory': {inventory}, 'plan': '{plan}', 'failure_analysis': '{self-reflection}'}

[Your turn]
Here is <item name>, you MUST select one from below <options>, to make <task planning>.
You MUST select one from below <options>. DO NOT MAKE A PLAN NOT IN <options>.

<options>:
1: {"task": "dig down and mine {item}", "goal": [{item}, {quantity}]}
2: {"task": "craft {item}", "goal": [{item}, {quantity}]}
3: {"task": "smelt {item}", "goal": [{item}, {quantity}]}

<item name>
{item}
<task planning>
Figure 12: Prompts used for LLM self-correction about actions.
We provide examples of actual prompts and LLM outputs in Figures 13 and 14.
For an item name, you need to make a plan, by selecting one among provided options.
I will give you examples of which plans are needed to achieve an item, just for reference.

[Example]
<item name>
iron_ingot
<task planning>
{"task": "smelt iron_ingot", "goal": ["iron_ingot", 1]}

[Example]
<item name>
iron_pickaxe
<task planning>
{"task": "craft iron_pickaxe", "goal": ["iron_pickaxe", 1]}

[Example]
<item name>
iron_shovel
<task planning>
{"task": "craft iron_shovel", "goal": ["iron_shovel", 1]}

Here are some analyses on previous failed plans for this item.
[Analysis]
{'item_name': 'iron_nugget',
'inventory': {'crafting_table': 1, 'wooden_sword': 1, 'wooden_pickaxe': 1, 'torch': 4, 'furnace': 1, 'stone_pickaxe': 1, 'iron_axe': 1, 'iron_shovel': 1, 'stick': 2, 'iron_pickaxe': 1, 'diamond': 3, 'iron_ingot': 2, 'iron_ore': 2, 'gold_ore': 1, 'coal': 1},
'plan': 'dig down and mine iron_nugget',
'failure_analysis': 'You failed because you do not have any iron ore or diamond ore to mine for iron nuggets.'}

[Your turn]
Here is <item name>, you MUST select one from below <options>, to make <task planning>.
You MUST select one from below <options>. DO NOT MAKE A PLAN NOT IN <options>.

<options>
1. {"task": "dig down and mine iron_nugget", "goal": ["iron_nugget", 1]}
2. {"task": "craft iron_nugget", "goal": ["iron_nugget", 1]}
3. {"task": "smelt iron_nugget", "goal": ["iron_nugget", 1]}

<item name>
iron_nugget
% LLM output: '{"task": "dig down and mine iron_nugget", "goal": ["iron_nugget", 1]}'
Figure 13: Example of action self-correction for iron_nugget.
For an item name, you need to make a plan, by selecting one among provided options.
I will give you examples of which plans are needed to achieve an item, just for reference.

[Example]
<item name>
coals
<task planning>
{"task": "dig down and mine coals", "goal": ["coals", 1]}

[Example]
<item name>
furnace
<task planning>
{"task": "craft furnace", "goal": ["furnace", 1]}

[Example]
<item name>
diamond
<task planning>
{"task": "dig down and mine diamond", "goal": ["diamond", 1]}

Here are some analyses on previous failed plans for this item.
[Analysis]
{'item_name': 'charcoal',
'inventory': {'dirt': 1, 'oak_log': 2, 'crafting_table': 1, 'wooden_hoe': 1, 'wooden_pickaxe': 1, 'torch': 4, 'stone_axe': 1, 'furnace': 1, 'stone_pickaxe': 1, 'stick': 2, 'iron_pickaxe': 1, 'diamond': 1, 'iron_ingot': 3, 'iron_ore': 2, 'coal': 2},
'plan': 'mine iron_nugget',
'failure_analysis': 'You failed because you already have enough charcoal.'}

[Your turn]
Here is <item name>, you MUST select one from below <options>, to make <task planning>.
You MUST select one from below <options>. DO NOT MAKE A PLAN NOT IN <options>.

<options>
1. {"task": "mine iron_nugget", "goal": ["charcoal", 1]}
2. {"task": "craft charcoal", "goal": ["charcoal", 1]}
3. {"task": "smelt charcoal", "goal": ["charcoal", 1]}

<item name>
charcoal
<task planning>
% LLM output: '{"task": "craft charcoal", "goal": ["charcoal", 1]}'
Figure 14: Example of action self-correction for charcoal.
Appendix C Experienced requirement set and dependency graph update
We note that the assumptions explained in this section largely follow the implementation of DECKARD (Nottingham et al., 2023): https://github.com/DeckardAgent/deckard.
Determining experienced requirement set
When the agent obtains item $v$ while executing a subgoal $(a,q,u)$ , it determines the experienced requirement set $\mathcal{R}_{exp}(v)$ differently depending on whether the high-level action $a$ is 'mine' or falls under 'craft' or 'smelt'. If $a$ is 'mine', the agent determines $\mathcal{R}_{exp}(v)$ based on the pickaxe in its inventory: if no pickaxe is held, $\mathcal{R}_{exp}(v)$ is $\emptyset$ ; otherwise, $\mathcal{R}_{exp}(v)$ becomes $\{(\text{the highest-tier pickaxe the agent has},1)\}$ , where the highest-tier pickaxe is determined following the hierarchy 'wooden_pickaxe', 'stone_pickaxe', 'iron_pickaxe', 'diamond_pickaxe'. If $a$ is 'craft' or 'smelt', the agent determines the used items and their quantities as $\mathcal{R}_{exp}(v)$ by observing inventory changes when crafting or smelting $v$ .
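The two cases above can be sketched as follows. This is a minimal illustration of the rules in this paragraph under our own naming (the function name and dictionary representation of $\mathcal{R}_{exp}(v)$ are assumptions, not the paper's code):

```python
# Sketch of determining the experienced requirement set R_exp(v).
PICKAXE_TIERS = ["wooden_pickaxe", "stone_pickaxe",
                 "iron_pickaxe", "diamond_pickaxe"]

def experienced_requirements(action, inventory_before, inventory_after):
    if action == "mine":
        # The highest-tier pickaxe held, if any, is the sole requirement.
        held = [p for p in PICKAXE_TIERS if inventory_before.get(p, 0) > 0]
        return {held[-1]: 1} if held else {}
    # For 'craft'/'smelt', requirements are the consumed items,
    # read off from the inventory difference.
    return {item: qty - inventory_after.get(item, 0)
            for item, qty in inventory_before.items()
            if qty > inventory_after.get(item, 0)}
```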
Dependency graph update
When the agent obtains an item $v$ and its $\mathcal{R}_{exp}(v)$ for the first time, it updates its dependency graph $\hat{\mathcal{G}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})$ . Since $\mathcal{R}_{exp}(v)$ only contains items acquired before $v$ , no cycles can be introduced to ADG during learning. The update proceeds as follows: the agent adds $v$ to the set of known items $\hat{\mathcal{V}}$ , then updates the edge set $\hat{\mathcal{E}}$ by replacing $v$ 's incoming edges with $\mathcal{R}_{exp}(v)$ : it removes all of $v$ 's incoming edges $(u,\cdot,v)\in\hat{\mathcal{E}}$ and adds a new edge $(u_{i},q_{i},v)$ to $\hat{\mathcal{E}}$ for every $(u_{i},q_{i})\in\mathcal{R}_{exp}(v)$ .
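This edge-replacement update can be sketched as follows. A minimal illustration only: edges are represented as $(u,q,v)$ triples over plain Python sets, and the function name is ours.

```python
# Sketch of the dependency-graph update: replace v's incoming edges
# with the experienced requirement set (edge = (u, q, v)).
def update_graph(nodes, edges, v, r_exp):
    nodes.add(v)
    # Drop all current incoming edges of v ...
    edges = {(u, q, w) for (u, q, w) in edges if w != v}
    # ... and add one edge per experienced requirement (u, q).
    edges |= {(u, q, v) for (u, q) in r_exp}
    return nodes, edges
```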
Appendix D Full procedure of XENON
input: invalid action threshold $x_{0}$ , inadmissible item threshold $c_{0}$ , less-explored item scale $\alpha_{s}$ , inadmissible item scale $\alpha_{i}$

1. Initialize dependency graph $\hat{\mathcal{G}}\leftarrow(\hat{\mathcal{V}},\hat{\mathcal{E}})$ , revision counts $C[v]\leftarrow 1$ for all $v\in\hat{\mathcal{V}}$
2. Initialize memory $S(a,v)\leftarrow 0$ , $F(a,v)\leftarrow 0$ for all $v\in\hat{\mathcal{V}}$ , $a\in\mathcal{A}$
3. while learning do
4.   Get an empty inventory $inv$
5.   $v_{g}\leftarrow\texttt{SelectGoalWithDifficulty}(\hat{\mathcal{G}},C[\cdot])$ // DEX, Appendix G
6.   while within the episode horizon $H_{episode}$ do
7.     if $v_{g}\in inv$ then
8.       $v_{g}\leftarrow\texttt{SelectGoalWithDifficulty}(\hat{\mathcal{G}},C[\cdot])$
9.     Compute the series of aggregated requirements $((q_{l},u_{l}))_{l=1}^{L_{v_{g}}}$ using $\hat{\mathcal{G}}$ and $inv$ // from Section 3
10.    Plan $P\leftarrow((a_{l},q_{l},u_{l}))_{l=1}^{L_{v_{g}}}$ by selecting $a_{l}$ for each $u_{l}$ , using the LLM, $S$ , $F$ , $x_{0}$
11.    foreach subgoal $(a,q,u)\in P$ do
12.      Execute $(a,q,u)$ , then get the execution result $success$
13.      Get an updated inventory $inv$ and dependency graph $\hat{\mathcal{G}}$ // from Section 3
14.      if $success$ then $S(a,u)\leftarrow S(a,u)+1$
15.      else $F(a,u)\leftarrow F(a,u)+1$
16.      if not $success$ then
17.        if all actions are invalid then
18.          $\hat{\mathcal{G}},C\leftarrow\texttt{RevisionByAnalogy}(\hat{\mathcal{G}},u,C[\cdot],c_{0},\alpha_{s},\alpha_{i})$ // ADG, Section 4.1
19.          Reset memory $S(\cdot,u)\leftarrow 0$ , $F(\cdot,u)\leftarrow 0$
20.          $v_{g}\leftarrow\texttt{SelectGoalWithDifficulty}(\hat{\mathcal{G}},C[\cdot])$
21.        break

Algorithm 1: Pseudocode of XENON
The full procedure of XENON is outlined in Algorithm 1.
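The memory bookkeeping that Algorithm 1 performs around the counters $S$ and $F$ can be sketched as follows. This is a minimal illustration, not the actual implementation: the rule that an action is deemed invalid after $x_{0}$ failures with no success is our reading of the threshold, and the class and method names are ours.

```python
from collections import defaultdict

# Sketch of the success/failure memory used in Algorithm 1
# (illustrative; names and the invalidity rule are our assumptions).
class ActionMemory:
    def __init__(self, x0: int):
        self.x0 = x0
        self.S = defaultdict(int)  # success counts per (action, item)
        self.F = defaultdict(int)  # failure counts per (action, item)

    def record(self, action, item, success):
        (self.S if success else self.F)[(action, item)] += 1

    def invalid(self, action, item):
        # An action is treated as invalid for an item after x0
        # failures with no recorded success.
        key = (action, item)
        return self.S[key] == 0 and self.F[key] >= self.x0

    def all_invalid(self, actions, item):
        # Triggers RevisionByAnalogy in Algorithm 1 when true.
        return all(self.invalid(a, item) for a in actions)

    def reset(self, item):
        # Called after ADG revises the item's dependencies.
        for key in list(self.S) + list(self.F):
            if key[1] == item:
                self.S.pop(key, None)
                self.F.pop(key, None)
```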
Appendix E Details in Adaptive Dependency Graph (ADG)
E.1 Rationale for initial knowledge
In real-world applications, a human user may wish for an autonomous agent to accomplish certain goals, yet the user themselves may have limited or no knowledge of how to achieve them within a complex environment. We model this scenario by having a user specify goal items without providing the detailed requirements, and then the agent should autonomously learn how to obtain these goal items. The set of 67 goal item names ( $\mathcal{V}_{0}$ ) provided to the agent represents such user-specified goal items, defining the learning objectives.
To bootstrap learning in complex environments, LLM-based planning literature often utilizes minimal human-written plans for initial knowledge (Zhao et al., 2024; Chen et al., 2024). In our case, we provide the agent with 3 human-written plans (shown in Appendix I). By executing these plans, our agent can experience items and their dependencies, thereby bootstrapping the dependency learning process.
E.2 Details in dependency graph initialization
Keeping ADG acyclic during initialization
During initialization, XENON algorithmically prevents cycles and maintains the ADG as a directed acyclic graph: whenever adding an LLM-predicted requirement set for an item would create a cycle, that set is discarded and the item is assigned an empty requirement set instead. Specifically, we identify and prevent cycles in three steps when adding LLM-predicted incoming edges for an item $v$ . First, we tentatively insert the LLM-predicted incoming edges of $v$ into the current ADG. Second, we detect cycles by checking whether any of $v$ 's parents now appears among $v$ 's descendants in the updated graph. Third, if a cycle is detected, we discard the LLM-predicted incoming edges for $v$ and instead assign an empty set of incoming edges to $v$ in the ADG.
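As a minimal sketch of this three-step check, assuming the ADG is stored as a mapping from each item to its set of parent (required) items, with edge quantities omitted; the function names are illustrative, not the paper's implementation:

```python
def would_create_cycle(adg, item, predicted_parents):
    """Check whether adding edges parent -> item would create a cycle.

    adg: dict mapping item -> set of parent items (incoming edges).
    Illustrative helper; the actual ADG also carries quantities on edges.
    """
    # Step 1: tentatively insert the LLM-predicted incoming edges of `item`.
    trial = {k: set(v) for k, v in adg.items()}
    trial.setdefault(item, set()).update(predicted_parents)

    # Step 2: collect all descendants of `item` (nodes that transitively require it).
    descendants, stack = set(), [item]
    while stack:
        node = stack.pop()
        for child, parents in trial.items():
            if node in parents and child not in descendants:
                descendants.add(child)
                stack.append(child)

    # A cycle exists iff some predicted parent is also a descendant of `item`.
    return any(p in descendants for p in predicted_parents)


def add_predicted_requirements(adg, item, predicted_parents):
    # Step 3: on a cycle, discard the prediction and assign an empty set.
    if would_create_cycle(adg, item, predicted_parents):
        adg[item] = set()
    else:
        adg.setdefault(item, set()).update(predicted_parents)
    return adg
```

For example, if `planks` requires `log` and `stick` requires `planks`, predicting that `log` requires `stick` would close a cycle, so `log` receives an empty requirement set.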
Pseudocode is shown in Algorithm 2. The prompt is shown in Figure 15.
1
input : Goal items $\mathcal{V}_{0}$ , (optional) human written plans $\mathcal{P}_{0}$
output : Initialized dependency graph $\hat{\mathcal{G}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})$ , experienced items $\mathcal{V}$
2
3 Initialize a set of known items $\hat{\mathcal{V}}\leftarrow\mathcal{V}_{0}$ , edge set $\hat{\mathcal{E}}\leftarrow\emptyset$
4 Initialize a set of experienced items $\mathcal{V}\leftarrow\emptyset$
5
6 foreach plan in $\mathcal{P}_{0}$ do
7 Execute the plan and get experienced items and their experienced requirement sets $\bigl\{(v_{n},\mathcal{R}_{exp}(v_{n}))\bigr\}_{n=1}^{N}$
8 foreach $(v,\mathcal{R}_{exp}(v))\in\bigl\{(v_{n},\mathcal{R}_{exp}(v_{n}))\bigr\}_{n=1}^{N}$ do
9 if $v\notin\mathcal{V}$ then
/* graph update from Appendix C */
10 $\mathcal{V}\leftarrow\mathcal{V}\cup\{v\}$ , $\hat{\mathcal{V}}\leftarrow\hat{\mathcal{V}}\cup\{v\}$
11 Add edges to $\hat{\mathcal{E}}$ according to $\mathcal{R}_{exp}(v)$
12
13
/* Graph construction using LLM predictions */
14 while $\exists v\in\hat{\mathcal{V}}\setminus\mathcal{V}$ whose requirement set $\mathcal{R}(v)$ has not yet been predicted by the LLM do
15 Select such an item $v\in\hat{\mathcal{V}}\setminus\mathcal{V}$ (i.e., $\mathcal{R}(v)$ has not yet been predicted)
16 Select $\mathcal{V}_{K}\subseteq\mathcal{V}$ based on Top-K semantic similarity to $v$ , $|\mathcal{V}_{K}|=K$
17 Predict $\mathcal{R}(v)\leftarrow LLM(v,\{\big(u,\mathcal{R}(u,\hat{\mathcal{G}})\big)\}_{u\in\mathcal{V}_{K}})$
18
19 foreach $(u_{j},q_{j})\in\mathcal{R}(v)$ do
20 $\hat{\mathcal{E}}\leftarrow\hat{\mathcal{E}}\cup\{(u_{j},q_{j},v)\}$
21 if $u_{j}\notin\hat{\mathcal{V}}$ then
22 $\hat{\mathcal{V}}\leftarrow\hat{\mathcal{V}}\cup\{u_{j}\}$
23
24
25
Algorithm 2 GraphInitialization
1 You are a professional game analyst. For a given <item_name>, you need to make <required_items> to get the item.
2 If you make <required_items> well, I will give you 1$.
3
4 I will give you some examples <item_name> and <required_items>.
5
6 [Example] % TopK similar experienced items are given as examples
7 <item_name>: {experienced_item}
8 <required_items>: {experienced_requirement_set}
9
10 [Your turn]
11 Here is a item name, you MUST output <required_items> in JSON format. Remember <required_items> MUST be in JSON format.
12
13 <item_name>: {item_name}
14 <required_items>:
Figure 15: Prompt for requirement set prediction for dependency graph initialization
E.3 Pseudocode of RevisionByAnalogy
Pseudocode is shown in Algorithm 3.
1
input : Dependency graph $\hat{\mathcal{G}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})$ , an item to revise $v$ , exploration counts $C[·]$ , inadmissible item threshold $c_{0}$ , less-explored item scale $\alpha_{s}$ , inadmissible item scale $\alpha_{i}$
output : Revised dependency graph $\hat{\mathcal{G}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})$ , exploration counts $C[·]$
2
3 Consider cases based on $C[v]$ :
4 if $C[v]>c_{0}$ then
/* $v$ is inadmissible */
5
/* resource set: items previously consumed for crafting other items */
6 $\mathcal{R}(v)\leftarrow\{(u,\alpha_{i})\mid u\in\text{``resource'' set}\}$
/* Remove all incoming edges to $v$ in $\hat{\mathcal{E}}$ and add new edges */
7 $\hat{\mathcal{E}}\leftarrow\hat{\mathcal{E}}\setminus\{(x,q,v)\mid(x,q,v)\in\hat{\mathcal{E}}\}$
8 foreach $(u,\alpha_{i})\in\mathcal{R}(v)$ do
9 $\hat{\mathcal{E}}\leftarrow\hat{\mathcal{E}}\cup\{(u,\alpha_{i},v)\}$
10
11
/* Revise requirement sets of descendants of $v$ */
12 Find the set of all descendants of $v$ in $\hat{\mathcal{G}}$ (excluding $v$ ): $\mathcal{W}\leftarrow\text{FindAllDescendants}(v,\hat{\mathcal{G}})$
13
14 for each item $w$ in $\mathcal{W}$ do
15 Invoke RevisionByAnalogy for $w$
16
17
18 else
/* $v$ is less explored yet. Revise based on analogy */
19 Find similar successfully obtained items $\mathcal{V}_{K}\subseteq\hat{\mathcal{V}}$ based on Top-K semantic similarity to $v$
20
Candidate items $U_{cand}\leftarrow\{u\mid\exists w\in\mathcal{V}_{K},(u,·,w)\in\hat{\mathcal{E}}\}$ /* all items required to obtain similar successfully obtained items $\mathcal{V}_{K}$ */
21
22 Start to construct a requirement set, $\mathcal{R}(v)\leftarrow\emptyset$
23 for each item $u$ in $U_{cand}$ do
24 if $u$ is in "resource" set then
25 Add $(u,\alpha_{s}\times C[v])$ to $\mathcal{R}(v)$
26
27 else
28 Add $(u,1)$ to $\mathcal{R}(v)$
29
30
31 Update $\hat{\mathcal{G}}$ : Remove all incoming edges to $v$ in $\hat{\mathcal{E}}$ , and add new edges $(u,q,v)$ to $\hat{\mathcal{E}}$ for each $(u,q)\in\mathcal{R}(v)$
32
33
Algorithm 3 RevisionByAnalogy
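The less-explored branch of RevisionByAnalogy (lines 19-31 of Algorithm 3) can be sketched as follows; the dictionary-based graph representation, function name, and arguments are illustrative simplifications, not the paper's implementation:

```python
def revise_by_analogy(graph, v, count, similar_items, resource_set, alpha_s=2):
    """Sketch of the less-explored branch of RevisionByAnalogy.

    graph: dict item -> dict {required_item: quantity} (incoming edges).
    similar_items: top-K successfully obtained items similar to v (via S-BERT).
    resource_set: items previously consumed when crafting other items.
    alpha_s scales quantities with the revision count C[v] (Table 7: alpha_s = 2).
    """
    # Candidates: every item required to obtain any similar item.
    candidates = {u for w in similar_items for u in graph.get(w, {})}
    # Resources get a count-scaled quantity; non-resources (e.g., tools) get 1.
    new_reqs = {u: (alpha_s * count if u in resource_set else 1)
                for u in candidates}
    graph[v] = new_reqs  # replace all incoming edges of v
    return graph
```

Because the quantity grows with the revision count, repeated failures on the same item gradually increase the amount of each resource gathered before the next attempt.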
Appendix F Step-by-step planning using FAM
Given a sequence of aggregated requirements $((q_{l},v_{l}))_{l=1}^{L}$ , XENON employs a step-by-step planning approach, iteratively selecting a high-level action $a_{l}$ for each requirement item $v_{l}$ to form a subgoal $(a_{l},q_{l},v_{l})$ . This process considers past attempts to obtain $v_{l}$ using specific actions. Specifically, for a given item $v_{l}$ , if FAM contains an empirically valid action, XENON reuses it without prompting the LLM. Otherwise, XENON prompts the LLM to select an action, leveraging (i) valid actions for items semantically similar to $v_{l}$ and (ii) empirically invalid actions for $v_{l}$ .
The pseudocode for this action selection process is detailed in Algorithm 4. The prompt is shown in Figure 16.
1
Input : An item $v$ , Action set $\mathcal{A}$ , Success/Failure counts from FAM $S(·,·)$ and $F(·,·)$ , Invalid action threshold $x_{0}$
Output : Selected action $a_{selected}$
2
/* 1. Classify actions based on FAM history (S and F counts) */
3 $\mathcal{A}^{valid}_{v}\leftarrow\{a\in\mathcal{A}\mid S(a,v)>0\land S(a,v)>F(a,v)-x_{0}\}$
4 $\mathcal{A}^{invalid}_{v}\leftarrow\{a\in\mathcal{A}\mid F(a,v)\geq S(a,v)+x_{0}\}$
5
6 if $\mathcal{A}^{valid}_{v}\neq\emptyset$ then
/* Reuse the empirically valid action if it exists */
7 Select $a_{selected}$ from $\mathcal{A}^{valid}_{v}$
8 return $a_{selected}$
9
10 else
/* Otherwise, query LLM with similar examples and filtered candidates */
11
/* (i) Retrieve valid actions from other items for examples */
12 $\mathcal{V}_{source}\leftarrow\{u\in\hat{\mathcal{V}}\setminus\{v\}\mid\exists a^{\prime},S(a^{\prime},u)>0\land S(a^{\prime},u)>F(a^{\prime},u)-x_{0}\}$
13 Identify $\mathcal{V}_{topK}\subseteq\mathcal{V}_{source}$ as the $K$ items most similar to $v$ (using S-BERT)
14 $\mathcal{D}_{examples}\leftarrow\{(u,a_{valid})\mid u\in\mathcal{V}_{topK},a_{valid}\in\mathcal{A}^{valid}_{u}\}$
15
/* (ii) Prune invalid actions to form candidates */
16 $\mathcal{A}^{cand}_{v}\leftarrow\mathcal{A}\setminus\mathcal{A}^{invalid}_{v}$
17
18 if $\mathcal{A}^{cand}_{v}=\emptyset$ then
19 $\mathcal{A}^{cand}_{v}\leftarrow\mathcal{A}$
20
21 $a_{selected}\leftarrow\text{LLM}(v,\mathcal{D}_{examples},\mathcal{A}^{cand}_{v})$
22 return $a_{selected}$
23
Algorithm 4 Step-by-step Planning with FAM
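As a hedged sketch of Algorithm 4's core rule, assuming FAM counts are stored as plain dictionaries and with `query_llm` as a stand-in for the prompted selection of Figure 16 (names are illustrative):

```python
def classify_actions(actions, S, F, x0=2):
    """Split one item's actions into empirically valid / invalid sets.

    S[a], F[a]: success / failure counts of action a for this item (FAM).
    x0: failure threshold (Table 7 uses x0 = 2). Implements
    A_valid = {a | S > 0 and S > F - x0}, A_invalid = {a | F >= S + x0}.
    """
    valid = {a for a in actions
             if S.get(a, 0) > 0 and S.get(a, 0) > F.get(a, 0) - x0}
    invalid = {a for a in actions if F.get(a, 0) >= S.get(a, 0) + x0}
    return valid, invalid


def select_action(actions, S, F, query_llm, x0=2):
    """Reuse an empirically valid action; otherwise ask the LLM over pruned candidates."""
    valid, invalid = classify_actions(actions, S, F, x0)
    if valid:
        return next(iter(valid))         # reuse without prompting the LLM
    candidates = set(actions) - invalid  # prune empirically invalid actions
    if not candidates:                   # fall back to the full action set
        candidates = set(actions)
    return query_llm(candidates)         # LLM picks among remaining candidates
```

Note that an action with no recorded attempts is neither valid nor invalid, so it stays among the LLM's candidates.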
1 For an item name, you need to make a plan, by selecting one among provided options.
2 I will give you examples of which plans are needed to achieve an item, just for reference.
3
4 % Similar items and their successful plans are given
5 [Example]
6 <item name>
7 {similar_item}
8 <task planning>
9 {successful_plan}
10
11 [Your turn]
12 Here is <item name>, you MUST select one from below <options>, to make <task planning>.
13 you MUST select one from below <options>. DO NOT MAKE A PLAN NOT IN <options>.
14
15 % Three actions are given, excluding any that were empirically invalid
16 <options>:
17 1: {"task": "dig down and mine {item}", "goal": [{item}, {quantity}]}
18 2: {"task": "craft {item}", "goal": [{item}, {quantity}]}
19 3: {"task": "smelt {item}", "goal": [{item}, {quantity}]}
20
21 <item name>
22 {item}
23 <task planning>
Figure 16: Prompt for action selection
Appendix G Difficulty-based Exploration (DEX)
For autonomous dependency learning, we introduce DEX. DEX strategically selects items that (1) appear easier to obtain, prioritizing those (2) under-explored, for diversity, and (3) having fewer immediate prerequisite items according to the learned graph $\hat{\mathcal{G}}$ (line 5 in Algorithm 1). First, DEX defines the frontier $\mathcal{F}$ as the set of previously unobtained items whose required items are all obtained according to the learned dependency $\hat{\mathcal{G}}$ . Next, the least-explored frontier set $\mathcal{F}_{min}\coloneqq\{f\in\mathcal{F}\mid C(f)=\min_{f^{\prime}\in\mathcal{F}}C(f^{\prime})\}$ is identified, based on revision counts $C(·)$ . For items $f^{\prime}\in\mathcal{F}_{min}$ , difficulty $D(f^{\prime})$ is estimated as $L_{f^{\prime}}$ , the number of distinct required items needed to obtain $f^{\prime}$ according to $\hat{\mathcal{G}}$ . The intrinsic goal $g$ is then selected as the item in $\mathcal{F}_{min}$ with the minimum estimated difficulty: $g=\arg\min_{f^{\prime}\in\mathcal{F}_{min}}D(f^{\prime})$ . Ties are broken uniformly at random.
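A minimal sketch of this selection, assuming the graph is stored as a dictionary of immediate requirements and difficulty is estimated from the immediate prerequisites only (the full aggregation over $\hat{\mathcal{G}}$ is simplified here):

```python
import random


def select_goal_dex(graph, obtained, counts):
    """DEX goal-selection sketch (illustrative data layout).

    graph: dict item -> set of required items (learned dependency graph).
    obtained: set of items already obtained; counts: revision counts C[.].
    Returns an unobtained frontier item with minimal revision count, then
    minimal number of distinct required items; ties broken at random.
    """
    # Frontier: unobtained items whose required items are all obtained.
    frontier = [v for v, reqs in graph.items()
                if v not in obtained and reqs <= obtained]
    if not frontier:
        return None
    # Least-explored frontier set F_min by revision count C.
    c_min = min(counts.get(f, 0) for f in frontier)
    f_min = [f for f in frontier if counts.get(f, 0) == c_min]
    # Difficulty D(f): number of distinct required items.
    d_min = min(len(graph[f]) for f in f_min)
    return random.choice([f for f in f_min if len(graph[f]) == d_min])
```

Because selection is restricted to the frontier, already-obtained items are never chosen as intrinsic goals.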
While our frontier concept is motivated by DECKARD (Nottingham et al., 2023), DEX's selection process differs significantly. DECKARD selects randomly from $\{v\in\mathcal{F}\mid C(v)\leq c_{0}\}$ , but if this set is empty, it selects randomly from the union of the frontier set and the previously obtained item set. This risks inefficient attempts on already obtained items. In contrast, DEX exclusively selects goals from $\mathcal{F}_{\text{min}}$ , inherently avoiding obtained items. This efficiently guides exploration towards achievable, novel dependencies.
Appendix H Context-aware Reprompting (CRe)
Minecraft, a real-world-like environment, can lead to situations where the controller stalls (e.g., when stuck in deep water or a cave). To assist the controller, the agent provides temporary prompts to guide it (e.g., "get out of the water and find trees"). XENON proposes a context-aware reprompting scheme, inspired by Optimus-1 (Li et al., 2024b) but with two key differences:
1. Two-stage reasoning. When invoked, the LLM in Optimus-1 simultaneously interprets image observations, decides whether to reprompt, and generates new prompts. XENON decomposes this process into two distinct steps:
(a) the LLM generates a caption for the current image observation, and
(b) using text-only input (the generated caption and the current subgoal prompt), the LLM determines if reprompting is necessary and, if so, produces a temporary prompt.
2. Trigger. Unlike Optimus-1, which invokes the LLM at fixed intervals, XENON calls the LLM only if the current subgoal item has not been obtained within that interval. This approach avoids unnecessary or spurious interventions from a smaller LLM.
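The trigger and two-stage reasoning can be sketched as follows, with `caption_llm` and `reason_llm` as hypothetical stand-ins for the two LLM calls (all names and the step-counting mechanism are illustrative):

```python
def maybe_reprompt(subgoal_item, inventory, steps_since_gain, interval,
                   caption_llm, reason_llm, image, task_prompt):
    """Context-aware reprompting sketch.

    The LLM is invoked only if the current subgoal item has not been obtained
    within `interval` controller steps; reasoning then runs in two stages.
    Returns a temporary prompt string, or None if no intervention is needed.
    """
    if subgoal_item in inventory or steps_since_gain < interval:
        return None                      # trigger condition not met
    caption = caption_llm(image)         # stage (a): caption the observation
    # stage (b): text-only reasoning over (caption, current task prompt)
    decision = reason_llm(task_prompt, caption, subgoal_item)
    return decision.get("task") if decision.get("need_intervention") else None
```

With stub functions in place of the LLM calls, the trigger logic alone can be exercised without a model.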
The prompt is shown in Figure 17.
1 % Prompt for the first step: image captioning
2 Given a Minecraft game image, describe nearby Minecraft objects, like tree, grass, cobblestone, etc.
3 [Example]
4 "There is a large tree with dark green leaves surrounding the area."
5 "The image shows a dark, cave-like environment in Minecraft. The player is digging downwards. There are no visible trees or grass in this particular view."
6 "The image shows a dark, narrow tunnel made of stone blocks. The player is digging downwards."
7 [Your turn]
8 Describe the given image, simply and clearly like the examples.
9
10 % Prompt for the second step: reasoning whether reprompting is needed or not
11 Given <task> and <visual_description>, determine if the player needs intervention to achieve the goal. If intervention is needed, suggest a task that the player should perform.
12 I will give you examples.
13 [Example]
14 <task>: chop tree
15 <visual_description>: There is a large tree with dark green leaves surrounding the area.
16 <goal_item>: logs
17 <reasoning>:
18 {{
19 "need_intervention": false,
20 "thoughts": "The player can see a tree and can chop it down to get logs.",
21 "task": "",
22 }}
23 [Example]
24 <task>: chop tree
25 <visual_description>: The image shows a dirt block in Minecraft. There is a tree in the image, but it is too far from here.
26 <goal_item>: logs
27 <reasoning>:
28 {{
29 "need_intervention": true,
30 "thoughts": "The player is far from trees. The player needs to move to the trees.",
31 "task": "explore to find trees",
32 }}
33 [Example]
34 <task>: dig down to mine iron_ore
35 <visual_description>: The image shows a dark, narrow tunnel made of stone blocks. The player is digging downwards.
36 <goal_item>: iron_ore
37 <reasoning>:
38 {{
39 "need_intervention": false,
40 "thoughts": "The player is already digging down and is likely to find iron ore.",
41 "task": "",
42 }}
43 [Your turn]
44 Here is the <task>, <visual_description>, and <goal_item>.
45 You MUST output the <reasoning> in JSON format.
46 <task>: {task} % current prompt for the controller
47 <visual_description>: {visual_description} % caption from step 1
48 <goal_item>: {goal_item} % current subgoal item
49 <reasoning>:
Figure 17: Prompt for context-aware reprompting
Appendix I Implementation details
To identify similar items, semantic similarity between two items is computed as the cosine similarity of their Sentence-BERT (all-MiniLM-L6-v2 model) embeddings (Reimers and Gurevych, 2019). This metric is utilized whenever item similarity comparisons are needed, such as in Algorithm 2, Algorithm 3, and Algorithm 4.
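As an illustrative sketch of the retrieval step, with toy vectors standing in for Sentence-BERT embeddings (in practice `embed` would call the all-MiniLM-L6-v2 model; the function names here are assumptions):

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def top_k_similar(query, items, embed, k=3):
    """Return the k item names most similar to `query`.

    `embed` maps an item name to its embedding vector; any embedding
    model (or toy vectors for testing) works the same way.
    """
    q = embed(query)
    scored = sorted(items, key=lambda v: cosine_similarity(q, embed(v)),
                    reverse=True)
    return scored[:k]
```

Table 7 sets $K=3$, so the three most similar experienced items are retrieved as in-context examples.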
I.1 Hyperparameters
Table 7: Hyperparameters used in our experiments.
| Hyperparameter | Notation | Value |
| --- | --- | --- |
| Failure threshold for invalid action | $x_{0}$ | $2$ |
| Revision count threshold for inadmissible items | $c_{0}$ | $3$ |
| Required items quantity scale for less explored items | $\alpha_{s}$ | $2$ |
| Required items quantity scale for inadmissible items | $\alpha_{i}$ | $8$ |
| Number of top-K similar experienced items used | $K$ | $3$ |
For all experiments, we use consistent hyperparameters across environments. Their values are determined mainly with robustness against imperfect controllers in mind. All hyperparameters are listed in Table 7. The implications of increasing each hyperparameter's value are detailed below:
- $x_{0}$ (failure threshold for empirically invalid action): Prevents valid actions from being misclassified as invalid due to accidental failures from an imperfect controller or environmental stochasticity. Values that are too small or large hinder dependency learning and planning by hampering the discovery of valid actions.
- $c_{0}$ (exploration count threshold for inadmissible items): Ensures an item is sufficiently attempted before being deemed âinadmissibleâ and triggering a revision for its descendants. Too small/large values could cause inefficiency; small values prematurely abandon potentially correct LLM predictions for descendants, while large values prevent attempts on descendant items.
- $\alpha_{s}$ (required items quantity scale for less explored items): Controls the gradual increase of required quantities for revised required items. Small values make learning inefficient by hindering item obtaining due to insufficient required items, yet large values lower robustness by overburdening controllers with excessive quantity demands.
- $\alpha_{i}$ (required items quantity scale for inadmissible items): Ensures sufficient acquisition of potential required items before retrying inadmissible items to increase the chance of success. Improper values reduce robustness; too small leads to failure in obtaining items necessitating many items; too large burdens controllers with excessive quantity demands.
- $K$ (Number of similar items to retrieve): Determines how many similar, previously successful experiences are retrieved to inform dependency revision (Algorithm 3) and action selection (Algorithm 4).
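For reference, the Table 7 values can be collected in a single configuration dictionary (a sketch; the actual implementation's naming may differ):

```python
# Hyperparameters from Table 7, with the roles described above.
HPARAMS = {
    "x0": 2,       # failure threshold for empirically invalid actions
    "c0": 3,       # revision-count threshold for inadmissible items
    "alpha_s": 2,  # quantity scale for less-explored items (alpha_s * C[v])
    "alpha_i": 8,  # quantity scale for inadmissible items
    "K": 3,        # number of top-K similar items retrieved
}
```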
I.2 Human-written plans
We utilize three human-written plans (for iron sword, golden sword, and diamond, shown in Plan 18, 19, and 20, respectively), the format of which is borrowed from the human-written plan examples in the publicly released Optimus-1 repository https://github.com/JiuTian-VL/Optimus-1/blob/main/src/optimus1/example.py. We leverage the experiences gained from executing these plans to initialize XENON's knowledge.
1 iron_sword: str = """
2 <goal>: craft an iron sword.
3 <requirements>:
4 1. log: need 7
5 2. planks: need 21
6 3. stick: need 5
7 4. crafting_table: need 1
8 5. wooden_pickaxe: need 1
9 6. cobblestone: need 11
10 7. furnace: need 1
11 8. stone_pickaxe: need 1
12 9. iron_ore: need 2
13 10. iron_ingot: need 2
14 11. iron_sword: need 1
15 <plan>
16 {
17 "step 1": {"prompt": "mine logs", "item": ["logs", 7]},
18 "step 2": {"prompt": "craft planks", "item": ["planks", 21]},
19 "step 3": {"prompt": "craft stick", "item": ["stick", 5]},
20 "step 4": {"prompt": "craft crafting_table", "item": ["crafting_table", 1]},
21 "step 5": {"prompt": "craft wooden_pickaxe", "item": ["wooden_pickaxe", 1]},
22 "step 6": {"prompt": "mine cobblestone", "item": ["cobblestone", 11]},
23 "step 7": {"prompt": "craft furnace", "item": ["furnace", 1]},
24 "step 8": {"prompt": "craft stone_pickaxe", "item": ["stone_pickaxe", 1]},
25 "step 9": {"prompt": "mine iron_ore", "item": ["iron_ore", 2]},
26 "step 10": {"prompt": "smelt iron_ingot", "item": ["iron_ingot", 2]},
27 "step 11": {"prompt": "craft iron_sword", "item": ["iron_sword", 1]}
28 }
29 """
Figure 18: Human-written plan for crafting an iron sword.
1 golden_sword: str = """
2 <goal>: craft a golden sword.
3 <requirements>:
4 1. log: need 9
5 2. planks: need 27
6 3. stick: need 7
7 4. crafting_table: need 1
8 5. wooden_pickaxe: need 1
9 6. cobblestone: need 11
10 7. furnace: need 1
11 8. stone_pickaxe: need 1
12 9. iron_ore: need 3
13 10. iron_ingot: need 3
14 11. iron_pickaxe: need 1
15 12. gold_ore: need 2
16 13. gold_ingot: need 2
17 14. golden_sword: need 1
18 <plan>
19 {
20 "step 1": {"prompt": "mine logs", "item": ["logs", 7]},
21 "step 2": {"prompt": "craft planks", "item": ["planks", 21]},
22 "step 3": {"prompt": "craft stick", "item": ["stick", 5]},
23 "step 4": {"prompt": "craft crafting_table", "item": ["crafting_table", 1]},
24 "step 5": {"prompt": "craft wooden_pickaxe", "item": ["wooden_pickaxe", 1]},
25 "step 6": {"prompt": "mine cobblestone", "item": ["cobblestone", 11]},
26 "step 7": {"prompt": "craft furnace", "item": ["furnace", 1]},
27 "step 8": {"prompt": "craft stone_pickaxe", "item": ["stone_pickaxe", 1]},
28 "step 9": {"prompt": "mine iron_ore", "item": ["iron_ore", 3]},
29 "step 10": {"prompt": "smelt iron_ingot", "item": ["iron_ingot", 3]},
30 "step 11": {"prompt": "craft iron_pickaxe", "item": ["iron_pickaxe", 1]},
31 "step 12": {"prompt": "mine gold_ore", "item": ["gold_ore", 2]},
32 "step 13": {"prompt": "smelt gold_ingot", "item": ["gold_ingot", 2]},
33 "step 14": {"prompt": "craft golden_sword", "item": ["golden_sword", 1]}
34 }
35 """
Figure 19: Human-written plan for crafting a golden sword.
1 diamond: str = """
2 <goal>: mine a diamond.
3 <requirements>:
4 1. log: need 7
5 2. planks: need 21
6 3. stick: need 6
7 4. crafting_table: need 1
8 5. wooden_pickaxe: need 1
9 6. cobblestone: need 11
10 7. furnace: need 1
11 8. stone_pickaxe: need 1
12 9. iron_ore: need 3
13 10. iron_ingot: need 3
14 11. iron_pickaxe: need 1
15 12. diamond: need 1
16 <plan>
17 {
18 "step 1": {"prompt": "mine logs", "item": ["logs", 7]},
19 "step 2": {"prompt": "craft planks", "item": ["planks", 21]},
20 "step 3": {"prompt": "craft stick", "item": ["stick", 5]},
21 "step 4": {"prompt": "craft crafting_table", "item": ["crafting_table", 1]},
22 "step 5": {"prompt": "craft wooden_pickaxe", "item": ["wooden_pickaxe", 1]},
23 "step 6": {"prompt": "mine cobblestone", "item": ["cobblestone", 11]},
24 "step 7": {"prompt": "craft furnace", "item": ["furnace", 1]},
25 "step 8": {"prompt": "craft stone_pickaxe", "item": ["stone_pickaxe", 1]},
26 "step 9": {"prompt": "mine iron_ore", "item": ["iron_ore", 2]},
27 "step 10": {"prompt": "smelt iron_ingot", "item": ["iron_ingot", 2]},
28 "step 11": {"prompt": "craft iron_pickaxe", "item": ["iron_pickaxe", 1]},
29 "step 12": {"prompt": "mine diamond", "item": ["diamond", 1]}
30 }
31 """
Figure 20: Human-written plan for mining a diamond.
Appendix J Details for experimental setup
J.1 Compared baselines for dependency learning
We compare our proposed method, XENON, against four baselines: LLM self-correction (SC), DECKARD (Nottingham et al., 2023), ADAM (Yu and Lu, 2024), and RAND (the simplest baseline). As no prior baselines were evaluated under our specific experimental setup (i.e., empty initial inventory, pre-trained low-level controller), we adapted their implementations to align with our environment. SC is implemented following common methods that prompt the LLM to correct its own knowledge upon plan failures (Shinn et al., 2023; Stechly et al., 2024). A summary of all methods compared in our experiments is provided in Table 8. All methods share the following common experimental setting: each agent starts with initial experienced requirements for some items, derived from human-written plans (details in Appendix I). Additionally, all agents begin each episode with an empty inventory.
Table 8: Summary of methods compared in our experiments.
LLM self-correction (SC)
While no prior work specifically uses LLM self-correction to learn Minecraft item dependencies in our setting, we include this baseline to demonstrate the unreliability of this approach. Similar to XENON, SC initializes its dependency graph with LLM-predicted requirements for each item. When a plan for an item fails repeatedly, SC attempts to revise the requirements by prompting the LLM itself, providing it with recent trajectories and the validated requirements of similar, previously obtained items in the input prompt. SC's action memory stores both successful and failed actions for each item. Upon a plan failure, the LLM is prompted to self-reflect on the recent trajectory to determine the cause of failure. When the agent later plans to obtain an item on which it previously failed, this reflection is included in the LLM's prompt to guide its action selection. Intrinsic goals are selected randomly from the set of previously unobtained items. The specific prompts used for LLM self-correction and self-reflection in this baseline are provided in Appendix B.
DECKARD
The original DECKARD utilizes LLM-predicted requirements for each item but does not revise these initial predictions. It has no explicit action memory for the planner; instead, it trains and maintains specialized policies for each obtained item. It selects an intrinsic goal randomly from less explored frontier items (i.e., $\{v\in\mathcal{F}\mid C(v)\leq c_{0}\}$ ). If no such items are available, it selects randomly from the union of experienced items and all frontier items.
In our experiments, the DECKARD baseline is implemented to largely mirror the original version, with the exception of its memory system. Its memory is implemented to store only successful actions without recording failures. This design choice aligns with the original DECKARD's approach, which, by only learning policies for successfully obtained items, lacks policies for unobtained items.
ADAM
The original ADAM started with an initial inventory containing 32 units of each experienced resource item (i.e., items used for crafting other items) and 1 unit of each tool item (e.g., pickaxes, crafting table), implicitly treating those items as a predicted requirement set for each item. Its memory recorded which actions were used for each subgoal item without noting success or failure, and its intrinsic goal selection was guided by an expert-defined exploration curriculum.
In our experiments, ADAM starts with an empty initial inventory. The predicted requirements for each goal item in our ADAM implementation assume a fixed quantity of 8 for all resource items. This quantity was chosen to align with $\alpha_{i}$ , the hyperparameter for the quantity scale of requirement items for inadmissible items, thereby ensuring a fair comparison with XENON. The memory stores successful actions for each item but does not record failures. This modification aligns the memory mechanism with the SC and DECKARD baselines, enabling a more consistent comparison across baselines in our experimental setup. Intrinsic goal selection is random, as we do not assume such an expert-defined exploration curriculum.
RAND
RAND is a simple baseline specifically designed for our experimental setup. It starts with an empty initial inventory and an LLM-predicted requirement set for each item. RAND does not incorporate any action memory. Its intrinsic goals are selected randomly from unexperienced items.
J.2 MineRL environment
J.2.1 Basic rules
Minecraft has been adopted as a suitable testbed for validating the performance of AI agents on long-horizon tasks (Mao et al., 2022; Lin et al., 2021; Baker et al., 2022; Li et al., 2025a), largely because of the inherent dependency in item acquisition: agents must obtain prerequisite items before more advanced ones. Specifically, Minecraft features multiple technology levels (wood, stone, iron, gold, diamond, etc.), which dictate item and tool dependencies. For instance, an agent must first craft a lower-level tool like a wooden pickaxe to mine materials such as stone. Subsequently, a stone pickaxe is required to mine even higher-level materials like iron, and an iron pickaxe is required to mine materials like gold and diamond. Respecting this dependency is crucial for achieving complex goals, such as crafting an iron sword or mining a diamond.
J.2.2 Observation and action space
We employ MineRL (Guss et al., 2019) with Minecraft version 1.16.5.
Observation
When making a plan, our agent receives inventory information (i.e., items with their quantities) as text. When executing the plan, our agent receives an RGB image with dimensions of $640\times 360$ , including the hotbar, health indicators, food saturation, and animations of the player's hands.
Action space
Following Optimus-1 (Li et al., 2024b), our low-level action space primarily consists of keyboard and mouse controls, with the exception of the high-level craft and smelt actions. Crucially, craft and smelt actions are included in our action space, following (Li et al., 2024b). These high-level actions automatically succeed in producing an item if the agent possesses all the required items and a valid action for that item is chosen; otherwise, they fail. This abstraction removes the need for complex, precise low-level mouse control for these specific actions. For low-level controls, keyboard presses control agent movement (e.g., jumping, moving forward, backward) and mouse movements control the agent's perspective. The mouse's left and right buttons are used for attacking, using, or placing items. The detailed action space is described in Table 9.
Table 9: Action space in MineRL environment
| Index | Action | Human Action | Description |
| --- | --- | --- | --- |
| 1 | Forward | key W | Move forward. |
| 2 | Back | key S | Move back. |
| 3 | Left | key A | Move left. |
| 4 | Right | key D | Move right. |
| 5 | Jump | key Space | Jump. When swimming, keeps the player afloat. |
| 6 | Sneak | key left Shift | Slowly move in the current direction of movement. |
| 7 | Sprint | key left Ctrl | Move quickly in the direction of current movement. |
| 8 | Attack | left Button | Destroy blocks (hold down); Attack entity (click once). |
| 9 | Use | right Button | Place blocks, entity, open items or other interact actions defined by game. |
| 10 | hotbar [1-9] | keys 1-9 | Selects the appropriate hotbar item. |
| 11 | Open/Close Inventory | key E | Opens the Inventory. Close any open GUI. |
| 12 | Yaw | move Mouse X | Turning; aiming; camera movement. Ranging from -180 to +180. |
| 13 | Pitch | move Mouse Y | Turning; aiming; camera movement. Ranging from -180 to +180. |
| 14 | Craft | - | Execute crafting to obtain a new item. |
| 15 | Smelt | - | Execute smelting to obtain a new item. |
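The abstracted craft/smelt rule described above can be sketched as follows; the function and variable names are illustrative, and ingredient consumption is simplified (real Minecraft crafting does not consume tools such as the crafting table):

```python
def execute_high_level(action, item, quantity, inventory, requirements,
                       valid_actions):
    """Sketch of the abstracted craft/smelt success rule.

    The action succeeds iff (1) it is a valid action for `item` and
    (2) the inventory holds every required item in sufficient quantity.
    On success, requirements are consumed and the item is added.
    """
    if action not in valid_actions.get(item, set()):
        return False  # e.g., trying to smelt an item that must be crafted
    reqs = requirements.get(item, {})
    if any(inventory.get(u, 0) < q for u, q in reqs.items()):
        return False  # missing required items
    for u, q in reqs.items():
        inventory[u] -= q
    inventory[item] = inventory.get(item, 0) + quantity
    return True
```

This is the failure signal the planner observes: a craft or smelt subgoal returns a single success/failure bit, which FAM then records.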
J.2.3 Goals
We consider 67 goals from the long-horizon tasks benchmark suggested in (Li et al., 2024b). These goals are categorized into 7 groups based on Minecraft's item categories: Wood, Stone, Iron, Gold, Diamond, Redstone, and Armor. All goal items within each group are listed in Table 10.
Table 10: Setting of 7 groups encompassing 67 Minecraft long-horizon goals.
| Group | Goal Num. | All goal items |
| --- | --- | --- |
| Wood | 10 | bowl, crafting_table, chest, ladder, stick, wooden_axe, wooden_hoe, wooden_pickaxe, wooden_shovel, wooden_sword |
| Stone | 9 | charcoal, furnace, smoker, stone_axe, stone_hoe, stone_pickaxe, stone_shovel, stone_sword, torch |
| Iron | 16 | blast_furnace, bucket, chain, hopper, iron_axe, iron_bars, iron_hoe, iron_nugget, iron_pickaxe, iron_shovel, iron_sword, rail, shears, smithing_table, stonecutter, tripwire_hook |
| Gold | 6 | gold_ingot, golden_axe, golden_hoe, golden_pickaxe, golden_shovel, golden_sword |
| Redstone | 6 | activator_rail, compass, dropper, note_block, piston, redstone_torch |
| Diamond | 7 | diamond, diamond_axe, diamond_hoe, diamond_pickaxe, diamond_shovel, diamond_sword, jukebox |
| Armor | 13 | diamond_boots, diamond_chestplate, diamond_helmet, diamond_leggings, golden_boots, golden_chestplate, golden_helmet, golden_leggings, iron_boots, iron_chestplate, iron_helmet, iron_leggings, shield |
Additional goals for scalability experiments.
To evaluate the scalability of XENON with respect to the number of goals (Section K.9), we extend the above 67-goal set (Table 10) by adding goal items to construct two larger settings with 100 and 120 goals; the added goals are listed in Table 11.
Specifically, in the setting with 100 goals, we add 33 goals in total by introducing new "leather", "paper", and "flint" groups and by adding more items to the existing "wood" and "stone" groups. In the setting with 120 goals, we further add 20 goals in the "iron", "gold", "redstone", and "diamond" groups.
Table 11: Additional goals used for the scalability experiments. The setting with 100 goals extends the 67-goal set in Table 10 by adding all items in the top block; the setting with 120 goals further includes both the top and bottom blocks.
| Group | Goal Num. | Added goal items |
| --- | --- | --- |
| Additional items in the setting with 100 goals (33 items) | | |
| leather | 7 | leather, leather_boots, leather_chestplate, leather_helmet, leather_leggings, leather_horse_armor, item_frame |
| paper | 5 | map, book, cartography_table, bookshelf, lectern |
| flint | 4 | flint, flint_and_steel, fletching_table, arrow |
| wood | 8 | bow, boat, wooden_slab, wooden_stairs, wooden_door, wooden_sign, wooden_fence, wooden_fence_gate |
| stone | 9 | cobblestone_slab, cobblestone_stairs, cobblestone_wall, lever, stone_slab, stone_button, stone_pressure_plate, stone_bricks, grindstone |
| Additional items only in the setting with 120 goals (20 more items) | | |
| iron | 7 | iron_trapdoor, heavy_weighted_pressure_plate, iron_door, crossbow, minecart, cauldron, lantern |
| gold | 4 | gold_nugget, light_weighted_pressure_plate, golden_apple, golden_carrot |
| redstone | 7 | redstone, powered_rail, target, dispenser, clock, repeater, detector_rail |
| diamond | 2 | obsidian, enchanting_table |
J.2.4 Episode horizon
The episode horizon varies depending on the experiment phase: dependency learning or long-horizon goal planning. During the dependency learning phase, each episode has a fixed horizon of 36,000 steps. In this phase, if the agent successfully achieves an intrinsic goal within an episode, it is allowed to select another intrinsic goal and continue exploration without the episode ending. After dependency learning, when measuring the success rate of goals from the long-horizon task benchmark, the episode horizon differs based on the goal's category group, and the episode terminates immediately upon goal success. The specific episode horizons for each group are as follows: Wood: 3,600 steps; Stone: 7,200 steps; Iron: 12,000 steps; and Gold, Diamond, Redstone, and Armor: 36,000 steps each.
J.2.5 Item spawn probability details
Following Optimus-1's public implementation, we modify the environment configuration relative to the original MineRL environment (Guss et al., 2019). In Minecraft, obtaining essential resources such as iron, gold, and diamond requires mining their respective ores. However, these ores are naturally rare, making them challenging to obtain. This inherent difficulty can significantly hinder an agent's goal completion, even with an accurate plan. This difficulty in resource gathering due to an imperfect controller is a common bottleneck, leading many prior works to employ environmental modifications in order to focus on planning. For example, DEPS (Wang et al., 2023b) restricts the controller's actions based on the goal items (https://github.com/CraftJarvis/MC-Planner/blob/main/controller.py). Optimus-1 (Li et al., 2024b) also made resource items easier to obtain by increasing ore spawn probabilities. To focus on our primary goal of robust planning and isolate this challenge, we follow Optimus-1 and adopt its ore spawn procedure directly from the publicly released Optimus-1 repository, without any modifications to its source code (https://github.com/JiuTian-VL/Optimus-1/blob/main/src/optimus1/env/wrapper.py).
The ore spawn procedure probabilistically spawns ore blocks in the vicinity of the agent's current coordinates $(x,y,z)$. Specifically, at each timestep, the procedure has a 10% chance of activating. When activated, it spawns a specific type of ore block based on the agent's y-coordinate. Furthermore, within any given episode, the procedure does not activate more than once at the same y-coordinate. The types of ore blocks spawned at different y-levels are as follows:
- Coal Ore: between y=45 and y=50.
- Iron Ore: between y=26 and y=43.
- Gold Ore: between y=15 and y=26.
- Redstone Ore: between y=15 and y=26.
- Diamond Ore: below y=14.
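In code, the spawn rule above might look like the following minimal Python sketch. All names here are ours, not taken from the Optimus-1 wrapper, and how the tie between the overlapping gold and redstone bands is broken is an assumption:

```python
import random

# y-level bands for each ore type, as listed above (names are illustrative)
ORE_RANGES = {
    "coal_ore": (45, 50),
    "iron_ore": (26, 43),
    "gold_ore": (15, 26),      # gold and redstone share the same band
    "redstone_ore": (15, 26),
    "diamond_ore": (-64, 14),  # "below y=14"; the lower bound is assumed
}

def maybe_spawn_ore(y, used_y_levels, rng=random):
    """Return an ore type to spawn near the agent, or None.

    Activates with 10% probability per timestep, and at most once per
    y-coordinate per episode (tracked via `used_y_levels`).
    """
    if y in used_y_levels or rng.random() >= 0.10:
        return None
    candidates = [ore for ore, (lo, hi) in ORE_RANGES.items() if lo <= y <= hi]
    if not candidates:
        return None
    used_y_levels.add(y)
    return rng.choice(candidates)
```

Passing a fresh `used_y_levels` set at the start of each episode enforces the once-per-y-coordinate rule.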
J.3 Mineflayer Environment
We use the Mineflayer (PrismarineJS, 2023) environment with Minecraft version 1.19. In Mineflayer, resource item spawn probabilities do not need to be adjusted, unlike in MineRL (Section J.2.5). This is because the controller, a set of JavaScript APIs provided by Mineflayer, is capable of gathering many resource items.
J.3.1 Observation and Action Space
The agent's observation space is multimodal. For planning, the agent receives its current inventory (i.e., item names and their quantities) as text. For plan execution, it receives a first-person RGB image that includes the hotbar, health and food indicators, and player hand animations. For the action space, following ADAM (Yu and Lu, 2024), we use the JavaScript APIs provided by Mineflayer for low-level control. Specifically, our high-level actions, such as "craft", "smelt", and "mine", are mapped to corresponding Mineflayer APIs such as craftItem, smeltItem, and mineBlock.
J.3.2 Episode Horizon
For dependency learning, each episode has a fixed horizon of 30 minutes, which is equivalent to 36,000 steps in the MineRL environment. If the agent successfully achieves a goal within this horizon, it selects another exploratory goal and continues within the same episode.
J.4 MC-TextWorld
MC-TextWorld is a text-based environment based on Minecraft game rules (Zheng et al., 2025). We employ Minecraft version 1.16.5. In this environment, the basic rules and goals are the same as those in the MineRL environment (Section J.2). Furthermore, resource item spawn probabilities do not need to be adjusted, unlike in MineRL (Section J.2.5). This is because an agent succeeds in mining an item immediately, without spatial exploration, if it has the required tool and "mine" is a valid action for that item.
In the following subsections, we detail the remaining aspects of experiment setups in this environment: the observation and action space, and the episode horizon.
J.4.1 Observation and action space
The agent receives a text-based observation consisting of inventory information (i.e., currently possessed items and their quantities). Actions are also text-based, where each action is represented as a high-level action followed by an item name (e.g., "mine diamond"). Thus, to execute a subgoal specified as $(a,q,v)$ (high-level action $a$, quantity $q$, item $v$), the agent repeatedly performs the action $(a,v)$ until $q$ units of $v$ are obtained.
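Concretely, this subgoal execution loop can be sketched as follows; the `env.step` text interface returning an inventory dict is a simplification we assume for illustration:

```python
def execute_subgoal(env, inventory, action, quantity, item, max_steps=3000):
    """Repeatedly issue the text action f"{action} {item}" until `quantity`
    units of `item` are in the inventory or the step budget is exhausted."""
    steps = 0
    while inventory.get(item, 0) < quantity and steps < max_steps:
        inventory = env.step(f"{action} {item}")  # env returns the updated inventory
        steps += 1
    return inventory.get(item, 0) >= quantity, inventory
```

Under this loop, a subgoal $(a,q,v)$ reduces to at most `max_steps` repetitions of the primitive action $(a,v)$.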
J.4.2 Episode horizon
In this environment, we conduct experiments for dependency learning only. Each episode has a fixed horizon of 3,000 steps. If the agent successfully achieves an intrinsic goal within an episode, it is then allowed to select another intrinsic goal and continue exploration, without termination of the episode.
J.4.3 Perturbation on ground truth rules
<details>
<summary>x50.png Details</summary>

### Visual Description
Three-panel diagram of the crafting-rule perturbation settings: (a) Vanilla, with standard required items and "craft" as the valid action; (b) Perturbed True Required Items, where one true required item is replaced by a substitute drawn from a candidate pool; (c) Perturbed True Actions, where the valid action changes from "craft" to "mine" or "smelt". Perturbation Levels 1-3 are color-coded (green, blue, purple).
</details>
Figure 21: Illustration of the ground-truth rule perturbation settings. (a) In the vanilla setting, goal items (black boxes) have standard required items (incoming edges) and "craft" is the valid action; (b) in the Perturbed Requirements setting, one required item (red dashed circle) is replaced by a new one drawn randomly from a candidate pool (blue dashed box); (c) in the Perturbed Actions setting, the valid action is changed to either "mine" or "smelt".
To evaluate each agent's robustness to conflicts with its prior knowledge, we perturb the ground-truth rules (required items and actions) for a subset of goal items, as shown in Figure 21. The perturbation is applied at different intensity levels (from 1 to 3), where higher levels affect a greater number of items. These levels are cumulative, meaning a Level 2 perturbation includes all perturbations from Level 1 plus additional ones.
- Vanilla Setting: In the setting with no perturbation (Figure 21a), the ground-truth rules are unmodified. In the figure, items in the black solid boxes are the goal items, and those with arrows pointing to them are their true required items. Each goal item has "craft" as a valid action.
- Perturbed True Required Items: In this setting (Figure 21b), one of the true required items (indicated by a red dashed circle) for a goal is replaced. The new required item is chosen uniformly at random from a candidate pool (blue dashed box). The valid action remains "craft".
- Perturbed True Actions: In this setting (Figure 21c), the valid action for a goal is randomly changed from "craft" to either "mine" or "smelt". The required items are not modified.
- Perturbed Both Rules: In this setting, both the required items and the valid actions are modified according to the rules described above.
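For concreteness, the three perturbation modes can be sketched in Python as follows; the rule data layout and all names are our own illustration, not the benchmark's actual code:

```python
import random

def perturb_rules(rules, targets, pools, mode, rng=random):
    """Apply the perturbations described above to a copy of `rules`.

    `rules` maps item -> {"requires": {item: qty}, "action": str};
    `targets` lists the goal items to perturb; `pools[item]` is the
    candidate pool of replacement required items for that goal.
    `mode` is "requirements", "actions", or "both".
    """
    new_rules = {k: {"requires": dict(v["requires"]), "action": v["action"]}
                 for k, v in rules.items()}
    for item in targets:
        rule = new_rules[item]
        if mode in ("requirements", "both"):
            # replace one true required item with a pool item, keeping its quantity
            old = rng.choice(sorted(rule["requires"]))
            qty = rule["requires"].pop(old)
            rule["requires"][rng.choice(pools[item])] = qty
        if mode in ("actions", "both"):
            # change the valid action from "craft" to "mine" or "smelt"
            rule["action"] = rng.choice(["mine", "smelt"])
    return new_rules
```

Applying the function with cumulative target sets per level reproduces the Level 1-3 structure described above.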
Appendix K Additional experimental results
K.1 LLM-predicted initial dependency graph analysis
Table 12: Performance analysis of the initial LLM-predicted requirement sets over 75 Minecraft items, used to build the initial dependency graph. Note that while we began the prediction process with 67 goal items, the total number of predicted items expanded to 75. This expansion occurred because, as the LLM predicted requirement sets for items in the dependency graph (initially for the goal items), any newly mentioned items that were not yet part of the graph were also included. This iterative process is detailed in Section 4.1 (Dependency graph initialization) of our method.
| Metric | Value |
| --- | --- |
| Requirement Set Prediction Accuracy | |
| Correct items (ignoring quantities) | 23% |
| Exact items & quantities | 8% |
| Non-existent Item Rates | |
| Non-existent items | 8% |
| Descendants of non-existent items | 23% |
| Required Items Errors | |
| Unnecessary items included | 57% |
| Required items omitted | 57% |
| Required Item Quantity Prediction Errors | |
| Standard deviation of quantity error | 2.74 |
| Mean absolute quantity error | 2.05 |
| Mean signed quantity error | -0.55 |
The initial dependency graph, constructed from predictions by Qwen2.5-VL-7B (Bai et al., 2025), forms the initial planning knowledge for XENON (Section 4.1). This section analyzes its quality, highlighting limitations that necessitate our adaptive dependency learning.
As shown in Table 12, the 7B LLM's initial requirement sets exhibit significant inaccuracies. Accuracy for correct item types was 23%, dropping to 8% for exact items and quantities. Errors in dependencies among items are also prevalent: 57% of items included unnecessary items, and 57% omitted required items. Furthermore, 8% of predicted items were non-existent (hallucinated), making 23% of descendant items unattainable. Quantity predictions also showed substantial errors, with a mean absolute error of 2.05.
These results clearly demonstrate that the LLM-generated initial dependency graph is imperfect. Its low accuracy and high error rates underscore the unreliability of raw LLM knowledge for precise planning, particularly for smaller models like the 7B LLM, which are known to have limited prior knowledge of Minecraft, as noted in previous work (ADAM, Yu and Lu (2024), Appendix A, "LLMs' Prior Knowledge on Minecraft"). This analysis therefore highlights the importance of the adaptive dependency learning within XENON, which is designed to refine this initial, imperfect knowledge for robust planning.
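For reference, the quantity-error metrics reported in Table 12 can be computed as follows. This is a sketch with made-up numbers; we assume signed error means predicted minus true, under which the negative mean signed error in Table 12 would indicate under-prediction on average:

```python
from statistics import mean, pstdev

def quantity_errors(predicted, true):
    """Signed per-item quantity errors (predicted minus true), over items
    present in both requirement sets."""
    return [predicted[i] - true[i] for i in predicted if i in true]

# Toy example (numbers are illustrative, not from the paper)
pred = {"stick": 2, "iron_ingot": 1, "coal": 4}
gold = {"stick": 1, "iron_ingot": 3, "coal": 1}
errs = quantity_errors(pred, gold)
abs_err = mean(abs(e) for e in errs)   # mean absolute quantity error
signed_err = mean(errs)                # mean signed quantity error
std_err = pstdev(errs)                 # standard deviation of quantity error
```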
Table 13: Ratio of learned dependencies for items that are unobtainable under the flawed initial dependency graph (out of 51). The analysis is based on the final learned graphs from the MineRL experiments.
| Agent | Learned ratio (initially unobtainable items) |
| --- | --- |
| XENON | 0.51 |
| SC | 0.25 |
| DECKARD | 0.25 |
| ADAM | 0.00 |
| RAND | 0.02 |
K.2 Additional analysis of learned dependency graph
As shown in Table 13, XENON demonstrates significantly greater robustness to the LLM's flawed prior knowledge than all baselines. It successfully learned the correct dependencies for over half (0.51) of the 51 items that were initially unobtainable under the flawed graph. In contrast, both DECKARD (with no correction) and the SC baseline (with LLM self-correction) learned only a quarter of these items (0.25). This result strongly indicates that relying on the LLM to correct its own errors is as ineffective as having no correction mechanism at all in this setting. The other baselines, ADAM and RAND, failed almost completely, highlighting the difficulty of this challenge.
K.3 Impact of controller capacity on dependency learning
We observe that controller capacity significantly impacts an agent's ability to learn dependencies from interaction. Specifically, in our MineRL experiments, we find that ADAM fails to learn any new dependencies due to the inherent incompatibility between its strategy and the controller's limitations. In our realistic setting with empty initial inventories, ADAM's strategy requires gathering a sufficient quantity (fixed at 8, the same as our hyperparameter $\alpha_{i}$, the scaling factor for required item quantities of inadmissible items) of all previously used resources before attempting a new item. This list of required resource items includes gold ingot
, because of an initially provided human-written plan for the golden sword; however, the controller STEVE-1 never managed to collect more than seven units of gold in a single episode across all our experiments. Consequently, this controller bottleneck prevents ADAM from ever attempting to learn new items, causing its dependency learning to stall completely.
Although XENON fails to learn dependencies for the Redstone group items in MineRL, our analysis shows this stems from controller limitations rather than algorithmic ones. Specifically, in MineRL, STEVE-1 cannot execute XENON's exploration strategy for inadmissible items, which involves gathering a sufficient quantity of all previously used resources before a retry (Section 4.1). The Redstone group items become inadmissible because the LLM's initial predictions for them are entirely incorrect. This lack of a valid starting point prevents XENON from ever experiencing the core item, redstone, being used as a requirement for any other item. Consequently, our RevisionByAnalogy mechanism has no analogous experience from which to propose redstone as a potential required item during its revision process.
In contrast, with more competent controllers, XENON successfully overcomes even such severely flawed prior knowledge to learn the challenging Redstone group dependencies, as demonstrated in Mineflayer and MC-TextWorld. First, in Mineflayer, XENON learns the correct dependencies for 5 out of 6 Redstone items. This success is possible because its more competent controller can execute the exploration strategy for inadmissible items, which increases the chance of possessing the core required item (redstone) during resource gathering. Second, with a perfect controller in MC-TextWorld, XENON successfully learns the dependencies for all 6 Redstone group items in every single episode.
K.4 Impact of Controller Capacity in Long-horizon Goal Planning
Table 14: Long-horizon task success rate (SR) comparison between the Modified MineRL (a setting where resource items are easier to obtain) and Standard MineRL environments. All methods are provided with the correct dependency graph. DEPS $\dagger$ and Optimus-1 $\dagger$ are our reproductions of the respective methods using Qwen2.5-VL-7B as the planner. OracleActionPlanner, which generates the correct plan for all goals, represents the performance upper bound. SR values for Optimus-1 $\dagger$ and XENON $\dagger$ in the Modified MineRL columns are taken from Table 3 in Section 5.3.
| Method | Dependency | Modified MineRL | | | Standard MineRL | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | | Iron | Diamond | Gold | Iron | Diamond | Gold |
| DEPS $\dagger$ | - | 0.02 | 0.00 | 0.01 | 0.01 | 0.00 | 0.00 |
| Optimus-1 $\dagger$ | Oracle | 0.23 | 0.10 | 0.11 | 0.13 | 0.00 | 0.00 |
| XENON $\dagger$ | Oracle | 0.83 | 0.75 | 0.73 | 0.24 | 0.00 | 0.00 |
| OracleActionPlanner | Oracle | - | - | - | 0.27 | 0.00 | 0.00 |
Because our work focuses on building a robust planner, to isolate planning from the significant difficulty of item gathering (a task assigned to the controller), our main experiments for long-horizon tasks (Section 5.3) use a modified MineRL environment following the official implementation of Optimus-1. This modification makes essential resource items like iron, gold, and diamond easier for the controller to find, allowing for a clearer evaluation of planning algorithms (the modifications are detailed in Section J.2.5). However, to provide a more comprehensive analysis, we also evaluated our agent and baselines in the unmodified, standard MineRL environment. In this setting, items like iron, gold, and diamond are naturally rare, making item gathering a major bottleneck.
The results are shown in Table 14. Most importantly, XENON $\dagger$ consistently outperforms the baselines in both the modified and standard MineRL. Notably, in the standard environment, XENON $\dagger$'s performance on the Iron group (0.24 SR) is comparable to that of the OracleActionPlanner (0.27 SR), which always generates correct plans for all goals. This comparison highlights the severity of the controller bottleneck: even the OracleActionPlanner achieves a 0.00 success rate for the Diamond and Gold groups in standard MineRL. This shows that the failures are due to the controller's inability to gather rare resources in the standard environment.
K.5 Long-horizon task benchmark experiments analysis
This section provides a detailed analysis of the performance differences observed in Table 3 between Optimus-1 $\dagger$ and XENON $\dagger$ on long-horizon tasks, even when both have access to the true dependency graph and increased item spawn probabilities (Section J.2.5). We specifically examine the plan errors encountered when reproducing Optimus-1 $\dagger$ using Qwen2.5-VL-7B as the planner, and explain how XENON $\dagger$ robustly constructs plans through step-by-step planning with FAM.
Table 15: Analysis of primary plan errors observed in Optimus-1 $\dagger$ and XENON $\dagger$ during the long-horizon task benchmark experiments. This table presents the ratio of each plan error type among the failed episodes of Optimus-1 $\dagger$ and XENON $\dagger$, respectively. Invalid Action indicates errors where an invalid action is used for an item in a subgoal. Subgoal Omission refers to errors where a necessary subgoal for a required item is omitted from the plan. Note that these plan error values are not exclusive; one episode can exhibit multiple types of plan errors.
| Plan Error Type | Optimus-1 $\dagger$ Error Rate (%) | XENON $\dagger$ Error Rate (%) |
| --- | --- | --- |
| Invalid Action | 37 | 2 |
| Subgoal Omission | 44 | 0 |
Optimus-1 $\dagger$ has no fine-grained action knowledge correction mechanism. Furthermore, Optimus-1 $\dagger$'s LLM planner generates a long plan at once, with a long input prompt including the sequence of aggregated requirements $(q_{1},u_{1}),\ldots,(q_{L_{v}},u_{L_{v}})$, where $(q_{L_{v}},u_{L_{v}})=(1,v)$, for the goal item $v$. Consequently, as shown in Table 15, Optimus-1 generates plans with invalid actions for required items, denoted as Invalid Action. Furthermore, Optimus-1 omits necessary subgoals for required items, even when they are in the input prompt, denoted as Subgoal Omission.
In contrast, XENON discovers valid actions by leveraging FAM, which records the outcome of each action for every item, thereby enabling it to avoid empirically failed actions and reuse successful ones. Furthermore, XENON mitigates the problem of subgoal omission by constructing the plan step by step, creating one subgoal for each required item.
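The failure-aware action choice described here can be sketched as follows. This is a minimal illustration of the idea only; XENON's actual FAM stores richer per-item failure context:

```python
ACTIONS = ["craft", "mine", "smelt"]

class FailureAwareMemory:
    """Record per-(item, action) outcomes; prefer empirically successful
    actions and avoid empirically failed ones."""

    def __init__(self):
        self.success = set()   # (item, action) pairs that have succeeded
        self.failed = set()    # (item, action) pairs that have failed

    def record(self, item, action, ok):
        (self.success if ok else self.failed).add((item, action))

    def propose(self, item, llm_guess):
        # 1. reuse a known-successful action if one exists
        for a in ACTIONS:
            if (item, a) in self.success:
                return a
        # 2. otherwise follow the LLM prior, unless it has already failed
        if (item, llm_guess) not in self.failed:
            return llm_guess
        # 3. fall back to any not-yet-failed action
        for a in ACTIONS:
            if (item, a) not in self.failed:
                return a
        return llm_guess  # everything has failed; retry the prior
```

With this bookkeeping, a plan built one subgoal per required item never repeats an action that has already failed for that item while an untried alternative remains.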
K.6 Robust dependency learning under dynamic true knowledge
<details>
<summary>x52.png Details</summary>

### Visual Description
Shared legend for the plots below: XENON (blue circle), SC (pink diamond), ADAM (orange hexagon), DECKARD (green square), RAND (gray cross).
</details>
<details>
<summary>x53.png Details</summary>

### Visual Description
Line plot of EGA (0.0-1.0) versus environment step (0-3,000), one series per agent with shaded uncertainty bands. A vertical dashed line at step 1,500 is annotated "True requirements are changed." The blue series rises fastest and plateaus near the top after the change, while the remaining series improve more slowly.
</details>
(a) Dynamic True Required Items
<details>
<summary>x54.png Details</summary>

### Visual Description
Line plot of EGA (0.0-1.0) versus environment step (0-3,000), one series per agent with shaded uncertainty bands. A vertical dashed line at step 1,500 is annotated "True actions are changed." The blue series recovers to an EGA of about 1.0 after the change, while the other series drop and then stabilize at lower values.
</details>
(b) Dynamic True Actions
<details>
<summary>x55.png Details</summary>

### Visual Description
Line graph of EGA (y-axis, 0.0-1.0) over environment steps (x-axis, 0-3,000), with a vertical dashed line at ~1,500 steps labeled "Both true rules are changed." Four series with shaded confidence bands: blue circles rise fastest and plateau at ~0.95 EGA after the rule change, while orange diamonds (~0.55), pink diamonds (~0.48), and green squares (~0.38) stabilize at lower values. Confidence bands narrow as steps increase, and the final ordering is blue > orange > pink > green.
</details>
(c) Dynamic Both Rules
Figure 22: Robustness against dynamic true knowledge. EGA over 3,000 environment steps in the dynamic settings where the true item acquisition rules are changed during the learning process.
Table 16: The ratio of dependencies correctly re-learned by each agent among the items whose rules are dynamically changed (7 in total). Columns correspond to the type of ground-truth rules changed during learning: requirements only (3,0), actions only (0,3), or both (3,3).
| Agent | (3,0) | (0,3) | (3,3) |
| --- | --- | --- | --- |
| XENON | 1.0 | 1.0 | 1.0 |
| SC | 0.80 | 0.0 | 0.0 |
| ADAM | 0.83 | 0.0 | 0.0 |
| DECKARD | 0.49 | 0.0 | 0.0 |
| RAND | 0.29 | 0.0 | 0.0 |
Additionally, we show that XENON is also applicable to scenarios where the latent true knowledge changes dynamically. We design three dynamic scenarios in which the environment begins with the vanilla setting, (0,0), for the first 1,500 steps, then transitions to a level-3 perturbation setting for the subsequent 1,500 steps: required items only (3,0), actions only (0,3), or both (3,3). Upon this change, the agent is informed of which items' rules are modified but not what the new rules are, forcing it to relearn from experience. As shown in Figure 22, XENON rapidly adapts by re-learning the new dependencies and recovering its near-perfect EGA in all three scenarios. In contrast, all baselines fail to adapt effectively, with their performance remaining significantly degraded after the change. Specifically, for the 7 items whose rules are altered, Table 16 shows that XENON achieves a perfect re-learning ratio of 1.0 in all scenarios, while all baselines score 0.0 whenever actions are modified.
K.7 Ablation studies for long-horizon goal planning
Table 17: Ablation experiment results for long-horizon goal planning in MineRL. Without Learned Dependency, XENON employs a dependency graph initialized with LLM predictions and human-written examples. Without Action Correction, XENON saves and reuses successful actions in FAM, but it does not utilize the information of failed actions.
| Learned Dependency | Action Correction | CRe | Wood | Stone | Iron | Diamond | Gold | Armor | Redstone |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  |  |  | 0.54 | 0.39 | 0.10 | 0.26 | 0.45 | 0.0 | 0.0 |
|  | ✓ |  | 0.54 | 0.38 | 0.09 | 0.29 | 0.45 | 0.0 | 0.0 |
| ✓ |  |  | 0.82 | 0.69 | 0.36 | 0.59 | 0.69 | 0.22 | 0.0 |
| ✓ | ✓ |  | 0.82 | 0.79 | 0.45 | 0.59 | 0.68 | 0.21 | 0.0 |
| ✓ | ✓ | ✓ | 0.85 | 0.81 | 0.46 | 0.64 | 0.74 | 0.28 | 0.0 |
To analyze how each of XENON's components contributes to its long-horizon planning, we conducted an ablation study in MineRL, with results shown in Table 17. The findings first indicate that without accurate dependency knowledge, our action correction using FAM provides no significant benefit on its own (row 1 vs. row 2). The most critical component is the learned dependency graph, which dramatically improves success rates across all item groups (row 3). Building on this, adding FAM's action correction further boosts performance, particularly for the Stone and Iron groups where it helps overcome the LLM's flawed action priors (row 4). Finally, Context-aware Reprompting (CRe, Section 4.3) provides an additional performance gain on more challenging late-game items, such as Iron, Gold, and Armor. This is likely because their longer episode horizons offer more opportunities for CRe to rescue a stalled controller.
K.8 The Necessity of Knowledge Correction even with External Sources
<details>
<summary>x63.png Details</summary>

### Visual Description
Line chart of the "Experienced Items Ratio" (y-axis, 0.0-1.0) over environment steps (x-axis, 0-3,000) for XENON (blue, with a shaded uncertainty band), SC (pink), ADAM (orange), DECKARD (green), and RAND (gray). XENON rises sharply from 0.0 to ~1.0 by step 2,000 and plateaus, while every baseline remains roughly flat below 0.2 for the entire run.
</details>
Figure 23: Ratio of goal items obtained in one MC-TextWorld episode when each agent's dependency graph is initialized from an oracle graph while the environment's ground-truth dependency graph is perturbed. Solid lines denote the mean over 15 runs; shaded areas denote the standard deviation.
Even when an external source is available to initialize an agent's knowledge, correcting that knowledge from interaction remains essential for dependency and action learning, because such sources can be flawed or outdated. To support this, we evaluate XENON and the baselines in the MC-TextWorld environment where each agent's dependency graph is initialized from an oracle graph, while the environment's ground-truth dependency graph is perturbed (perturbation level 3 in Table 4). We measure performance as the ratio of the 67 goal items obtained within a single episode. All agents use the same intrinsic exploratory item selection rule to decide which item to attempt next: among the items not yet obtained in the current episode, each agent chooses the one it has attempted the fewest times so far.
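This selection rule can be sketched in a few lines; the function and item names below are illustrative, not taken from XENON's implementation.

```python
from collections import Counter

def select_next_item(goal_items, obtained, attempt_counts):
    """Intrinsic exploratory item selection: among goal items not yet
    obtained in the current episode, pick the least-attempted one."""
    candidates = [g for g in goal_items if g not in obtained]
    if not candidates:
        return None  # every goal item already obtained this episode
    return min(candidates, key=lambda g: attempt_counts[g])
```

Ties are broken by list order here; any fixed tie-breaking rule would serve the experiment described above equally well.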
As shown in Figure 23, this experiment demonstrates that, even when an external source is available, (1) interaction experience-based knowledge correction remains crucial when the external source is mismatched with the environment, and (2) XENON is also applicable and robust in this scenario. By continually revising its dependency knowledge, XENON achieves a much higher ratio of goal items obtained in an episode than all baselines. In contrast, the baselines either rely on unreliable LLM self-correction (e.g., SC) or do not correct flawed knowledge at all (e.g., DECKARD, ADAM, RAND), and therefore fail to obtain many goal items. Their performance is especially poor because there are dependencies between goals: for example, when the true required items for stone pickaxe and iron pickaxe are perturbed, the baselines cannot obtain these items and thus cannot obtain other goal items that depend on them.
K.9 Scalability of Dependency and Action Learning with More Goals and Actions
<details>
<summary>x64.png Details</summary>

### Visual Description
Line graph of EGA (y-axis, 0.0-1.0) over environment steps (x-axis, 0-3,000) for three goal counts, each with a shaded confidence band: 67 goals (black), 100 goals (orange), and 120 goals (blue). All three settings converge to ~1.0 EGA, at roughly 1,500, 2,000, and 2,500 steps respectively; larger goal counts converge more slowly and show wider confidence bands before tightening near convergence.
</details>
(a) Effect of increasing the number of goals
<details>
<summary>x65.png Details</summary>

### Visual Description
Line chart of EGA (y-axis, 0.0-1.0) over environment steps (x-axis, 0-10,000) for four action-set sizes, each with a shaded variability band: 3 actions (black), 15 (orange), 30 (blue), and 45 (green). All configurations eventually reach EGA 1.0, at roughly 1,000, 5,000, 7,500, and 10,000 steps respectively; larger action sets converge more slowly and with wider bands.
</details>
(b) Effect of increasing the number of actions
Figure 24: Scalability of XENON with more goals and actions. EGA over environment steps in MC-TextWorld when (a) increasing the number of goal items and (b) increasing the number of available actions. In (a), we keep the three actions ('mine', 'craft', 'smelt') fixed, while in (b) we keep the 67 goal items fixed. Solid lines denote the mean over 15 runs; shaded areas denote the standard deviation.
To evaluate the scalability of XENON's dependency and action learning, we vary the number of goal items and available actions in the MC-TextWorld environment. For the goal-scaling experiment, we increase the number of goals from 67 to 100 and 120 by adding new goal items (see Table 11 for the added goals), while keeping the original three actions 'mine', 'craft', and 'smelt' fixed. For the action-scaling experiment, we increase the available actions from 3 to 15, 30, and 45 (e.g., 'harvest', 'hunt', 'place'), while keeping the original 67 goals fixed.
The results in Figure 24 show that XENON maintains high EGA as both the number of goals and the number of actions grow, although the number of environment steps required for convergence naturally increases. As seen in Figure 24(a), increasing the number of goals from 67 to 100 and 120 only moderately delays convergence (from around 1,400 to about 2,100 and 2,600 steps). In contrast, Figure 24(b) shows a larger slowdown when increasing the number of actions (from about 1,400 steps with 3 actions to roughly 4,000, 7,000, and 10,000 steps with 15, 30, and 45 actions), which is expected because XENON only revises an item's dependency after all available actions for that item have been classified as empirically invalid by FAM. We believe this convergence speed could be improved with minimal changes, such as by lowering $x_{0}$, the failure count threshold for classifying an action as invalid, or by triggering dependency revision once the agent has failed to obtain an item a fixed number of times, regardless of which actions were tried in subgoals.
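A minimal sketch of this failure-count mechanism, assuming a per-(item, action) counter and the threshold $x_0$; the class and method names are illustrative, not XENON's actual API.

```python
from collections import defaultdict

X0 = 3  # failure-count threshold x_0 (illustrative value)

class FailureAwareActionMemory:
    """Tracks failures per (item, action); an action is classified as
    empirically invalid for an item once it has failed at least X0 times."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.failures = defaultdict(int)

    def record_failure(self, item, action):
        self.failures[(item, action)] += 1

    def valid_actions(self, item):
        return [a for a in self.actions if self.failures[(item, a)] < X0]

    def should_revise_dependency(self, item):
        # Dependency revision triggers only after every available action
        # for the item has been classified as invalid.
        return not self.valid_actions(item)
```

Lowering `X0`, as suggested above, would make the memory declare actions invalid sooner and thus trigger dependency revision earlier at the cost of more false positives under noisy feedback.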
K.10 Ablation on action selection methods for subgoal construction
<details>
<summary>x66.png Details</summary>

### Visual Description
Legend mapping markers to the five action selection methods compared in this ablation: brown triangle Random+FAM, purple star UCB, blue cross LLM, pink diamond SC, light blue circle XENON. The legend carries no numerical data; it is the key for the adjacent plots.
</details>
<details>
<summary>x67.png Details</summary>

### Visual Description
Line graph of success rate (y-axis, 0.0-1.0) versus the number of available actions (x-axis: 3, 15, 30, 45). All series start at 1.0 with 3 actions; one stays flat at 1.0 across all action counts, a second declines gradually (to ~0.35 at 45 actions), and a third declines steeply (to 0.0 at 45 actions).
</details>
(a) Success rate
<details>
<summary>x68.png Details</summary>

### Visual Description
Line graph of environment steps to success (y-axis, 0-300; lower is better) versus the number of available actions (x-axis: 3, 15, 30, 45). Two series (brown triangles, purple stars) grow from ~60 steps at 3 actions to ~275-280 steps at 45 actions, while a third series (blue circles) stays flat at ~50-55 steps across all action counts.
</details>
(b) Steps to success (lower is better)
Figure 25: Ablation on action selection methods for subgoal construction. We evaluate different action selection methods for solving long-horizon goals given an oracle dependency graph, as the size of the available action set increases. (a) Success rate and (b) number of environment steps per successful episode. Note that in (a), the curves for LLM and SC overlap at 0.0 because they fail on all episodes, and in (b), they are omitted since they never succeed.
We find that, while LLMs can in principle accelerate the search for valid actions, they do so effectively only when their flawed knowledge is corrected algorithmically. To support this, we study how different action selection methods for subgoal construction affect performance on long-horizon goals. In this ablation, the agent is given an oracle dependency graph and a long-horizon goal, and only needs to output one valid action from the available actions for each subgoal item to achieve that goal. Each episode specifies a single goal item, and it is counted as successful if the agent obtains this item within 300 environment steps in MC-TextWorld. To study scalability with respect to the size of the available action set, we vary the number of actions as 3, 15, 30, and 45 by gradually adding actions such as 'harvest' and 'hunt' to the original three actions ('mine', 'craft', 'smelt').
Methods and metrics
We compare five action selection methods: Random+FAM (which randomly samples from available actions that have not yet repeatedly failed and reuses past successful actions), UCB, LLM without memory, LLM self-correction (SC), and XENON, which combines an LLM with FAM. We report the average success rate and the average number of environment steps to success over 20 runs per goal item, where goal items are drawn from the Redstone group.
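As a concrete reference for the UCB baseline, a standard UCB1 selector over a discrete action set might look as follows; this is a sketch under the usual bandit formulation, and the exploration constant `c` is an assumption rather than the value used in the paper.

```python
import math
import random

def ucb_select(actions, counts, successes, c=1.0):
    """UCB1 action selection: balance each action's empirical success
    rate with an exploration bonus that shrinks as it is tried more."""
    untried = [a for a in actions if counts[a] == 0]
    if untried:
        # Try every action at least once before applying the UCB formula.
        return random.choice(untried)
    total = sum(counts[a] for a in actions)

    def score(a):
        mean = successes[a] / counts[a]
        return mean + c * math.sqrt(2 * math.log(total) / counts[a])

    return max(actions, key=score)
```

Random+FAM can be seen as the degenerate case that drops the score entirely and samples uniformly from the not-yet-invalidated actions.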
As shown in Figure 25, among the three LLM-based methods (LLM, SC, XENON), only XENON, which corrects the LLM's action knowledge by removing repeatedly failed actions from the set of candidate actions the LLM is allowed to select, solves long-horizon goals reliably, maintaining a success rate of 1.0 and requiring roughly 50 environment steps across all sizes of the available action set. In contrast, LLM and SC never succeed in any episode, because they keep selecting incorrect actions for subgoal items (e.g., redstone), and therefore perform worse than the non-LLM baselines, Random+FAM and UCB. Random+FAM and UCB perform well when the number of available actions is small, but become increasingly slow and unreliable as the number of actions grows, often failing to reach the goal within the episode horizon.
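The pruning step this paragraph describes, restricting the LLM's choice to actions that have not repeatedly failed, can be sketched as below; `query_llm` is a placeholder for the planner LLM call and `x0` for the failure threshold, both illustrative.

```python
def select_action_with_pruning(item, available_actions, fam_failures,
                               query_llm, x0=3):
    """Ask the LLM to pick an action for a subgoal item, but only from
    candidates that have not failed x0 or more times for that item."""
    candidates = [a for a in available_actions
                  if fam_failures.get((item, a), 0) < x0]
    if not candidates:
        return None  # all actions invalid: dependency revision takes over
    if len(candidates) == 1:
        return candidates[0]  # no need to query the LLM
    choice = query_llm(item, candidates)
    # Guard against the LLM proposing an action outside the pruned set.
    return choice if choice in candidates else candidates[0]
```

This is why XENON keeps the LLM's speed advantage when its priors are right, yet cannot loop forever on a wrong action the way plain LLM or SC selection can.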
K.11 Robustness to Smaller Planner LLMs and Limited Initial Knowledge
<details>
<summary>x69.png Details</summary>

### Visual Description
Legend mapping markers to the agents compared in this appendix: blue circle XENON, pink diamond SC, orange hexagon ADAM, green square DECKARD, gray plus RAND.
</details>
<details>
<summary>x70.png Details</summary>

Line chart for the 4B planner (Phi-4-mini): EGA (y-axis, 0.0 to 1.0) versus the number of provided human-written plans (x-axis: 1, 3), with one series per method in the shared legend.
</details>
(a) Planner LLM size: 4B
<details>
<summary>x71.png Details</summary>

Line chart for the 7B planner (Qwen2.5-VL-7B): EGA (y-axis, 0.0 to 1.0) versus the number of provided human-written plans (x-axis: 1, 3), with one series per method in the shared legend.
</details>
(b) Planner LLM size: 7B
Figure 26: Effect of planner LLM size and initial dependency graph quality on dependency and action learning. The plots show EGA after 3,000 environment steps of dependency and action learning in MC-TextWorld, obtained by varying the planner LLM size and the amount of correct knowledge in the initial dependency graph (controlled by the number of provided human-written plans). In (a), the planner is Phi-4-mini (4B) (Microsoft et al., 2025); in (b), the planner is Qwen2.5-VL-7B (7B) (Bai et al., 2025).
We further evaluate the robustness of XENON and the baselines to limited prior knowledge by measuring dependency and action learning in MC-TextWorld while (i) varying the planner LLM size and (ii) degrading the quality of the initial dependency graph. For the planner LLM, we compare a 7B model (Qwen2.5-VL-7B (Bai et al., 2025)) against a 4B model (Phi-4-mini (Microsoft et al., 2025)); for the initial graph quality, we vary the number of provided human-written plans used to initialize the graph from three ("craft iron_sword", "mine diamond", "craft golden_sword") to one ("craft iron_sword").
As shown in Figure 26, XENON remains robust across all these settings: its EGA stays near-perfect even with the smaller 4B planner and the weakest initial graph, indicating that leveraging experience can quickly compensate for weak priors. In contrast, baselines that rely on LLM self-correction (SC) or that depend strongly on the LLM or the initial graph (ADAM, DECKARD) suffer substantial drops in EGA as the planner LLM becomes smaller and the initial graph contains less correct prior knowledge. This suggests that, in our setting, algorithmic knowledge correction matters more than scaling up the planner LLM or providing richer initial human-written knowledge.
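For intuition, initializing a dependency graph from a handful of human-written plans can be sketched as follows. This is an illustrative simplification: it assumes each plan is an ordered list of subgoal items in which every item depends on its predecessor, whereas XENON's actual graph representation and plan format may differ.

```python
def init_dependency_graph(plans):
    """Build initial item-dependency edges from ordered human-written plans.

    plans: list of plans, each a list of item names ordered from first
    prerequisite to final goal. Returns a dict mapping each item to the
    set of its known prerequisite items. Sketch only; the chain-of-
    predecessors assumption is ours, not the paper's.
    """
    graph = {}
    for plan in plans:
        # Each consecutive pair (prereq, item) contributes one edge.
        for prereq, item in zip(plan, plan[1:]):
            graph.setdefault(item, set()).add(prereq)
    return graph
```

With more provided plans, more correct edges are present at initialization, which is the knob varied in Figure 26.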
K.12 Full Results on the Long-Horizon Tasks Benchmark
In this section, we report XENON's performance on each goal within the long-horizon tasks benchmark, detailing the goal item, the number of sub-goals, the success rate (SR), and the number of evaluation episodes.
Tables 18 and 19 present XENON's results when using the dependency graph learned through 400 episodes of exploration, while Tables 20 and 21 report the performance of XENON†, which leverages an oracle dependency graph.
Table 18: The results of XENON (with dependency graph learned via exploration across 400 episodes) on the Wood group, Stone group, and Iron group. SR denotes success rate.
| Group | Goal Item | Sub-Goal Num. | SR | Eval Episodes |
| --- | --- | --- | --- | --- |
| Wood | bowl | 4 | 92.68 | 41 |
| | chest | 4 | 95.24 | 42 |
| | crafting_table | 3 | 95.83 | 48 |
| | ladder | 5 | 0.00 | 31 |
| | stick | 3 | 95.45 | 44 |
| | wooden_axe | 5 | 90.91 | 44 |
| | wooden_hoe | 5 | 95.35 | 43 |
| | wooden_pickaxe | 5 | 93.02 | 43 |
| | wooden_shovel | 5 | 93.75 | 48 |
| | wooden_sword | 5 | 95.35 | 43 |
| Stone | charcoal | 8 | 87.50 | 40 |
| | furnace | 7 | 88.10 | 42 |
| | smoker | 8 | 0.00 | 47 |
| | stone_axe | 7 | 97.78 | 45 |
| | stone_hoe | 7 | 90.70 | 43 |
| | stone_pickaxe | 7 | 95.45 | 44 |
| | stone_shovel | 7 | 89.58 | 48 |
| | stone_sword | 7 | 89.80 | 49 |
| | torch | 7 | 93.02 | 43 |
| Iron | blast_furnace | 13 | 0.00 | 42 |
| | bucket | 11 | 0.00 | 47 |
| | chain | 12 | 0.00 | 42 |
| | hopper | 12 | 0.00 | 47 |
| | iron_axe | 11 | 75.56 | 45 |
| | iron_bars | 11 | 80.43 | 46 |
| | iron_hoe | 11 | 89.13 | 46 |
| | iron_nugget | 11 | 79.55 | 44 |
| | iron_pickaxe | 11 | 77.08 | 48 |
| | iron_shovel | 11 | 75.56 | 45 |
| | iron_sword | 11 | 84.78 | 46 |
| | rail | 11 | 0.00 | 44 |
| | shears | 11 | 0.00 | 43 |
| | smithing_table | 11 | 93.75 | 48 |
| | stonecutter | 12 | 0.00 | 43 |
| | tripwire_hook | 11 | 78.43 | 51 |
Table 19: The results of XENON (with dependency graph learned via exploration across 400 episodes) on the Gold group, Diamond group, Redstone group, and Armor group. SR denotes success rate.
| Group | Goal Item | Sub-Goal Num. | SR | Eval Episodes |
| --- | --- | --- | --- | --- |
| Gold | gold_ingot | 13 | 76.92 | 52 |
| | golden_axe | 14 | 72.00 | 50 |
| | golden_hoe | 14 | 66.67 | 48 |
| | golden_pickaxe | 14 | 76.00 | 50 |
| | golden_shovel | 14 | 71.74 | 46 |
| | golden_sword | 14 | 78.26 | 46 |
| Diamond | diamond | 12 | 87.76 | 49 |
| | diamond_axe | 13 | 72.55 | 51 |
| | diamond_hoe | 13 | 63.79 | 58 |
| | diamond_pickaxe | 13 | 60.71 | 56 |
| | diamond_shovel | 13 | 84.31 | 51 |
| | diamond_sword | 13 | 76.79 | 56 |
| | jukebox | 13 | 0.00 | 48 |
| Redstone | activator_rail | 14 | 0.00 | 3 |
| | compass | 13 | 0.00 | 3 |
| | dropper | 13 | 0.00 | 3 |
| | note_block | 13 | 0.00 | 4 |
| | piston | 13 | 0.00 | 12 |
| | redstone_torch | 13 | 0.00 | 19 |
| Armor | diamond_boots | 13 | 64.29 | 42 |
| | diamond_chestplate | 13 | 0.00 | 44 |
| | diamond_helmet | 13 | 67.50 | 40 |
| | diamond_leggings | 13 | 0.00 | 37 |
| | golden_boots | 14 | 69.23 | 39 |
| | golden_chestplate | 14 | 0.00 | 39 |
| | golden_helmet | 14 | 60.53 | 38 |
| | golden_leggings | 14 | 0.00 | 38 |
| | iron_boots | 11 | 94.44 | 54 |
| | iron_chestplate | 11 | 0.00 | 42 |
| | iron_helmet | 11 | 4.26 | 47 |
| | iron_leggings | 11 | 0.00 | 41 |
| | shield | 11 | 0.00 | 46 |
Table 20: The results of XENON† (with oracle dependency graph) on the Wood group, Stone group, and Iron group. SR denotes success rate.
| Group | Goal Item | Sub-Goal Num. | SR | Eval Episodes |
| --- | --- | --- | --- | --- |
| Wood | bowl | 4 | 94.55 | 55 |
| | chest | 4 | 94.74 | 57 |
| | crafting_table | 3 | 94.83 | 58 |
| | ladder | 5 | 94.74 | 57 |
| | stick | 3 | 95.08 | 61 |
| | wooden_axe | 5 | 94.64 | 56 |
| | wooden_hoe | 5 | 94.83 | 58 |
| | wooden_pickaxe | 5 | 98.33 | 60 |
| | wooden_shovel | 5 | 96.49 | 57 |
| | wooden_sword | 5 | 94.83 | 58 |
| Stone | charcoal | 8 | 92.68 | 41 |
| | furnace | 7 | 90.00 | 40 |
| | smoker | 8 | 87.50 | 40 |
| | stone_axe | 7 | 95.12 | 41 |
| | stone_hoe | 7 | 94.87 | 39 |
| | stone_pickaxe | 7 | 94.87 | 39 |
| | stone_shovel | 7 | 94.87 | 39 |
| | stone_sword | 7 | 92.11 | 38 |
| | torch | 7 | 92.50 | 40 |
| Iron | blast_furnace | 13 | 82.22 | 45 |
| | bucket | 11 | 89.47 | 38 |
| | chain | 12 | 83.33 | 36 |
| | hopper | 12 | 77.78 | 36 |
| | iron_axe | 11 | 82.50 | 40 |
| | iron_bars | 11 | 85.29 | 34 |
| | iron_hoe | 11 | 75.68 | 37 |
| | iron_nugget | 11 | 84.78 | 46 |
| | iron_pickaxe | 11 | 83.33 | 42 |
| | iron_shovel | 11 | 78.38 | 37 |
| | iron_sword | 11 | 85.42 | 48 |
| | rail | 11 | 80.56 | 36 |
| | shears | 11 | 82.05 | 39 |
| | smithing_table | 11 | 83.78 | 37 |
| | stonecutter | 12 | 86.84 | 38 |
| | tripwire_hook | 11 | 91.18 | 34 |
Table 21: The results of XENON† (with oracle dependency graph) on the Gold group, Diamond group, Redstone group, and Armor group. SR denotes success rate.
| Group | Goal Item | Sub-Goal Num. | SR | Eval Episodes |
| --- | --- | --- | --- | --- |
| Gold | gold_ingot | 13 | 78.38 | 37 |
| | golden_axe | 14 | 65.12 | 43 |
| | golden_hoe | 14 | 70.27 | 37 |
| | golden_pickaxe | 14 | 75.00 | 36 |
| | golden_shovel | 14 | 78.38 | 37 |
| Diamond | diamond | 12 | 71.79 | 39 |
| | diamond_axe | 13 | 70.00 | 40 |
| | diamond_hoe | 13 | 85.29 | 34 |
| | diamond_pickaxe | 13 | 72.09 | 43 |
| | diamond_shovel | 13 | 76.19 | 42 |
| | diamond_sword | 13 | 80.56 | 36 |
| | jukebox | 13 | 69.77 | 43 |
| Redstone | activator_rail | 14 | 67.39 | 46 |
| | compass | 13 | 70.00 | 40 |
| | dropper | 13 | 75.00 | 40 |
| | note_block | 13 | 89.19 | 37 |
| | piston | 13 | 65.79 | 38 |
| | redstone_torch | 13 | 84.85 | 33 |
| Armor | diamond_boots | 13 | 60.78 | 51 |
| | diamond_chestplate | 13 | 20.00 | 50 |
| | diamond_helmet | 13 | 71.79 | 39 |
| | diamond_leggings | 13 | 33.33 | 39 |
| | golden_boots | 14 | 75.00 | 40 |
| | golden_chestplate | 14 | 0.00 | 36 |
| | golden_helmet | 14 | 54.05 | 37 |
| | golden_leggings | 14 | 0.00 | 38 |
| | iron_boots | 11 | 93.62 | 47 |
| | iron_chestplate | 11 | 97.50 | 40 |
| | iron_helmet | 11 | 86.36 | 44 |
| | iron_leggings | 11 | 97.50 | 40 |
| | shield | 11 | 97.62 | 42 |
K.13 Experiment Compute Resources
All experiments were conducted on an internal computing cluster equipped with RTX 3090, A5000, and A6000 GPUs. We report the total aggregated compute time from running multiple parallel experiments. For dependency learning, i.e., exploration across 400 episodes in the MineRL environment, the total compute time was 24 days. The evaluation on the long-horizon tasks benchmark in the MineRL environment required a total of 34 days of compute, and the dependency-learning experiments in MC-TextWorld used a total of 3 days. We note that these values represent aggregated compute time; the actual wall-clock time for individual experiments was significantly shorter due to parallelization.
Appendix L The Use of Large Language Models (LLMs)
In preparing this manuscript, we used an LLM as a writing assistant to improve the text. Its role included refining grammar and phrasing, suggesting clearer sentence structures, and maintaining a consistent academic tone. All technical contributions, experimental designs, and final claims were developed by the human authors, who thoroughly reviewed and take full responsibility for the paper's content.