2602.00276v1
# Localizing and Correcting Errors for LLM-based Planners
**Authors**: Aditya Kumar, William Cohen
Abstract
Large language models (LLMs) have demonstrated strong reasoning capabilities on math and coding, but frequently fail on symbolic classical planning tasks. Our studies, as well as prior work, show that LLM-generated plans routinely violate domain constraints given in their instructions (e.g., walking through walls). To address this failure, we propose iteratively augmenting instructions with Localized In-Context Learning (L-ICL) demonstrations: targeted corrections for specific failing steps. Specifically, L-ICL identifies the first constraint violation in a trace and injects a minimal input-output example giving the correct behavior for the failing step. L-ICL is far more effective than explicit instructions, traditional ICL (which adds complete problem-solving trajectories), and many other baselines. For example, on an 8×8 gridworld, L-ICL produces valid plans 89% of the time with only 60 training examples, compared to 59% for the best baseline, an absolute improvement of 30 points. L-ICL also shows dramatic improvements in other domains (gridworld navigation, mazes, Sokoban, and BlocksWorld), and on several LLM architectures.
Machine Learning, ICML
1 Introduction
Large language models (LLMs) and agentic systems reason and plan effectively in domains such as mathematics, coding, and question answering (Khattab et al., 2023; Yao et al., 2023a), suggesting that modern LLMs possess strong general planning capabilities. However, studies on classical planning benchmarks reveal a more nuanced picture: LLMs frequently fail, even on simple planning tasks that symbolic planners solve easily (Valmeekam et al., 2023; Stechly et al., 2024). Past researchers have analyzed plans produced by LLMs such as SearchFormer (Lehnert et al., 2024), which are fine-tuned to generate structured reasoning chains that can be parsed, and shown that LLMs frequently violate domain constraints given in their instructions (Stechly et al., 2024). For example, LLMs might propose plans that walk through a wall in a maze, or pick up a block when the robot's gripper is already occupied.
Table 1: Performance on an 8×8 two-room gridworld using DeepSeek V3. Paths start in one room and end in the other. Valid plans never leave the grid or cross walls; Successful plans reach their goals; and Optimal plans are successful and use the minimum number of steps. L-ICL[$m$] denotes our method trained on $m$ examples, with the corresponding character count of L-ICL examples provided. All experiments are provided with an ASCII representation of the grid.
| Method | %Valid | %Successful | %Optimal |
| --- | --- | --- | --- |
| Zero-Shot | 16 | 0 | 0 |
| RAG-ICL [10k chars] | 20 | 6 | 6 |
| RAG-ICL [20k chars] | 21 | 9 | 9 |
| ReAct | 48 | 41 | 37 |
| Self-Consistency ($k{=}5$) | 59 | 45 | 43 |
| Self-Refine ($k{=}5$) | 51 | 44 | 38 |
| PTP (Cohen and Cohen, 2024) / L-ICL [$m{=}0$] | 40 | 33 | 28 |
| L-ICL [ours, $m{=}60$, 2k chars] | 89 | 89 | 77 |
Table 1 demonstrates this on a very simple 8×8 two-room gridworld navigation task. Despite receiving complete information about the domain (grid layout and obstacles), no baseline method produces valid plans even 60% of the time. Agentic and test-time-scaling approaches perform better, but still produce many invalid plans. We conjecture that LLMs cannot build valid plans for this task because they fail to consistently access the necessary domain-specific knowledge in the prompt. This hypothesis is consistent with the failure of LLMs in these domains, and with their success in math and coding, where the necessary knowledge is general, and hence learnable in pre-training or fine-tuning.
In-context learning (ICL) is a natural remedy. However, complete solution trajectories demonstrate that plans work, not why individual steps are valid, leaving constraints implicit. As Table 1 shows, even 20,000 characters of retrieved trajectories (RAG-ICL, which retrieves demonstrations for tasks with similar start and end goals) yield only 9% success. The rules must still be inferred, and inference fails.
L-ICL escapes this trap by letting failures reveal which constraints need explicit specification. Rather than full trajectories, we augment prompts with localized examples that demonstrate correct behavior on the individual steps where models err, an approach we call Localized In-Context Learning (L-ICL). L-ICL achieves higher performance with far less context: 2,000 characters of targeted corrections outperform 20,000 characters of trajectories. Generating L-ICL examples requires analyzing and correcting reasoning traces at training time, which we enable by prompting models to produce structured reasoning traces, and then correcting the traces with a symbolic planner. Thus, L-ICL might be viewed as distilling domain knowledge from a symbolic system into an LLM.
Figure 1 summarizes our approach, which builds on Program Trace Prompting (PTP) (Cohen and Cohen, 2024). PTP recasts reasoning as producing a "program trace" for a partially specified program. A PTP prompt includes, for each type of reasoning "step", documentation (but not code) for a corresponding subroutine, along with (optional) example inputs and outputs. For instance, a gridworld navigation task might include a subroutine get_applicable_actions(cell) that returns the set of obstacle-free cells adjacent to the input cell. Because no executable code is provided in PTP, just documentation, the LLM must infer how to perform the reasoning step: e.g., in gridworld navigation, the LLM must infer which moves are valid for a task. PTP's prompting scheme provides a natural insertion point for localized corrections: when a subroutine call fails, we locally augment that subroutine's documentation by adding a new input/output example. The input/output examples use Python's doctest syntax, a format well-represented in LLM training data and hence readily understood by code-trained LLMs.
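As a concrete illustration, a PTP-style subroutine specification might look like the following (a hypothetical prompt fragment of our own construction, not the paper's exact prompt; the function name matches the example above, the doctest values are invented):

```python
# Hypothetical PTP-style prompt fragment: documentation plus one
# doctest-formatted example, with the implementation withheld so the
# LLM must infer the subroutine's behavior from context.
PTP_SUBROUTINE_DOC = '''\
def get_applicable_actions(cell):
    """Return the obstacle-free cells adjacent to `cell` as move actions.

    >>> get_applicable_actions((1, 2))
    ['move_north', 'move_east']
    """
    ...  # implementation withheld from the prompt
'''
```

The doctest lines (`>>> call` followed by the expected output) are exactly the format into which L-ICL later inserts its localized corrections.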
Figure 1: Overview of L-ICL. The prompt template follows PTP: it includes documentation for each subroutine but no executable code. Prompting an LLM produces a trace that follows the format of the $k$ provided example traces. The trace is parsed to find the first failing step, and the failing input is passed to an oracle that returns the correct output. This yields a localized example (e.g., $x{=}\texttt{(5,4)}$, $y{=}\texttt{['move\_east','move\_west']}$) that is inserted into the subroutine's documentation. This process iterates over training instances to accumulate examples in a failure-driven manner.
Given a planning task, we first prompt the LLM to generate a trace using the PTP format. We then analyze this trace programmatically to identify the first failing step, i.e., the first subroutine call whose output violates domain constraints. An oracle (a symbolic simulator or verifier) provides the correct output for that input, yielding a localized correction. This correction is then inserted into the prompt. For instance, if the LLM's first invalid move is from cell $(3,4)$, L-ICL will add to the prompt an example showing that get_applicable_actions((3,4)) should return ['move_north', 'move_south']. This localized correction directly addresses the failure, and of course can also be generalized by the LLM to other similar cases.
This process iterates over multiple training instances, accumulating a bank of targeted examples that progressively refine the modelβs understanding of domain constraints. Crucially, the oracle is required only during training.
Experimentally, prompt augmentation with L-ICL dramatically reduces domain violations, and thus improves LLM planning performance across multiple domains. Beyond the results of Table 1 and other gridworld tasks, we evaluate on classical planning benchmarks like BlocksWorld and Sokoban, seeing similar gains. L-ICL is also remarkably sample-efficient: peak performance is typically achieved with only 30–60 training examples. L-ICL works on multiple LLM architectures (DeepSeek V3, DeepSeek V3.1, Claude Haiku 4.5, Claude Sonnet 4.5), and learned constraints can transfer across problem sizes (see Appendix B).
To summarize our contributions: (1) Using the PTP variant of semi-structured reasoning, we precisely measure constraint violation rates in LLM-generated plans across multiple planning domains, revealing that such violations are the dominant failure mode. (2) We introduce L-ICL, a method that improves planning validity through localized, failure-driven corrections, and show that targeted examples outperform retrieval of complete trajectories even when the latter uses 10× more context. (3) We demonstrate consistent improvements across multiple planning domains and four LLM architectures. (4) We release our benchmark suite and code to facilitate future research on LLM planning.
2 Related Work
2.1 LLM Planning: Capabilities and Limitations
The planning capabilities of LLMs remain contested. One line of work reports strong performance on some planning tasks when LLMs are augmented with appropriate scaffolding: e.g., Tree of Thoughts achieves 74% on Game of 24 versus 4% for chain-of-thought (Yao et al., 2023a), RAP-MCTS reaches 100% on Blocksworld instances requiring 6 or fewer steps (Hao et al., 2023), and ReAct improves interactive decision-making by 34% over baselines (Yao et al., 2023b). However, systematic evaluation on classical planning benchmarks reveals persistent failures. Valmeekam et al. (2023) show GPT-4 achieves only 12% success on International Planning Competition (IPC) domains; and Stechly et al. (2024) demonstrate that chain-of-thought improvements are brittle and fail to generalize beyond surface patterns. The LLM-Modulo framework (Kambhampati et al., 2024) argues that LLMs function as approximate knowledge sources rather than autonomous planners, achieving strong results only when paired with external verifiers. Kaesberg et al. (2025) also documented that LLMs are challenged by 2D navigation tasks, similar to ones we study here. Most recently, Shojaee et al. (2025) identify a "complexity collapse" phenomenon: reasoning models' performance degrades sharply beyond certain problem complexities, with accuracy dropping to zero on harder instances even when token budgets remain available.
We follow Stechly et al. (2024) in working to diagnose why LLMs violate constraints using structured reasoning chains; however, we work with PTP as a prompting scheme, rather than models fine-tuned to produce structured reasoning chains, allowing us to consider more kinds of models, and more powerful ones. With L-ICL, we also propose a practical method to reduce these violations. Our work confirms that constraint violations are a common failure mode, and shows that targeted corrections outperform both agentic scaffolding and retrieval-based ICL approaches.
2.2 Approaches to Improve LLM Reasoning
Prior work addresses LLM reasoning limitations through three main strategies: structured output formats, test-time compute scaling, and in-context learning.

**Structured Reasoning.** Chain-of-thought prompting (Wei et al., 2022) improves performance by eliciting intermediate steps, though explanations may be unfaithful to actual computation (Turpin et al., 2023). PTP (Cohen and Cohen, 2024) offers interpretable traces: prompts specify subroutine signatures without implementations, and the LLM produces structured outputs that can be parsed and verified (Leng et al., 2025). We build on PTP because its explicit subroutine structure provides natural insertion points for localized corrections.

**Test-Time Compute.** Several methods improve reasoning by expending more computation at inference. Self-Consistency (Wang et al., 2023) aggregates multiple sampled paths via majority voting; Tree of Thoughts (Yao et al., 2023a) explores branching reasoning trajectories; and Self-Refine (Madaan et al., 2023) iteratively improves outputs through self-critique. Tool-augmented approaches interleave reasoning with execution: Program of Thoughts (Chen et al., 2022), PAL (Gao et al., 2023), and Chain of Code (Li et al., 2023) generate executable code, while ReAct (Yao et al., 2023b) interleaves reasoning with tool calls. These methods require multiple LLM calls or external tools at inference. Critically, Stechly et al. (2025) show that LLM self-verification is unreliable, making self-critique ineffective for planning.

**In-Context Learning.** ICL enables task adaptation through examples (Brown et al., 2020), with effectiveness depending on example selection (Liu et al., 2022) and format (Min et al., 2022). For planning, a natural approach is retrieving complete solution trajectories (RAG-ICL). However, we find this ineffective: 20,000 characters of retrieved trajectories yield only 9% success on our gridworld benchmark.
Complete trajectories demonstrate that solutions work but leave implicit why individual steps are valid. L-ICL addresses this by providing localized input-output pairs that directly encode constraints. Table 2 summarizes how L-ICL relates to prior approaches.
Table 2: Comparison of L-ICL with related approaches. L-ICL uniquely combines example-based training with localized feedback while requiring only single-pass inference.
| Method | Training examples | Inference passes | Oracle feedback |
| --- | --- | --- | --- |
| Self-Refine | none | many | none |
| Tree of Thoughts | none | many | none |
| Self-Consistency | none | many | none |
| ReAct | none | many | none |
| ReAct + oracle f/b | none | many | yes |
| Fine-tuning | trajectory | one | none |
| RAG-ICL | trajectory | one | none |
| L-ICL (ours) | one step | one | train only |
3 Method
We first describe Program Trace Prompting (PTP), the structured reasoning framework underlying our approach. We then introduce Localized In-Context Learning (L-ICL), our method for iteratively injecting domain constraints into the prompt. Finally, we describe our experimental domains and evaluation setup.
3.1 Background: Program Trace Prompting
Program Trace Prompting (PTP) (Cohen and Cohen, 2024) recasts reasoning as producing an execution trace for a partially specified program. A PTP prompt contains documentation for each subroutine (function name, typed arguments, return type, and a natural language description of its purpose), a small number of example traces showing how subroutines are called, and the query problem to solve. Crucially, subroutine implementations are withheld; the LLM must infer correct behavior from context. For planning tasks, we define subroutines corresponding to planning primitives. For instance, a gridworld navigation task includes a subroutine that returns applicable actions from a given state (those that stay in bounds and avoid walls), a subroutine that returns the resulting state after executing an action, and a subroutine that checks whether the current state satisfies the goal. The LLM generates a trace by repeatedly invoking these subroutines, producing outputs consistent with the documentation and examples. Because the trace follows a predictable structure, we can parse it programmatically and verify each step against a ground-truth oracle. This explicit subroutine structure provides natural insertion points for corrections: when a specific subroutine call fails, we can augment that subroutineβs documentation without modifying the rest of the prompt. Full subroutine specifications for each domain appear in Appendix E.
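To make the ground-truth oracle concrete, the following is a minimal gridworld simulator of our own (an illustrative sketch, not the authors' code; the function name mirrors the subroutine described above, and the grid size and wall encoding are assumptions):

```python
# Minimal gridworld oracle sketch (our illustration; the paper's oracles
# are domain-specific simulators and off-the-shelf planners). Returns the
# moves from `cell` that stay in bounds and avoid wall cells.
MOVES = {
    'move_north': (-1, 0),
    'move_south': (1, 0),
    'move_east': (0, 1),
    'move_west': (0, -1),
}

def get_applicable_actions(cell, walls, size=8):
    r, c = cell
    applicable = []
    for name, (dr, dc) in MOVES.items():
        nr, nc = r + dr, c + dc
        # Keep the move only if it stays on the grid and hits no wall.
        if 0 <= nr < size and 0 <= nc < size and (nr, nc) not in walls:
            applicable.append(name)
    return applicable
```

An oracle like this can both verify each parsed subroutine call in a trace and supply the correct output when a call fails.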
3.2 Localized In-Context Learning (L-ICL)
The key insight behind L-ICL is that domain constraints can be taught more effectively through targeted examples than through complete solution trajectories. When an LLM violates a constraint (e.g., proposing to move through a wall), traditional approaches either reject the entire plan or provide feedback on the final outcome. L-ICL instead identifies the precise point of failure and injects a minimal correction for that specific subroutine call.

**First Failure Identification.** Given an LLM-generated trace, we parse each subroutine call and verify its output against an oracle. Let $c_{1},c_{2},...,c_{n}$ denote the sequence of subroutine calls in the trace. We identify the first failing call $c_{i^{*}}$ such that the LLM's output differs from the oracle's:
$$
i^{*}=\min\{i:\text{LLM}(c_{i})\neq\text{Oracle}(c_{i})\}
$$
Focusing on the first failure is deliberate. Planning errors cascade: an invalid move at step $k$ renders all subsequent state representations incorrect, making later "errors" artifacts of the initial mistake rather than independent failures. Correcting the root cause addresses multiple downstream errors simultaneously.

**Localized Correction.** For the failing call $c_{i^{*}}$ with input $x$ and incorrect output $\hat{y}$, we query the oracle to obtain the correct output $y^{*}=\text{Oracle}(x)$. This yields a correction tuple $(f,x,y^{*})$ where $f$ is the subroutine name. We format this correction as a doctest-style example and insert it into the documentation for subroutine $f$, augmenting the original description with an additional input-output pair. This format, drawn from Python's widely used doctest convention, is well-represented in LLM training data. Appendix E.3 provides concrete examples of the correction format.

**Iterative Accumulation.** L-ICL iterates over a set of training problems $\{P_{1},P_{2},...,P_{m}\}$. For each problem, we generate a trace using the current prompt, identify the first failing subroutine call (if any), and add the corresponding correction to the prompt. Corrections accumulate across training problems, progressively "hardening" the prompt to avoid constraint violations. Algorithm 1 provides pseudocode. L-ICL converges quickly: we see diminishing returns after only 30–60 training examples on our benchmark tasks (see Section 4).
Algorithm 1 Localized In-Context Learning (L-ICL)
Require: Base prompt $\mathcal{P}_{0}$ with PTP structure, training problems $\{P_{1},...,P_{m}\}$, oracle $\mathcal{O}$
Ensure: Augmented prompt $\mathcal{P}$
$\mathcal{P}\leftarrow\mathcal{P}_{0}$
$\mathcal{C}\leftarrow\emptyset$ $\triangleright$ Correction set
for $j=1$ to $m$ do
$\tau\leftarrow\textsc{GenerateTrace}(\mathcal{P}_{0},P_{j})$
$\{c_{1},...,c_{n}\}\leftarrow\textsc{ParseCalls}(\tau)$
for $i=1$ to $n$ do
$(f,x,\hat{y})\leftarrow c_{i}$
$y^{*}\leftarrow\mathcal{O}(f,x)$
if $\hat{y}\neq y^{*}$ then
$\mathcal{C}\leftarrow\mathcal{C}\cup\{(f,x,y^{*})\}$ $\triangleright$ Record first failure
break
end if
end for
end for
$\mathcal{P}\leftarrow\textsc{InsertCorrections}(\mathcal{P}_{0},\mathcal{C})$ $\triangleright$ Batch update
return $\mathcal{P}$
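The training loop above can be sketched in Python as follows (our simplification: `generate_trace` and `oracle` are stubbed placeholders, and corrections are appended as doctest-style lines at the end of the prompt, whereas the actual method inserts them into the failing subroutine's documentation):

```python
def l_icl_train(base_prompt, problems, generate_trace, oracle):
    """Sketch of Algorithm 1 (L-ICL training).

    generate_trace(prompt, problem) -> list of (subroutine, input, output)
        triples parsed from an LLM-produced trace.
    oracle(subroutine, input) -> ground-truth output.
    """
    corrections = []
    for problem in problems:
        for f, x, y_hat in generate_trace(base_prompt, problem):
            y_star = oracle(f, x)
            if y_hat != y_star:
                # Record only the first failing call for this problem.
                if (f, x, y_star) not in corrections:
                    corrections.append((f, x, y_star))
                break
    return insert_corrections(base_prompt, corrections)

def insert_corrections(prompt, corrections):
    # Batch update: append each correction as a doctest-style example.
    for f, x, y in corrections:
        prompt += f"\n>>> {f}({x!r})\n{y!r}"
    return prompt
```

Note how, per the algorithm, traces are always generated from the base prompt and corrections are inserted in one batch at the end.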
3.3 Experimental Domains
We design our experimental domains as a progressive ablation study that isolates different facets of planning difficulty. Starting from simple navigation, we incrementally add complexity along several axes: spatial structure, action diversity, state tracking requirements, and strategic reasoning. Table 3 summarizes how each domain isolates specific challenges.
Table 3: Progressive ablation across experimental domains. Each domain adds complexity along one or more axes while controlling others.
| Domain | Spatial structure | Actions | Objects | Strategic reasoning |
| --- | --- | --- | --- | --- |
| 8×8 Grid | Simple | 4 | 1 | No |
| 10×10 Maze | Complex | 4 | 1 | No |
| Sokoban Grid | Complex | 4 | 1 | No |
| Full Sokoban | Complex | 8 | 3 | Yes |
| BlocksWorld | None | 2 | 5 | No |
The 8×8 Two-Room Gridworld is our simplest setting, testing basic spatial reasoning: an agent must navigate between two rooms connected by a single doorway. The 10×10 Maze increases spatial complexity with narrow corridors and dead ends, requiring longer plans (typically 15–25 steps versus 8–12 for the gridworld). Full Sokoban introduces the critical challenge of multi-object state tracking (an agent and a box), where the agent must coordinate its position with multiple box positions, and where certain pushes lead to irreversible trap states. Sokoban-Style Gridworld ablates Sokoban by removing pushable boxes, but keeping the spatial layout and action semantics, isolating the effect of richer environment structure. Finally, BlocksWorld differs qualitatively from navigation: every object (block) is dynamic, constraints depend on relational configurations rather than spatial positions, and we provide an algorithmic sketch to test whether L-ICL can improve adherence to prescribed planning strategies. Full domain specifications appear in Appendix C.
3.4 Baselines and Metrics
We compare L-ICL against several approaches spanning prompting strategies, agentic methods, and retrieval.

**Zero-Shot.** The LLM receives the problem description and instructions with no in-context examples, measuring baseline capability without demonstration.

**RAG-ICL.** We retrieve complete CoT-formatted solution trajectories for similar problems based on start/goal similarity, and evaluate at 10k and 20k character budgets.

**ReAct.** The LLM is instructed to interleave reasoning and action selection in its output, following the prompt format specified in Appendix F.2. We evaluate a prompt-only version and an oracle-augmented version that queries a verifier during planning.

**Self-Consistency.** Majority voting with $k{=}5$ reasoning paths sampled at temperature 0.7.

**Self-Refine.** The LLM generates a solution, then critiques and refines it, based on its own feedback, for $k{=}5$ iterations.

**Tree-of-Thoughts.** The LLM explores a tree of intermediate steps, evaluating and pruning branches (prompt-only, no external search).

Crucially, ReAct (Oracle) queries the verifier at test time for each proposed action, while L-ICL uses the oracle only during training. At inference, L-ICL requires a single forward pass with no external dependencies. For L-ICL, we report results with different numbers of training examples $m$ (denoted L-ICL[$m$]) to assess sample efficiency.
We evaluate plans along three axes that form a natural hierarchy. A plan is valid if it violates no domain constraints (e.g., no wall collisions). A plan is successful if it is valid and reaches the goal state. A plan is optimal if it is successful and uses the minimum number of steps. Hence, a large valid-to-success gap indicates the model follows rules but fails to reach goals, and a large success-to-optimal gap indicates inefficient but functional plans.
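The three-level hierarchy can be expressed directly as a small grader (our sketch; the constraint check and goal check themselves are domain-specific and not shown):

```python
# Grade a plan under the valid -> successful -> optimal hierarchy described
# above. Inputs are the outcomes of domain-specific checks (our assumption):
# whether any constraint was violated, whether the goal was reached, the plan
# length, and the minimum achievable length.
def grade_plan(violates_constraints, reaches_goal, num_steps, min_steps):
    valid = not violates_constraints
    successful = valid and reaches_goal          # successful implies valid
    optimal = successful and num_steps == min_steps  # optimal implies successful
    return {'valid': valid, 'successful': successful, 'optimal': optimal}
```

Because each level requires the one below it, a goal-reaching plan that crosses a wall still counts as neither valid nor successful.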
3.5 Experimental Setup
Our primary experiments use DeepSeek V3 (DeepSeek-AI, 2024), with additional evaluation on DeepSeek V3.1, Claude Haiku 4.5, and Claude Sonnet 4.5 (Anthropic, 2025) to assess cross-architecture generalization. For each domain, we generate 100 test problems with random start and goal configurations. Training problems for L-ICL are drawn from a disjoint pool of 250 instances. For domains other than BlocksWorld, prompts use a textual state representation, as suggested in Figure 1, and unless stated otherwise, an ASCII representation of the grid. Oracles are domain-specific: simple simulators for gridworlds and mazes, and the Fast Downward planner (Helmert, 2006) and tools like the K-Star Planner (Katz and Lee, 2023; Lee et al., 2023) for Sokoban and BlocksWorld. We use temperature 1 for optimal model performance (DeepSeek-AI, 2024) unless stated otherwise. L-ICL is trained on up to 240 examples.
4 Results
We evaluate L-ICL across our domain suite, demonstrating that localized corrections dramatically improve constraint adherence while remaining sample-efficient. We ask four key questions about L-ICL: (1) Does it learn domain constraints? (2) Is it more efficient than retrieval-based ICL? (3) Does it require explicit spatial representations? (4) Does it generalize across LLM architectures?
4.1 L-ICL Learns Domain Constraints
Table 4 presents our main results across all domains. L-ICL consistently outperforms all baselines, often by substantial margins. Beyond raw performance gains, the pattern of results across our progressive domain suite reveals which aspects of planning L-ICL addresses effectively.
Table 4: Main results across all domains. We report %(V)alid and %(S)uccessful. All baselines receive ASCII grid representations. L-ICL[$m$] denotes training on $m$ examples. Best results in bold, second-best underlined. $\dagger$ ReAct (Oracle f/b) receives oracle feedback at inference time. $\ddagger$ L-ICL (no grid) methods are handicapped: they receive no ASCII grid, and rely purely on L-ICL to infer structure.
| | 8×8 Grid | | 10×10 Maze | | Sokoban Grid | | Full Sokoban | | BlocksWorld | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method | V | S | V | S | V | S | V | S | V | S |
| Zero-Shot | 16 | 0 | 3 | 0 | 15 | 0 | 1 | 0 | 10 | 10 |
| RAG-ICL (10k chars) | 20 | 6 | 7 | 1 | 17 | 4 | 31 | 11 | 25 | 25 |
| RAG-ICL (20k chars) | 21 | 9 | 7 | 4 | 25 | 10 | 36 | 15 | 32 | 32 |
| ReAct (Prompt-Only) | 48 | 41 | 6 | 5 | 19 | 12 | 1 | 0 | 46 | 45 |
| Self-Consistency ( $k{=}5$ ) | 59 | 45 | 3 | 3 | 10 | 5 | 2 | 1 | 31 | 31 |
| Self-Refine ( $k{=}5$ ) | 51 | 44 | 3 | 1 | 13 | 8 | 0 | 0 | 49 | 49 |
| ToT (Prompt-Only) | 33 | 12 | 1 | 0 | 3 | 2 | 0 | 0 | 50 | 40 |
| ReAct (Oracle f/b) $\dagger$ | 55 | 45 | 6 | 5 | 21 | 13 | 3 | 0 | 51 | 51 |
| L-ICL[ $m{=}0$ ] (ours) | 40 | 33 | 20 | 16 | 21 | 17 | 19 | 13 | 50 | 48 |
| L-ICL[ $m{=}60$ ] (ours) | 89 | 89 | 40 | 21 | 63 | 49 | 46 | 20 | 68 | 66 |
| L-ICL[$m{=}0$] $\ddagger$ (ours) | 19 | 12 | 7 | 6 | 10 | 8 | 12 | 9 | 50 | 48 |
| L-ICL[$m{=}60$] $\ddagger$ (ours) | 73 | 63 | 57 | 27 | 62 | 44 | 42 | 14 | 68 | 66 |
**8×8 Gridworld.** The complete failure of zero-shot prompting (0% success) on this simple two-room task is striking: the model receives full information about walls, start, and goal, yet fails completely. This reveals that the bottleneck is not knowledge but application. L-ICL achieves 63% success, demonstrating that localized corrections bridge this gap. Figure 2 shows rapid improvement in the first 30 examples, with continued gains for ${\sim}160$ examples before plateauing.

**10×10 Maze.** The maze's narrow corridors and longer optimal paths (15–25 steps) challenge all methods. L-ICL reaches 27% success where baselines achieve at most 5%. Notably, valid rates reach 57%, indicating that most L-ICL plans respect maze constraints even when they fail to reach the goal. This valid-to-success gap suggests that constraint satisfaction and goal-directed search are separable challenges; L-ICL addresses the former effectively.

**Sokoban Grid.** Despite adopting Sokoban's richer spatial structure, this domain (without pushable boxes) yields results intermediate between the prior domains: L-ICL achieves 49% success versus 13% for the best baseline. The similarity suggests that spatial complexity, not action vocabulary, dominates difficulty in navigation tasks.

**Full Sokoban.** Introducing pushable boxes causes the sharpest performance degradation across all methods. L-ICL improves success from 13% to only 20%, yet increases valid action rates from 19% to 46%. This dissociation isolates multi-object state tracking as a distinct challenge: L-ICL teaches which pushes are legal, but coordinating agent and box positions toward the goal requires capabilities beyond constraint satisfaction, further analyzed in Appendix A.

**BlocksWorld.** This domain differs qualitatively: constraints are relational ("block A is on block B") rather than spatial, and every object is dynamic. L-ICL still improves success from 48% to 66%, demonstrating that localized corrections generalize beyond navigation.
Figure 2: 8 $\times$ 8 Gridworld learning curves. Success and Optimal rates vs. training examples. L-ICL (without being given the ASCII grid) improves rapidly in the first 30–60 examples, substantially outperforming all baselines, which are given access to the ASCII grid (horizontal line shows best baseline).
4.2 L-ICL Is More Efficient Than Retrieval-Based ICL
A key advantage of L-ICL is sample efficiency: localized corrections convey more information per token than complete solution trajectories. Figure 3 compares L-ICL and RAG-ICL as a function of context size. RAG-ICL with 20,000 characters of retrieved trajectories achieves 16% success. L-ICL matches this performance with approximately 5,000 characters and reaches 63% success with 7,000 characters. At matched context size, L-ICL outperforms RAG-ICL by 40+ percentage points. This efficiency stems from the compression achieved by localized examples. A complete trajectory demonstrates that a solution works but leaves implicit why individual steps are valid. A local example like `get_applicable_actions((3,4)) -> ['move_north', 'move_south']` directly encodes that eastward movement from (3,4) is blocked.
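The failure-driven accumulation of such local examples can be sketched concretely. Assuming a trace format of (state, action) pairs and an oracle returning the applicable actions at each state (the function names and demo format below are illustrative, not the paper's exact implementation), the core loop of localizing the first violation and appending a minimal demonstration might look like:

```python
# Hedged sketch of a failure-driven L-ICL loop; `oracle_actions` and the
# rendered demo format are assumptions for illustration.

def localize_first_violation(trace, oracle_actions):
    """Return (state, legal_actions) at the first step where the model's
    chosen action is not in the oracle's applicable set, else None."""
    for state, action in trace:
        legal = oracle_actions(state)
        if action not in legal:
            return state, legal
    return None

def make_local_demo(state, legal):
    """Render a minimal input-output example for the failing step."""
    return f"get_applicable_actions({state}) -> {sorted(legal)}"

def augment_prompt(prompt, trace, oracle_actions):
    """Append a localized correction only when the trace violates a constraint."""
    hit = localize_first_violation(trace, oracle_actions)
    if hit is None:
        return prompt  # trace was valid; nothing to add
    state, legal = hit
    return prompt + "\n" + make_local_demo(state, legal)
```

For instance, with an oracle where only north/south moves are legal at (3, 4), a trace attempting `move_east` there would append exactly the kind of one-line demonstration quoted above.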
<details>
<summary>graphs/efficiency/8x8_grid_nogrid_efficiency.png Details</summary>

Line chart, "8x8 Gridworld: Sample Efficiency". X-axis: context size in characters (0–20k); y-axis: success rate (%), with shaded confidence intervals. RAG-CoT (orange) climbs slowly from ~12% at 0 chars to ~31% at 20k chars. L-ICL (blue) starts at the same ~12%, rises sharply to ~46% at 5k and ~79% at 15k, then dips slightly to ~74% at 20k, with wider confidence intervals at larger context sizes.
</details>
Figure 3: Sample efficiency: L-ICL vs. RAG-ICL. Success rate vs. context size (characters) on 8 $\times$ 8 Gridworld. L-ICL achieves higher performance with substantially less context.
4.3 L-ICL Does Not Need Full Domain Knowledge
In Table 4, for the tasks other than BlocksWorld, all prompting schemes use an ASCII grid visualization of the gridworld to be explored (preliminary experiments suggested this approach was most effective for these tasks). Since L-ICL learns to correct domain violations, a natural question is whether the ASCII grid is actually necessary for it: can it learn the domain from examples alone?
Figure 4 shows the learning curve for L-ICL on the 10 $\times$ 10 Maze task with and without the ASCII visualization of the grid. The visualization accelerates learning early on (21% at $m{=}30$ with the grid vs. 15% without), but peak performance is comparable (39% vs. 37%). Thus, L-ICL does not require visual scaffolding, although the grid provides a useful inductive bias during early training. However, obtaining the full benefit of such scaffolding requires some L-ICL training, with more examples needed for more complex domains: the 8 $\times$ 8 grid benefits almost immediately, whereas the harder domains display the benefit of the scaffolded version over the non-scaffolded version only later in training, as seen in the figure.
<details>
<summary>graphs/misc/10x10_maze_grid_ablation.png Details</summary>

Line chart, "10x10 Maze: Grid Ablation". X-axis: training examples (0–240); y-axis: success rate (%) (0–50); shaded bands show variability. Four curves compare the best baseline and L-ICL, each with and without the ASCII grid. At 240 examples: best baseline (with grid) rises from ~5% to ~45%; best baseline (no grid) from ~5% to ~35%; L-ICL (with grid) from ~15% to ~40%; L-ICL (no grid) from ~5% to ~35%. The L-ICL (with grid) curve shows the widest confidence band.
</details>
Figure 4: Grid representation ablation on 10 $\times$ 10 Maze. The ASCII grid accelerates early learning but does not change peak performance. Without L-ICL, the grid provides little benefit.
4.4 L-ICL Works On Many LLM Architectures
To assess whether L-ICL's benefits are architecture-specific, we evaluate on three additional models: DeepSeek V3.1, Claude 4.5 Haiku, and Claude Sonnet 4.5. Figure 5 shows results on the 10 $\times$ 10 Maze. All models improve substantially with L-ICL. Claude Sonnet 4.5 shows the strongest gains (10% to 74%), followed by DeepSeek V3.1 (2% to 47%) and Claude 4.5 Haiku (1% to 39%). The relative ordering changes with training: at $m{=}0$ the models are comparable, but by $m{=}120$ Claude Sonnet 4.5 leads substantially. This suggests stronger models leverage accumulated corrections more effectively, though all models benefit.
<details>
<summary>graphs/misc/llm_ablation_success.png Details</summary>

Line chart, "10x10 Maze: L-ICL Performance Across LLMs". X-axis: training examples (0–240); y-axis: success rate (%) (0–90), with shaded confidence intervals. Four curves: DeepSeek V3 (blue), DeepSeek V3.1 (orange), Claude Haiku 4.5 (green), and Claude Sonnet 4.5 (red). All models start near 1–10% at 0 examples and improve rapidly; Claude Sonnet 4.5 reaches the highest rates (peaking around ~75%), DeepSeek V3.1 reaches ~45%, and Claude Haiku 4.5 and DeepSeek V3 plateau near ~30–35%.
</details>
Figure 5: L-ICL across LLM architectures. Success rate on 10 $\times$ 10 Maze for four models. All improve substantially; Claude Sonnet 4.5 shows the largest gains (10% $\to$ 74%).
4.5 Summary of Findings
(1) L-ICL dramatically improves constraint adherence, achieving consistently higher success rates than baselines across all domains. (2) L-ICL is sample-efficient: 30–90 training examples typically suffice, and L-ICL outperforms RAG-ICL while using 4$\times$ less context. (3) Explicit spatial representations are not required: ASCII grids accelerate early learning but do not change peak performance. (4) L-ICL generalizes across architectures: four LLMs from different families all benefit substantially. (5) Multi-object tracking and strategic planning remain challenging: the valid-to-success gap in Sokoban and BlocksWorld indicates that localized corrections address constraint violations but do not fully solve long-horizon coordination (see Appendix A).
5 Discussion
Our experiments demonstrate that L-ICL consistently improves LLM planning performance, often by substantial margins. Beyond raw performance gains, these results support a specific conceptual interpretation that clarifies both what L-ICL achieves and where challenges remain.
5.1 L-ICL as In-Context Unit Testing
In software engineering, unit testing is a means of "hardening" code subroutines (i.e., making them more reliable and predictable), and it is considered good practice to use unit tests even when end-to-end tests exist. ICL demonstrations instruct a model as to desired behavior, rather than confirming that it has that behavior; modulo this important difference, however, L-ICL demonstrations are analogous to unit tests, and traditional ICL demonstrations are analogous to end-to-end tests. L-ICL demonstrations can thus be viewed as a technique for "hardening" individual reasoning steps, in that they make an LLM's instruction-following behavior more reliable and consistent.
Full-trajectory demonstrations are more like end-to-end tests; in software engineering, these tests have a different role than unit tests, confirming that individual modules interact correctly. In LLM terms, they encourage process correctness, and only incidentally encourage step correctness. In planning tasks, an invalid plan may have many correctly performed steps and only a single invalid one, so adding a full-trajectory demonstration is at best an inefficient way to improve performance, in terms of useful information per prompt token, relative to accumulating local demonstrations in a failure-driven way.
5.2 Qualitative Evidence: From Guessing to Navigation
Figure 6 provides visual evidence of L-ICL's effect. At $m{=}0$, the model proposes moves without regard for walls, quickly entering invalid states. By $m{=}60$, it produces a coherent start-to-goal path respecting all walls. Crucially, this improvement occurs without the model ever seeing the ASCII grid. The doctests encode constraints implicitly through input-output pairs, and the model learns to satisfy them. This demonstrates that L-ICL induces a transferable constraint prior rather than memorizing specific layouts.
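To see why input-output pairs can encode constraints implicitly, note that each example also reveals which actions are *not* applicable at a cell. A small sketch (the `DELTAS` table and the edge representation of walls are our own assumptions, following the `get_applicable_actions` demo format used in the text) shows how accumulated local examples jointly pin down wall locations without any grid drawing:

```python
# Illustrative only: recovering wall knowledge from accumulated local examples.
# Each demo is (cell, applicable_actions); absent actions imply blocked edges.

DELTAS = {'move_north': (0, 1), 'move_south': (0, -1),
          'move_east': (1, 0), 'move_west': (-1, 0)}

def blocked_edges(demos):
    """Collect (cell, neighbor) wall edges implied by each demonstration:
    any action missing from the applicable set points at a blocked neighbor."""
    walls = set()
    for (x, y), applicable in demos:
        for action, (dx, dy) in DELTAS.items():
            if action not in applicable:
                walls.add(((x, y), (x + dx, y + dy)))
    return walls
```

A single demonstration at (3, 4) listing only north/south thus implies walls toward (4, 4) and (2, 4); enough such examples reconstruct the maze's constraint structure.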
<details>
<summary>graphs/misc/maze_pictures/0_final.png Details</summary>

Maze diagram ($m{=}0$ rollout): a 10x10 grid with black obstacle squares and white traversable squares, a green start marker "S", a red goal marker "G", and a dashed red line tracing the model's proposed route from S to G.
</details>
<details>
<summary>graphs/misc/maze_pictures/60_final.png Details</summary>

Maze diagram ($m{=}60$ rollout): the same 10x10 grid conventions (black obstacles, white paths, "S" start, "G" goal), with a blue dashed line tracing the model's route, which navigates around the obstacles from S to G.
</details>
Figure 6: From blind guessing to structured navigation. Two rollouts on the same held-out maze as training examples $m$ increase. At $m{=}0$ (left), the model ignores walls entirely. By $m{=}60$ (right), the model produces a valid trajectory without ever seeing the grid representation, demonstrating that L-ICL induces transferable constraint knowledge.
5.3 Limitations and Scope
One limitation is that L-ICL requires an oracle that can verify constraint satisfaction and provide correct outputs during training. However, this oracle is needed only during training: at test time, L-ICL requires a single forward pass with no external dependencies, distinguishing it from methods like ReAct with oracle feedback, which require verification at inference. Extending to domains without formal specifications may require weaker supervision (learned verifiers, stronger models) that could introduce noise.
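For the formally-specified domains studied here, such a training-time oracle can be a thin wrapper over the domain's transition rules. The interface below is a hypothetical sketch of what an oracle needs to expose (the `Protocol`, class, and method names are our own, not the paper's):

```python
# Hypothetical oracle interface for L-ICL training; illustrative only.
from typing import Protocol, Sequence

class PlanningOracle(Protocol):
    def applicable_actions(self, state) -> Sequence[str]: ...
    def is_applicable(self, state, action) -> bool: ...

class GridOracle:
    """Toy oracle for a gridworld defined by a set of blocked cells."""
    def __init__(self, blocked, width, height):
        self.blocked, self.width, self.height = blocked, width, height

    def applicable_actions(self, state):
        x, y = state
        moves = {'move_north': (x, y + 1), 'move_south': (x, y - 1),
                 'move_east': (x + 1, y), 'move_west': (x - 1, y)}
        # An action is applicable if its target cell is in bounds and unblocked.
        return sorted(a for a, (nx, ny) in moves.items()
                      if 0 <= nx < self.width and 0 <= ny < self.height
                      and (nx, ny) not in self.blocked)

    def is_applicable(self, state, action):
        return action in self.applicable_actions(state)
```

Domains without such a formal specification would need this interface approximated by a learned verifier or a stronger model, which is exactly where the noise concern above arises.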
A second limitation of this work is that we have only addressed one problem for LLM planners: their difficulty in correctly applying domain knowledge. LLM planners also struggle with strategic reasoning, i.e., performing valid actions in a way that quickly reaches the goal. While L-ICL excels at improving validity, this does not always lead to good strategic reasoning, as shown by the valid-to-success gap in Sokoban (46% valid, 20% success). We leave to future work the question of whether localized corrections, or some extension of them, can also correct strategic failures, which seem to require multi-step lookahead, or whether L-ICL must be combined with complementary approaches such as search or value functions.
A third limitation of this paper is that we consider only formally-describable planning benchmarks from the LLM planning literature. Transfer to open-ended natural-language tasks is not studied.
6 Conclusion
We began with a puzzle: LLMs receive complete specifications of domain constraints yet routinely violate them. For example, stating that an agent cannot walk through walls is insufficient, because models do not consistently apply that information at test time. L-ICL addresses this issue in a simple way: when a constraint is violated, we add a minimal input-output example correcting that error, thereby placing additional emphasis on the precise knowledge that was not applied. These minimal corrections accumulate during training, progressively distilling behavioral knowledge from an oracle symbolic system into the prompt. The improvement is remarkable: on an 8 $\times$ 8 gridworld where zero-shot prompting achieves 0% success, L-ICL reaches 89% with only 60 training examples, and L-ICL consistently outperforms other baselines across domains.
One key finding is that demonstration structure matters more than quantity. L-ICL achieves higher performance with 2,000 characters of targeted corrections than RAG-ICL achieves with 20,000 characters of complete trajectories. Complete solutions demonstrate that a plan works; localized examples demonstrate why individual steps are valid. This compression explains L-ICL's sample efficiency and suggests a broader principle: LLM reliability can be improved by making implicit knowledge explicit at the point of application. This also reduces the prompt engineering burden: rather than exhaustively specifying every constraint upfront, practitioners can let L-ICL discover constraints through failure-driven corrections.
L-ICL does not solve planning. The valid-to-success gap in Sokoban shows that respecting domain constraints is necessary but not sufficient; strategic reasoning remains challenging in this domain. We view this not as a limitation but as a clarification of scope. L-ICL provides a procedural hardening layer: a reliable foundation of constraint-satisfying primitives on which higher-level reasoning can build. Just as unit tests do not write the program but ensure its components behave correctly, L-ICL does not plan but ensures that proposed actions respect domain physics. We hope this decomposition proves useful for future work on LLM reasoning systems.
Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
References
- Anthropic (2025) Claude 4.5 model family. https://www.anthropic.com/claude. Sonnet 4.5 released September 2025; Haiku 4.5 released October 2025.
- T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. In Advances in Neural Information Processing Systems, Vol. 33, pp. 1877–1901.
- W. Chen, X. Ma, X. Wang, and W. W. Cohen (2022) Program of thoughts prompting: disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588.
- C. A. Cohen and W. W. Cohen (2024) Watch your steps: observable and modular chains of thought. arXiv preprint arXiv:2409.15359.
- DeepSeek-AI (2024) DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437.
- G. Francés, M. Ramirez, and Collaborators (2018) Tarski: an AI planning modeling framework. GitHub. https://github.com/aig-upf/tarski.
- L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig (2023) PAL: program-aided language models. In International Conference on Machine Learning, pp. 10764–10799.
- S. Hao, Y. Gu, H. Ma, J. J. Hong, Z. Wang, D. Z. Wang, and Z. Hu (2023) Reasoning with language model is planning with world model. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 8154–8173.
- M. Helmert (2006) The fast downward planning system. Journal of Artificial Intelligence Research, Vol. 26, pp. 191–246.
- R. Howey, D. Long, and M. Fox (2004) VAL: automatic plan validation, continuous effects and mixed initiative planning using PDDL. In 16th IEEE International Conference on Tools with Artificial Intelligence, pp. 294–301.
- L. B. Kaesberg, J. P. Wahle, T. Ruas, and B. Gipp (2025) SPaRC: a spatial pathfinding reasoning challenge. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, Suzhou, China, pp. 10359–10390.
- S. Kambhampati, K. Valmeekam, L. Guan, M. Verma, K. Stechly, S. Bhambri, L. Saldyt, and A. Murthy (2024) Position: LLMs can't plan, but can help planning in LLM-modulo frameworks. In Proceedings of the 41st International Conference on Machine Learning, ICML'24.
- M. Katz and J. Lee (2023) K* search over orbit space for top-k planning. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023).
- O. Khattab, A. Singhvi, P. Maheshwari, Z. Zhang, K. Santhanam, S. Vardhamanan, S. Haq, A. Sharma, T. T. Joshi, H. Moazam, H. Miller, M. Zaharia, and C. Potts (2023) DSPy: compiling declarative language model calls into self-improving pipelines. arXiv preprint arXiv:2310.03714.
- J. Lee, M. Katz, and S. Sohrabi (2023) On k* search for top-k planning. In Symposium on Combinatorial Search.
- L. Lehnert, S. Sukhbaatar, D. Su, Q. Zheng, P. McVay, M. Rabbat, and Y. Tian (2024) Beyond A*: better planning with transformers via search dynamics bootstrapping. arXiv preprint arXiv:2402.14083.
- J. Leng, C. A. Cohen, Z. Zhang, C. Xiong, and W. W. Cohen (2025) Semi-structured LLM reasoners can be rigorously audited. arXiv preprint arXiv:2505.24217.
- C. Li, J. Liang, A. Zeng, X. Chen, K. Hausman, D. Sadigh, S. Levine, L. Fei-Fei, F. Xia, and B. Ichter (2023) Chain of code: reasoning with a language model-augmented code emulator. arXiv preprint arXiv:2312.04474.
- J. Liu, D. Shen, Y. Zhang, B. Dolan, L. Carin, and W. Chen (2022) What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, Dublin, Ireland and Online, pp. 100–114.
- A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Gupta, B. P. Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, and P. Clark (2023) Self-refine: iterative refinement with self-feedback. In Advances in Neural Information Processing Systems, Vol. 36.
- S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer (2022) Rethinking the role of demonstrations: what makes in-context learning work? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 11048–11064.
- P. Shojaee, I. Mirzadeh, K. Alizadeh, M. Horton, S. Bengio, and M. Farajtabar (2025) The illusion of thinking: understanding the strengths and limitations of reasoning models via the lens of problem complexity. In Advances in Neural Information Processing Systems, Vol. 38.
- K. Stechly, K. Valmeekam, and S. Kambhampati (2024) Chain of thoughtlessness? An analysis of CoT in planning. In Advances in Neural Information Processing Systems, Vol. 37.
- K. Stechly, K. Valmeekam, and S. Kambhampati (2025) On the self-verification limitations of large language models on reasoning and planning tasks. In The Thirteenth International Conference on Learning Representations.
- M. Turpin, J. Michael, E. Perez, and S. R. Bowman (2023) Language models don't always say what they think: unfaithful explanations in chain-of-thought prompting. In Thirty-seventh Conference on Neural Information Processing Systems.
- K. Valmeekam, M. Marquez, S. Sreedharan, and S. Kambhampati (2023) On the planning abilities of large language models – a critical investigation. In Advances in Neural Information Processing Systems, Vol. 36.
- X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou (2023) Self-consistency improves chain of thought reasoning in language models. In International Conference on Learning Representations.
- J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou (2022) Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, Vol. 35, pp. 24824–24837.
- S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan (2023a) Tree of thoughts: deliberate problem solving with large language models. In Advances in Neural Information Processing Systems, Vol. 36.
- S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao (2023b) ReAct: synergizing reasoning and acting in language models. In International Conference on Learning Representations.
Appendix A Analysis: The Valid-to-Success Gap
While L-ICL dramatically improves constraint adherence, a gap often remains between validity and success. This gap is most pronounced in Full Sokoban, where L-ICL achieves 46% valid plans but only 20% success (Table 4). Understanding this gap illuminates both L-ICL's strengths and its limitations.
A.1 Trap Rate Analysis
In Sokoban, certain states are traps: configurations from which the goal is unreachable regardless of future actions (e.g., a box pushed into a corner). We measure the adjusted trap rate: among valid plans, what fraction enters a trap state?
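The metric can be sketched in a few lines of Python; `is_valid_plan` and `enters_trap` are hypothetical domain predicates introduced for illustration, not part of any released code.

```python
def adjusted_trap_rate(plans, is_valid_plan, enters_trap):
    """Adjusted trap rate: among *valid* plans, the fraction that
    enters a trap state (hypothetical helper predicates)."""
    valid = [p for p in plans if is_valid_plan(p)]
    if not valid:
        return 0.0
    return sum(enters_trap(p) for p in valid) / len(valid)

# Toy usage: each plan is tagged (name, is_valid, enters_trap).
plans = [("a", True, False), ("b", True, True), ("c", False, True)]
rate = adjusted_trap_rate(plans, lambda p: p[1], lambda p: p[2])
# rate == 0.5: of the two valid plans, one enters a trap
```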
Figure 7 shows that L-ICL reduces trap rates. On Sokoban Grid, the adjusted trap rate drops from 50% at $m{=}0$ to 10% at $m{=}210$ . This indicates that L-ICL teaches not only immediate constraint satisfaction but also some degree of trap avoidance.
However, the absolute trap rate remains non-negligible, and the valid-to-success gap persists. We hypothesize that trap avoidance requires multi-step lookahead that localized corrections cannot fully provide. A correction like "pushing box B east from (3,4) is valid" does not encode that this push leads to an unsolvable configuration three moves later. Addressing this limitation may require complementary approaches such as search or learned value functions.
![Adjusted trap rate on Sokoban Grid](graphs/misc/sokoban_trap_rate_adjusted.png)
Figure 7: Trap rate decreases with L-ICL. Adjusted trap rate (fraction of valid plans entering unsolvable states) on Sokoban Grid. L-ICL reduces trap rates from 50% to 10%, indicating partial learning of strategic constraints.
A.2 Multi-Object State Tracking
Comparing Sokoban Grid (no boxes) to Full Sokoban reveals the cost of multi-object tracking. With identical spatial layouts, Sokoban Grid achieves 49% success while Full Sokoban reaches only 20%. The difference lies in state complexity: Full Sokoban requires tracking the agent position and all box positions, with constraints that depend on their joint configuration.
This difficulty is also evident in BlocksWorld, where every object is dynamic. L-ICL improves BlocksWorld success from 48% to 66%, but a gap remains between validity (68%) and success. The pattern suggests that relational constraint learning, while improved by L-ICL, remains more challenging than spatial constraint learning.
A.3 Decomposing Planning Difficulty
The valid-to-success gap reveals a clean decomposition of planning difficulty:
1. Constraint satisfaction: Generating actions that respect domain physics. L-ICL addresses this effectively across all domains.
2. Strategic selection: Among valid actions, choosing those that lead toward the goal without entering traps. This requires multi-step reasoning that localized corrections do not directly provide.
This decomposition suggests a practical architecture: use L-ICL to harden constraint satisfaction, then layer strategic reasoning (search, learned policies, or hierarchical planning) on top. The hardened base ensures that any action proposed by the strategic layer is physically valid, separating concerns and simplifying both components.
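As a concrete illustration of this layering, one can imagine a per-step policy in which a hardened validity predicate screens candidate actions before a strategic scorer chooses among them. This is a sketch only; `is_valid` and `score` are hypothetical callbacks, not the paper's implementation.

```python
def plan_step(state, candidate_actions, is_valid, score):
    """One planning step in the layered architecture: the hardened
    base filters out invalid actions, then a strategic layer picks
    the best-scoring valid one (hypothetical callbacks)."""
    valid = [a for a in candidate_actions if is_valid(state, a)]
    if not valid:
        return None  # no legal action; signal failure upward
    return max(valid, key=lambda a: score(state, a))
```

Because invalid actions never reach the strategic layer, its design can focus purely on goal-directedness.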
Table 5 summarizes the valid-to-success gaps across domains, highlighting where strategic failures dominate.
Table 5: Valid-to-success gap analysis across domains with L-ICL [$m{=}60$]. Larger gaps indicate that constraint satisfaction alone is insufficient; strategic reasoning is the bottleneck.
| Domain | Valid (%) | Success (%) | Gap (%) |
| --- | --- | --- | --- |
| 8×8 Grid | 89 | 89 | 0 |
| 10×10 Maze | 57 | 27 | 30 |
| Sokoban Grid | 63 | 49 | 14 |
| Full Sokoban | 46 | 20 | 26 |
| BlocksWorld | 68 | 66 | 2 |
The 8×8 gridworld shows no gap: once constraints are satisfied, the simple structure makes goal-reaching straightforward. The 10×10 maze and Full Sokoban show the largest gaps, reflecting the strategic complexity of navigating dead ends and avoiding irreversible trap states, respectively. BlocksWorld shows a small gap, suggesting that while relational constraints are harder to learn, once learned they suffice for task completion in our 5-block instances.
Appendix B Out-of-Distribution Generalization
![Example 10×10 maze used for training](graphs/domains/gridworld-10x10-sokoban_grid.png)
(a) 10×10 maze (training distribution)
![Example 15×15 maze used for OOD evaluation](graphs/domains/gridworld-15x15-sokoban_grid.png)
(b) 15×15 maze (OOD evaluation)
Figure 8: Out-of-distribution generalization setup. L-ICL corrections are accumulated on 10×10 mazes (left) and evaluated on 15×15 mazes (right). The larger mazes contain positions not seen during training, yet corrections transfer substantially, even though the penalty for boundary violations differs.
A key question for any learning-based approach is whether acquired knowledge transfers beyond the training distribution. For L-ICL, this translates to: do corrections learned on smaller problem instances improve performance on larger, unseen instances? We investigate this by training L-ICL on 10×10 mazes and evaluating on 15×15 mazes, as shown in Figure 8.
B.1 Experimental Setup
We accumulate L-ICL corrections using the standard training procedure on 10×10 maze instances (Section 3.5). We then evaluate the resulting prompts on a held-out test set of 100 15×15 mazes. The larger mazes are generated using the same procedural algorithm (randomized depth-first search) with proportionally scaled wall density, but contain positions and path structures never seen during training.
B.2 Results
Figure 9 shows that L-ICL corrections provide substantial transfer to larger instances. At $m{=}0$ (no corrections), the 15×15 maze achieves only 9% success, comparable to the 10×10 baseline without corrections. With corrections accumulated from 10×10 training instances, 15×15 success improves to 49% at $m{=}120$, representing a 5× improvement over the no-correction baseline.
![OOD generalization: 10×10 to 15×15 transfer, success rate vs. training examples](graphs/misc/ood_10x10_to_15x15.png)
Figure 9: Out-of-distribution transfer: 10×10 → 15×15. Corrections learned on 10×10 mazes transfer to larger instances, improving success from 9% to 49%. A gap remains compared to in-distribution performance (57% on 10×10), but transfer is substantial.
Table 6 summarizes the transfer results at key checkpoints.
Table 6: Out-of-distribution generalization: corrections trained on 10×10 mazes evaluated on 15×15 mazes. We report success rate (%) and compare to in-distribution 10×10 performance.
| Training Examples | 10×10 (in-dist.) | 15×15 (OOD) |
| --- | --- | --- |
| $m=0$ | 16 | 9 |
| $m=30$ | 21 | 18 |
| $m=60$ | 27 | 31 |
| $m=120$ | 57 | 49 |
B.3 Why Does Transfer Work?
The transfer is notable because 15×15 mazes contain positions (e.g., $(12,14)$) and wall configurations that never appear in 10×10 training instances. We hypothesize that corrections transfer because they encode constraint types rather than specific positions.
Consider a correction like:

```
>>> get_applicable_actions((3, 4))
['move_north', 'move_south']
```
While this example specifies position $(3,4)$ , it implicitly teaches a general principle: when east and west are blocked (by walls or boundaries), only north and south are valid. The LLM can generalize this pattern to novel positions in larger grids.
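One way to see why such a correction can generalize is that the behavior it demonstrates is exactly what a position-independent applicability check would compute. The sketch below is illustrative only; `walls`, `width`, and `height` are assumed parameters, not the paper's environment API.

```python
# Position-independent version of the rule the correction teaches.
DIRS = {"move_north": (0, 1), "move_south": (0, -1),
        "move_east": (1, 0), "move_west": (-1, 0)}

def get_applicable_actions(pos, walls, width, height):
    x, y = pos
    applicable = []
    for action, (dx, dy) in DIRS.items():
        nx, ny = x + dx, y + dy
        # valid iff the destination is in bounds and not a wall cell
        if 1 <= nx <= width and 1 <= ny <= height and (nx, ny) not in walls:
            applicable.append(action)
    return sorted(applicable)
```

With walls immediately east and west of $(3,4)$, this returns `['move_north', 'move_south']`, matching the correction while applying equally to any position in a larger grid.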
This interpretation is supported by the observation that transfer improves with more corrections ( $m$ ). Early corrections address common constraint patterns (boundary violations, simple wall configurations); as $m$ increases, rarer patterns are covered, and the accumulated examples provide a richer specification that generalizes more robustly.
B.4 Transfer Gap Analysis
While transfer is substantial, a gap remains between in-distribution and OOD performance (57% vs. 49% at $m{=}120$). We identify two contributing factors:
1. Unseen spatial configurations: Larger mazes contain junction types and corridor patterns that may not appear in smaller instances. Some constraint violations specific to these configurations are not addressed by 10×10 training.
2. Longer planning horizons: 15×15 mazes require longer plans, providing more opportunities for errors to accumulate. Even with improved per-step validity, the probability of completing an error-free trajectory decreases with plan length.
These findings suggest that for maximum OOD performance, practitioners should either (a) train on a mixture of problem sizes, or (b) accept a modest performance gap when deploying to larger instances than those seen during training.
B.5 Cross-Domain Transfer
We also conducted preliminary experiments on cross-domain transfer: using corrections from one domain (e.g., 8×8 gridworld) to improve another (e.g., 10×10 maze). Results were mixed: corrections for basic movement constraints (boundary checking) transferred, but domain-specific spatial structures (two-room layouts vs. maze corridors) did not. This suggests that L-ICL learns a combination of general procedural knowledge and domain-specific constraint instantiations, with only the former transferring across domains.
Appendix C Domain Specifications
This appendix provides detailed specifications of the experimental domains used in our evaluation. For each domain, we describe the state representation, action space, constraints, and goal conditions.
C.1 8×8 Two-Room Gridworld
State Space.
The state consists of the agent's $(x,y)$ position on an 8×8 grid. Coordinates range from $(1,1)$ at the bottom-left to $(8,8)$ at the top-right.
Environment Structure.
The grid is divided into two rooms by a vertical wall running through column 5, with a single doorway allowing passage between rooms (doorway position varies by instance). Start positions are randomly sampled from one room, and goal positions from the other, ensuring all paths must traverse the doorway.
Action Space.
Four actions: move_north $(+y)$ , move_south $(-y)$ , move_east $(+x)$ , and move_west $(-x)$ .
Constraints.
An action is valid if and only if:
1. The resulting position remains within grid bounds.
2. The movement does not cross a wall segment.
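Both constraints admit a direct mechanical check. The sketch below assumes a hypothetical encoding in which each wall segment is a `frozenset` of the two cells it separates; the paper does not specify a representation.

```python
DIRS = {"move_north": (0, 1), "move_south": (0, -1),
        "move_east": (1, 0), "move_west": (-1, 0)}

def is_valid(pos, action, wall_segments, size=8):
    """An action is valid iff the destination is in bounds
    (constraint 1) and the move crosses no wall (constraint 2)."""
    dx, dy = DIRS[action]
    nxt = (pos[0] + dx, pos[1] + dy)
    if not (1 <= nxt[0] <= size and 1 <= nxt[1] <= size):
        return False  # constraint 1 violated: out of bounds
    # constraint 2: the (pos, nxt) boundary must not be a wall
    return frozenset((pos, nxt)) not in wall_segments
```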
Goal Condition.
The agent's position equals the goal position.
Optimal Solution.
The shortest path between start and goal, computed via breadth-first search. Optimal paths typically require 8–12 steps.
![Example 8×8 two-room gridworld instance](graphs/domains/gridworld-8x8_grid.png)
Figure 10: Example 8×8 two-room gridworld instance. Walls are shown as filled cells.
C.2 10×10 Maze
State Space.
The state consists of the agent's $(x,y)$ position on a 10×10 grid. Coordinates range from $(1,1)$ to $(10,10)$.
Environment Structure.
Mazes are procedurally generated using a randomized depth-first search algorithm, producing a spanning tree of corridors with exactly one path between any two open cells. This ensures unique shortest paths and creates narrow corridors with dead ends that require backtracking if the agent makes suboptimal choices.
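For concreteness, randomized depth-first search can be sketched as follows; the passage-set encoding is an assumption made for illustration, not the paper's generator.

```python
import random

def generate_maze(n, seed=0):
    """Randomized depth-first search over the n x n cell grid.
    Returns the set of open passages (pairs of adjacent cells);
    every pair of cells is connected by exactly one path."""
    rng = random.Random(seed)
    visited, passages = {(1, 1)}, set()
    stack = [(1, 1)]
    while stack:
        x, y = stack[-1]
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0))
                     if 1 <= x + dx <= n and 1 <= y + dy <= n
                     and (x + dx, y + dy) not in visited]
        if neighbors:
            nxt = rng.choice(neighbors)       # carve a passage to an
            passages.add(frozenset(((x, y), nxt)))  # unvisited neighbor
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                        # dead end: backtrack
    return passages
```

Because each new cell is reached through exactly one carved passage, the $n^2$ cells end up connected by $n^2 - 1$ passages, i.e., a spanning tree with a unique path between any two cells.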
Action Space.
Four actions: move_north, move_south, move_east, move_west.
Constraints.
Identical to the 8×8 gridworld: actions must keep the agent in bounds and cannot cross walls.
Goal Condition.
The agent's position equals the goal position.
Optimal Solution.
The unique shortest path through the maze.
![Example 10×10 maze instance](graphs/domains/maze_grid.png)
Figure 11: Example 10×10 maze instance. The maze structure creates narrow corridors and dead ends, requiring longer plans than the two-room gridworld.
C.3 Sokoban-Style Gridworld
State Space.
The state consists of the agent's $(x,y)$ position on a grid that uses Sokoban-style layouts. Coordinates are 1-indexed.
Environment Structure.
We use grid layouts from standard Sokoban benchmarks but remove all pushable boxes. The layouts retain walls, open floor cells, and the spatial structure of Sokoban puzzles, including irregular room shapes and narrow passages. This domain serves as an ablation to isolate the effect of Sokobanβs spatial complexity from the challenge of multi-object state tracking.
Action Space.
Four actions: move_north, move_south, move_east, move_west.
Constraints.
Actions must keep the agent within the walkable floor area and cannot cross walls.
Goal Condition.
The agent reaches a designated goal cell.
![Example Sokoban-style gridworld instance](graphs/domains/gridworld_sokoban_noDZ_final_grid.png)
Figure 12: Example Sokoban-style gridworld instance. The layout is derived from a Sokoban puzzle but contains no pushable boxes, isolating spatial navigation from object manipulation.
C.4 Full Sokoban
State Space.
The state consists of:
- The agent's $(x,y)$ position (1-indexed).
- The box position $(x,y)$ .
Our instances contain a single box.
Environment Structure.
Standard Sokoban puzzle layouts from established benchmarks, including walls, floor cells, and designated target locations where boxes must be placed.
Action Space.
Eight actions:
- Movement: move_north, move_south, move_east, move_west. These move the agent one cell in the specified direction if the destination is empty floor.
- Pushing: push_north, push_south, push_east, push_west. These move the agent into a cell containing a box, pushing the box one cell further in the same direction.
Constraints.
An action is valid if and only if:
1. Movement: The destination cell is within bounds, is not a wall, and does not contain a box.
2. Pushing: The cell adjacent to the agent contains a box, and the cell beyond the box (in the push direction) is within bounds, is not a wall, and does not contain another box.
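A sketch of this validity check for the single-box instances used in our experiments; the state encoding and function name are illustrative, not the paper's implementation.

```python
DIRS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def is_valid_sokoban(action, agent, box, walls, width, height):
    kind, direction = action.split("_")   # e.g. "push_east"
    dx, dy = DIRS[direction]
    dest = (agent[0] + dx, agent[1] + dy)

    def open_cell(c):
        return 1 <= c[0] <= width and 1 <= c[1] <= height and c not in walls

    if kind == "move":
        # movement: destination is open floor and not occupied by the box
        return open_cell(dest) and dest != box
    # pushing: the adjacent cell must hold the box, and the cell
    # beyond the box (in the push direction) must be open floor
    beyond = (box[0] + dx, box[1] + dy)
    return dest == box and open_cell(beyond)
```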
Irreversibility.
Unlike navigation domains, Sokoban contains trap states: configurations from which the goal is unreachable. Common traps include:
- Pushing a box into a corner (cannot be retrieved).
- Pushing a box against a wall such that it cannot reach any target.
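The first trap type is easy to detect mechanically: a box is cornered when it is blocked on one vertical and one horizontal side. The sketch below checks only this simple case and ignores subtler deadlocks (e.g., boxes frozen along a wall with no reachable target); names are illustrative.

```python
def in_corner(box, walls, width, height):
    """Simplest trap: box blocked on one vertical and one horizontal
    side (a corner). Assumes the box is not already on a target."""
    x, y = box

    def blocked(c):
        # a cell blocks the box if it is a wall or out of bounds
        return c in walls or not (1 <= c[0] <= width and 1 <= c[1] <= height)

    return ((blocked((x, y + 1)) or blocked((x, y - 1))) and
            (blocked((x + 1, y)) or blocked((x - 1, y))))
```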
Goal Condition.
The box occupies the designated target position.
Figure 13: Example Sokoban instance. The agent must push the box onto the target location without creating deadlocks.
C.5 BlocksWorld
State Space.
The state consists of a configuration of $n$ uniquely labeled blocks (we use $n=5$ in our experiments). Each block is either:
- On the table, or
- On top of exactly one other block.
A block is clear if no other block is on top of it. The table has unlimited capacity.
Action Space.
Three actions, described in natural language:
1. Move block from block to block (move-b-to-b): Pick up a block that is currently sitting on top of another block and place it onto a third block. This requires that the block being moved has nothing on top of it (is clear) and that the destination block also has nothing on top of it (is clear). After the move, the block that was underneath the moved block becomes clear.
2. Move block from block to table (move-b-to-t): Pick up a block that is currently sitting on top of another block and place it on the table. This requires that the block being moved has nothing on top of it (is clear). After the move, the block that was underneath becomes clear, and the moved block is now on the table.
3. Move block from table to block (move-t-to-b): Pick up a block that is currently on the table and place it onto another block. This requires that both the block being moved and the destination block have nothing on top of them (are clear). After the move, the destination block is no longer clear.
Constraints.
The preconditions for each action are:
- move-b-to-b($b_{m}$, $b_{f}$, $b_{t}$): Block $b_{m}$ is clear, block $b_{t}$ is clear, $b_{m}$ is currently on $b_{f}$, and $b_{m} \neq b_{t}$.
- move-b-to-t($b_{m}$, $b_{f}$): Block $b_{m}$ is clear and $b_{m}$ is currently on $b_{f}$.
- move-t-to-b($b_{m}$, $b_{t}$): Block $b_{m}$ is clear, block $b_{t}$ is clear, $b_{m}$ is currently on the table, and $b_{m} \neq b_{t}$.
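These preconditions translate directly into code. The sketch below assumes a simple state encoding that is ours, not the paper's: a dict `on` mapping each block to the block beneath it, with table-resting blocks absent from the dict.

```python
def is_clear(b, on):
    # A block is clear if no other block records it as its support.
    return b not in on.values()

def pre_move_b_to_b(bm, bf, bt, on):
    # bm is clear, bt is clear, bm currently on bf, and bm != bt.
    return (is_clear(bm, on) and is_clear(bt, on)
            and on.get(bm) == bf and bm != bt)

def pre_move_b_to_t(bm, bf, on):
    # bm is clear and currently on bf.
    return is_clear(bm, on) and on.get(bm) == bf

def pre_move_t_to_b(bm, bt, on):
    # bm is clear and on the table; bt is clear; bm != bt.
    return (is_clear(bm, on) and is_clear(bt, on)
            and bm not in on and bm != bt)
```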
Goal Condition.
The block configuration matches a target specification, typically given as a set of on( $b_{1}$ , $b_{2}$ ) predicates describing which blocks must be stacked on which.
Differences from Navigation Domains.
BlocksWorld differs qualitatively from the grid-based domains:
- No spatial structure: Constraints are purely relational ("block A is on block B") rather than geometric.
- All objects are dynamic: Every block can be moved, unlike navigation where only the agent moves.
- Algorithmic solutions: We additionally provide an algorithmic sketch (the Universal Blocksworld Algorithm (Stechly et al., 2024)) to test whether L-ICL can improve adherence to prescribed planning strategies.
Appendix D Baseline Method Implementations
This appendix provides detailed specifications of all baseline methods evaluated in our experiments. All baselines operate on the same task: given a problem description with start position, goal position, walls, and (optionally) deadzones, produce an action sequence to navigate from start to goal. We organize baselines into two categories: prompt-only methods that rely solely on LLM reasoning, and oracle methods that receive feedback from a ground-truth simulator.
D.1 Prompt-Only Baselines
D.1.1 Zero-Shot Chain-of-Thought (Zero-Shot CoT)
The simplest baseline provides the model with task instructions and asks it to reason step-by-step to produce a navigation plan.
Implementation.
The prompt includes: (1) a task description explaining gridworld navigation, valid actions, and movement constraints; (2) an ASCII representation of the problem (if applicable); (3) the query problem with start/goal coordinates; and (4) output format instructions requiring **Final Action Sequence:** action1, action2, .... We use temperature 1.0 for all experiments unless otherwise noted.
D.1.2 RAG-CoT (Retrieval-Augmented Chain-of-Thought)
This baseline extends Zero-Shot CoT with dynamic example selection based on similarity to the query problem. We retrieve the most relevant training examples within a character budget (10,000 or 20,000 characters in our experiments).
Similarity Metric.
We compute similarity based on Manhattan distance between start-goal pairs:
$$
\text{similarity}(q,c)=\frac{1}{1+|d_{q}-d_{c}|} \tag{1}
$$
where $d=|g_{x}-s_{x}|+|g_{y}-s_{y}|$ is the Manhattan distance from start $s$ to goal $g$ . This metric prefers training examples with similar navigation distances, under the assumption that problems with similar start-to-goal distances share structural similarities.
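Equation (1) amounts to the following minimal sketch (function names are ours):

```python
def manhattan(start, goal):
    # Manhattan distance between start (x, y) and goal (x, y).
    return abs(goal[0] - start[0]) + abs(goal[1] - start[1])

def similarity(query, candidate):
    # query and candidate are (start, goal) pairs; Eq. (1).
    dq = manhattan(*query)
    dc = manhattan(*candidate)
    return 1.0 / (1.0 + abs(dq - dc))
```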
Retrieval Modes.
We evaluate three retrieval strategies:
- Strict: Add examples until the budget would be exceeded (conservative).
- Generous: Add examples until the budget is just crossed (permissive).
- Fixed: Include fixed examples plus retrieved examples up to the remaining budget.
Our main experiments use the generous mode.
D.1.3 Self-Consistency
Self-Consistency (Wang et al., 2023) generates multiple independent reasoning trajectories and selects the final answer via majority voting.
Implementation.
We sample $k=5$ independent CoT traces using temperature 1.0 for diversity. Each sample uses the same prompt with an annotation indicating the sample number (e.g., "Sample 3/5"). We parse action sequences from each sample, count votes for each unique plan (exact sequence match), and select the plan with the highest vote count. For tie-breaking, we use an additional LLM call to evaluate candidates based on their self-critique annotations.
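The voting step can be sketched as follows (tie-breaking by an extra LLM call is omitted; `majority_plan` is a hypothetical name):

```python
from collections import Counter

def majority_plan(sampled_plans):
    """Each plan is a sequence of actions; votes require an exact
    sequence match. Returns the most frequent plan."""
    votes = Counter(tuple(p) for p in sampled_plans)
    plan, _ = votes.most_common(1)[0]
    return list(plan)
```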
Self-Critique.
Each sample includes a self-critique section where the model evaluates its own reasoning, providing confidence estimates and noting potential issues. This information is used only for tie-breaking.
D.1.4 Self-Refine
Self-Refine (Madaan et al., 2023) allows the model to iteratively review and improve its own solutions without external feedback.
Implementation.
The model generates an initial attempt, then receives up to $N=5$ refinement opportunities. In each refinement round, the model sees its previous response and is instructed to check for potential mistakes: boundary violations, wall collisions, goal reachability, deadzone avoidance, and path optimality. The model may either provide a corrected plan or explicitly state "**No further refinement needed.**"
Termination Conditions.
Refinement stops when: (1) the model explicitly states satisfaction, or (2) the maximum number of refinement steps is reached.
Key Distinction.
Unlike oracle baselines, Self-Refine receives no external feedback about plan validity. The model must introspect on its own reasoning, which prior work has shown to be unreliable for planning tasks (Stechly et al., 2025).
D.1.5 ReAct (Prompt-Only)
ReAct (Yao et al., 2023b) interleaves reasoning and action selection in a textual trace format.
Implementation.
The model alternates between Thought: steps (reasoning about current state and next move) and Action: steps (single movement action). All reasoning and actions are generated in a single LLM call; no external tool execution occurs. The prompt includes guidelines to keep moves consistent with the grid layout, avoid illegal steps, provide reasoning before each action, and end with an explicit final action sequence.
Trace Format.
```
Thought: [analyze current state and next move]
Action: move-direction
Thought: [continue reasoning]
Action: move-direction
...
Final Thought: [summarize path to goal]
**Final Action Sequence:** action1, action2, ...
```
D.1.6 Tree-of-Thoughts (Prompt-Only)
Tree-of-Thoughts (ToT) (Yao et al., 2023a) explores multiple reasoning paths through iterative expansion and scoring.
Implementation.
We use a prompt-only variant with breadth-first tree search. Parameters: breadth $b=5$ (nodes per level), depth $d=3$ (expansion rounds), max step actions $m=8$ (actions per candidate). At each depth level, the model generates candidate continuations as JSON, including a thought description, proposed actions, confidence score (0β100), terminal flag, and optional final plan. We keep the top- $k$ nodes by confidence (beam search) and finalize by selecting the best terminal node or completing the top non-terminal node.
Scoring.
Without oracle access, scoring uses only LLM self-assessed confidence and plan length:
$$
\text{score}=(\text{confidence},-\text{plan\_length}) \tag{2}
$$
Higher confidence is preferred; shorter plans are preferred as a tiebreaker.
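Python's lexicographic tuple ordering implements this score directly; a minimal sketch (names are ours):

```python
def score(node):
    # node: dict with self-assessed "confidence" (0-100) and "plan" (action list).
    # Tuples compare lexicographically: confidence first, then shorter plans.
    return (node["confidence"], -len(node["plan"]))

def select_best(nodes):
    return max(nodes, key=score)
```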
D.2 Oracle Baselines
Oracle baselines have access to a ground-truth environment simulator that provides feedback on plan validity. This represents an upper bound on what prompt-only methods could achieve with perfect self-verification.
D.2.1 ReAct (+Oracle f/b)
This two-step approach allows the model to receive targeted feedback about errors and produce a corrected plan.
Implementation.
Step 1: Generate an initial CoT plan. Step 2: If errors are detected by the oracle, provide specific feedback and request correction. Maximum 2 LLM calls per problem. We use temperature 0.3 for more deterministic outputs in the correction step.
Feedback Types.
The oracle provides two types of feedback:
1. Invalid Move: βYour plan has an ERROR at step $N$ . The action βmove-Xβ at position $(x,y)$ is INVALID because it would move into a wall or out of bounds.β
2. Incomplete Path: "Your plan is INCOMPLETE. After executing all $N$ actions, you ended at position $(x,y)$ but did not reach the goal."
Key Distinction.
ReAct (+Oracle f/b) queries the verifier at test time for each proposed plan, while L-ICL uses the oracle only during training. At inference, L-ICL requires a single forward pass with no external dependencies.
D.3 Evaluation Infrastructure
Action Parsing.
All baselines use the same action sequence parser that handles multiple output formats. We search for explicit patterns (e.g., **Final Action Sequence:**), fall back to lines with comma-separated actions, and as a last resort extract all move-* actions from the response. Actions are normalized to canonical form (e.g., "north" $\to$ "move-north").
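The fallback cascade can be sketched as follows (the regex patterns are illustrative, not the exact ones in our implementation; the middle comma-separated-line fallback is folded into the marker branch):

```python
import re

# Canonical action names for bare directions.
CANON = {"north": "move-north", "south": "move-south",
         "east": "move-east", "west": "move-west"}

def normalize(action):
    a = action.strip().lower()
    return CANON.get(a, a)

def parse_actions(response):
    # 1) Explicit marker line with a comma-separated action list.
    m = re.search(r"\*\*Final Action Sequence:\*\*\s*(.+)", response)
    if m:
        return [normalize(a) for a in m.group(1).split(",") if a.strip()]
    # 2) Last resort: collect every move-* token in order of appearance.
    return re.findall(r"move-(?:north|south|east|west)", response)
```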
Plan Validation.
Plan validity is determined by simulating the action sequence:
1. Parse problem to extract start/goal positions and obstacles.
2. Execute actions sequentially, checking bounds and wall collisions.
3. Verify goal is reached.
4. Compare plan length to BFS optimal.
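Steps 1-3 above can be sketched as a short simulator (a sketch under assumed conventions: 1-indexed coordinates, `walls` as a set of cells; the BFS-optimality comparison of step 4 is omitted):

```python
# Movement deltas for the four canonical actions.
DELTA = {"move-north": (0, 1), "move-south": (0, -1),
         "move-east": (1, 0), "move-west": (-1, 0)}

def plan_valid(start, goal, walls, plan, size=10):
    """Simulate the plan; reject out-of-bounds moves and wall
    collisions, then check that the final cell is the goal."""
    x, y = start
    for a in plan:
        dx, dy = DELTA[a]
        x, y = x + dx, y + dy
        if not (1 <= x <= size and 1 <= y <= size) or (x, y) in walls:
            return False
    return (x, y) == goal
```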
D.4 Summary of Baseline Characteristics
Table 7 summarizes the key characteristics distinguishing each baseline.
Table 7: Summary of baseline characteristics. βExamplesβ indicates whether the method uses in-context examples; βLLM Callsβ indicates calls per problem; βToolsβ indicates whether external tools are used.
| Method | Examples | LLM Calls | Tools |
| --- | --- | --- | --- |
| Zero-Shot CoT | No | 1 | No |
| RAG-CoT | Retrieved | 1 | No |
| Self-Consistency | No | $k$ | No |
| Self-Refine | No | up to $N$ | No |
| ReAct (prompt-only) | No | 1 | No |
| Tree-of-Thoughts | No | $O(b \cdot d)$ | No |
| ReAct (+Oracle f/b) | No | up to 2 | Oracle simulator |
Appendix E Implementation Details
This appendix provides detailed implementation specifications for L-ICL, including the system architecture, correction generation process, and evaluation pipeline. We provide sufficient detail for reproducing our experiments.
E.1 System Architecture
Our implementation consists of four main components that work together to execute the L-ICL training loop:
1. Partial Program Generator: Constructs PTP-style prompts with subroutine documentation and accumulated corrections formatted as input-output examples.
2. LLM Interface: Sends prompts to language models and parses structured traces from responses.
3. Evaluation Engine: Validates generated plans using external tools and step-by-step simulation, identifying the first point of failure.
4. Correction Accumulator: Extracts corrections from evaluation mismatches and injects them into subsequent prompts.
Figure 14 illustrates the data flow between these components during L-ICL training.
Figure 14: System architecture for L-ICL training. The loop iterates over training problems, accumulating corrections that progressively refine the prompt.
E.2 Subroutine Specifications by Domain
Each domain defines a set of planning primitives that the LLM must βimplementβ through trace generation. We describe the subroutines for each domain, including their signatures, semantics, and the constraints they encode.
E.2.1 Subroutines
We use the following subroutines in our experiments:
State Extraction.
- extract_initial_state(problem) $\to$ State: Parses the problem description to extract the agent's starting position and environment structure.
- extract_goal(problem) $\to$ State: Parses the goal specification from the problem.
Action Generation.
- get_applicable_actions(state, goal) $\to$ Set[Action]: Returns the set of actions that can be legally executed from the current state. For navigation, this filters the four cardinal directions to exclude moves that would exit the grid or collide with walls.
- get_optimal_actions(state, goal) $\to$ Set[Action]: Returns the subset of applicable actions that lie on an optimal path to the goal, computed with a shortest-path algorithm or a classical planner. For BlocksWorld, we replace this with get_recommended_actions(state, goal) $\to$ Set[Action], which returns the set of actions prescribed by the Universal Blocksworld Algorithm.
State Transition and Goal Test.
- apply_action(state, action) $\to$ State: Returns the state resulting from executing the action. For navigation, this updates the agent's coordinates.
- at_goal(state, goal) $\to$ bool: Returns whether the current state satisfies the goal condition.
E.3 Correction Format and Integration
L-ICL corrections are formatted as doctest-style input-output examples that are injected into subroutine documentation. This format is well-represented in LLM training data, facilitating generalization.
Correction Structure.
Each correction consists of three components:
1. Function identifier: Which subroutine the correction applies to.
2. Input: The arguments that triggered the mismatch.
3. Correct output: The oracle-provided ground truth.
Example Correction.
Consider an LLM that incorrectly proposes moving east from position $(3,4)$ when a wall blocks that direction. The evaluation detects that move_east is not in the set of applicable actions. L-ICL generates a correction:
```
>>> get_applicable_actions(state=(3,4), walls={(3,5)})
{'move_north', 'move_south', 'move_west'}
```
This correction is inserted into the documentation for get_applicable_actions, providing an explicit example that eastward movement from $(3,4)$ is invalid.
Correction Accumulation.
Corrections accumulate across training problems. When a new correction duplicates an existing one (same function and inputs), we retain only one copy. This prevents prompt bloat while ensuring coverage of diverse failure cases. The accumulated corrections are batch-inserted into the prompt template before each evaluation iteration.
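The de-duplication logic can be sketched as a dictionary keyed on (function, inputs); `add_correction` and `render_doctests` are hypothetical helper names:

```python
def add_correction(store, function, inputs, correct_output):
    """store: dict mapping (function, inputs) -> correct output.
    setdefault keeps the first copy, discarding duplicates."""
    store.setdefault((function, inputs), correct_output)
    return store

def render_doctests(store, function):
    """Format accumulated corrections for one subroutine as doctest lines."""
    lines = []
    for (fn, inputs), out in store.items():
        if fn == function:
            lines.append(f">>> {fn}{inputs}")
            lines.append(repr(out))
    return "\n".join(lines)
```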
Additional details are in Section F.3.
E.4 Evaluation Pipeline
Plan evaluation proceeds through multiple validation stages, each providing increasingly detailed feedback.
E.4.1 External Plan Validation
We use the VAL validator (Howey et al., 2004), the standard tool for PDDL plan validation, to verify plan correctness. Given a domain specification, problem instance, and proposed action sequence, VAL checks:
- Each action's preconditions are satisfied when it is executed.
- The final state after executing all actions satisfies the goal.
VAL provides a binary validity judgment and, for invalid plans, identifies the first action whose preconditions fail.
E.4.2 Optimality Verification
To assess plan quality, we compute optimal solutions using the Fast Downward planning system (Helmert, 2006). Fast Downward is a state-of-the-art classical planner that guarantees optimal solutions when configured with admissible heuristics. We use the A* search algorithm with the LM-cut heuristic.
For each problem, we:
1. Run Fast Downward to obtain the optimal plan length.
2. Compare the LLM's plan length against this baseline.
3. Mark plans as optimal if lengths match and the plan is valid.
E.4.3 Step-by-Step Simulation
Beyond end-to-end validation, we simulate plan execution step-by-step using proxy implementations of each subroutine. This enables:
1. First-failure identification: We identify the exact step where the LLM's trace first diverges from ground truth, enabling localized correction generation.
2. Fine-grained error categorization: We distinguish between:
- Applicability errors: Proposing an action not in the applicable set.
- Optimality errors: Proposing an applicable but suboptimal action.
3. Correction generation: For each error type, we generate the corresponding correction by querying the oracle for the correct output.
Algorithm 2 provides pseudocode for the step-by-step evaluation procedure.
Algorithm 2 Step-by-Step Plan Evaluation
Require: Domain $\mathcal{D}$, problem $P$, predicted actions $[a_{1},\dots,a_{n}]$, oracle $\mathcal{O}$
Ensure: Evaluation result with corrections
$s \leftarrow \mathcal{O}.\text{extract\_initial\_state}(P)$
$g \leftarrow \mathcal{O}.\text{extract\_goal}(P)$
corrections $\leftarrow [\,]$
first_invalid $\leftarrow$ null
first_suboptimal $\leftarrow$ null
for $i=1$ to $n$ do
$A_{\text{applicable}} \leftarrow \mathcal{O}.\text{get\_applicable\_actions}(s,g)$
$A_{\text{optimal}} \leftarrow \mathcal{O}.\text{get\_optimal\_actions}(s,g)$
if $a_{i} \notin A_{\text{applicable}}$ and first_invalid is null then
first_invalid $\leftarrow i$
corrections.append $(\text{``get\_applicable\_actions''},(s,g),A_{\text{applicable}})$
break $\triangleright$ Stop at first invalid action
else if $a_{i} \notin A_{\text{optimal}}$ and first_suboptimal is null then
first_suboptimal $\leftarrow i$
corrections.append $(\text{``get\_optimal\_actions''},(s,g),A_{\text{optimal}})$
end if
$s \leftarrow \mathcal{O}.\text{apply\_action}(s,a_{i})$
end for
goal_reached $\leftarrow \mathcal{O}.\text{at\_goal}(s,g)$
return {first_invalid, first_suboptimal, corrections, goal_reached}
E.5 Oracle Implementation
The oracle provides ground-truth outputs for each subroutine. We implement oracles using a combination of external planning tools and logic-based simulation.
Planning Tools.
We use Fast Downward (Helmert, 2006) for optimal plan computation and action applicability. For domains requiring multiple optimal plans (to compute optimal action sets), we additionally use the K* planner (Katz and Lee, 2023), which enumerates the top- $k$ shortest plans.
State Simulation.
Action effects are computed using the Tarski planning library (Francés et al., 2018), which provides PDDL parsing and grounded action simulation. Given a PDDL domain and problem, Tarski computes:
- The set of ground actions applicable in any state.
- The successor state resulting from applying an action.
- Whether a state satisfies a goal formula.
Optimality Computation.
Computing optimal actions (those on some optimal path) requires enumerating multiple optimal plans. We use K* to generate all plans of optimal length, then take the union of first actions across these plans. For efficiency, we cache optimal action sets for frequently-queried states.
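The union-of-first-actions computation can be sketched as follows (a hypothetical helper; it assumes the top-$k$ planner returns complete action sequences, restricting to those of minimal length):

```python
def optimal_first_actions(plans):
    """plans: list of action sequences from a top-k planner. The optimal
    actions in the initial state are the first actions of the
    optimal-length (i.e. shortest) plans."""
    if not plans:
        return set()
    best = min(len(p) for p in plans)
    return {p[0] for p in plans if p and len(p) == best}
```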
E.6 Prompt Construction
The final prompt sent to the LLM consists of four components assembled in sequence:
1. Task description: Natural language explanation of the planning domain, valid actions, and objective.
2. Subroutine documentation: For each subroutine, we include:
- Function signature with typed arguments and return type.
- Natural language description of the subroutine's purpose.
- Accumulated L-ICL corrections as doctest examples.
3. Example traces: A small number ($k = 2$ to $3$) of complete reasoning traces showing how subroutines are invoked to solve example problems.
4. Query problem: The problem instance to solve, formatted consistently with the examples, followed by instructions to produce a trace.
State Representation.
For grid-based domains, we evaluate two state representations:
- Textual: Positions as coordinates (e.g., "agent at (3,4)") with walls listed explicitly.
- ASCII: Visual grid representation where walls are marked characters and open cells are spaces.
Our ablation (Section 4.3) shows that L-ICL achieves comparable peak performance with either representation, though ASCII grids accelerate early learning.
E.7 Experimental Infrastructure
Hardware.
Experiments were conducted on a Linux workstation with 32GB RAM. External planning tools (Fast Downward, VAL) were run locally. LLM inference was performed via API calls.
LLM Services.
We evaluated models through their respective APIs:
- DeepSeek V3 and V3.1 via the DeepSeek API.
- Claude Haiku 4.5 and Claude Sonnet 4.5 via the Anthropic API.
Hyperparameters.
Unless otherwise specified:
- Temperature: 1.0 (following DeepSeek recommendations).
- Maximum generation length: 32000 tokens.
- Training examples per iteration: 1 (single problem per L-ICL update).
- Total training problems: up to 240.
- Thinking tokens for Sonnet 4.5: 10k
- Thinking tokens for Haiku 4.5: 5k
Timeout Handling.
Fast Downward was given a 60-second timeout per problem. Problems exceeding this limit were marked as having unknown optimal cost and excluded from optimality statistics (but included in validity statistics if the validator succeeded).
Appendix F Representative Prompts
This appendix provides representative prompts used in all experiments. We organize prompts into two categories: (1) L-ICL prompts based on Program Trace Prompting (PTP), and (2) baseline method prompts used for comparison approaches. All prompts use template variables (denoted with curly braces, e.g., {partial_program}) that are replaced with problem-specific content at runtime.
F.1 L-ICL Prompts (Program Trace Prompting)
L-ICL prompts follow the Program Trace Prompting (PTP) format, where the LLM is asked to predict the output of a partially specified program. The key insight is that by withholding subroutine implementations (replacing them with "..." markers), the LLM must infer correct behavior from documentation and accumulated examples.
F.1.1 Base L-ICL Prompt (No Domain Visualization)
This is the minimal L-ICL prompt used for gridworld and Sokoban navigation tasks when no ASCII grid visualization is provided. The LLM must infer spatial constraints purely from accumulated L-ICL corrections.
````
Consider the program fragment below. This program fragment is
incomplete, with key parts of the implementation hidden, by
replacing them with "..." markers.

PROGRAM:
```python
{partial_program}
```

QUESTION: Predict what the output of the program above will be,
given the input shown below.

Respond with the FULL program output, and ONLY the expected
program output: you will be PENALIZED if you introduce any
additional explanatory text.

```
>>> {task_name}({input_str})
```
````
Template Variables.
- {partial_program}: The PTP-style program with subroutine signatures, documentation, doctest examples (including L-ICL corrections), and "..." placeholders for implementations.
- {task_name}: The function name to invoke (e.g., solve_gridworld).
- {input_str}: The problem specification as a string (e.g., start position, goal position, wall locations).
F.1.2 L-ICL Prompt with ASCII Grid Visualization
When ASCII grid visualization is enabled, the prompt includes a visual representation of the environment. This provides spatial scaffolding that accelerates early learning, though L-ICL achieves comparable peak performance without it.
````
Consider the program fragment below. This program fragment is
incomplete, with key parts of the implementation hidden, by
replacing them with "..." markers.

IMPORTANT: You are an agent navigating a {grid_size} gridworld.
The grid has {num_walls} walls that block movement.

**Grid Layout:**
```
     1   2   3   4   5   6   7   8   9   10
   +---+---+---+---+---+---+---+---+---+---+
10 | . | . | . | . | . | . | . | . | . | . |
   +---+---+---+---+---+---+---+---+---+---+
 9 | . | . | # | # | # | # | # | # | # | . |
   +---+---+---+---+---+---+---+---+---+---+
 8 | . | # | . | # | . | # | . | . | . | . |
   +---+---+---+---+---+---+---+---+---+---+
 7 | . | # | . | . | . | # | . | # | # | . |
   +---+---+---+---+---+---+---+---+---+---+
 6 | . | # | . | # | . | . | . | # | . | . |
   +---+---+---+---+---+---+---+---+---+---+
 5 | . | . | . | # | . | # | . | # | . | . |
   +---+---+---+---+---+---+---+---+---+---+
 4 | . | # | . | # | . | # | . | . | # | . |
   +---+---+---+---+---+---+---+---+---+---+
 3 | . | # | . | . | . | # | . | # | . | . |
   +---+---+---+---+---+---+---+---+---+---+
 2 | . | # | . | # | . | . | . | # | . | . |
   +---+---+---+---+---+---+---+---+---+---+
 1 | . | . | . | . | . | . | . | . | . | . |
   +---+---+---+---+---+---+---+---+---+---+
```

PROGRAM:
```python
{partial_program}
```

QUESTION: Predict what the output of the program above will be,
given the input shown below.

Respond with the FULL program output, and ONLY the expected
program output: you will be PENALIZED if you introduce any
additional explanatory text.

```
>>> {task_name}({input_str})
```
````
Grid Symbols.
- `.`: open cell (traversable)
- `#`: wall (impassable)
- `$`: box (in Sokoban)
F.1.3 L-ICL BlocksWorld Prompt with UBW Algorithm
For BlocksWorld, we additionally provide algorithmic guidance based on the Universal Blocks World (UBW) algorithm (Stechly et al., 2024). This tests whether L-ICL can improve adherence to prescribed planning strategies beyond simple constraint satisfaction.
````
Consider the program fragment below. This program fragment
implements the Universal Blocks World (UBW) algorithm, which is
a systematic two-phase approach for solving blocks world planning
problems. The implementation is incomplete, with key parts
replaced by "..." markers.

UNIVERSAL BLOCKS WORLD ALGORITHM OVERVIEW:
The UBW algorithm works in two distinct phases to efficiently
solve any blocks world configuration:

PHASE 1: STRATEGIC UNSTACKING
- Unstack ALL blocks that are stacked on top of others
- Work from top to bottom, unstacking clear blocks first
- Move incorrectly positioned blocks to the table

PHASE 2: SYSTEMATIC REASSEMBLY
- Build goal configurations from bottom up
- Process blocks in dependency order (place supporting blocks
  before supported blocks)
- Only place a block when its target is ready (clear and in
  final position)
- Ensure structural integrity throughout construction

KEY HEURISTICS FOR IMPLEMENTATION:
1. STATE ANALYSIS:
   - Parse predicates into on(), on-table(), and clear()
     relationships
   - Build dependency graphs: what should be on what
   - Identify bottom blocks (blocks that should be on table
     in goal)
2. UNSTACKING STRATEGY:
   - Check each on(X, Y) relationship in current state
   - If (X, Y) is NOT in goal relationships, consider
     unstacking X
   - Only unstack if X is clear (no blocks on top)
   - Priority: unstack blocks that block other necessary moves
3. REASSEMBLY STRATEGY:
   - For each goal on(X, Y), check if X can be placed on Y
   - X must be: clear AND on-table
   - Y must be: clear AND in its final position
   - Y is in final position if: Y should be on table OR Y is
     already correctly placed on its target
4. ACTION SELECTION LOGIC:
   ```
   For unstacking: if on(X, Y) in current state AND clear(X):
       return move-b-to-t(X, Y)
   For assembly: if goal requires on(X, Y) AND
       can_place_block(X, Y):
       return move-t-to-b(X, Y)
   ```
5. CORRECTNESS VERIFICATION:
   - Always verify preconditions before suggesting actions
   - Check that actions don't break existing correct
     configurations
   - Ensure goal-directed progress in every move during
     assembly phase

DETAILED TRACE GUIDANCE:
When implementing the UBW algorithm, provide step-by-step
reasoning inside reasoning() calls if required, which is your
scratchpad.
1. State the current configuration clearly
2. Identify which phase you're in (unstacking vs assembly)
3. Explain WHY each action is chosen based on UBW principles
4. Show how the action advances toward the goal
5. Verify preconditions are satisfied
6. Update state representation after each action

PROGRAM:
```python
{partial_program}
```

QUESTION: Predict what the output of the program above will be,
given the input shown below.

IMPLEMENTATION REQUIREMENTS:
- Follow the UBW algorithm phases strictly
- Provide detailed reasoning for each action selection
- Show state analysis and dependency tracking
- Explain how each move contributes to the overall strategy
- Demonstrate understanding of when to unstack vs when to build
- Verify that all actions follow UBW heuristics

Respond with the FULL program output, including detailed
algorithmic traces that demonstrate proper UBW implementation.
Your trace should show:
- Clear identification of current phase (unstacking / assembly)
- Specific reasoning for each action choice
- State updates and goal progress tracking
- Verification that actions follow UBW principles

Under no circumstance must you skip steps in the program output.
You CAN decide to go back and choose different actions if you
feel that you have made a mistake, but the FINAL PLAN must show
the COMPLETE CORRECT PATH ONLY.

```
>>> {task_name}({input_str})
```
````
F.1.4 Example Partial Program Structure
The {partial_program} template variable is replaced with a PTP-style program containing subroutine documentation and accumulated L-ICL corrections. Below is a representative example for gridworld navigation:
β¬
import collections
from typing import Dict, List, Set, Tuple, Union, Optional, Any, FrozenSet
PlanningState = Any
Action = Any
@traced
def extract_problem(input_str: str) -> str:
    """Extract a standardized problem description from input."""
    ...
@traced
def extract_initial_state(problem_str: str) -> PlanningState:
    """Extract the initial state from a problem description."""
    ...
@traced
def extract_goal(problem_str: str) -> PlanningState:
    """Extract the goal from a problem description."""
    ...
@traced
def at_goal(state: PlanningState, goal: PlanningState) -> bool:
    """Check if current state satisfies goal conditions."""
    ...
@traced
def get_applicable_actions(state: PlanningState, goal: PlanningState) -> Set[Action]:
    """Get all applicable actions in the current state."""
    ...
@traced
def get_optimal_actions(state: PlanningState, applicable_actions: List[Action],
                        goal: PlanningState) -> Set[Action]:
    """Get actions that are part of an optimal plan."""
    ...
@traced
def apply_action(state: PlanningState, action: Action, goal: PlanningState) -> PlanningState:
    """Apply an action to a state, returning the resulting state."""
    ...
def pddl_grid(input_str: str):
    """Solve a planning problem described in input_str.
    This function processes a planning problem description by:
    1. Extracting the initial state and goal
    2. Iteratively applying actions until the goal is reached
    3. Returning the sequence of actions as a plan
>>> pddl_grid('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')
Calling extract_problem('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')...
... extract_problem returned 'gridworld-10x10'
Calling extract_initial_state('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')...
... extract_initial_state returned (9, 5)
Calling extract_goal('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')...
... extract_goal returned (5, 10)
Calling at_goal((9, 5), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((9, 5), (5, 10))...
... get_applicable_actions returned ['move-north', 'move-east']
Calling get_optimal_actions((9, 5), ['move-north', 'move-east'], (5, 10))...
... get_optimal_actions returned ['move-north', 'move-east']
Calling apply_action((9, 5), 'move-north', (5, 10))...
... apply_action returned (9, 6)
Calling at_goal((9, 6), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((9, 6), (5, 10))...
... get_applicable_actions returned ['move-south', 'move-east']
Calling get_optimal_actions((9, 6), ['move-south', 'move-east'], (5, 10))...
... get_optimal_actions returned ['move-east']
Calling apply_action((9, 6), 'move-east', (5, 10))...
... apply_action returned (10, 6)
Calling at_goal((10, 6), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((10, 6), (5, 10))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
Calling get_optimal_actions((10, 6), ['move-north', 'move-south', 'move-west'], (5, 10))...
... get_optimal_actions returned ['move-north']
Calling apply_action((10, 6), 'move-north', (5, 10))...
... apply_action returned (10, 7)
Calling at_goal((10, 7), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((10, 7), (5, 10))...
... get_applicable_actions returned ['move-north', 'move-south']
Calling get_optimal_actions((10, 7), ['move-north', 'move-south'], (5, 10))...
... get_optimal_actions returned ['move-north']
Calling apply_action((10, 7), 'move-north', (5, 10))...
... apply_action returned (10, 8)
Calling at_goal((10, 8), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((10, 8), (5, 10))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
Calling get_optimal_actions((10, 8), ['move-north', 'move-south', 'move-west'], (5, 10))...
... get_optimal_actions returned ['move-north']
Calling apply_action((10, 8), 'move-north', (5, 10))...
... apply_action returned (10, 9)
Calling at_goal((10, 9), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((10, 9), (5, 10))...
... get_applicable_actions returned ['move-north', 'move-south']
Calling get_optimal_actions((10, 9), ['move-north', 'move-south'], (5, 10))...
... get_optimal_actions returned ['move-north']
Calling apply_action((10, 9), 'move-north', (5, 10))...
... apply_action returned (10, 10)
Calling at_goal((10, 10), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((10, 10), (5, 10))...
... get_applicable_actions returned ['move-south', 'move-west']
Calling get_optimal_actions((10, 10), ['move-south', 'move-west'], (5, 10))...
... get_optimal_actions returned ['move-west']
Calling apply_action((10, 10), 'move-west', (5, 10))...
... apply_action returned (9, 10)
Calling at_goal((9, 10), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((9, 10), (5, 10))...
... get_applicable_actions returned ['move-east', 'move-west']
Calling get_optimal_actions((9, 10), ['move-east', 'move-west'], (5, 10))...
... get_optimal_actions returned ['move-west']
Calling apply_action((9, 10), 'move-west', (5, 10))...
... apply_action returned (8, 10)
Calling at_goal((8, 10), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((8, 10), (5, 10))...
... get_applicable_actions returned ['move-east', 'move-west']
Calling get_optimal_actions((8, 10), ['move-east', 'move-west'], (5, 10))...
... get_optimal_actions returned ['move-west']
Calling apply_action((8, 10), 'move-west', (5, 10))...
... apply_action returned (7, 10)
Calling at_goal((7, 10), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((7, 10), (5, 10))...
... get_applicable_actions returned ['move-east', 'move-west']
Calling get_optimal_actions((7, 10), ['move-east', 'move-west'], (5, 10))...
... get_optimal_actions returned ['move-west']
Calling apply_action((7, 10), 'move-west', (5, 10))...
... apply_action returned (6, 10)
Calling at_goal((6, 10), (5, 10))...
... at_goal returned False
Calling get_applicable_actions((6, 10), (5, 10))...
... get_applicable_actions returned ['move-east', 'move-west']
Calling get_optimal_actions((6, 10), ['move-east', 'move-west'], (5, 10))...
... get_optimal_actions returned ['move-west']
Calling apply_action((6, 10), 'move-west', (5, 10))...
... apply_action returned (5, 10)
Calling at_goal((5, 10), (5, 10))...
... at_goal returned True
Final answer: move-north move-east move-north move-north move-north move-north move-west move-west move-west move-west move-west
['move-north', 'move-east', 'move-north', 'move-north', 'move-north', 'move-north', 'move-west', 'move-west', 'move-west', 'move-west', 'move-west']
>>> pddl_grid('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')
Calling extract_problem('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')...
... extract_problem returned 'gridworld-10x10'
Calling extract_initial_state('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')...
... extract_initial_state returned (9, 3)
Calling extract_goal('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')...
... extract_goal returned (7, 7)
Calling at_goal((9, 3), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((9, 3), (7, 7))...
... get_applicable_actions returned ['move-south', 'move-east']
Calling get_optimal_actions((9, 3), ['move-south', 'move-east'], (7, 7))...
... get_optimal_actions returned ['move-south', 'move-east']
Calling apply_action((9, 3), 'move-south', (7, 7))...
... apply_action returned (9, 2)
Calling at_goal((9, 2), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((9, 2), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
Calling get_optimal_actions((9, 2), ['move-north', 'move-south', 'move-east'], (7, 7))...
... get_optimal_actions returned ['move-south']
Calling apply_action((9, 2), 'move-south', (7, 7))...
... apply_action returned (9, 1)
Calling at_goal((9, 1), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((9, 1), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-east', 'move-west']
Calling get_optimal_actions((9, 1), ['move-north', 'move-east', 'move-west'], (7, 7))...
... get_optimal_actions returned ['move-west']
Calling apply_action((9, 1), 'move-west', (7, 7))...
... apply_action returned (8, 1)
Calling at_goal((8, 1), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((8, 1), (7, 7))...
... get_applicable_actions returned ['move-east', 'move-west']
Calling get_optimal_actions((8, 1), ['move-east', 'move-west'], (7, 7))...
... get_optimal_actions returned ['move-west']
Calling apply_action((8, 1), 'move-west', (7, 7))...
... apply_action returned (7, 1)
Calling at_goal((7, 1), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((7, 1), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-east', 'move-west']
Calling get_optimal_actions((7, 1), ['move-north', 'move-east', 'move-west'], (7, 7))...
... get_optimal_actions returned ['move-north']
Calling apply_action((7, 1), 'move-north', (7, 7))...
... apply_action returned (7, 2)
Calling at_goal((7, 2), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((7, 2), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
Calling get_optimal_actions((7, 2), ['move-north', 'move-south', 'move-west'], (7, 7))...
... get_optimal_actions returned ['move-north']
Calling apply_action((7, 2), 'move-north', (7, 7))...
... apply_action returned (7, 3)
Calling at_goal((7, 3), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((7, 3), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-south']
Calling get_optimal_actions((7, 3), ['move-north', 'move-south'], (7, 7))...
... get_optimal_actions returned ['move-north']
Calling apply_action((7, 3), 'move-north', (7, 7))...
... apply_action returned (7, 4)
Calling at_goal((7, 4), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((7, 4), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
Calling get_optimal_actions((7, 4), ['move-north', 'move-south', 'move-east'], (7, 7))...
... get_optimal_actions returned ['move-north']
Calling apply_action((7, 4), 'move-north', (7, 7))...
... apply_action returned (7, 5)
Calling at_goal((7, 5), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((7, 5), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-south']
Calling get_optimal_actions((7, 5), ['move-north', 'move-south'], (7, 7))...
... get_optimal_actions returned ['move-north']
Calling apply_action((7, 5), 'move-north', (7, 7))...
... apply_action returned (7, 6)
Calling at_goal((7, 6), (7, 7))...
... at_goal returned False
Calling get_applicable_actions((7, 6), (7, 7))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
Calling get_optimal_actions((7, 6), ['move-north', 'move-south', 'move-west'], (7, 7))...
... get_optimal_actions returned ['move-north']
Calling apply_action((7, 6), 'move-north', (7, 7))...
... apply_action returned (7, 7)
Calling at_goal((7, 7), (7, 7))...
... at_goal returned True
Final answer: move-south move-south move-west move-west move-north move-north move-north move-north move-north move-north
['move-south', 'move-south', 'move-west', 'move-west', 'move-north', 'move-north', 'move-north', 'move-north', 'move-north', 'move-north']
>>> pddl_grid('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')
Calling extract_problem('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')...
... extract_problem returned 'gridworld-10x10'
Calling extract_initial_state('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')...
... extract_initial_state returned (7, 2)
Calling extract_goal('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')...
... extract_goal returned (2, 5)
Calling at_goal((7, 2), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((7, 2), (2, 5))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
Calling get_optimal_actions((7, 2), ['move-north', 'move-south', 'move-west'], (2, 5))...
... get_optimal_actions returned ['move-west']
Calling apply_action((7, 2), 'move-west', (2, 5))...
... apply_action returned (6, 2)
Calling at_goal((6, 2), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((6, 2), (2, 5))...
... get_applicable_actions returned ['move-south', 'move-east', 'move-west']
Calling get_optimal_actions((6, 2), ['move-south', 'move-east', 'move-west'], (2, 5))...
... get_optimal_actions returned ['move-west']
Calling apply_action((6, 2), 'move-west', (2, 5))...
... apply_action returned (5, 2)
Calling at_goal((5, 2), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((5, 2), (2, 5))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
Calling get_optimal_actions((5, 2), ['move-north', 'move-south', 'move-east'], (2, 5))...
... get_optimal_actions returned ['move-north']
Calling apply_action((5, 2), 'move-north', (2, 5))...
... apply_action returned (5, 3)
Calling at_goal((5, 3), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((5, 3), (2, 5))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
Calling get_optimal_actions((5, 3), ['move-north', 'move-south', 'move-west'], (2, 5))...
... get_optimal_actions returned ['move-west']
Calling apply_action((5, 3), 'move-west', (2, 5))...
... apply_action returned (4, 3)
Calling at_goal((4, 3), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((4, 3), (2, 5))...
... get_applicable_actions returned ['move-east', 'move-west']
Calling get_optimal_actions((4, 3), ['move-east', 'move-west'], (2, 5))...
... get_optimal_actions returned ['move-west']
Calling apply_action((4, 3), 'move-west', (2, 5))...
... apply_action returned (3, 3)
Calling at_goal((3, 3), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((3, 3), (2, 5))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
Calling get_optimal_actions((3, 3), ['move-north', 'move-south', 'move-east'], (2, 5))...
... get_optimal_actions returned ['move-north']
Calling apply_action((3, 3), 'move-north', (2, 5))...
... apply_action returned (3, 4)
Calling at_goal((3, 4), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((3, 4), (2, 5))...
... get_applicable_actions returned ['move-north', 'move-south']
Calling get_optimal_actions((3, 4), ['move-north', 'move-south'], (2, 5))...
... get_optimal_actions returned ['move-north']
Calling apply_action((3, 4), 'move-north', (2, 5))...
... apply_action returned (3, 5)
Calling at_goal((3, 5), (2, 5))...
... at_goal returned False
Calling get_applicable_actions((3, 5), (2, 5))...
... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
Calling get_optimal_actions((3, 5), ['move-north', 'move-south', 'move-west'], (2, 5))...
... get_optimal_actions returned ['move-west']
Calling apply_action((3, 5), 'move-west', (2, 5))...
... apply_action returned (2, 5)
Calling at_goal((2, 5), (2, 5))...
... at_goal returned True
Final answer: move-west move-west move-north move-west move-west move-north move-north move-west
['move-west', 'move-west', 'move-north', 'move-west', 'move-west', 'move-north', 'move-north', 'move-west']
"""
...
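The traces above fix the gridworld conventions: states are (x, y) tuples, move-north increments y, and move-east increments x. A minimal sketch of the transition function those traces imply (this is an illustration, not the paper's implementation; the unused goal argument mirrors the traced signature):

```python
# Move deltas implied by the traces above: (x, y) states, north is y+1.
DELTAS = {
    "move-north": (0, 1),
    "move-south": (0, -1),
    "move-east": (1, 0),
    "move-west": (-1, 0),
}

def apply_action(state, action, goal=None):
    """Apply a single move to an (x, y) state; goal is unused here."""
    dx, dy = DELTAS[action]
    return (state[0] + dx, state[1] + dy)
```

For example, `apply_action((9, 5), "move-north")` yields `(9, 6)`, matching the first step of the gw-task-351 trace.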
F.2 Baseline Method Prompts
The following prompts are used for baseline comparison methods. All baselines receive the same problem information but use different reasoning frameworks.
F.2.1 Zero-Shot CoT / RAG-CoT Prompt
The base prompt used for Zero-Shot Chain-of-Thought and RAG-CoT baselines. For RAG-CoT, dynamically retrieved examples are inserted in the {examples} section.
You are an expert at navigating gridworld environments. You will
solve navigation problems where an agent must find the optimal
path from a start position to a goal position while avoiding
walls and obstacles.
IMPORTANT:
You are an agent navigating a {grid_size} gridworld.
The grid has {num_walls} walls that block movement.
**Grid Layout:**
{ascii_grid}
# Task Description
In each problem, you are given:
- A gridworld of specific dimensions
- A start position (row, column)
- A goal position (row, column)
- Wall locations that block movement
Your task is to find the shortest path from start to goal using
these actions:
- **move-north**: Move one cell north (increase row by 1)
- **move-south**: Move one cell south (decrease row by 1)
- **move-east**: Move one cell east (increase column by 1)
- **move-west**: Move one cell west (decrease column by 1)
# Solution Strategy
For each problem, follow this reasoning process:
1. **Analyze the Grid**: Identify the start position, goal position, and obstacles
2. **Plan the Route**: Determine if a direct path exists or if you need to navigate around obstacles
3. **Step-by-Step Reasoning**: For each move, explain why it brings you closer to the goal
4. **Verify the Path**: Ensure the path is valid and optimal
# Example Problems
{examples}
# Problem to Solve
Start: {start_position}
Goal: {goal_position}
{deadzone_warning}
Please solve this problem step-by-step and provide your answer.
**Your Solution:**
First, provide your step-by-step reasoning:
1. Identify the start position, goal position, and any obstacles
2. Reason through each step of your path
3. Verify your path is valid and optimal
Then, provide your final answer EXACTLY in this format:
**Final Action Sequence:** move-direction1, move-direction2, ...
IMPORTANT: You MUST include the line starting with
"Final Action Sequence:" followed by your comma-separated list
of actions.
F.2.2 Self-Consistency Prompt
Self-Consistency uses the same base prompt as CoT, with a sample annotation appended to each independent call:
{base_cot_prompt}
<!-- Self-Consistency Sample {k}/{total}: Treat this run
independently and produce a complete plan -->
Each sample is generated with temperature $>0$ for diversity. The final answer is selected via majority voting over the k samples.
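The majority-voting step can be sketched as follows, assuming each sample has already been reduced to its parsed action sequence (the function name and representation are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(plans):
    """Return the most frequent action sequence among k sampled plans.

    Each plan is a list of action strings; lists are hashed as tuples
    so identical sequences vote together. Empty parses are discarded.
    """
    counts = Counter(tuple(p) for p in plans if p)
    if not counts:
        return []
    winner, _ = counts.most_common(1)[0]
    return list(winner)
```

Ties are broken arbitrarily by `Counter.most_common`; a real implementation might fall back to the highest-likelihood sample instead.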
F.2.3 Self-Refine Refinement Prompt
After the initial attempt, if refinement rounds remain, the model receives its previous response with reflection instructions:
{base_cot_prompt}
### Self-Refinement Attempt {attempt_number}
You previously produced the following reasoning and plan:
{previous_response}
Proposed action sequence: {previous_actions}
Carefully re-read the task description and your earlier steps.
Without running code or simulations, check for potential
mistakes:
- Did any move leave the grid or pass through a wall?
- Does the sequence actually reach the goal cell?
- Is there a shorter valid route?
If issues are found, explain them briefly and provide a
corrected plan. If you believe the plan is correct and needs
no further refinement, explicitly state:
"**No further refinement needed.**" and then restate the
action sequence.
Always finish with a line of the form:
**Final Action Sequence:** move-*, move-*, ...
Refined solution:
F.2.4 ReAct (Prompt-Only) Prompt
ReAct uses an alternating Thought/Action trace format:
You are an expert gridworld planner. Solve using ReAct style
trace.
IMPORTANT:
You are an agent navigating a {grid_size} gridworld.
The grid has {num_walls} walls that block movement.
**Grid Layout:**
{ascii_grid}
## Valid Actions
- **move-north**: Move one cell up (increase y by 1)
- **move-south**: Move one cell down (decrease y by 1)
- **move-east**: Move one cell right (increase x by 1)
- **move-west**: Move one cell left (decrease x by 1)
## Movement Constraints
- You cannot move through walls
- You cannot move outside the grid boundaries
- Each action moves exactly one cell
# Example
Start: (2,1), Goal: (5,4)
Thought: I am at (2,1) and need to reach (5,4). I should move
north and east while checking for obstacles.
Action: move-north
Thought: Now at (2,2). Continue moving toward the goal.
Action: move-north
...
Final Thought: Reached the goal at (5,4).
**Final Action Sequence:** move-north, move-north, move-east, ...
# Problem to Solve
Start: {start_position}
Goal: {goal_position}
Guidelines:
- Alternate between "Thought:" and "Action:"
- Keep moves consistent with grid layout
- Avoid illegal steps (walls, boundaries)
- End with "Final Thought:" and "**Final Action Sequence:**"
F.2.5 Tree-of-Thoughts Expansion Prompt
ToT uses a structured expansion prompt requesting JSON-formatted candidates:
{reference_examples}
Gridworld planning problem:
Start: {start_position}
Goal: {goal_position}
Current depth: {depth}/{max_depth}
Actions chosen so far: {action_prefix}
Thoughts considered so far:
{thought_history}
Generate up to 5 candidate expansions as JSON. Each must include:
- "thought": a short description of the idea
- "proposed_actions": list of up to 8 moves continuing the plan
- "confidence": integer 0-100 for promise of success
- "is_terminal": true if plan should stop after these actions
- "final_plan": optional full action list if terminal
Moves must stay within bounds and avoid walls.
Return ONLY the JSON array; no commentary.
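The JSON contract above can be enforced with a small validator that discards malformed candidates before tree expansion. The following sketch is an assumed helper (the paper does not specify its validation code); field names match the prompt:

```python
import json

# Fields every candidate must carry, per the expansion prompt above.
REQUIRED = {"thought", "proposed_actions", "confidence", "is_terminal"}

def parse_candidates(raw: str):
    """Parse the model's JSON array, keeping only well-formed candidates."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    out = []
    for cand in data if isinstance(data, list) else []:
        if (isinstance(cand, dict)
                and REQUIRED <= cand.keys()
                and isinstance(cand["confidence"], int)
                and 0 <= cand["confidence"] <= 100
                and len(cand["proposed_actions"]) <= 8):
            out.append(cand)
    return out[:5]  # at most 5 expansions per node
```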
F.2.6 ReAct (+Oracle) Feedback Prompt
When the oracle detects errors, it provides specific feedback:
{original_prompt}
---
Your previous response:
{previous_response}
---
ORACLE FEEDBACK: Your plan has a PROBLEM at step {step_number}.
The action '{failed_action}' at position {position} is INVALID
because {reason}.
Please find an alternative path that avoids this issue.
** Corrected Final Action Sequence:**
Feedback Types.
- Invalid Move: "The action 'move-X' at position $(x,y)$ is INVALID because it would move into a wall or out of bounds."
- Deadzone Entry: "The action 'move-X' leads to position $(x,y)$ which is a DEADZONE. You should avoid deadzones."
- Incomplete Path: "Your plan is INCOMPLETE. After executing all actions, you ended at $(x,y)$ but did not reach the goal."
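An oracle producing the invalid-move and incomplete-path feedback can be sketched as a plan simulator. This is a minimal illustration, assuming a set of wall cells and the 1-indexed (x, y) conventions used elsewhere in this appendix; the paper's oracle may differ:

```python
DELTAS = {"move-north": (0, 1), "move-south": (0, -1),
          "move-east": (1, 0), "move-west": (-1, 0)}

def check_plan(start, goal, actions, walls, size):
    """Simulate a plan; return (ok, feedback) in the style above."""
    pos = start
    for step, act in enumerate(actions, 1):
        dx, dy = DELTAS[act]
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt in walls or not (1 <= nxt[0] <= size and 1 <= nxt[1] <= size):
            return False, (f"Your plan has a PROBLEM at step {step}. "
                           f"The action '{act}' at position {pos} is INVALID.")
        pos = nxt
    if pos != goal:
        return False, f"Your plan is INCOMPLETE. You ended at {pos} but did not reach the goal."
    return True, "Plan is valid."
```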
F.3 L-ICL Correction Format
L-ICL corrections are formatted as doctest-style input-output examples inserted into subroutine documentation. This format leverages Pythonβs doctest convention, which is well-represented in LLM training data.
Correction Structure.
>>> {function_name}({input_args})
{correct_output}
Example Corrections by Subroutine.
Applicability Correction (when LLM proposes invalid action):
>>> get_applicable_actions(state=(3, 4), goal=(7, 8))
{'move_north', 'move_south', 'move_west'}
Optimality Correction (when LLM proposes suboptimal action):
>>> get_optimal_actions(state=(5, 2), goal=(8, 7))
{'move_north', 'move_east'}
BlocksWorld Action Correction:
>>> get_recommended_actions(
...     state={'on': [('A', 'B')], 'on-table': ['B', 'C'],
...            'clear': ['A', 'C']},
...     goal={'on': [('B', 'C')], 'on-table': ['A', 'C'],
...           'clear': ['A', 'B']}
... )
{'move-b-to-t(A, B)'}
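Injecting such a correction amounts to appending a doctest-style (input, output) pair to the failing subroutine's docstring. A minimal sketch, assuming corrections are stored as call/output string pairs (the helper name is illustrative):

```python
def add_correction(docstring: str, call: str, output: str) -> str:
    """Append a doctest-style correction to a subroutine docstring.

    Duplicate examples are skipped, so repeated failures on the same
    input contribute only one demonstration.
    """
    example = f">>> {call}\n{output}"
    if example in docstring:
        return docstring
    return docstring.rstrip() + "\n" + example + "\n"
```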
F.4 Action Parsing Patterns
All methods use the same action sequence parser that handles multiple output formats:
PATTERNS = [
    r'\*\*Final Action Sequence:\*\*\s*(.+?)(?:\n|$)',
    r'Final Action Sequence:\s*(.+?)(?:\n|$)',
    r'\*\*Action Sequence:\*\*\s*(.+?)(?:\n|$)',
    r'Action Sequence:\s*(.+?)(?:\n|$)',
    r'Optimal path:\s*(.+?)(?:\n|$)',
    r'Plan:\s*(.+?)(?:\n|$)',
]
VALID_ACTIONS = {
    'move-north', 'move-south', 'move-east', 'move-west',
    'move_north', 'move_south', 'move_east', 'move_west',
    'push-north', 'push-south', 'push-east', 'push-west',
}
# Normalize action format
ACTION_ALIASES = {
    'north': 'move-north', 'south': 'move-south',
    'east': 'move-east', 'west': 'move-west',
    'up': 'move-north', 'down': 'move-south',
    'right': 'move-east', 'left': 'move-west',
}
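These tables compose into a parser as follows. The paper does not give the full function, so this is an assumed composition, with the tables abbreviated to the movement actions:

```python
import re

# Abbreviated versions of the PATTERNS / VALID_ACTIONS / ACTION_ALIASES
# tables defined above (movement actions only).
PATTERNS = [
    r'\*\*Final Action Sequence:\*\*\s*(.+?)(?:\n|$)',
    r'Final Action Sequence:\s*(.+?)(?:\n|$)',
    r'Plan:\s*(.+?)(?:\n|$)',
]
VALID_ACTIONS = {'move-north', 'move-south', 'move-east', 'move-west'}
ACTION_ALIASES = {'north': 'move-north', 'south': 'move-south',
                  'east': 'move-east', 'west': 'move-west'}

def parse_action_sequence(text: str) -> list:
    """Try each pattern in order; normalize aliases; drop invalid tokens."""
    for pat in PATTERNS:
        m = re.search(pat, text)
        if m:
            tokens = [t for t in re.split(r'[,\s]+', m.group(1).strip()) if t]
            actions = [ACTION_ALIASES.get(t.lower(), t.lower()) for t in tokens]
            return [a for a in actions if a in VALID_ACTIONS]
    return []
```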
F.5 Prompt Variations by Domain
8×8 Two-Room Gridworld.
Uses the standard L-ICL prompt with ASCII grid showing two rooms separated by a wall with a doorway.
10×10 Maze.
Uses the L-ICL prompt with ASCII grid showing procedurally generated maze corridors. Wall density is higher, creating narrow passages.
Sokoban-Style Gridworld.
Uses the standard L-ICL prompt with Sokoban-style ASCII layouts but no pushable boxes.
Full Sokoban.
Extends the action space to include push actions:
Valid actions:
- Movement: move_north, move_south, move_east, move_west
- Pushing: push_north, push_south, push_east, push_west
BlocksWorld.
Uses the UBW algorithm prompt (Section F.1.3) with relational state representation instead of spatial coordinates.
F.6 Hyperparameter Settings by Method
Table 8 summarizes the key hyperparameters used for each prompting method.
Table 8: Hyperparameter settings for each prompting method.
| Method | Temperature | Max tokens | Samples / iterations |
| --- | --- | --- | --- |
| Zero-Shot CoT | 1.0 | 32,000 | 1 |
| RAG-CoT | 1.0 | 32,000 | 1 |
| Self-Consistency | 1.0 | 32,000 | $k=5$ |
| Self-Refine | 1.0 | 32,000 | $N=5$ |
| ReAct (Prompt) | 1.0 | 32,000 | 1 |
| ToT (Prompt) | 1.0 | 32,000 | $b=5,d=3$ |
| ReAct (+Oracle) | 0.3 | 32,000 | 1–2 |
| L-ICL | 1.0 | 32,000 | 1 |
L-ICL Training Configuration.
- Training examples: up to 240 problems
- Corrections per problem: 1 (first failure only) in Sokoban and BlocksWorld or up to 2 (first optimality correction and first validity correction, or just first validity correction) in gridworld problems
- Correction accumulation: batch update after 10 training examples
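The training configuration above can be sketched as a loop; every function name here is a placeholder for the paper's components, not its actual API:

```python
def train_licl(problems, solve, find_first_failure, add_corrections,
               batch_size=10):
    """Accumulate L-ICL corrections over training problems.

    solve: runs the current partial program and returns its trace.
    find_first_failure: returns 1-2 doctest corrections for the first
        failing step of a trace, or [] if the trace is valid.
    add_corrections: injects a batch of corrections into the prompt.
    """
    pending = []
    for i, problem in enumerate(problems, 1):
        trace = solve(problem)
        pending.extend(find_first_failure(trace))
        # Batch update: flush accumulated corrections every batch_size examples.
        if i % batch_size == 0 and pending:
            add_corrections(pending)
            pending = []
    if pending:  # flush any remainder at the end of training
        add_corrections(pending)
```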