2602.00276v1
# Localizing and Correcting Errors for LLM-based Planners
**Authors**: Aditya Kumar, William Cohen
Abstract
Large language models (LLMs) have demonstrated strong reasoning capabilities on math and coding, but frequently fail on symbolic classical planning tasks. Our studies, as well as prior work, show that LLM-generated plans routinely violate domain constraints given in their instructions (e.g., walking through walls). To address this failure, we propose iteratively augmenting instructions with Localized In-Context Learning (L-ICL) demonstrations: targeted corrections for specific failing steps. Specifically, L-ICL identifies the first constraint violation in a trace and injects a minimal input-output example giving the correct behavior for the failing step. L-ICL is far more effective than explicit instructions, traditional ICL (which adds complete problem-solving trajectories), and many other baselines. For example, on an 8×8 gridworld, L-ICL produces valid plans 89% of the time with only 60 training examples, compared to 59% for the best baseline, an absolute improvement of 30 points. L-ICL also shows dramatic improvements in other domains (gridworld navigation, mazes, Sokoban, and BlocksWorld), and on several LLM architectures.
Machine Learning, ICML
1 Introduction
Large language models (LLMs) and agentic systems reason and plan effectively in domains such as mathematics, coding, and question answering (Khattab et al., 2023; Yao et al., 2023a), suggesting that modern LLMs possess strong general planning capabilities. However, studies on classical planning benchmarks reveal a more nuanced picture: LLMs frequently fail, even on simple planning tasks that symbolic planners solve easily (Valmeekam et al., 2023; Stechly et al., 2024). Past researchers have analyzed plans produced by LLMs such as SearchFormer (Lehnert et al., 2024), which are fine-tuned to generate structured reasoning chains that can be parsed, and shown that LLMs frequently violate domain constraints given in their instructions (Stechly et al., 2024). For example, LLMs might propose plans that walk through a wall in a maze, or pick up a block when the robot's gripper is already occupied.
Table 1: Performance on an 8×8 two-room gridworld using DeepSeek V3. Paths start in one room and end in the other. Valid plans never leave the grid or cross walls; Successful plans reach their goals; and Optimal plans are successful and use the minimum number of steps. L-ICL[$m$] denotes our method trained on $m$ examples, with the corresponding character count of L-ICL examples provided; with no training examples, L-ICL[$m{=}0$] is equivalent to PTP (Cohen and Cohen, 2024), which prompts LLMs with partially specified programs. All experiments are provided with an ASCII representation of the grid.

| Method | %Valid | %Successful | %Optimal |
| --- | --- | --- | --- |
| Zero-Shot | 16 | 0 | 0 |
| RAG-ICL [10k chars] | 20 | 6 | 6 |
| RAG-ICL [20k chars] | 21 | 9 | 9 |
| ReAct | 48 | 41 | 37 |
| Self-Consistency ($k{=}5$) | 59 | 45 | 43 |
| Self-Refine ($k{=}5$) | 51 | 44 | 38 |
| PTP / L-ICL [$m{=}0$] | 40 | 33 | 28 |
| L-ICL [ours, $m{=}60$, 2k chars] | 89 | 89 | 77 |
Table 1 demonstrates this on a very simple 8×8 two-room gridworld navigation task. Despite receiving complete information about the domain (grid layout and obstacles), no baseline method produces valid plans even 60% of the time. Agentic and test-time-scaling approaches perform better, but still produce many invalid plans. We conjecture that LLMs cannot build valid plans for this task because they fail to consistently access the necessary domain-specific knowledge in the prompt. This hypothesis is consistent with the failure of LLMs in these domains, and with their success in math and coding, where the necessary knowledge is general, and hence learnable in pre-training or fine-tuning.
In-context learning (ICL) is a natural remedy. However, complete solution trajectories demonstrate that plans work, not why individual steps are valid, leaving constraints implicit. As Table 1 shows, even 20,000 characters of retrieved trajectories (RAG-ICL, which retrieves demonstrations for tasks with similar start and end goals) yield only 9% success. The rules must still be inferred, and inference fails.
L-ICL escapes this trap by letting failures reveal which constraints need explicit specification. Rather than full trajectories, we augment prompts with localized examples that demonstrate correct behavior on individual steps where models err. We call this approach Localized In-Context Learning (L-ICL). This approach achieves higher performance with much less context: 2,000 characters of targeted corrections outperform 20,000 characters of trajectories. Generating L-ICL examples requires analyzing and correcting reasoning traces at training time, which we enable by prompting models to produce structured reasoning traces, and then correcting the traces with a symbolic planner. Thus, L-ICL might be viewed as distilling domain knowledge from a symbolic system into an LLM.
Figure 1 summarizes our approach, which builds on Program Trace Prompting (PTP) (Cohen and Cohen, 2024). PTP recasts reasoning as producing a "program trace" for a partially specified program. A PTP prompt includes, for each type of reasoning "step", documentation (but not code) for a corresponding subroutine, along with (optional) example inputs and outputs. For instance, a gridworld navigation task might include a subroutine get_applicable_actions(cell) that returns the set of obstacle-free cells adjacent to the input cell. Because no executable code is provided in PTP, just documentation, the LLM must infer how to perform the reasoning step: e.g., in gridworld navigation, the LLM must infer which moves are valid for a task. PTP's prompting scheme provides a natural insertion point for localized corrections: when a subroutine call fails, we locally augment that subroutine's documentation by adding a new input/output example. The input/output examples use Python's doctest syntax, a format well-represented in LLM training data, and so readily understood by code-trained LLMs.
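To make the format concrete, here is a minimal sketch of what a PTP subroutine entry with an appended doctest-style L-ICL example might look like; the names and docstring wording are our own illustration, not the paper's exact prompt:

```python
# Sketch of a PTP-style subroutine stub for gridworld navigation.
# Only documentation is shown to the LLM; the body is withheld, so
# the model must infer the behavior. The doctest inside the
# docstring is a localized L-ICL correction added after a failure.

from typing import List, Tuple

PlanningState = Tuple[int, int]  # hypothetical (row, col) state type

def get_applicable_actions(state: PlanningState) -> List[str]:
    """Return the moves from `state` that stay in bounds and avoid walls.

    >>> get_applicable_actions((5, 4))
    ['move_east', 'move_west']
    """
    ...  # implementation withheld from the prompt
```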
Figure 1: Overview of L-ICL. The prompt template follows PTP: it includes documentation for each subroutine but no executable code. Prompting an LLM produces a trace that follows the format of the $k$ provided example traces. The trace is parsed to find the first failing step, and the failing input is passed to an oracle that returns the correct output. This yields a localized example (e.g., $x{=}\texttt{(5,4)}$ , $y{=}\texttt{['move\_east','move\_west']}$ ) that is inserted into the subroutine's documentation. This process iterates over training instances to accumulate examples in a failure-driven manner.
Given a planning task, we first prompt the LLM to generate a trace using the PTP format. We then analyze this trace programmatically to identify the first failing step, i.e., the first subroutine call whose output violates domain constraints. An oracle (a symbolic simulator or verifier) provides the correct output for that input, yielding a localized correction. This correction is then inserted into the prompt. For instance, if the LLM's first invalid move is from cell $(3,4)$, L-ICL will add to the prompt an example showing that get_applicable_actions((3,4)) should return ['move_north', 'move_south']. This localized correction directly addresses the failure, and can of course also be generalized by the LLM to other similar cases.
This process iterates over multiple training instances, accumulating a bank of targeted examples that progressively refine the modelβs understanding of domain constraints. Crucially, the oracle is required only during training.
Experimentally, prompt augmentation with L-ICL dramatically reduces domain violations, and thus improves LLM planning performance across multiple domains. Beyond the results of Table 1 and other gridworld tasks, we evaluate on classical planning benchmarks like BlocksWorld and Sokoban, seeing similar gains. L-ICL is also remarkably sample-efficient: peak performance is typically achieved with only 30–60 training examples. L-ICL works on multiple LLM architectures (DeepSeek V3, DeepSeek V3.1, Claude Haiku 4.5, Claude Sonnet 4.5), and learned constraints can transfer across problem sizes (see Appendix B).
To summarize our contributions: (1) Using the PTP variant of semi-structured reasoning, we precisely measure constraint violation rates in LLM-generated plans across multiple planning domains, revealing that such violations are the dominant failure mode. (2) We introduce L-ICL, a method that improves planning validity through localized, failure-driven corrections, and show that targeted examples outperform retrieval of complete trajectories even when the latter uses 10× more context. (3) We demonstrate consistent improvements across multiple planning domains and four LLM architectures. (4) We release our benchmark suite and code to facilitate future research on LLM planning.
2 Related Work
2.1 LLM Planning: Capabilities and Limitations
The planning capabilities of LLMs remain contested. One line of work reports strong performance on some planning tasks when LLMs are augmented with appropriate scaffolding: e.g., Tree of Thoughts achieves 74% on Game of 24 versus 4% for chain-of-thought (Yao et al., 2023a), RAP-MCTS reaches 100% on Blocksworld instances requiring 6 or fewer steps (Hao et al., 2023), and ReAct improves interactive decision-making by 34% over baselines (Yao et al., 2023b). However, systematic evaluation on classical planning benchmarks reveals persistent failures. Valmeekam et al. (2023) show GPT-4 achieves only 12% success on International Planning Competition (IPC) domains; and Stechly et al. (2024) demonstrate that chain-of-thought improvements are brittle and fail to generalize beyond surface patterns. The LLM-Modulo framework (Kambhampati et al., 2024) argues that LLMs function as approximate knowledge sources rather than autonomous planners, achieving strong results only when paired with external verifiers. Kaesberg et al. (2025) also documented that LLMs are challenged by 2D navigation tasks, similar to ones we study here. Most recently, Shojaee et al. (2025) identify a "complexity collapse" phenomenon: reasoning models' performance degrades sharply beyond certain problem complexities, with accuracy dropping to zero on harder instances even when token budgets remain available.
We follow Stechly et al. (2024) in working to diagnose why LLMs violate constraints using structured reasoning chains; however, we work with PTP as a prompting scheme, rather than models fine-tuned to produce structured reasoning chains, allowing us to consider more kinds of models, and more powerful ones. With L-ICL, we also propose a practical method to reduce these violations. Our work confirms that constraint violations are a common failure mode, and shows that targeted corrections outperform both agentic scaffolding and retrieval-based ICL approaches.
2.2 Approaches to Improve LLM Reasoning
Prior work addresses LLM reasoning limitations through three main strategies: structured output formats, test-time compute scaling, and in-context learning.

**Structured Reasoning.** Chain-of-thought prompting (Wei et al., 2022) improves performance by eliciting intermediate steps, though explanations may be unfaithful to actual computation (Turpin et al., 2023). PTP (Cohen and Cohen, 2024) offers interpretable traces: prompts specify subroutine signatures without implementations, and the LLM produces structured outputs that can be parsed and verified (Leng et al., 2025). We build on PTP because its explicit subroutine structure provides natural insertion points for localized corrections.

**Test-Time Compute.** Several methods improve reasoning by expending more computation at inference. Self-Consistency (Wang et al., 2023) aggregates multiple sampled paths via majority voting; Tree of Thoughts (Yao et al., 2023a) explores branching reasoning trajectories; and Self-Refine (Madaan et al., 2023) iteratively improves outputs through self-critique. Tool-augmented approaches interleave reasoning with execution: Program of Thoughts (Chen et al., 2022), PAL (Gao et al., 2023), and Chain of Code (Li et al., 2023) generate executable code, while ReAct (Yao et al., 2023b) interleaves reasoning with tool calls. These methods require multiple LLM calls or external tools at inference. Critically, Stechly et al. (2025) show that LLM self-verification is unreliable, making self-critique ineffective for planning.

**In-Context Learning.** ICL enables task adaptation through examples (Brown et al., 2020), with effectiveness depending on example selection (Liu et al., 2022) and format (Min et al., 2022). For planning, a natural approach is retrieving complete solution trajectories (RAG-ICL). However, we find this ineffective: 20,000 characters of retrieved trajectories yield only 9% success on our gridworld benchmark.
Complete trajectories demonstrate that solutions work but leave implicit why individual steps are valid. L-ICL addresses this by providing localized input-output pairs that directly encode constraints. Table 2 summarizes how L-ICL relates to prior approaches.
Table 2: Comparison of L-ICL with related approaches. L-ICL uniquely combines example-based training with localized feedback while requiring only single-pass inference.
| Method | Example granularity | Inference LLM calls | Oracle feedback |
| --- | --- | --- | --- |
| Self-Refine | none | many | none |
| Tree of Thoughts | none | many | none |
| Self-Consistency | none | many | none |
| ReAct | none | many | none |
| ReAct + oracle f/b | none | many | yes |
| Fine-tuning | trajectory | one | none |
| RAG-ICL | trajectory | one | none |
| L-ICL (ours) | one step | one | train only |
3 Method
We first describe Program Trace Prompting (PTP), the structured reasoning framework underlying our approach. We then introduce Localized In-Context Learning (L-ICL), our method for iteratively injecting domain constraints into the prompt. Finally, we describe our experimental domains and evaluation setup.
3.1 Background: Program Trace Prompting
Program Trace Prompting (PTP) (Cohen and Cohen, 2024) recasts reasoning as producing an execution trace for a partially specified program. A PTP prompt contains documentation for each subroutine (function name, typed arguments, return type, and a natural language description of its purpose), a small number of example traces showing how subroutines are called, and the query problem to solve. Crucially, subroutine implementations are withheld; the LLM must infer correct behavior from context. For planning tasks, we define subroutines corresponding to planning primitives. For instance, a gridworld navigation task includes a subroutine that returns applicable actions from a given state (those that stay in bounds and avoid walls), a subroutine that returns the resulting state after executing an action, and a subroutine that checks whether the current state satisfies the goal. The LLM generates a trace by repeatedly invoking these subroutines, producing outputs consistent with the documentation and examples. Because the trace follows a predictable structure, we can parse it programmatically and verify each step against a ground-truth oracle. This explicit subroutine structure provides natural insertion points for corrections: when a specific subroutine call fails, we can augment that subroutine's documentation without modifying the rest of the prompt. Full subroutine specifications for each domain appear in Appendix E.
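As a concrete illustration, a ground-truth oracle for these gridworld primitives might be implemented as below. This is a hedged sketch under our own conventions (states are (row, col) pairs and `walls` is a set of blocked cells), not the paper's released code:

```python
# Minimal gridworld oracle implementing the three planning
# primitives described above. At training time it serves as the
# ground-truth verifier for the documented-but-unimplemented
# PTP subroutines.

from typing import List, Set, Tuple

State = Tuple[int, int]  # (row, col)
MOVES = {"move_north": (-1, 0), "move_south": (1, 0),
         "move_east": (0, 1), "move_west": (0, -1)}

def oracle_applicable_actions(state: State, walls: Set[State],
                              n_rows: int, n_cols: int) -> List[str]:
    """Actions from `state` that stay in bounds and avoid walls."""
    r, c = state
    valid = []
    for name, (dr, dc) in MOVES.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < n_rows and 0 <= nc < n_cols and (nr, nc) not in walls:
            valid.append(name)
    return valid

def oracle_successor(state: State, action: str) -> State:
    """State reached after executing `action` (assumed applicable)."""
    dr, dc = MOVES[action]
    return (state[0] + dr, state[1] + dc)

def at_goal(state: State, goal: State) -> bool:
    """Check whether the current state satisfies the goal."""
    return state == goal
```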
3.2 Localized In-Context Learning (L-ICL)
The key insight behind L-ICL is that domain constraints can be taught more effectively through targeted examples than through complete solution trajectories. When an LLM violates a constraint (e.g., proposing to move through a wall), traditional approaches either reject the entire plan or provide feedback on the final outcome. L-ICL instead identifies the precise point of failure and injects a minimal correction for that specific subroutine call.

**First Failure Identification.** Given an LLM-generated trace, we parse each subroutine call and verify its output against an oracle. Let $c_{1},c_{2},...,c_{n}$ denote the sequence of subroutine calls in the trace. We identify the first failing call $c_{i^{*}}$ such that the LLM's output differs from the oracle's:
$$
i^{*}=\min\{i:\text{LLM}(c_{i})\neq\text{Oracle}(c_{i})\}
$$
Focusing on the first failure is deliberate. Planning errors cascade: an invalid move at step $k$ renders all subsequent state representations incorrect, making later "errors" artifacts of the initial mistake rather than independent failures. Correcting the root cause addresses multiple downstream errors simultaneously.

**Localized Correction.** For the failing call $c_{i^{*}}$ with input $x$ and incorrect output $\hat{y}$, we query the oracle to obtain the correct output $y^{*}=\text{Oracle}(x)$. This yields a correction tuple $(f,x,y^{*})$ where $f$ is the subroutine name. We format this correction as a doctest-style example and insert it into the documentation for subroutine $f$, augmenting the original description with an additional input-output pair. This format, drawn from Python's widely used doctest convention, is well-represented in LLM training data. Appendix E.3 provides concrete examples of the correction format.

**Iterative Accumulation.** L-ICL iterates over a set of training problems $\{P_{1},P_{2},...,P_{m}\}$. For each problem, we generate a trace using the current prompt, identify the first failing subroutine call (if any), and add the corresponding correction to the prompt. Corrections accumulate across training problems, progressively "hardening" the prompt to avoid constraint violations. Algorithm 1 provides pseudocode. L-ICL converges quickly: we see diminishing returns after only 30–60 training examples on our benchmark tasks (see Section 4).
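First-failure localization and correction formatting can be sketched as follows. This is a simplification under our assumptions: trace calls are assumed already parsed into (f, x, y_hat) triples, and the helper names are ours:

```python
# Locate the first subroutine call whose LLM output disagrees with
# the oracle (i* = min{i : LLM(c_i) != Oracle(c_i)}), and render the
# resulting correction as a doctest-style in-context example.

from typing import Any, Callable, List, Optional, Tuple

Call = Tuple[str, Any, Any]        # (subroutine f, input x, LLM output y_hat)
Correction = Tuple[str, Any, Any]  # (f, x, oracle output y_star)

def first_failure(calls: List[Call],
                  oracle: Callable[[str, Any], Any]) -> Optional[Correction]:
    for f, x, y_hat in calls:
        y_star = oracle(f, x)
        if y_hat != y_star:
            return (f, x, y_star)  # later errors may be cascade artifacts
    return None                    # every step in the trace was correct

def as_doctest(correction: Correction) -> str:
    """Render a correction as a doctest-style example."""
    f, x, y_star = correction
    return f">>> {f}({x!r})\n{y_star!r}"
```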
Algorithm 1 Localized In-Context Learning (L-ICL)
Require: Base prompt $\mathcal{P}_{0}$ with PTP structure, training problems $\{P_{1},...,P_{m}\}$, oracle $\mathcal{O}$
Ensure: Augmented prompt $\mathcal{P}$
$\mathcal{P}\leftarrow\mathcal{P}_{0}$
$\mathcal{C}\leftarrow\emptyset$ $\triangleright$ Correction set
for $j=1$ to $m$ do
$\tau\leftarrow\textsc{GenerateTrace}(\mathcal{P}_{0},P_{j})$
$\{c_{1},...,c_{n}\}\leftarrow\textsc{ParseCalls}(\tau)$
for $i=1$ to $n$ do
$(f,x,\hat{y})\leftarrow c_{i}$
$y^{*}\leftarrow\mathcal{O}(f,x)$
if $\hat{y}\neq y^{*}$ then
$\mathcal{C}\leftarrow\mathcal{C}\cup\{(f,x,y^{*})\}$ $\triangleright$ Record first failure
break
end if
end for
end for
$\mathcal{P}\leftarrow\textsc{InsertCorrections}(\mathcal{P}_{0},\mathcal{C})$ $\triangleright$ Batch update
return $\mathcal{P}$
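Algorithm 1 can also be rendered as a runnable sketch (our paraphrase: `generate_trace` stands in for GenerateTrace with the trace already parsed into call triples, and the InsertCorrections step is simplified to appending doctest examples rather than inserting them under each subroutine's documentation):

```python
# Runnable paraphrase of Algorithm 1: one correction per failing
# training trace, batch-inserted into the prompt at the end.

from typing import Any, Callable, List, Tuple

Call = Tuple[str, Any, Any]  # (subroutine f, input x, LLM output y_hat)

def licl_train(base_prompt: str,
               problems: List[Any],
               generate_trace: Callable[[str, Any], List[Call]],
               oracle: Callable[[str, Any], Any]) -> str:
    corrections: List[Tuple[str, Any, Any]] = []
    for problem in problems:
        calls = generate_trace(base_prompt, problem)  # always prompt with P_0
        for f, x, y_hat in calls:
            y_star = oracle(f, x)
            if y_hat != y_star:
                corrections.append((f, x, y_star))  # record first failure only
                break                               # later errors may cascade
    # Simplified InsertCorrections: append doctest-style examples.
    examples = "\n".join(f">>> {f}({x!r})\n{y!r}" for f, x, y in corrections)
    return base_prompt + ("\n" + examples if examples else "")
```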
3.3 Experimental Domains
We design our experimental domains as a progressive ablation study that isolates different facets of planning difficulty. Starting from simple navigation, we incrementally add complexity along several axes: spatial structure, action diversity, state tracking requirements, and strategic reasoning. Table 3 summarizes how each domain isolates specific challenges.
Table 3: Progressive ablation across experimental domains. Each domain adds complexity along one or more axes while controlling others.

| Domain | Spatial structure | Action types | Tracked objects | Irreversible states |
| --- | --- | --- | --- | --- |
| 8×8 Grid | Simple | 4 | 1 | No |
| 10×10 Maze | Complex | 4 | 1 | No |
| Sokoban Grid | Complex | 4 | 1 | No |
| Full Sokoban | Complex | 8 | 3 | Yes |
| BlocksWorld | None | 2 | 5 | No |
The 8×8 Two-Room Gridworld is our simplest setting, testing basic spatial reasoning: an agent must navigate between two rooms connected by a single doorway. The 10×10 Maze increases spatial complexity with narrow corridors and dead ends, requiring longer plans (typically 15–25 steps versus 8–12 for the gridworld). Full Sokoban introduces the critical challenge of multi-object state tracking (an agent and a box), where the agent must coordinate its position with multiple box positions, and where certain pushes lead to irreversible trap states. Sokoban-Style Gridworld ablates Sokoban by removing pushable boxes while keeping the spatial layout and action semantics, isolating the effect of richer environment structure. Finally, BlocksWorld differs qualitatively from navigation: every object (block) is dynamic, constraints depend on relational configurations rather than spatial positions, and we provide an algorithmic sketch to test whether L-ICL can improve adherence to prescribed planning strategies. Full domain specifications appear in Appendix C.
3.4 Baselines and Metrics
We compare L-ICL against several approaches spanning prompting strategies, agentic methods, and retrieval.

**Zero-Shot.** The LLM receives the problem description and instructions with no in-context examples, measuring baseline capability without demonstration.

**RAG-ICL.** We retrieve complete CoT-formatted solution trajectories for similar problems based on start/goal similarity, and evaluate at 10k and 20k character budgets.

**ReAct.** The LLM is instructed to interleave reasoning and action selection in its output, following the prompt format specified in Appendix F.2. We evaluate a prompt-only version and an oracle-augmented version that queries a verifier during planning.

**Self-Consistency.** Majority voting with $k{=}5$ reasoning paths sampled at temperature 0.7.

**Self-Refine.** The LLM generates a solution, then critiques and refines it based on its own feedback, for $k{=}5$ iterations.

**Tree-of-Thoughts.** The LLM explores a tree of intermediate steps, evaluating and pruning branches (prompt-only, no external search).

Crucially, ReAct (Oracle) queries the verifier at test time for each proposed action, while L-ICL uses the oracle only during training. At inference, L-ICL requires a single forward pass with no external dependencies. For L-ICL, we report results with different numbers of training examples $m$ (denoted L-ICL[$m$]) to assess sample efficiency.
We evaluate plans along three axes that form a natural hierarchy. A plan is valid if it violates no domain constraints (e.g., no wall collisions). A plan is successful if it is valid and reaches the goal state. A plan is optimal if it is successful and uses the minimum number of steps. Hence, a large valid-to-success gap indicates the model follows rules but fails to reach goals, and a large success-to-optimal gap indicates inefficient but functional plans.
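This hierarchy can be checked mechanically. The sketch below is our illustration, assuming a per-step validator, a transition function, and a known optimal plan length (e.g., from a symbolic planner):

```python
# Evaluate a single plan on the valid / successful / optimal
# hierarchy: a successful plan must be valid, and an optimal plan
# must be successful and of minimum length.

from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]

def evaluate_plan(start: State, goal: State, plan: List[str],
                  is_valid_step: Callable[[State, str], bool],
                  step: Callable[[State, str], State],
                  optimal_len: int) -> Dict[str, bool]:
    state = start
    for action in plan:
        if not is_valid_step(state, action):
            # One constraint violation fails all three metrics.
            return {"valid": False, "successful": False, "optimal": False}
        state = step(state, action)
    successful = state == goal                       # implies valid
    optimal = successful and len(plan) == optimal_len
    return {"valid": True, "successful": successful, "optimal": optimal}
```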
3.5 Experimental Setup
Our primary experiments use DeepSeek V3 (DeepSeek-AI, 2024), with additional evaluation on DeepSeek V3.1, Claude 4.5 Haiku, and Claude Sonnet 4.5 (Anthropic, 2025) to assess cross-architecture generalization. For each domain, we generate 100 test problems with random start and goal configurations. Training problems for L-ICL are drawn from a disjoint pool of 250 instances. For domains other than BlocksWorld, prompts use a textual state representation, as suggested in Figure 1, and unless stated otherwise, use an ASCII representation of the grid. Oracles are domain-specific: simple simulators for gridworlds and mazes, and the Fast Downward planner (Helmert, 2006) and tools like the K-Star Planner (Katz and Lee, 2023; Lee et al., 2023) for Sokoban and BlocksWorld. We use temperature 1 for optimal model performance (DeepSeek-AI, 2024) unless stated otherwise. L-ICL is trained on up to 240 examples.
4 Results
We evaluate L-ICL across our domain suite, demonstrating that localized corrections dramatically improve constraint adherence while remaining sample-efficient. We ask four key questions about L-ICL: (1) Does it learn domain constraints? (2) Is it more efficient than retrieval-based ICL? (3) Does it require explicit spatial representations? (4) Does it generalize across LLM architectures?
4.1 L-ICL Learns Domain Constraints
Table 4 presents our main results across all domains. L-ICL consistently outperforms all baselines, often by substantial margins. Beyond raw performance gains, the pattern of results across our progressive domain suite reveals which aspects of planning L-ICL addresses effectively.
Table 4: Main results across all domains. We report %(V)alid and %(S)uccessful. All baselines receive ASCII grid representations. L-ICL[$m$] denotes training on $m$ examples. Best results in bold, second-best underlined. $\dagger$ ReAct (Oracle f/b) receives oracle feedback at inference time. ‡ L-ICL (no grid) methods are handicapped: they receive no ASCII grid, and rely purely on L-ICL to infer structure.
| Method | 8×8 Grid V | 8×8 Grid S | 10×10 Maze V | 10×10 Maze S | Sokoban Grid V | Sokoban Grid S | Full Sokoban V | Full Sokoban S | BlocksWorld V | BlocksWorld S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zero-Shot | 16 | 0 | 3 | 0 | 15 | 0 | 1 | 0 | 10 | 10 |
| RAG-ICL (10k chars) | 20 | 6 | 7 | 1 | 17 | 4 | 31 | 11 | 25 | 25 |
| RAG-ICL (20k chars) | 21 | 9 | 7 | 4 | 25 | 10 | 36 | 15 | 32 | 32 |
| ReAct (Prompt-Only) | 48 | 41 | 6 | 5 | 19 | 12 | 1 | 0 | 46 | 45 |
| Self-Consistency ( $k{=}5$ ) | 59 | 45 | 3 | 3 | 10 | 5 | 2 | 1 | 31 | 31 |
| Self-Refine ( $k{=}5$ ) | 51 | 44 | 3 | 1 | 13 | 8 | 0 | 0 | 49 | 49 |
| ToT (Prompt-Only) | 33 | 12 | 1 | 0 | 3 | 2 | 0 | 0 | 50 | 40 |
| ReAct (Oracle f/b) † | 55 | 45 | 6 | 5 | 21 | 13 | 3 | 0 | 51 | 51 |
| L-ICL[ $m{=}0$ ] (ours) | 40 | 33 | 20 | 16 | 21 | 17 | 19 | 13 | 50 | 48 |
| L-ICL[ $m{=}60$ ] (ours) | 89 | 89 | 40 | 21 | 63 | 49 | 46 | 20 | 68 | 66 |
| L-ICL[ $m{=}0$ ] ‡ (ours) | 19 | 12 | 7 | 6 | 10 | 8 | 12 | 9 | 50 | 48 |
| L-ICL[ $m{=}60$ ] ‡ (ours) | 73 | 63 | 57 | 27 | 62 | 44 | 42 | 14 | 68 | 66 |
**8×8 Gridworld.** The complete failure of zero-shot prompting (0% success) on this simple two-room task is striking: the model receives full information about walls, start, and goal, yet fails completely. This reveals that the bottleneck is not knowledge but application. L-ICL achieves 63% success, demonstrating that localized corrections bridge this gap. Figure 2 shows rapid improvement in the first 30 examples, with continued gains for ∼160 examples before plateauing.

**10×10 Maze.** The maze's narrow corridors and longer optimal paths (15–25 steps) challenge all methods. L-ICL reaches 27% success where baselines achieve at most 5%. Notably, valid rates reach 57%, indicating that most L-ICL plans respect maze constraints even when they fail to reach the goal. This valid-to-success gap suggests that constraint satisfaction and goal-directed search are separable challenges; L-ICL addresses the former effectively.

**Sokoban Grid.** Despite adopting Sokoban's richer spatial structure, this domain (without pushable boxes) yields results intermediate between the prior domains: L-ICL achieves 49% success versus 13% for the best baseline. This pattern suggests that spatial complexity, not action vocabulary, dominates difficulty in navigation tasks.

**Full Sokoban.** Introducing pushable boxes causes the sharpest performance degradation across all methods. L-ICL improves success from 13% to only 20%, yet increases valid action rates from 19% to 46%. This dissociation isolates multi-object state tracking as a distinct challenge: L-ICL teaches which pushes are legal, but coordinating agent and box positions toward the goal requires capabilities beyond constraint satisfaction, further analyzed in Appendix A.

**BlocksWorld.** This domain differs qualitatively: constraints are relational ("block A is on block B") rather than spatial, and every object is dynamic. L-ICL still improves success from 48% to 66%, demonstrating that localized corrections generalize beyond navigation.
<details>
<summary>graphs/misc/8x8_nogrid_success_optimal_combined.png Details</summary>

### Visual Description
## Line Chart: 8x8 Gridworld: Success vs Optimal Rate
### Overview
The image is a line chart comparing the success rate and optimal rate of two methods, "Best Baseline" and "L-ICL," across varying numbers of training examples in an 8x8 Gridworld environment. The chart displays the performance of each method, along with shaded regions indicating variability or confidence intervals.
### Components/Axes
* **Title:** 8x8 Gridworld: Success vs Optimal Rate
* **X-axis:** Training Examples, with markers at 0, 30, 60, 90, 120, 150, 180, 210, and 240.
* **Y-axis:** Rate (%), with markers at 0, 10, 20, 30, 40, 50, 60, 70, 80, and 90.
* **Legend:** Located at the bottom of the chart.
* Best Baseline Success (Self-Consistency) - Dashed Blue Line
* Best Baseline Optimal (Self-Consistency) - Dashed Orange Line
* L-ICL Success - Solid Blue Line
* L-ICL Optimal - Solid Orange Line
### Detailed Analysis
* **Best Baseline Success (Self-Consistency):** Represented by a dashed blue line. The line is approximately flat at a rate of 45%.
* **Best Baseline Optimal (Self-Consistency):** Represented by a dashed orange line. The line is approximately flat at a rate of 45%.
* **L-ICL Success:** Represented by a solid blue line.
* Starts at approximately 10% at 0 Training Examples.
* Rises sharply to approximately 46% at 30 Training Examples.
* Increases to approximately 63% at 60 Training Examples.
* Decreases slightly to approximately 59% at 90 Training Examples.
* Increases to approximately 63% at 120 Training Examples.
* Increases to approximately 69% at 150 Training Examples.
* Increases to approximately 77% at 180 Training Examples.
* Decreases slightly to approximately 73% at 210 Training Examples.
* Increases to approximately 74% at 240 Training Examples.
* **L-ICL Optimal:** Represented by a solid orange line.
* Starts at approximately 10% at 0 Training Examples.
* Rises sharply to approximately 46% at 30 Training Examples.
* Decreases slightly to approximately 51% at 60 Training Examples.
* Increases to approximately 63% at 90 Training Examples.
* Increases to approximately 65% at 120 Training Examples.
* Increases to approximately 69% at 150 Training Examples.
* Decreases to approximately 67% at 180 Training Examples.
* Increases to approximately 78% at 210 Training Examples.
* Decreases slightly to approximately 71% at 240 Training Examples.
### Key Observations
* The "Best Baseline" methods (both Success and Optimal) remain relatively constant across all training examples, hovering around 45%.
* The "L-ICL" methods (both Success and Optimal) show a significant increase in rate as the number of training examples increases, particularly in the early stages.
* The "L-ICL Success" rate is generally higher than the "L-ICL Optimal" rate, especially after 60 training examples.
* Both "L-ICL" lines show some fluctuation, but generally trend upwards.
### Interpretation
The data suggests that the "L-ICL" methods are more effective than the "Best Baseline" methods in the 8x8 Gridworld environment, as they achieve higher success and optimal rates with increasing training examples. The "Best Baseline" methods appear to have a fixed performance level, regardless of the number of training examples. The fluctuations in the "L-ICL" lines could be due to the learning process, where the model adjusts its strategy based on the training data. The shaded regions around the lines likely represent the variance in the results across multiple runs or experiments, indicating the reliability of the observed trends.
</details>
Figure 2: 8 $\times$ 8 Gridworld learning curves. Success and Optimal rates vs. training examples. L-ICL (without being given the ASCII grid) improves rapidly in the first 30–60 examples, substantially outperforming all baselines, which are given access to the ASCII grid (horizontal line shows best baseline).
4.2 L-ICL Is More Efficient Than Retrieval-Based ICL
A key advantage of L-ICL is sample efficiency: localized corrections convey more information per token than complete solution trajectories. Figure 3 compares L-ICL and RAG-ICL as a function of context size. RAG-ICL with 20,000 characters of retrieved trajectories achieves 16% success. L-ICL matches this performance with approximately 5,000 characters and reaches 63% success with 7,000 characters. At matched context size, L-ICL outperforms RAG-ICL by more than 40 percentage points. This efficiency stems from the compression achieved by localized examples. A complete trajectory demonstrates that a solution works but leaves implicit why individual steps are valid. A local example like `get_applicable_actions((3,4)) -> ['move_north', 'move_south']` directly encodes that eastward movement from (3,4) is blocked.
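The compression argument can be illustrated directly. Assuming the `get_applicable_actions` demonstration format quoted above (the full-trajectory format here is a hypothetical stand-in for retrieved RAG-ICL examples), a localized example is an order of magnitude shorter than a complete solved instance:

```python
# Hypothetical sketch contrasting the two demonstration formats; the
# exact prompt syntax is an assumption, not the paper's.

def local_example(state, legal_actions):
    """A localized L-ICL demonstration: one input-output pair that
    directly encodes which moves are legal from a single state."""
    return f"get_applicable_actions({state}) -> {legal_actions}"

def full_trajectory(start, goal, actions):
    """A traditional ICL demonstration: a complete solved instance,
    which shows *that* a plan works but not *why* each step is legal."""
    return f"start={start} goal={goal}\nplan: " + ", ".join(actions)

local = local_example((3, 4), ["move_north", "move_south"])
traj = full_trajectory((0, 0), (7, 7),
                       ["move_east"] * 7 + ["move_south"] * 7)

# The localized form spends far fewer characters per constraint conveyed:
print(len(local), len(traj))
```

Per unit of context, the local form pins down one constraint exactly, while the trajectory leaves every step's legality implicit.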
<details>
<summary>graphs/efficiency/8x8_grid_nogrid_efficiency.png Details</summary>

### Visual Description
## Line Chart: 8x8 Gridworld: Sample Efficiency
### Overview
The image is a line chart comparing the sample efficiency of two methods, RAG-CoT and L-ICL, in an 8x8 Gridworld environment. The chart plots the success rate (in percentage) against the context size (in characters). Both lines have shaded regions around them, indicating a confidence interval or standard deviation.
### Components/Axes
* **Title:** 8x8 Gridworld: Sample Efficiency
* **X-axis:** Context Size (chars), with markers at 0, 5k, 10k, 15k, and 20k.
* **Y-axis:** Success Rate (%), with markers at 0, 10, 20, 30, 40, 50, 60, 70, 80, and 90.
* **Legend:** Located at the bottom of the chart.
* RAG-CoT (orange line with square markers)
* L-ICL (blue line with circle markers)
### Detailed Analysis
* **RAG-CoT (orange line):**
* The line starts at approximately 12% success rate at 0 context size.
* It dips slightly to around 11% at 5k context size.
* It then increases to approximately 20% at 10k context size.
* Continues to increase to approximately 23% at 15k context size.
* Reaches approximately 31% at 20k context size.
* The trend is generally upward, indicating increasing success rate with larger context size.
* **L-ICL (blue line):**
* The line starts at approximately 12% success rate at 0 context size.
* It increases sharply to approximately 46% at 5k context size.
* It plateaus around 60-65% between 5k and 10k context size.
* It fluctuates between 70% and 80% between 10k and 15k context size.
* It ends at approximately 74% at 20k context size.
* The trend is initially sharply upward, then plateaus with some fluctuations, indicating a higher success rate compared to RAG-CoT, especially with larger context sizes.
### Key Observations
* L-ICL consistently outperforms RAG-CoT across all context sizes.
* L-ICL shows a significant initial improvement in success rate with increasing context size, while RAG-CoT's improvement is more gradual.
* The shaded regions around the lines suggest that the variance in success rate is higher for L-ICL than for RAG-CoT, especially at larger context sizes.
### Interpretation
The chart demonstrates that L-ICL is more sample efficient than RAG-CoT in the 8x8 Gridworld environment. This means that L-ICL achieves a higher success rate with the same amount of context. The initial sharp increase in L-ICL's success rate suggests that it benefits more from the initial context provided, while RAG-CoT requires a larger context size to achieve comparable performance. The fluctuations in L-ICL's success rate at larger context sizes could indicate that it is more sensitive to the specific context provided, leading to higher variance in performance. The data suggests that L-ICL is a better choice for this task, especially when context size is limited.
</details>
Figure 3: Sample efficiency: L-ICL vs. RAG-ICL. Success rate vs. context size (characters) on 8 $\times$ 8 Gridworld. L-ICL achieves higher performance with substantially less context.
4.3 L-ICL Does Not Need Full Domain Knowledge
In Table 4, in the tasks aside from BlocksWorld, all prompting schemes use an ASCII grid visualization of the gridworld to be explored (preliminary experiments suggested this approach was most effective for these tasks). Since L-ICL learns to correct domain violations, a natural question is whether the ASCII grid is actually necessary for it: can it learn the domain from examples alone?
Figure 4 shows the learning curve for L-ICL on the 10 $\times$ 10 maze task with and without the ASCII visualization of the grid. The visualization accelerates performance early on (21% at $m{=}30$ with the grid vs. 15% without), but peak performance is comparable (39% vs. 37%). Thus, L-ICL does not require visual scaffolding, although the grid provides useful inductive bias during early training. However, to obtain the full benefit of such scaffolding, the LLM requires some L-ICL training, with more examples needed for more complex domains. Thus, the 8 $\times$ 8 grid benefits almost immediately, whereas the harder domains only display the benefit of the scaffolded version over the non-scaffolded version later in training, as seen in the figure.
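For concreteness, here is a minimal sketch of the kind of ASCII grid scaffolding this ablation removes; the exact characters used in the paper's prompts are an assumption.

```python
# Minimal ASCII grid renderer; the symbols (S, G, #, .) are an
# assumption for illustration, not the paper's exact prompt format.

def render_grid(width, height, walls, start, goal):
    """Render a gridworld as ASCII: S=start, G=goal, #=wall, .=open."""
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            if (x, y) == start:
                row += "S"
            elif (x, y) == goal:
                row += "G"
            elif (x, y) in walls:
                row += "#"
            else:
                row += "."
        rows.append(row)
    return "\n".join(rows)

print(render_grid(4, 3, {(1, 1), (2, 1)}, (0, 0), (3, 2)))
# S...
# .##.
# ...G
```

The ablation's finding is that this spatial rendering helps early, but L-ICL eventually recovers the same constraints from input-output examples alone.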
<details>
<summary>graphs/misc/10x10_maze_grid_ablation.png Details</summary>

### Visual Description
## Line Chart: 10x10 Maze: Grid Ablation
### Overview
The image is a line chart comparing the success rate (%) of different models (L-ICL and Best Baseline) with and without a grid, across varying numbers of training examples. The x-axis represents the number of training examples, and the y-axis represents the success rate in percentage. Shaded regions around the lines indicate variability or confidence intervals.
### Components/Axes
* **Title:** 10x10 Maze: Grid Ablation
* **X-axis:**
* **Label:** Training Examples
* **Scale:** 0, 30, 60, 90, 120, 150, 180, 210, 240
* **Y-axis:**
* **Label:** Success Rate (%)
* **Scale:** 0, 10, 20, 30, 40, 50
* **Legend:** Located at the bottom of the chart.
* Best Baseline (With Grid) - solid blue line
* Best Baseline (No Grid) - solid orange line
* L-ICL (With Grid) - blue line with circular markers
* L-ICL (No Grid) - orange line with circular markers
### Detailed Analysis
* **Best Baseline (With Grid):** (solid blue line)
* This line is nearly flat, hovering around a success rate of approximately 4%.
* Training Examples: 0, Success Rate: ~4%
* Training Examples: 240, Success Rate: ~4%
* **Best Baseline (No Grid):** (solid orange line)
* This line is also nearly flat, hovering around a success rate of approximately 2%.
* Training Examples: 0, Success Rate: ~2%
* Training Examples: 240, Success Rate: ~2%
* **L-ICL (With Grid):** (blue line with circular markers)
* The line starts at approximately 16% and generally increases, peaking around 150 training examples, then slightly decreasing and stabilizing.
* Training Examples: 0, Success Rate: ~16%
* Training Examples: 30, Success Rate: ~19%
* Training Examples: 60, Success Rate: ~21%
* Training Examples: 90, Success Rate: ~18%
* Training Examples: 120, Success Rate: ~22%
* Training Examples: 150, Success Rate: ~42%
* Training Examples: 180, Success Rate: ~39%
* Training Examples: 210, Success Rate: ~38%
* Training Examples: 240, Success Rate: ~39%
* **L-ICL (No Grid):** (orange line with circular markers)
* The line starts low, increases sharply until 120 training examples, then decreases slightly before increasing again and stabilizing.
* Training Examples: 0, Success Rate: ~6%
* Training Examples: 30, Success Rate: ~14%
* Training Examples: 60, Success Rate: ~27%
* Training Examples: 90, Success Rate: ~32%
* Training Examples: 120, Success Rate: ~37%
* Training Examples: 150, Success Rate: ~30%
* Training Examples: 180, Success Rate: ~31%
* Training Examples: 210, Success Rate: ~36%
* Training Examples: 240, Success Rate: ~34%
### Key Observations
* The "Best Baseline" models (both with and without grid) perform significantly worse than the "L-ICL" models.
* The "L-ICL (With Grid)" model shows a higher peak success rate compared to the "L-ICL (No Grid)" model.
* Both "L-ICL" models show a general increasing trend in success rate as the number of training examples increases, although there are fluctuations.
* The shaded regions around the "L-ICL" lines indicate variability in performance, which is more pronounced at certain training example counts.
### Interpretation
The chart suggests that using L-ICL significantly improves the success rate in the 10x10 Maze task compared to the "Best Baseline" models. The presence of a grid further enhances the performance of the L-ICL model, especially around 150 training examples. The variability indicated by the shaded regions suggests that the performance of the L-ICL models can fluctuate, possibly due to the stochastic nature of the training process or the complexity of the maze environment. The flat lines of the "Best Baseline" models indicate that they are not learning effectively from the training examples.
</details>
Figure 4: Grid representation ablation on 10 $\times$ 10 Maze. The ASCII grid accelerates early learning but does not change peak performance. Without L-ICL, the grid provides little benefit.
4.4 L-ICL Works On Many LLM Architectures
To assess whether L-ICL's benefits are architecture-specific, we evaluate on three additional models: DeepSeek V3.1, Claude 4.5 Haiku, and Claude Sonnet 4.5. Figure 5 shows results on the 10 $\times$ 10 Maze. All models improve substantially with L-ICL. Claude Sonnet 4.5 shows the strongest gains (10% to 74%), followed by DeepSeek V3.1 (2% to 47%) and Claude 4.5 Haiku (1% to 39%). The relative ordering changes with training: at $m{=}0$ models are comparable, but by $m{=}120$ Claude Sonnet 4.5 leads substantially. This suggests stronger models leverage accumulated corrections more effectively, though all models benefit.
<details>
<summary>graphs/misc/llm_ablation_success.png Details</summary>

### Visual Description
## Line Chart: 10x10 Maze: L-ICL Performance Across LLMs
### Overview
The image is a line chart comparing the performance of four different Large Language Models (LLMs) on a 10x10 maze task. The chart plots the success rate (%) of each model against the number of training examples. The models compared are DeepSeek V3, DeepSeek V3.1, Claude Haiku 4.5, and Claude Sonnet 4.5. The chart includes shaded regions around each line, representing the uncertainty or variance in the performance.
### Components/Axes
* **Title:** 10x10 Maze: L-ICL Performance Across LLMs
* **X-axis:** Training Examples
* Scale: 0 to 240, with markers at 0, 30, 60, 90, 120, 150, 180, 210, and 240.
* **Y-axis:** Success Rate (%)
* Scale: 0 to 90, with markers at 0, 10, 20, 30, 40, 50, 60, 70, 80, and 90.
* **Legend:** Located at the bottom of the chart.
* DeepSeek V3 (Blue)
* DeepSeek V3.1 (Orange)
* Claude Haiku 4.5 (Green)
* Claude Sonnet 4.5 (Red)
### Detailed Analysis
* **DeepSeek V3 (Blue):**
* Trend: Generally increasing, but plateaus and slightly decreases towards the end.
* Data Points:
* 0 Training Examples: ~2%
* 30 Training Examples: ~10%
* 60 Training Examples: ~27%
* 90 Training Examples: ~33%
* 120 Training Examples: ~37%
* 150 Training Examples: ~30%
* 180 Training Examples: ~32%
* 210 Training Examples: ~38%
* 240 Training Examples: ~35%
* **DeepSeek V3.1 (Orange):**
* Trend: Increasing, peaks around 210 training examples, then decreases.
* Data Points:
* 0 Training Examples: ~3%
* 30 Training Examples: ~12%
* 60 Training Examples: ~30%
* 90 Training Examples: ~33%
* 120 Training Examples: ~35%
* 150 Training Examples: ~38%
* 180 Training Examples: ~43%
* 210 Training Examples: ~47%
* 240 Training Examples: ~38%
* **Claude Haiku 4.5 (Green):**
* Trend: Increasing, plateaus, and slightly decreases towards the end.
* Data Points:
* 0 Training Examples: ~5%
* 30 Training Examples: ~15%
* 60 Training Examples: ~22%
* 90 Training Examples: ~18%
* 120 Training Examples: ~32%
* 150 Training Examples: ~35%
* 180 Training Examples: ~38%
* 210 Training Examples: ~35%
* 240 Training Examples: ~32%
* **Claude Sonnet 4.5 (Red):**
* Trend: Rapidly increasing initially, plateaus, and then slightly decreases.
* Data Points:
* 0 Training Examples: ~7%
* 30 Training Examples: ~43%
* 60 Training Examples: ~61%
* 90 Training Examples: ~58%
* 120 Training Examples: ~74%
* 150 Training Examples: ~69%
* 180 Training Examples: ~68%
* 210 Training Examples: ~76%
* 240 Training Examples: ~69%
### Key Observations
* Claude Sonnet 4.5 (Red) significantly outperforms the other models, achieving a much higher success rate with fewer training examples.
* DeepSeek V3 (Blue) has the lowest overall performance.
* DeepSeek V3.1 (Orange) and Claude Haiku 4.5 (Green) have similar performance trends, with DeepSeek V3.1 generally performing slightly better.
* All models show diminishing returns with increased training examples, with performance plateauing or even decreasing after a certain point.
* The shaded regions indicate variability in performance, with Claude Sonnet 4.5 showing the widest range of variability.
### Interpretation
The chart demonstrates the effectiveness of different LLMs in solving a 10x10 maze task through In-Context Learning (ICL). Claude Sonnet 4.5 exhibits superior learning capabilities, achieving high success rates with fewer training examples, suggesting a more efficient learning algorithm or a better-suited architecture for this specific task. The plateauing or decreasing performance after a certain number of training examples suggests that the models may be overfitting to the training data or reaching the limits of what can be learned through ICL for this particular maze complexity. The variability in performance, as indicated by the shaded regions, highlights the instability or sensitivity of the models to different training sets or initial conditions. The data suggests that model selection and optimization of training examples are crucial for maximizing performance in ICL tasks.
</details>
Figure 5: L-ICL across LLM architectures. Success rate on 10 $\times$ 10 Maze for four models. All improve substantially; Claude Sonnet 4.5 shows the largest gains (10% → 74%).
4.5 Summary of Findings
(1) L-ICL dramatically improves constraint adherence, achieving consistently higher success rates than baselines across all domains. (2) L-ICL is sample-efficient: 30–90 training examples typically suffice, and L-ICL outperforms RAG-ICL while using $4\times$ less context. (3) Explicit spatial representations are not required: ASCII grids accelerate early learning but do not change peak performance. (4) L-ICL generalizes across architectures: four LLMs from different families all benefit substantially. (5) Multi-object tracking and strategic planning remain challenging: the valid-to-success gap in Sokoban and BlocksWorld indicates that localized corrections address constraint violations but do not fully solve long-horizon coordination (see Appendix A).
5 Discussion
Our experiments demonstrate that L-ICL consistently improves LLM planning performance, often by substantial margins. Beyond raw performance gains, these results support a specific conceptual interpretation that clarifies both what L-ICL achieves and where challenges remain.
5.1 L-ICL as In-Context Unit Testing
In software engineering, unit testing is a means of "hardening" code subroutines (i.e., making them more reliable and predictable), and it is considered good practice to use unit tests even when end-to-end tests exist. ICL demonstrations instruct a model in desired behavior rather than confirming that it exhibits that behavior; modulo this important difference, however, L-ICL demonstrations are analogous to unit tests, and traditional ICL demonstrations are analogous to end-to-end tests. L-ICL demonstrations can thus be viewed as a technique for "hardening" individual reasoning steps, in that they make an LLM's instruction-following behavior more reliable and consistent.
Full-trajectory demonstrations are more like end-to-end tests, which in software engineering play a different role than unit tests: they confirm that individual modules interact correctly. In LLM terms, full trajectories encourage process correctness, and only incidentally encourage step correctness. In planning tasks, an invalid plan may contain many correctly performed steps and only a single invalid one, so adding a full-trajectory demonstration is at best an inefficient way to improve performance, in terms of useful information per prompt token, relative to accumulating local demonstrations in a failure-driven way.
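The analogy can even be made literal in Python, where a localized demonstration plays the same role as a doctest on a single subroutine. This is an illustrative sketch with an assumed wall layout and function name, not the paper's prompt format:

```python
# A localized demonstration as a literal doctest: the input-output pair
# "hardens" one reasoning step. The wall layout here is an assumption
# chosen so that only north/south moves are legal from (3, 4).

WALLS = {(4, 4), (2, 4)}  # east and west of (3, 4) are blocked

def get_applicable_actions(state):
    """Return the legal moves from `state`; the doctest pins the behavior.

    >>> get_applicable_actions((3, 4))
    ['move_north', 'move_south']
    """
    moves = {"move_north": (0, -1), "move_south": (0, 1),
             "move_east": (1, 0), "move_west": (-1, 0)}
    return [a for a, (dx, dy) in moves.items()
            if (state[0] + dx, state[1] + dy) not in WALLS]

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # the "unit test" for this one reasoning step
```

Just as the doctest checks one subroutine rather than the whole program, an L-ICL demonstration targets one failing step rather than the whole plan.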
5.2 Qualitative Evidence: From Guessing to Navigation
Figure 6 provides visual evidence of L-ICL's effect. At $m{=}0$, the model proposes moves without regard for walls, quickly entering invalid states. By $m{=}60$, it produces a coherent start-to-goal path respecting all walls. Crucially, this improvement occurs without the model ever seeing the ASCII grid. The doctests encode constraints implicitly through input-output pairs, and the model learns to satisfy them. This demonstrates that L-ICL induces a transferable constraint prior rather than memorizing specific layouts.
<details>
<summary>graphs/misc/maze_pictures/0_final.png Details</summary>

### Visual Description
## Maze Diagram: Pathfinding
### Overview
The image is a diagram of a maze on a grid. The maze consists of black blocks representing walls and white blocks representing open paths. A green circle labeled "S" marks the start point, and a red circle labeled "G" marks the goal point. A blue line segment extends from "S", and a red dashed line indicates a possible path from the end of the blue line to "G".
### Components/Axes
* **Grid:** The maze is laid out on a square grid.
* **Walls:** Black blocks represent the walls of the maze.
* **Paths:** White blocks represent the open paths within the maze.
* **Start Point (S):** A green circle labeled "S" indicates the starting location.
* **Goal Point (G):** A red circle labeled "G" indicates the goal location.
* **Initial Path:** A short blue line segment extends from the start point "S".
* **Proposed Path:** A red dashed line shows a possible path from the end of the blue line to the goal point "G".
### Detailed Analysis
* **Start Point (S):** Located approximately at grid coordinates (2, 3) from the top-left corner.
* **Goal Point (G):** Located approximately at grid coordinates (8, 2) from the top-left corner.
* **Initial Path:** The blue line extends one grid unit to the right from "S".
* **Proposed Path:** The red dashed line starts from the end of the blue line, moves horizontally to the right, then upwards, then horizontally to the right again, and finally downwards to reach "G". The path avoids the black wall blocks.
### Key Observations
* The maze has a complex structure with multiple dead ends and turns.
* The proposed path is not a straight line and requires navigating around the walls.
* The initial blue line segment indicates a starting direction.
### Interpretation
The diagram illustrates a pathfinding problem within a maze. The goal is to find a path from the start point "S" to the goal point "G" while avoiding the walls. The red dashed line represents a possible solution, demonstrating a route that navigates the maze's obstacles. The initial blue line segment could represent an initial move or direction chosen by a pathfinding algorithm. The diagram is a visual representation of a common problem in computer science and robotics.
</details>
<details>
<summary>graphs/misc/maze_pictures/60_final.png Details</summary>

### Visual Description
## Maze Navigation Diagram
### Overview
The image depicts a maze with a solution path highlighted. The maze consists of black blocks representing walls and white blocks representing open space. A blue line indicates the path from the starting point (marked with a green "S") to the goal (marked with a red "G"). The maze is overlaid on a light gray grid.
### Components/Axes
* **Maze:** A grid of black and white squares representing walls and open paths, respectively.
* **Start Point (S):** A green circle containing the letter "S", located on the left side of the maze.
* **Goal Point (G):** A red circle containing the letter "G", located on the right side of the maze.
* **Solution Path:** A blue line indicating the path from the start point to the goal point.
* **Grid:** A light gray grid providing a visual reference for the maze structure.
### Detailed Analysis
The maze is approximately 10x10 grid cells. The start point "S" is located at approximately (2,5) in grid coordinates, where (0,0) is the top-left corner. The goal point "G" is located at approximately (8,2).
The blue solution path starts at "S" and moves:
1. Down one cell.
2. Right two cells.
3. Up four cells.
4. Right two cells.
5. Down two cells.
6. Right one cell.
7. Up two cells.
8. Right one cell to reach "G".
The black blocks form a complex arrangement of walls, creating a non-trivial path between "S" and "G".
### Key Observations
* The solution path is not a straight line; it requires multiple turns to navigate the maze.
* The maze has several dead ends and blocked pathways.
* The solution path appears to be the shortest possible route through the maze.
### Interpretation
The diagram illustrates a pathfinding problem and a possible solution. The maze represents a complex environment, and the blue line represents a successful navigation strategy. The presence of the start and goal points, along with the solution path, suggests a problem-solving scenario where an agent needs to find the optimal route between two points in a constrained environment. The diagram could be used to demonstrate pathfinding algorithms or to visualize the complexity of navigating a maze.
</details>
Figure 6: From blind guessing to structured navigation. Two rollouts on the same held-out maze as the number of training examples $m$ increases. At $m{=}0$ (left), the model ignores walls entirely. By $m{=}60$ (right), the model produces a valid trajectory without ever seeing the grid representation, demonstrating that L-ICL induces transferable constraint knowledge.
5.3 Limitations and Scope
One limitation is that L-ICL requires an oracle that can verify constraint satisfaction and provide correct outputs during training. However, this oracle is needed only during training: at test time, L-ICL requires a single forward pass with no external dependencies, distinguishing it from methods like ReAct with oracle feedback, which require verification at inference. Extending to domains without formal specifications may require weaker supervision (learned verifiers, stronger models) that could introduce noise.
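The training-time role of the oracle can be sketched as a failure-driven loop: plan, localize the first violation, append a minimal correction. All names below (`llm_plan`, `oracle_legal_actions`, `transition`) are hypothetical stand-ins for the paper's components, not its actual implementation.

```python
# Hedged sketch of failure-driven demonstration accumulation. The oracle
# is consulted only here, at training time; at test time the accumulated
# demos ride along in the prompt with no external dependencies.

def first_violation(plan, state, oracle_legal_actions, transition):
    """Return (step_index, state) of the first illegal action, or None."""
    for i, action in enumerate(plan):
        if action not in oracle_legal_actions(state):
            return i, state
        state = transition(state, action)
    return None

def l_icl_round(problem, demos, llm_plan, oracle_legal_actions, transition):
    """One training round: plan, localize the first failure, and append
    a minimal input-output correction to the demonstration pool."""
    plan = llm_plan(problem, demos)
    hit = first_violation(plan, problem["start"],
                          oracle_legal_actions, transition)
    if hit is not None:
        _, bad_state = hit
        demos.append(f"get_applicable_actions({bad_state}) -> "
                     f"{oracle_legal_actions(bad_state)}")
    return demos
```

A domain without a formal oracle would have to replace `oracle_legal_actions` with a learned or model-based verifier, which is exactly where the noise concern above arises.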
A second limitation of this work is that we have addressed only one problem for LLM planners: their difficulty in correctly applying domain knowledge. LLM planners also struggle with strategic reasoning, i.e., performing valid actions in a way that quickly reaches the goal. While L-ICL excels at improving validity, this does not always lead to good strategic reasoning, as shown by the valid-to-success gap in Sokoban (46% valid, 20% success). We leave to future work the question of whether localized corrections, or some extension of them, can also correct strategic failures, which seem to require multi-step lookahead, or whether L-ICL must be combined with complementary approaches such as search or value functions.
A third limitation of this paper is that we consider only formally-describable planning benchmarks from the LLM planning literature. Transfer to open-ended natural-language tasks is not studied.
6 Conclusion
We began with a puzzle: LLMs receive complete specifications of domain constraints yet routinely violate them. Stating that an agent cannot walk through walls, for example, is insufficient, because models do not consistently apply that information at test time. L-ICL addresses this issue in a simple way: when a constraint is violated, we add a minimal input-output example correcting that error, placing additional emphasis on the precise knowledge that was not applied. These minimal corrections accumulate during training, progressively distilling behavioral knowledge from an oracle symbolic system into the prompt. The improvement is remarkable: on an 8 $\times$ 8 gridworld where zero-shot prompting achieves 0% success, L-ICL reaches 89% with only 60 training examples, and L-ICL consistently outperforms all baselines across domains.
One key finding is that demonstration structure matters more than quantity. L-ICL achieves higher performance with 2,000 characters of targeted corrections than RAG-ICL achieves with 20,000 characters of complete trajectories. Complete solutions demonstrate that a plan works; localized examples demonstrate why individual steps are valid. This compression explains L-ICL's sample efficiency and suggests a broader principle: LLM reliability can be improved by making implicit knowledge explicit at the point of application. This also reduces prompt engineering burden: rather than exhaustively specifying every constraint upfront, practitioners can let L-ICL discover them through failure-driven corrections.
L-ICL does not solve planning. The valid-to-success gap in Sokoban shows that respecting domain constraints is necessary but not sufficient; strategic reasoning remains challenging in this domain. We view this not as a limitation but as a clarification of scope. L-ICL provides a procedural hardening layer: a reliable foundation of constraint-satisfying primitives on which higher-level reasoning can build. Just as unit tests do not write the program but ensure its components behave correctly, L-ICL does not plan but ensures that proposed actions respect domain physics. We hope this decomposition proves useful for future work on LLM reasoning systems.
Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
References
- Anthropic (2025) Claude 4.5 model family. Note: https://www.anthropic.com/claude Sonnet 4.5 released September 2025; Haiku 4.5 released October 2025 Cited by: Β§3.5.
- T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. In Advances in Neural Information Processing Systems, Vol. 33, pp. 1877β1901. Cited by: Β§2.2.
- W. Chen, X. Ma, X. Wang, and W. W. Cohen (2022) Program of thoughts prompting: disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588. Cited by: Β§2.2.
- C. A. Cohen and W. W. Cohen (2024) Watch your steps: observable and modular chains of thought. arXiv preprint arXiv:2409.15359. Cited by: Β§1, Β§2.2, Β§3.1, footnote 1.
- DeepSeek-AI (2024) DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437. Cited by: Β§3.5.
- G. FrancΓ©s, M. Ramirez, and Collaborators (2018) Tarski: an AI planning modeling framework. GitHub. Note: https://github.com/aig-upf/tarski Cited by: Β§E.5.
- L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig (2023) PAL: program-aided language models. In International Conference on Machine Learning, pp. 10764β10799. Cited by: Β§2.2.
- S. Hao, Y. Gu, H. Ma, J. J. Hong, Z. Wang, D. Z. Wang, and Z. Hu (2023) Reasoning with language model is planning with world model. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 8154β8173. Cited by: Β§2.1.
- M. Helmert (2006) The fast downward planning system. In Journal of Artificial Intelligence Research, Vol. 26, pp. 191–246. Cited by: §E.4.2, §E.5, §3.5.
- R. Howey, D. Long, and M. Fox (2004) VAL: automatic plan validation, continuous effects and mixed initiative planning using PDDL. In 16th IEEE International Conference on Tools with Artificial Intelligence, pp. 294–301. External Links: Document Cited by: §E.4.1.
- L. B. Kaesberg, J. P. Wahle, T. Ruas, and B. Gipp (2025) SPaRC: a spatial pathfinding reasoning challenge. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, C. Christodoulopoulos, T. Chakraborty, C. Rose, and V. Peng (Eds.), Suzhou, China, pp. 10359–10390. External Links: Link, Document, ISBN 979-8-89176-332-6 Cited by: §2.1.
- S. Kambhampati, K. Valmeekam, L. Guan, M. Verma, K. Stechly, S. Bhambri, L. Saldyt, and A. Murthy (2024) Position: LLMs can't plan, but can help planning in LLM-modulo frameworks. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. Cited by: §2.1.
- M. Katz and J. Lee (2023) K* search over orbit space for top-k planning. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), Cited by: Β§E.5, Β§3.5.
- O. Khattab, A. Singhvi, P. Maheshwari, Z. Zhang, K. Santhanam, S. Vardhamanan, S. Haq, A. Sharma, T. T. Joshi, H. Moazam, H. Miller, M. Zaharia, and C. Potts (2023) DSPy: compiling declarative language model calls into self-improving pipelines. External Links: 2310.03714, Link Cited by: Β§1.
- J. Lee, M. Katz, and S. Sohrabi (2023) On k* search for top-k planning. In Symposium on Combinatorial Search, External Links: Link Cited by: Β§3.5.
- L. Lehnert, S. Sukhbaatar, D. Su, Q. Zheng, P. McVay, M. Rabbat, and Y. Tian (2024) Beyond A*: better planning with transformers via search dynamics bootstrapping. arXiv preprint arXiv:2402.14083. Cited by: Β§1.
- J. Leng, C. A. Cohen, Z. Zhang, C. Xiong, and W. W. Cohen (2025) Semi-structured llm reasoners can be rigorously audited. External Links: 2505.24217, Link Cited by: Β§2.2.
- C. Li, J. Liang, A. Zeng, X. Chen, K. Hausman, D. Sadigh, S. Levine, L. Fei-Fei, F. Xia, and B. Ichter (2023) Chain of code: reasoning with a language model-augmented code emulator. arXiv preprint arXiv:2312.04474. Cited by: Β§2.2.
- J. Liu, D. Shen, Y. Zhang, B. Dolan, L. Carin, and W. Chen (2022) What makes good in-context examples for GPT-3?. In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, E. Agirre, M. Apidianaki, and I. Vulić (Eds.), Dublin, Ireland and Online, pp. 100–114. External Links: Link, Document Cited by: §2.2.
- A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Gupta, B. P. Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, and P. Clark (2023) Self-refine: iterative refinement with self-feedback. In Advances in Neural Information Processing Systems, Vol. 36. Cited by: Β§D.1.4, Β§2.2.
- S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer (2022) Rethinking the role of demonstrations: what makes in-context learning work?. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 11048–11064. Cited by: §2.2.
- P. Shojaee, I. Mirzadeh, K. Alizadeh, M. Horton, S. Bengio, and M. Farajtabar (2025) The illusion of thinking: understanding the strengths and limitations of reasoning models via the lens of problem complexity. In Advances in Neural Information Processing Systems, Vol. 38. Cited by: Β§2.1.
- K. Stechly, K. Valmeekam, and S. Kambhampati (2024) Chain of thoughtlessness? an analysis of cot in planning. In Advances in Neural Information Processing Systems, Vol. 37. Cited by: 3rd item, Β§F.1.3, Β§1, Β§2.1, Β§2.1.
- K. Stechly, K. Valmeekam, and S. Kambhampati (2025) On the self-verification limitations of large language models on reasoning and planning tasks. In The Thirteenth International Conference on Learning Representations, External Links: Link Cited by: Β§D.1.4, Β§2.2.
- M. Turpin, J. Michael, E. Perez, and S. R. Bowman (2023) Language models don't always say what they think: unfaithful explanations in chain-of-thought prompting. In Thirty-seventh Conference on Neural Information Processing Systems, External Links: Link Cited by: §2.2.
- K. Valmeekam, M. Marquez, S. Sreedharan, and S. Kambhampati (2023) On the planning abilities of large language modelsβa critical investigation. In Advances in Neural Information Processing Systems, Vol. 36. Cited by: Β§1, Β§2.1.
- X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou (2023) Self-consistency improves chain of thought reasoning in language models. In International Conference on Learning Representations, Cited by: Β§D.1.3, Β§2.2.
- J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou (2022) Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, Vol. 35, pp. 24824–24837. Cited by: §2.2.
- S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan (2023a) Tree of thoughts: deliberate problem solving with large language models. In Advances in Neural Information Processing Systems, Vol. 36. Cited by: Β§D.1.6, Β§1, Β§2.1, Β§2.2.
- S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao (2023b) ReAct: synergizing reasoning and acting in language models. In International Conference on Learning Representations, Cited by: Β§D.1.5, Β§2.1, Β§2.2.
Appendix A Analysis: The Valid-to-Success Gap
While L-ICL dramatically improves constraint adherence, a gap often remains between validity and success. This gap is most pronounced in Full Sokoban, where L-ICL achieves 46% valid plans but only 20% success (Table 4). Understanding this gap illuminates both L-ICL's strengths and its limitations.
A.1 Trap Rate Analysis
In Sokoban, certain states are traps: configurations from which the goal is unreachable regardless of future actions (e.g., a box pushed into a corner). We measure the adjusted trap rate: among valid plans, what fraction enters a trap state?
Figure 7 shows that L-ICL reduces trap rates. On Sokoban Grid, the adjusted trap rate drops from 50% at $m{=}0$ to 10% at $m{=}210$ . This indicates that L-ICL teaches not only immediate constraint satisfaction but also some degree of trap avoidance.
However, the absolute trap rate remains non-negligible, and the valid-to-success gap persists. We hypothesize that trap avoidance requires multi-step lookahead that localized corrections cannot fully provide. A correction like "pushing box B east from (3,4) is valid" does not encode that this push leads to an unsolvable configuration three moves later. Addressing this limitation may require complementary approaches such as search or learned value functions.
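The adjusted trap rate can be computed with a small helper. In this sketch, `is_valid` and `enters_trap` are stand-ins for the domain-specific plan simulation and trap detection (assumptions for illustration, not the paper's released code):

```python
def adjusted_trap_rate(plans, is_valid, enters_trap):
    """Among valid plans, the fraction that enter an unrecoverable state.

    `is_valid` and `enters_trap` are hypothetical domain-specific
    predicates, not part of the paper's code.
    """
    valid_plans = [p for p in plans if is_valid(p)]
    if not valid_plans:
        return 0.0
    trapped = sum(1 for p in valid_plans if enters_trap(p))
    return trapped / len(valid_plans)

# Toy usage: four plans, three valid, one valid plan hits a trap.
plans = ["p1", "p2", "p3", "p4"]
rate = adjusted_trap_rate(plans,
                          is_valid=lambda p: p != "p4",
                          enters_trap=lambda p: p == "p2")
```

Note that the rate is conditioned on validity, so it isolates strategic failures from constraint violations.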
<details>
<summary>graphs/misc/sokoban_trap_rate_adjusted.png Details</summary>

### Visual Description
## Line Chart: Sokoban Gridworld: Adjusted Trap Rate
### Overview
The image is a line chart comparing the adjusted trap rate (%) in a Sokoban Gridworld environment with and without a grid, as a function of the number of training examples. The chart displays two lines, one representing "Without Grid" (orange) and the other representing "With Grid" (blue). Shaded regions around each line indicate the uncertainty or variability in the data.
### Components/Axes
* **Title:** Sokoban Gridworld: Adjusted Trap Rate
* **X-axis:** Training Examples, with markers at 0, 30, 60, 90, 120, 150, 180, 210, and 240.
* **Y-axis:** Adjusted Trap Rate (%), with markers at 0, 10, 20, 30, 40, 50, 60, 70, and 80.
* **Legend:** Located at the bottom of the chart.
* Orange line: Without Grid
* Blue line: With Grid
### Detailed Analysis
**Without Grid (Orange Line):**
* **Trend:** The line starts high, drops sharply, then increases, and then decreases again, showing a fluctuating pattern.
* **Data Points:**
* 0 Training Examples: Approximately 68%
* 30 Training Examples: Approximately 0%
* 60 Training Examples: Approximately 11%
* 90 Training Examples: Approximately 25%
* 120 Training Examples: Approximately 23%
* 150 Training Examples: Approximately 8%
* 180 Training Examples: Approximately 23%
* 210 Training Examples: Approximately 21%
* 240 Training Examples: Approximately 19%
**With Grid (Blue Line):**
* **Trend:** The line starts high, drops, then increases, and then decreases again, showing a fluctuating pattern.
* **Data Points:**
* 0 Training Examples: Approximately 50%
* 30 Training Examples: Approximately 38%
* 60 Training Examples: Approximately 14%
* 90 Training Examples: Approximately 35%
* 120 Training Examples: Approximately 18%
* 150 Training Examples: Approximately 15%
* 180 Training Examples: Approximately 26%
* 210 Training Examples: Approximately 15%
* 240 Training Examples: Approximately 9%
### Key Observations
* Initially, the "Without Grid" trap rate is higher than the "With Grid" trap rate.
* Both lines show a significant drop in the trap rate between 0 and 60 training examples.
* Both lines fluctuate, indicating that the adjusted trap rate varies with the number of training examples.
* The shaded regions around the lines suggest variability in the trap rate for both conditions.
### Interpretation
The chart compares the adjusted trap rate in a Sokoban Gridworld environment with and without a grid. The data suggests that initially, the absence of a grid leads to a higher trap rate. However, as the number of training examples increases, both conditions exhibit fluctuating trap rates. The variability indicated by the shaded regions suggests that the performance is not consistent and may depend on other factors not explicitly represented in the chart. The "With Grid" line generally shows a lower trap rate after the initial drop, suggesting that the grid provides some benefit in reducing the likelihood of traps.
</details>
Figure 7: Trap rate decreases with L-ICL. Adjusted trap rate (fraction of valid plans entering unsolvable states) on Sokoban Grid. L-ICL reduces trap rates from 50% to 10%, indicating partial learning of strategic constraints.
A.2 Multi-Object State Tracking
Comparing Sokoban Grid (no boxes) to Full Sokoban reveals the cost of multi-object tracking. With identical spatial layouts, Sokoban Grid achieves 49% success while Full Sokoban reaches only 20%. The difference lies in state complexity: Full Sokoban requires tracking the agent position and all box positions, with constraints that depend on their joint configuration.
This difficulty is also evident in BlocksWorld, where every object is dynamic. L-ICL improves BlocksWorld success from 48% to 66%, but a gap remains between validity (68%) and success. The pattern suggests that relational constraint learning, while improved by L-ICL, remains more challenging than spatial constraint learning.
A.3 Decomposing Planning Difficulty
The valid-to-success gap reveals a clean decomposition of planning difficulty:
1. Constraint satisfaction: Generating actions that respect domain physics. L-ICL addresses this effectively across all domains.
2. Strategic selection: Among valid actions, choosing those that lead toward the goal without entering traps. This requires multi-step reasoning that localized corrections do not directly provide.
This decomposition suggests a practical architecture: use L-ICL to harden constraint satisfaction, then layer strategic reasoning (search, learned policies, or hierarchical planning) on top. The hardened base ensures that any action proposed by the strategic layer is physically valid, separating concerns and simplifying both components.
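The proposed layering can be sketched as a propose-then-filter loop. Here `propose` stands in for any strategic layer (an LLM policy, search, or a learned value function); all names and the toy grid below are illustrative, not the paper's implementation:

```python
def run_layered_planner(state, goal, propose, get_valid_actions, apply_action,
                        max_steps=100):
    """Propose-then-filter loop: the strategic layer chooses among actions
    that the hardened constraint layer has already certified as valid."""
    plan = []
    for _ in range(max_steps):
        if state == goal:
            return plan
        valid = get_valid_actions(state)   # hardened base: physics only
        action = propose(state, valid)     # strategic layer: goal-seeking
        if action not in valid:
            raise ValueError("strategic layer proposed an invalid action")
        state = apply_action(state, action)
        plan.append(action)
    return None

# Toy usage: a greedy proposer on an open 4x4 grid (illustrative only).
MOVES = {"move_north": (0, 1), "move_south": (0, -1),
         "move_east": (1, 0), "move_west": (-1, 0)}

def step(s, a):
    dx, dy = MOVES[a]
    return (s[0] + dx, s[1] + dy)

def valid_actions(s):
    return [a for a in MOVES
            if 1 <= step(s, a)[0] <= 4 and 1 <= step(s, a)[1] <= 4]

goal = (4, 3)

def greedy(s, valid):
    return min(valid, key=lambda a: abs(step(s, a)[0] - goal[0])
                                    + abs(step(s, a)[1] - goal[1]))

plan = run_layered_planner((1, 1), goal, greedy, valid_actions, step)
```

The explicit `action not in valid` check makes the separation of concerns auditable: any failure past that point is strategic, not physical.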
Table 5 summarizes the valid-to-success gaps across domains, highlighting where strategic failures dominate.
Table 5: Valid-to-success gap analysis across domains with L-ICL [$m{=}60$]. Larger gaps indicate that constraint satisfaction alone is insufficient; strategic reasoning is the bottleneck.
| Domain | Valid (%) | Success (%) | Gap |
| --- | --- | --- | --- |
| 8×8 Grid | 89 | 89 | 0 |
| 10×10 Maze | 57 | 27 | 30 |
| Sokoban Grid | 63 | 49 | 14 |
| Full Sokoban | 46 | 20 | 26 |
| BlocksWorld | 68 | 66 | 2 |
The 8×8 gridworld shows no gap: once constraints are satisfied, the simple structure makes goal-reaching straightforward. The 10×10 maze and Full Sokoban show the largest gaps, reflecting the strategic complexity of navigating dead ends and avoiding irreversible trap states, respectively. BlocksWorld shows a small gap, suggesting that while relational constraints are harder to learn, once learned they suffice for task completion in our 5-block instances.
Appendix B Out-of-Distribution Generalization
<details>
<summary>graphs/domains/gridworld-10x10-sokoban_grid.png Details</summary>

### Visual Description
## Grid-Based Path Diagram
### Overview
The image is a 10x10 grid diagram with a path marked by dark blue squares. The path forms a closed loop with a small branch in the middle. The grid has numerical labels from 1 to 10 on both the x and y axes.
### Components/Axes
* **X-axis:** Numerical labels from 1 to 10, representing the horizontal position.
* **Y-axis:** Numerical labels from 1 to 10, representing the vertical position.
* **Path:** A series of connected dark blue squares forming a path. The rest of the squares are white.
### Detailed Analysis
The path can be described as follows:
1. **Outer Loop:**
* Starts at (1,1) and extends horizontally to (10,1).
* Extends vertically from (10,1) to (10,10).
* Extends horizontally from (10,10) to (1,10).
* Extends vertically from (1,10) to (1,2).
2. **Inner Branch:**
* From (1,2), the path moves to (1,10).
* From (1,7), the path extends horizontally to (9,7).
* From (6,7), the path extends vertically to (6,8).
* From (6,8), the path extends horizontally to (9,8).
3. **Central Feature:**
* A vertical segment exists at x=5, from y=4 to y=6.
* A horizontal segment exists at y=5, from x=4 to x=5.
Specific coordinates of the dark blue squares:
* Row 1: (1,1), (2,1), (3,1), (4,1), (5,1), (6,1), (7,1), (8,1), (9,1), (10,1)
* Row 2: (1,2), (10,2)
* Row 3: (1,3), (10,3)
* Row 4: (1,4), (5,4), (10,4)
* Row 5: (1,5), (4,5), (5,5), (10,5)
* Row 6: (1,6), (5,6), (10,6)
* Row 7: (1,7), (6,7), (7,7), (8,7), (9,7), (10,7)
* Row 8: (6,8), (7,8), (8,8), (9,8)
* Row 9: (1,9), (10,9)
* Row 10: (1,10), (2,10), (3,10), (4,10), (5,10), (6,10), (7,10), (8,10), (9,10), (10,10)
### Key Observations
* The path forms a closed loop around the perimeter of the grid.
* There is a small branch extending inwards from the top edge.
* The path is made up of discrete, square segments.
### Interpretation
The diagram likely represents a simple maze or a path-finding problem on a grid. The dark blue squares indicate the allowed path, while the white squares represent obstacles or empty space. The branch in the middle could represent a choice point or a detour in the path. The central feature could be a point of interest.
</details>
(a) 10×10 maze (training distribution)
<details>
<summary>graphs/domains/gridworld-15x15-sokoban_grid.png Details</summary>

### Visual Description
## Pixel Grid with Shape
### Overview
The image is a 15x15 grid where some cells are filled with a dark blue color, forming a specific shape. The filled cells create a border around a rectangular area, with additional filled cells inside the border forming a "T" shape and a short horizontal line.
### Components/Axes
* **Grid:** 15 rows and 15 columns.
* **X-axis:** Labeled 1 to 15.
* **Y-axis:** Labeled 1 to 15.
* **Filled Cells:** Dark blue cells forming the shape.
### Detailed Analysis
* **Border:** A rectangular border is formed by filled cells along the edges of the grid.
* Row 1: Cells 1 to 15 are filled.
* Row 15: Cells 1 to 15 are filled.
* Column 1: Cells 2 to 14 are filled.
* Column 15: Cells 2 to 14 are filled.
* **"T" Shape:** A "T" shape is formed by filled cells located in the center-left area of the grid.
* Cell (4, 5) is filled.
* Cell (5, 4) is filled.
* Cell (5, 5) is filled.
* Cell (5, 6) is filled.
* **Horizontal Line:** A short horizontal line is formed by filled cells located in the upper-center area of the grid.
* Cell (8, 6) is filled.
* Cell (8, 7) is filled.
* Cell (8, 8) is filled.
* Cell (8, 9) is filled.
### Key Observations
* The border creates a clear rectangular boundary within the grid.
* The "T" shape and horizontal line are distinct elements within the bordered area.
* The grid provides a structured framework for the placement of the filled cells.
### Interpretation
The image appears to be a simple representation of a shape or pattern within a grid. The combination of the border, "T" shape, and horizontal line creates a visually distinct arrangement. This could be a representation of a game board, a simple graphic design, or an abstract pattern. The placement of the shapes within the grid suggests a deliberate composition.
</details>
(b) 15×15 maze (OOD evaluation)
Figure 8: Out-of-distribution generalization setup. L-ICL corrections are accumulated on 10×10 mazes (left) and evaluated on 15×15 mazes (right). The larger mazes contain positions not seen during training, yet corrections transfer substantially, even though boundary-violation constraints apply at different positions.
A key question for any learning-based approach is whether acquired knowledge transfers beyond the training distribution. For L-ICL, this translates to: do corrections learned on smaller problem instances improve performance on larger, unseen instances? We investigate this by training L-ICL on 10×10 mazes and evaluating on 15×15 mazes, as shown in Figure 8.
B.1 Experimental Setup
We accumulate L-ICL corrections using the standard training procedure on 10×10 maze instances (Section 3.5). We then evaluate the resulting prompts on a held-out test set of 100 15×15 mazes. The larger mazes are generated using the same procedural algorithm (randomized depth-first search) with proportionally scaled wall density, but contain positions and path structures never seen during training.
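The randomized depth-first-search generation used for both maze sizes can be sketched as follows. The odd-coordinate lattice encoding and the `generate_maze` signature are common textbook simplifications, assumed here rather than taken from the paper's code:

```python
import random

def generate_maze(n, seed=0):
    """Carve a maze with randomized depth-first search.

    Cells live at odd coordinates of a (2n-1) x (2n-1) lattice; the wall
    between two cells is knocked out as the DFS advances. Returns the set
    of open (walkable) lattice positions.
    """
    rng = random.Random(seed)
    start = (1, 1)
    open_cells = {start}
    visited = {start}
    stack = [start]
    while stack:
        x, y = stack[-1]
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((2, 0), (-2, 0), (0, 2), (0, -2))
                if 1 <= x + dx <= 2 * n - 1 and 1 <= y + dy <= 2 * n - 1
                and (x + dx, y + dy) not in visited]
        if nbrs:
            nx, ny = rng.choice(nbrs)
            visited.add((nx, ny))
            open_cells.add((nx, ny))
            open_cells.add(((x + nx) // 2, (y + ny) // 2))  # remove the wall between
            stack.append((nx, ny))
        else:
            stack.pop()
    return open_cells
```

Because DFS carves a spanning tree over the cells, any two open cells are connected by exactly one corridor, matching the unique-path property of the maze domain.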
B.2 Results
Figure 9 shows that L-ICL corrections provide substantial transfer to larger instances. At $m{=}0$ (no corrections), the 15×15 maze achieves only 9% success, comparable to the 10×10 baseline without corrections. With corrections accumulated from 10×10 training instances, 15×15 success improves to 49% at $m{=}120$, a 5× improvement over the no-correction baseline.
<details>
<summary>graphs/misc/ood_10x10_to_15x15.png Details</summary>

### Visual Description
## Line Chart: OOD Generalization: 10x10 -> 15x15 Transfer
### Overview
The image is a line chart comparing the success rate of two models: one trained and tested on 10x10 data (in-distribution) and another trained on 10x10 data but tested on 15x15 data (OOD transfer). The x-axis represents the number of training examples, and the y-axis represents the success rate in percentage. The chart shows how the success rate changes with increasing training examples for both models. Each line has a shaded region around it, representing the uncertainty or variance in the success rate.
### Components/Axes
* **Title:** OOD Generalization: 10x10 -> 15x15 Transfer
* **X-axis:**
* **Label:** Training Examples
* **Scale:** 0, 30, 60, 90, 120, 150, 180, 210, 240
* **Y-axis:**
* **Label:** Success Rate (%)
* **Scale:** 0, 10, 20, 30, 40, 50, 60, 70
* **Legend:** Located at the bottom of the chart.
* Blue line with circle marker: 10x10 (In-Distribution)
* Orange line with circle marker: 15x15 (OOD Transfer)
### Detailed Analysis
* **10x10 (In-Distribution) - Blue Line:**
* **Trend:** The line starts at approximately 25% and increases sharply to around 52% at 30 training examples. It then plateaus around 58% until 150 training examples. It peaks at approximately 61% at 180 training examples, then decreases to approximately 54% at 240 training examples.
* **Data Points:**
* 0 Training Examples: ~25%
* 30 Training Examples: ~52%
* 60 Training Examples: ~58%
* 90 Training Examples: ~58%
* 120 Training Examples: ~57%
* 150 Training Examples: ~57%
* 180 Training Examples: ~61%
* 210 Training Examples: ~56%
* 240 Training Examples: ~54%
* **15x15 (OOD Transfer) - Orange Line:**
* **Trend:** The line starts at approximately 10% and increases to approximately 38% at 60 training examples. It then decreases to approximately 30% at 90 training examples, before peaking at approximately 50% at 120 training examples. It then decreases to approximately 35% at 150 training examples, before decreasing again to approximately 32% at 180 training examples. Finally, it increases to approximately 44% at 210 and 240 training examples.
* **Data Points:**
* 0 Training Examples: ~10%
* 30 Training Examples: ~24%
* 60 Training Examples: ~38%
* 90 Training Examples: ~30%
* 120 Training Examples: ~50%
* 150 Training Examples: ~35%
* 180 Training Examples: ~32%
* 210 Training Examples: ~44%
* 240 Training Examples: ~44%
### Key Observations
* The in-distribution model (10x10) consistently outperforms the OOD transfer model (15x15) across all training example counts.
* The in-distribution model (10x10) reaches a higher success rate and plateaus earlier than the OOD transfer model (15x15).
* The OOD transfer model (15x15) shows more fluctuation in success rate as the number of training examples increases.
* The shaded regions around the lines indicate the variance or uncertainty in the success rates. The OOD transfer model (15x15) generally has a wider shaded region, indicating higher variance.
### Interpretation
The chart demonstrates the performance difference between a model trained and tested on the same distribution (10x10) and a model trained on one distribution (10x10) but tested on a different distribution (15x15). The in-distribution model achieves a higher success rate, indicating that it generalizes better to data similar to what it was trained on. The OOD transfer model, on the other hand, struggles to generalize to the 15x15 data, resulting in a lower success rate and higher variance. This highlights the challenge of out-of-distribution generalization and the importance of training data that is representative of the data the model will encounter in real-world scenarios. The fluctuations in the OOD transfer model's performance suggest that it may be more sensitive to the specific training examples used.
</details>
Figure 9: Out-of-distribution transfer: 10×10 → 15×15. Corrections learned on 10×10 mazes transfer to larger instances, improving success from 9% to 49%. A gap remains compared to in-distribution performance (57% on 10×10), but transfer is substantial.
Table 6 summarizes the transfer results at key checkpoints.
Table 6: Out-of-distribution generalization: corrections trained on 10×10 mazes evaluated on 15×15 mazes. We report success rate (%) and compare to in-distribution 10×10 performance.
| Training Examples | 10×10 (in-dist.) | 15×15 (OOD) |
| --- | --- | --- |
| $m=0$ | 16 | 9 |
| $m=30$ | 21 | 18 |
| $m=60$ | 27 | 31 |
| $m=120$ | 57 | 49 |
B.3 Why Does Transfer Work?
The transfer is notable because 15×15 mazes contain positions (e.g., $(12,14)$) and wall configurations that never appear in 10×10 training instances. We hypothesize that corrections transfer because they encode constraint types rather than specific positions.
Consider a correction like `>>> get_applicable_actions((3, 4))` yielding `['move_north', 'move_south']`.
While this example specifies position $(3,4)$ , it implicitly teaches a general principle: when east and west are blocked (by walls or boundaries), only north and south are valid. The LLM can generalize this pattern to novel positions in larger grids.
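A hypothetical implementation of `get_applicable_actions` makes this position-independence explicit; the `size` and `walls` parameters are illustrative assumptions, not the paper's API:

```python
MOVES = {"move_north": (0, 1), "move_south": (0, -1),
         "move_east": (1, 0), "move_west": (-1, 0)}

def get_applicable_actions(pos, size, walls):
    """Bounds-and-walls check. Nothing in the rule depends on the grid
    size, so it applies unchanged at positions never seen in training."""
    x, y = pos
    applicable = []
    for action, (dx, dy) in MOVES.items():
        nxt = (x + dx, y + dy)
        if 1 <= nxt[0] <= size and 1 <= nxt[1] <= size and nxt not in walls:
            applicable.append(action)
    return applicable

# East and west blocked at (3, 4) on a 10x10 grid ...
small = get_applicable_actions((3, 4), 10, {(4, 4), (2, 4)})
# ... and the same wall pattern at an unseen position on a 15x15 grid.
large = get_applicable_actions((12, 14), 15, {(13, 14), (11, 14)})
```

Both calls return the same action set, which is the sense in which a correction at one position specifies a rule applicable everywhere.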
This interpretation is supported by the observation that transfer improves with more corrections ( $m$ ). Early corrections address common constraint patterns (boundary violations, simple wall configurations); as $m$ increases, rarer patterns are covered, and the accumulated examples provide a richer specification that generalizes more robustly.
B.4 Transfer Gap Analysis
While transfer is substantial, a gap remains between in-distribution and OOD performance (57% vs. 49% at $m{=}120$ ). We identify two contributing factors:
1. Unseen spatial configurations: Larger mazes contain junction types and corridor patterns that may not appear in smaller instances. Some constraint violations specific to these configurations are not addressed by 10×10 training.
2. Longer planning horizons: 15×15 mazes require longer plans, providing more opportunities for errors to accumulate. Even with improved per-step validity, the probability of completing an error-free trajectory decreases with plan length.
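The second factor compounds multiplicatively: under an idealized independence assumption, with per-step validity $p$ an error-free plan of length $L$ occurs with probability $p^L$. A back-of-the-envelope sketch (the per-step validity and plan lengths below are illustrative, not measured):

```python
def error_free_prob(p_step, plan_length):
    """Probability that every step of a plan is valid, assuming
    independent per-step validity p_step (an idealized model)."""
    return p_step ** plan_length

# Illustrative numbers: same per-step validity, roughly 1.5x longer plans.
p_10x10 = error_free_prob(0.97, 20)   # roughly 0.54
p_15x15 = error_free_prob(0.97, 30)   # roughly 0.40
```

Even with identical per-step behavior, the longer horizon alone accounts for a sizable share of the transfer gap.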
These findings suggest that for maximum OOD performance, practitioners should either (a) train on a mixture of problem sizes, or (b) accept a modest performance gap when deploying to larger instances than those seen during training.
B.5 Cross-Domain Transfer
We also conducted preliminary experiments on cross-domain transfer: using corrections from one domain (e.g., the 8×8 gridworld) to improve another (e.g., the 10×10 maze). Results were mixed: corrections for basic movement constraints (boundary checking) transferred, but domain-specific spatial structures (two-room layouts vs. maze corridors) did not. This suggests that L-ICL learns a combination of general procedural knowledge and domain-specific constraint instantiations, with only the former transferring across domains.
Appendix C Domain Specifications
This appendix provides detailed specifications of the experimental domains used in our evaluation. For each domain, we describe the state representation, action space, constraints, and goal conditions.
C.1 8×8 Two-Room Gridworld
State Space.
The state consists of the agent's $(x,y)$ position on an 8×8 grid. Coordinates range from $(1,1)$ at the bottom-left to $(8,8)$ at the top-right.
Environment Structure.
The grid is divided into two rooms by a vertical wall running through column 5, with a single doorway allowing passage between rooms (doorway position varies by instance). Start positions are randomly sampled from one room, and goal positions from the other, ensuring all paths must traverse the doorway.
Action Space.
Four actions: move_north $(+y)$, move_south $(-y)$, move_east $(+x)$, and move_west $(-x)$.
Constraints.
An action is valid if and only if:
1. The resulting position remains within grid bounds.
2. The movement does not cross a wall segment.
Goal Condition.
The agent's position equals the goal position.
Optimal Solution.
The shortest path between start and goal, computed via breadth-first search. Optimal paths typically require 8–12 steps.
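The optimal-solution computation can be sketched with a standard breadth-first search. The doorway position in the toy layout below is an illustrative choice (it varies by instance):

```python
from collections import deque

def shortest_path_length(start, goal, neighbors):
    """Breadth-first search; `neighbors(pos)` yields adjacent open cells."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        pos, dist = frontier.popleft()
        if pos == goal:
            return dist
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # goal unreachable

# Illustrative 8x8 two-room layout: the wall fills column 5
# except for a doorway at (5, 4); the doorway position is an assumption.
def neighbors(p):
    for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        nxt = (p[0] + dx, p[1] + dy)
        if (1 <= nxt[0] <= 8 and 1 <= nxt[1] <= 8
                and not (nxt[0] == 5 and nxt[1] != 4)):
            yield nxt

length = shortest_path_length((2, 4), (7, 4), neighbors)
```

Because all paths between rooms must pass through the doorway, BFS from the start yields the reference optimum against which plans are scored.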
<details>
<summary>graphs/domains/gridworld-8x8_grid.png Details</summary>

### Visual Description
## Heatmap: Simple Horizontal Bar
### Overview
The image is a simple heatmap consisting of an 8x8 grid. A horizontal bar, colored dark blue, spans from x=1 to x=6 at y=3. The rest of the grid is white.
### Components/Axes
* **X-axis:** Numerical scale from 1 to 8.
* **Y-axis:** Numerical scale from 1 to 8.
* **Data:** A horizontal bar at y=3, spanning from x=1 to x=6, colored dark blue.
### Detailed Analysis
The dark blue bar occupies the following grid cells:
* (1, 3)
* (2, 3)
* (3, 3)
* (4, 3)
* (5, 3)
* (6, 3)
### Key Observations
The heatmap shows a constant value (represented by the dark blue color) along the y=3 axis from x=1 to x=6.
### Interpretation
The heatmap visually represents a constant value or state across a specific range. The dark blue bar indicates a consistent condition or measurement within the defined x-axis range at the y=3 level. The absence of other colored cells suggests that the value is only present within this specific range and level.
</details>
Figure 10: Example 8×8 two-room gridworld instance. Walls are shown as filled cells.
C.2 10×10 Maze
State Space.
The state consists of the agent's $(x,y)$ position on a 10×10 grid. Coordinates range from $(1,1)$ to $(10,10)$.
Environment Structure.
Mazes are procedurally generated using a randomized depth-first search algorithm, producing a spanning tree of corridors with exactly one path between any two open cells. This ensures unique shortest paths and creates narrow corridors with dead ends that require backtracking if the agent makes suboptimal choices.
Action Space.
Four actions: move_north, move_south, move_east, move_west.
Constraints.
Identical to the 8×8 gridworld: actions must keep the agent in bounds and cannot cross walls.
Goal Condition.
The agent's position equals the goal position.
Optimal Solution.
The unique shortest path through the maze.
<details>
<summary>graphs/domains/maze_grid.png Details</summary>

### Visual Description
## Grid Plot: Pixel Pattern
### Overview
The image is a grid plot, 10x10, where some cells are filled with a dark blue color, creating a pattern. The axes are numbered from 1 to 10.
### Components/Axes
* **X-axis:** Numerical scale from 1 to 10, incrementing by 1.
* **Y-axis:** Numerical scale from 1 to 10, incrementing by 1.
* **Grid:** 10x10 grid with light gray lines.
* **Data:** Dark blue filled cells forming a pattern.
### Detailed Analysis
The grid contains a pattern of filled (dark blue) and empty (white) cells. The filled cells are located at the following coordinates:
* (2, 2), (2, 4), (2, 6), (2, 8)
* (3, 3), (3, 9)
* (4, 2), (4, 4), (4, 6)
* (5, 9)
* (6, 3), (6, 5)
* (7, 9)
* (8, 4), (8, 6)
* (9, 7), (9, 9)
### Key Observations
The pattern formed by the filled cells does not appear to have any immediate symmetry or easily discernible structure.
### Interpretation
The image represents a binary matrix or a simple pixel-based graphic. Without additional context, the meaning of the pattern is unclear. It could represent a character, a symbol, or a random distribution. The data is discrete and spatial, with each cell either "on" (filled) or "off" (empty).
</details>
Figure 11: Example 10×10 maze instance. The maze structure creates narrow corridors and dead ends, requiring longer plans than the two-room gridworld.
C.3 Sokoban-Style Gridworld
State Space.
The state consists of the agent's $(x,y)$ position on a grid that uses Sokoban-style layouts. Coordinates are 1-indexed.
Environment Structure.
We use grid layouts from standard Sokoban benchmarks but remove all pushable boxes. The layouts retain walls, open floor cells, and the spatial structure of Sokoban puzzles, including irregular room shapes and narrow passages. This domain serves as an ablation to isolate the effect of Sokoban's spatial complexity from the challenge of multi-object state tracking.
Action Space.
Four actions: move_north, move_south, move_east, move_west.
Constraints.
Actions must keep the agent within the walkable floor area and cannot cross walls.
Goal Condition.
The agent reaches a designated goal cell.
<details>
<summary>graphs/domains/gridworld_sokoban_noDZ_final_grid.png Details</summary>

### Visual Description
## Grid Diagram: Simple Maze
### Overview
The image is a grid diagram representing a simple maze. The grid is 10x10, with the x-axis and y-axis labeled from 1 to 10. Dark blue squares represent walls, and white squares represent open paths. The maze has an outer wall and an internal wall segment.
### Components/Axes
* **X-axis:** Labeled 1 to 10, representing the horizontal position on the grid.
* **Y-axis:** Labeled 1 to 10, representing the vertical position on the grid.
* **Walls:** Represented by dark blue squares.
* **Paths:** Represented by white squares.
### Detailed Analysis
* **Outer Walls:** The maze is enclosed by walls along the perimeter.
* Row 1 (y=1): Walls from x=1 to x=10.
* Row 10 (y=10): Walls from x=1 to x=10.
* Column 1 (x=1): Walls from y=2 to y=9.
* Column 10 (x=10): Walls from y=2 to y=9.
* **Internal Walls:** There is an internal wall segment.
* Column 5 (x=5): Walls from y=3 to y=6.
* Row 8 (y=8): Walls from x=6 to x=9.
### Key Observations
* The maze has a single entrance/exit point on the left side, between y=1 and y=2.
* The internal walls create a small obstacle in the upper-right quadrant of the maze.
### Interpretation
The diagram represents a simple maze layout. The dark blue squares define the boundaries and obstacles within the maze, while the white squares indicate the traversable paths. The internal walls add a minor level of complexity to the maze. The maze appears to be solvable, with a clear path from the entrance to the exit, although the exact path is not explicitly shown.
</details>
Figure 12: Example Sokoban-style gridworld instance. The layout is derived from a Sokoban puzzle but contains no pushable boxes, isolating spatial navigation from object manipulation.
C.4 Full Sokoban
State Space.
The state consists of:
- The agent's $(x,y)$ position (1-indexed).
- The box position $(x,y)$ .
Our instances use 1 box.
Environment Structure.
Standard Sokoban puzzle layouts from established benchmarks, including walls, floor cells, and designated target locations where boxes must be placed.
Action Space.
Eight actions:
- Movement: move_north, move_south, move_east, move_west: move the agent one cell in the specified direction if the destination is empty floor.
- Pushing: push_north, push_south, push_east, push_west: move the agent into a cell containing a box, pushing the box one cell further in the same direction.
Constraints.
An action is valid if and only if:
1. Movement: The destination cell is within bounds, is not a wall, and does not contain a box.
2. Pushing: The cell adjacent to the agent contains a box, and the cell beyond the box (in the push direction) is within bounds, is not a wall, and does not contain another box.
Irreversibility.
Unlike navigation domains, Sokoban contains trap states: configurations from which the goal is unreachable. Common traps include:
- Pushing a box into a corner (cannot be retrieved).
- Pushing a box against a wall such that it cannot reach any target.
Goal Condition.
The box occupies the shown target position.
<details>
<summary>graphs/domains/sokoban_final_grid.png Details</summary>

### Visual Description
## Maze Diagram: Simple Grid Maze
### Overview
The image is a diagram of a simple maze on a 10x10 grid. The maze is defined by dark blue squares representing walls, and white squares representing open paths. A green diamond is located at grid position (4,5), representing a starting point or a marker within the maze.
### Components/Axes
* **Grid:** A 10x10 grid forms the base of the maze.
* **X-axis:** Numbered 1 to 10 from left to right.
* **Y-axis:** Numbered 1 to 10 from bottom to top.
* **Walls:** Dark blue squares represent the walls of the maze.
* **Paths:** White squares represent the open paths within the maze.
* **Marker:** A green diamond is located at the grid position (4,5).
### Detailed Analysis
* **Maze Structure:** The maze has a defined outer boundary of walls. There is a gap in the top wall between columns 6 and 10. There is a vertical wall extending from row 2 to row 6 in column 5.
* **Starting Point:** The green diamond is located at the coordinates (4,5).
* **Grid Coordinates:** The grid is numbered from 1 to 10 on both axes.
### Key Observations
* The maze is relatively simple, with a clear path from the starting point (4,5) to the top-right corner.
* The vertical wall in column 5 creates a partial barrier, requiring movement to the left or right to navigate around it.
### Interpretation
The diagram represents a basic maze layout. The green diamond likely indicates a starting position or a point of interest within the maze. The maze's structure suggests a simple pathfinding challenge, potentially used for illustrating basic navigation algorithms or problem-solving strategies. The maze is bounded by walls on all sides except for the top, where there is an opening.
</details>
Figure 13: Example Sokoban instance. The agent must push the box onto the target location without creating deadlocks.
C.5 BlocksWorld
State Space.
The state consists of a configuration of $n$ uniquely labeled blocks (we use $n=5$ in our experiments). Each block is either:
- On the table, or
- On top of exactly one other block.
A block is clear if no other block is on top of it. The table has unlimited capacity.
Action Space.
Three actions, described in natural language:
1. Move block from block to block (move-b-to-b): Pick up a block that is currently sitting on top of another block and place it onto a third block. This requires that the block being moved has nothing on top of it (is clear) and that the destination block also has nothing on top of it (is clear). After the move, the block that was underneath the moved block becomes clear.
2. Move block from block to table (move-b-to-t): Pick up a block that is currently sitting on top of another block and place it on the table. This requires that the block being moved has nothing on top of it (is clear). After the move, the block that was underneath becomes clear, and the moved block is now on the table.
3. Move block from table to block (move-t-to-b): Pick up a block that is currently on the table and place it onto another block. This requires that both the block being moved and the destination block have nothing on top of them (are clear). After the move, the destination block is no longer clear.
Constraints.
The preconditions for each action are:
- move-b-to-b($b_{m}$, $b_{f}$, $b_{t}$): Block $b_{m}$ is clear, block $b_{t}$ is clear, $b_{m}$ is currently on $b_{f}$, and $b_{m} \neq b_{t}$.
- move-b-to-t($b_{m}$, $b_{f}$): Block $b_{m}$ is clear and $b_{m}$ is currently on $b_{f}$.
- move-t-to-b($b_{m}$, $b_{t}$): Block $b_{m}$ is clear, block $b_{t}$ is clear, $b_{m}$ is currently on the table, and $b_{m} \neq b_{t}$.
Goal Condition.
The block configuration matches a target specification, typically given as a set of on( $b_{1}$ , $b_{2}$ ) predicates describing which blocks must be stacked on which.
Differences from Navigation Domains.
BlocksWorld differs qualitatively from the grid-based domains:
- No spatial structure: Constraints are purely relational ("block A is on block B") rather than geometric.
- All objects are dynamic: Every block can be moved, unlike navigation where only the agent moves.
- Algorithmic solutions: We additionally provide an algorithmic sketch (the Universal Blocksworld Algorithm (Stechly et al., 2024)) to test whether L-ICL can improve adherence to prescribed planning strategies.
Appendix D Baseline Method Implementations
This appendix provides detailed specifications of all baseline methods evaluated in our experiments. All baselines operate on the same task: given a problem description with start position, goal position, walls, and (optionally) deadzones, produce an action sequence to navigate from start to goal. We organize baselines into two categories: prompt-only methods that rely solely on LLM reasoning, and oracle methods that receive feedback from a ground-truth simulator.
D.1 Prompt-Only Baselines
D.1.1 Zero-Shot Chain-of-Thought (Zero-Shot CoT)
The simplest baseline provides the model with task instructions, and asks it to reason step-by-step to produce a navigation plan.
Implementation.
The prompt includes: (1) a task description explaining gridworld navigation, valid actions, and movement constraints; (2) an ASCII representation of the problem (if applicable); (3) the query problem with start/goal coordinates; and (4) output format instructions requiring **Final Action Sequence:** action1, action2, .... We use temperature 1.0 for all experiments unless otherwise noted.
D.1.2 RAG-CoT (Retrieval-Augmented Chain-of-Thought)
This baseline extends Zero-Shot CoT with dynamic example selection based on similarity to the query problem. We retrieve the most relevant training examples within a character budget (10,000 or 20,000 characters in our experiments).
Similarity Metric.
We compute similarity based on Manhattan distance between start-goal pairs:
$$
\text{similarity}(q,c)=\frac{1}{1+|d_{q}-d_{c}|} \tag{1}
$$
where $d=|g_{x}-s_{x}|+|g_{y}-s_{y}|$ is the Manhattan distance from start $s$ to goal $g$ . This metric prefers training examples with similar navigation distances, under the assumption that problems with similar start-to-goal distances share structural similarities.
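For concreteness, Equation (1) can be sketched in Python; problems are represented as (start, goal) coordinate pairs, and the helper names are illustrative rather than taken from our codebase:

```python
def manhattan(start, goal):
    # Manhattan distance d = |g_x - s_x| + |g_y - s_y|
    return abs(goal[0] - start[0]) + abs(goal[1] - start[1])

def similarity(query, candidate):
    # similarity(q, c) = 1 / (1 + |d_q - d_c|), as in Eq. (1)
    d_q = manhattan(query[0], query[1])
    d_c = manhattan(candidate[0], candidate[1])
    return 1.0 / (1.0 + abs(d_q - d_c))
```

Two problems with identical start-to-goal distances receive the maximum similarity of 1, regardless of their absolute positions.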
Retrieval Modes.
We evaluate three retrieval strategies:
- Strict: Add examples until the budget would be exceeded (conservative).
- Generous: Add examples until the budget is just crossed (permissive).
- Fixed: Include fixed examples plus retrieved examples up to the remaining budget.
Our main experiments use the generous mode.
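The generous mode can be sketched as follows, assuming examples have already been scored with the similarity metric above (the dictionary field names are hypothetical):

```python
def retrieve_generous(examples, budget):
    """Rank examples by similarity score and add them until the
    character budget is first crossed; the crossing example is
    still included (the 'generous' behavior)."""
    ranked = sorted(examples, key=lambda ex: ex["score"], reverse=True)
    chosen, used = [], 0
    for ex in ranked:
        chosen.append(ex)
        used += len(ex["text"])
        if used >= budget:  # budget just crossed: stop after including
            break
    return chosen
```

The strict mode would instead check the budget before appending, excluding any example that would exceed it.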
D.1.3 Self-Consistency
Self-Consistency (Wang et al., 2023) generates multiple independent reasoning trajectories and selects the final answer via majority voting.
Implementation.
We sample $k=5$ independent CoT traces using temperature 1.0 for diversity. Each sample uses the same prompt with an annotation indicating the sample number (e.g., "Sample 3/5"). We parse action sequences from each sample, count votes for each unique plan (exact sequence match), and select the plan with the highest vote count. For tie-breaking, we use an additional LLM call to evaluate candidates based on their self-critique annotations.
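The voting step amounts to a frequency count over exact plan matches; a minimal sketch, where the returned tie flag stands in for escalation to the LLM tie-break call:

```python
from collections import Counter

def majority_vote(plans):
    """Select the most frequent plan (exact sequence match).
    Returns the winning plan and a flag indicating a tie that
    would require an external tie-break."""
    votes = Counter(tuple(p) for p in plans)
    ranked = votes.most_common()
    best, best_count = ranked[0]
    tie = len(ranked) > 1 and ranked[1][1] == best_count
    return list(best), tie
```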
Self-Critique.
Each sample includes a self-critique section where the model evaluates its own reasoning, providing confidence estimates and noting potential issues. This information is used only for tie-breaking.
D.1.4 Self-Refine
Self-Refine (Madaan et al., 2023) allows the model to iteratively review and improve its own solutions without external feedback.
Implementation.
The model generates an initial attempt, then receives up to $N=5$ refinement opportunities. In each refinement round, the model sees its previous response and is instructed to check for potential mistakes: boundary violations, wall collisions, goal reachability, deadzone avoidance, and path optimality. The model may either provide a corrected plan or explicitly state "**No further refinement needed.**"
Termination Conditions.
Refinement stops when: (1) the model explicitly states satisfaction, or (2) the maximum number of refinement steps is reached.
Key Distinction.
Unlike oracle baselines, Self-Refine receives no external feedback about plan validity. The model must introspect on its own reasoning, which prior work has shown to be unreliable for planning tasks (Stechly et al., 2025).
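The refinement loop can be sketched as follows; `generate` and `refine` are caller-supplied stand-ins for the underlying LLM calls, and the stop phrase matches the one quoted above:

```python
STOP_PHRASE = "No further refinement needed."

def self_refine(generate, refine, max_rounds=5):
    """Self-Refine loop with no external feedback: the model
    reviews its own output until it declares satisfaction or
    the round budget is exhausted."""
    response = generate()
    for _ in range(max_rounds):
        revised = refine(response)
        if STOP_PHRASE in revised:
            break  # model states satisfaction; keep prior response
        response = revised
    return response
```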
D.1.5 ReAct (Prompt-Only)
ReAct (Yao et al., 2023b) interleaves reasoning and action selection in a textual trace format.
Implementation.
The model alternates between Thought: steps (reasoning about the current state and next move) and Action: steps (a single movement action). All reasoning and actions are generated in a single LLM call; no external tool execution occurs. The prompt includes guidelines to keep moves consistent with the grid layout, avoid illegal steps, provide reasoning before each action, and end with an explicit final action sequence.
Trace Format.
β¬
Thought: [analyze current state and next move]
Action: move-direction
Thought: [continue reasoning]
Action: move-direction
...
Final Thought: [summarize path to goal]
**Final Action Sequence:** action1, action2, ...
D.1.6 Tree-of-Thoughts (Prompt-Only)
Tree-of-Thoughts (ToT) (Yao et al., 2023a) explores multiple reasoning paths through iterative expansion and scoring.
Implementation.
We use a prompt-only variant with breadth-first tree search. Parameters: breadth $b=5$ (nodes per level), depth $d=3$ (expansion rounds), max step actions $m=8$ (actions per candidate). At each depth level, the model generates candidate continuations as JSON, including a thought description, proposed actions, confidence score (0β100), terminal flag, and optional final plan. We keep the top- $k$ nodes by confidence (beam search) and finalize by selecting the best terminal node or completing the top non-terminal node.
Scoring.
Without oracle access, scoring uses only LLM self-assessed confidence and plan length:
$$
\text{score}=(\text{confidence},-\text{plan\_length}) \tag{2}
$$
Higher confidence is preferred; shorter plans are preferred as a tiebreaker.
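Equation (2) amounts to lexicographic comparison of tuples, which `max` performs directly in Python (the node fields shown are illustrative):

```python
def score(node):
    # Lexicographic score from Eq. (2): higher confidence first,
    # with shorter plans preferred as a tiebreaker (length negated
    # so that max() favors shorter candidates).
    return (node["confidence"], -len(node["actions"]))

def select_best(nodes):
    """Pick the highest-scoring candidate node."""
    return max(nodes, key=score)
```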
D.2 Oracle Baselines
Oracle baselines have access to a ground-truth environment simulator that provides feedback on plan validity. This represents an upper bound on what prompt-only methods could achieve with perfect self-verification.
D.2.1 ReAct (+Oracle f/b)
This two-step approach allows the model to receive targeted feedback about errors and produce a corrected plan.
Implementation.
Step 1: Generate an initial CoT plan. Step 2: If errors are detected by the oracle, provide specific feedback and request correction. Maximum 2 LLM calls per problem. We use temperature 0.3 for more deterministic outputs in the correction step.
Feedback Types.
The oracle provides two types of feedback:
1. Invalid Move: "Your plan has an ERROR at step $N$. The action 'move-X' at position $(x,y)$ is INVALID because it would move into a wall or out of bounds."
2. Incomplete Path: "Your plan is INCOMPLETE. After executing all $N$ actions, you ended at position $(x,y)$ but did not reach the goal."
Key Distinction.
ReAct (+Oracle f/b) queries the verifier at test time for each proposed plan, while L-ICL uses the oracle only during training. At inference, L-ICL requires a single forward pass with no external dependencies.
D.3 Evaluation Infrastructure
Action Parsing.
All baselines use the same action-sequence parser, which handles multiple output formats. We search for explicit patterns (e.g., **Final Action Sequence:**), fall back to lines with comma-separated actions, and as a last resort extract all move-* actions from the response. Actions are normalized to canonical form (e.g., "north" $\to$ "move-north").
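A simplified version of this fallback chain might look as follows (a sketch, not our exact parser):

```python
import re

DIRECTIONS = {"north", "south", "east", "west"}

def normalize(token):
    """Map 'north' / 'move_north' / 'move-north' to canonical 'move-north'."""
    t = token.strip().lower().replace("_", "-")
    if t in DIRECTIONS:
        t = "move-" + t
    return t

def parse_actions(response):
    """Prefer the explicit marker; otherwise fall back to extracting
    all move-* tokens anywhere in the response."""
    m = re.search(r"\*\*Final Action Sequence:\*\*\s*(.+)", response)
    if m:
        return [normalize(a) for a in m.group(1).split(",") if a.strip()]
    return [normalize(a) for a in re.findall(r"move[-_]\w+", response, re.I)]
```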
Plan Validation.
Plan validity is determined by simulating the action sequence:
1. Parse the problem to extract start/goal positions and obstacles.
2. Execute actions sequentially, checking bounds and wall collisions.
3. Verify the goal is reached.
4. Compare plan length to BFS optimal.
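These steps can be sketched as follows, assuming 1-indexed coordinates where move-north increments $y$ (a simplification of our validator):

```python
# Cardinal move deltas, assuming north increments y and east increments x.
DELTAS = {"move-north": (0, 1), "move-south": (0, -1),
          "move-east": (1, 0), "move-west": (-1, 0)}

def validate_plan(start, goal, walls, size, actions, optimal_len=None):
    """Simulate the plan step by step, checking bounds and wall
    collisions, then test goal attainment and optimality.
    Returns (valid, optimal)."""
    x, y = start
    for a in actions:
        dx, dy = DELTAS[a]
        nx, ny = x + dx, y + dy
        if not (1 <= nx <= size and 1 <= ny <= size) or (nx, ny) in walls:
            return False, False  # out of bounds or wall collision
        x, y = nx, ny
    valid = (x, y) == goal
    optimal = valid and optimal_len is not None and len(actions) == optimal_len
    return valid, optimal
```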
D.4 Summary of Baseline Characteristics
Table 7 summarizes the key characteristics distinguishing each baseline.
Table 7: Summary of baseline characteristics. βExamplesβ indicates whether the method uses in-context examples; βLLM Callsβ indicates calls per problem; βToolsβ indicates whether external tools are used.
(The body of Table 7 was lost in extraction; the recoverable fragments of its LLM-calls column are $1$, $k$, $N$, and $O(b\cdot d)$.)
Appendix E Implementation Details
This appendix provides detailed implementation specifications for L-ICL, including the system architecture, correction generation process, and evaluation pipeline. We provide sufficient detail for reproducing our experiments.
E.1 System Architecture
Our implementation consists of four main components that work together to execute the L-ICL training loop:
1. Partial Program Generator: Constructs PTP-style prompts with subroutine documentation and accumulated corrections formatted as input-output examples.
2. LLM Interface: Sends prompts to language models and parses structured traces from responses.
3. Evaluation Engine: Validates generated plans using external tools and step-by-step simulation, identifying the first point of failure.
4. Correction Accumulator: Extracts corrections from evaluation mismatches and injects them into subsequent prompts.
Figure 14 illustrates the data flow between these components during L-ICL training.
Figure 14: System architecture for L-ICL training. The loop iterates over training problems, accumulating corrections that progressively refine the prompt.
E.2 Subroutine Specifications by Domain
Each domain defines a set of planning primitives that the LLM must βimplementβ through trace generation. We describe the subroutines for each domain, including their signatures, semantics, and the constraints they encode.
E.2.1 Subroutines
We use the following subroutines in our experiments:
State Extraction.
- extract_initial_state(problem) $\to$ State: Parses the problem description to extract the agent's starting position and environment structure.
- extract_goal(problem) $\to$ State: Parses the goal specification from the problem.
Action Generation.
- get_applicable_actions(state, goal) $\to$ Set[Action]: Returns the set of actions that can be legally executed from the current state. For navigation, this filters the four cardinal directions to exclude moves that would exit the grid or collide with walls.
- get_optimal_actions(state, goal) $\to$ Set[Action]: Returns the subset of applicable actions that lie on an optimal path to the goal. This is computed using shortest-path algorithms or a classical planner. For BlocksWorld, we replace this with get_recommended_actions(state, goal) $\to$ Set[Action], which returns the set of actions prescribed by the Universal Blocksworld Algorithm.
State Transition and Goal Test.
- apply_action(state, action) $\to$ State: Returns the state resulting from executing the action. For navigation, this updates the agent's coordinates.
- at_goal(state, goal) $\to$ bool: Returns whether the current state satisfies the goal condition.
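As an illustration, a proxy implementation of get_applicable_actions for gridworld navigation might look as follows (assuming 1-indexed coordinates where north increments $y$ and east increments $x$; the oracle versions are backed by planning tools, per Section E.5):

```python
# Cardinal moves, assuming north increments y and east increments x.
DELTAS = {"move_north": (0, 1), "move_south": (0, -1),
          "move_east": (1, 0), "move_west": (-1, 0)}

def get_applicable_actions(state, walls, size=10):
    """Proxy gridworld implementation: keep only the cardinal moves
    that stay in bounds and do not enter a wall cell."""
    x, y = state
    applicable = set()
    for action, (dx, dy) in DELTAS.items():
        nx, ny = x + dx, y + dy
        if 1 <= nx <= size and 1 <= ny <= size and (nx, ny) not in walls:
            applicable.add(action)
    return applicable
```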
E.3 Correction Format and Integration
L-ICL corrections are formatted as doctest-style input-output examples that are injected into subroutine documentation. This format is well-represented in LLM training data, facilitating generalization.
Correction Structure.
Each correction consists of three components:
1. Function identifier: Which subroutine the correction applies to.
1. Input: The arguments that triggered the mismatch.
1. Correct output: The oracle-provided ground truth.
Example Correction.
Consider an LLM that incorrectly proposes moving east from position $(3,4)$ when a wall blocks that direction. The evaluation detects that move_east is not in the set of applicable actions. L-ICL generates a correction:
β¬
>>> get_applicable_actions(state=(3,4), walls={(3,5)})
{'move_north', 'move_south', 'move_west'}
This correction is inserted into the documentation for get_applicable_actions, providing an explicit example that eastward movement from $(3,4)$ is invalid.
Correction Accumulation.
Corrections accumulate across training problems. When a new correction duplicates an existing one (same function and inputs), we retain only one copy. This prevents prompt bloat while ensuring coverage of diverse failure cases. The accumulated corrections are batch-inserted into the prompt template before each evaluation iteration.
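The deduplication rule can be sketched as a keyed dictionary insert (a minimal illustration):

```python
def add_correction(corrections, function, inputs, output):
    """Accumulate a correction, keeping only one copy per
    (function, inputs) key to prevent prompt bloat."""
    key = (function, repr(inputs))
    if key not in corrections:
        corrections[key] = (function, inputs, output)
    return corrections
```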
Additional details are in Section F.3.
E.4 Evaluation Pipeline
Plan evaluation proceeds through multiple validation stages, each providing increasingly detailed feedback.
E.4.1 External Plan Validation
We use the VAL validator (Howey et al., 2004), the standard tool for PDDL plan validation, to verify plan correctness. Given a domain specification, problem instance, and proposed action sequence, VAL checks:
- Each actionβs preconditions are satisfied when it is executed.
- The final state after executing all actions satisfies the goal.
VAL provides a binary validity judgment and, for invalid plans, identifies the first action whose preconditions fail.
E.4.2 Optimality Verification
To assess plan quality, we compute optimal solutions using the Fast Downward planning system (Helmert, 2006). Fast Downward is a state-of-the-art classical planner that guarantees optimal solutions when configured with admissible heuristics. We use the A* search algorithm with the LM-cut heuristic.
For each problem, we:
1. Run Fast Downward to obtain the optimal plan length.
2. Compare the LLM's plan length against this baseline.
3. Mark plans as optimal if lengths match and the plan is valid.
E.4.3 Step-by-Step Simulation
Beyond end-to-end validation, we simulate plan execution step-by-step using proxy implementations of each subroutine. This enables:
1. First-failure identification: We identify the exact step where the LLM's trace first diverges from ground truth, enabling localized correction generation.
2. Fine-grained error categorization: We distinguish between:
- Applicability errors: Proposing an action not in the applicable set.
- Optimality errors: Proposing an applicable but suboptimal action.
3. Correction generation: For each error type, we generate the corresponding correction by querying the oracle for the correct output.
Algorithm 2 provides pseudocode for the step-by-step evaluation procedure.
Algorithm 2 Step-by-Step Plan Evaluation
Require: Domain $\mathcal{D}$, problem $P$, predicted actions $[a_{1},\ldots,a_{n}]$, oracle $\mathcal{O}$
Ensure: Evaluation result with corrections
$s \leftarrow \mathcal{O}.\text{extract\_initial\_state}(P)$
$g \leftarrow \mathcal{O}.\text{extract\_goal}(P)$
corrections $\leftarrow [\,]$
first_invalid $\leftarrow$ null
first_suboptimal $\leftarrow$ null
for $i=1$ to $n$ do
    $A_{\text{applicable}} \leftarrow \mathcal{O}.\text{get\_applicable\_actions}(s,g)$
    $A_{\text{optimal}} \leftarrow \mathcal{O}.\text{get\_optimal\_actions}(s,g)$
    if $a_{i} \notin A_{\text{applicable}}$ and first_invalid is null then
        first_invalid $\leftarrow i$
        corrections.append$((\text{``get\_applicable\_actions''},(s,g),A_{\text{applicable}}))$
        break $\triangleright$ Stop at first invalid action
    else if $a_{i} \notin A_{\text{optimal}}$ and first_suboptimal is null then
        first_suboptimal $\leftarrow i$
        corrections.append$((\text{``get\_optimal\_actions''},(s,g),A_{\text{optimal}}))$
    end if
    $s \leftarrow \mathcal{O}.\text{apply\_action}(s,a_{i})$
end for
goal_reached $\leftarrow \mathcal{O}.\text{at\_goal}(s,g)$
return {first_invalid, first_suboptimal, corrections, goal_reached}
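Algorithm 2 translates directly into Python. The sketch below assumes an oracle object exposing the subroutines of Section E.2 with matching method names:

```python
def evaluate_plan(oracle, problem, actions):
    """Step-by-step plan evaluation (Algorithm 2): stop at the first
    inapplicable action, record the first suboptimal one, and collect
    the corresponding corrections."""
    s = oracle.extract_initial_state(problem)
    g = oracle.extract_goal(problem)
    corrections, first_invalid, first_suboptimal = [], None, None
    for i, a in enumerate(actions, start=1):
        applicable = oracle.get_applicable_actions(s, g)
        optimal = oracle.get_optimal_actions(s, g)
        if a not in applicable and first_invalid is None:
            first_invalid = i
            corrections.append(("get_applicable_actions", (s, g), applicable))
            break  # stop at first invalid action
        elif a not in optimal and first_suboptimal is None:
            first_suboptimal = i
            corrections.append(("get_optimal_actions", (s, g), optimal))
        s = oracle.apply_action(s, a)
    return {"first_invalid": first_invalid,
            "first_suboptimal": first_suboptimal,
            "corrections": corrections,
            "goal_reached": oracle.at_goal(s, g)}
```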
E.5 Oracle Implementation
The oracle provides ground-truth outputs for each subroutine. We implement oracles using a combination of external planning tools and logic-based simulation.
Planning Tools.
We use Fast Downward (Helmert, 2006) for optimal plan computation and action applicability. For domains requiring multiple optimal plans (to compute optimal action sets), we additionally use the K* planner (Katz and Lee, 2023), which enumerates the top- $k$ shortest plans.
State Simulation.
Action effects are computed using the Tarski planning library (Francés et al., 2018), which provides PDDL parsing and grounded action simulation. Given a PDDL domain and problem, Tarski computes:
- The set of ground actions applicable in any state.
- The successor state resulting from applying an action.
- Whether a state satisfies a goal formula.
Optimality Computation.
Computing optimal actions (those on some optimal path) requires enumerating multiple optimal plans. We use K* to generate all plans of optimal length, then take the union of first actions across these plans. For efficiency, we cache optimal action sets for frequently-queried states.
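The union-of-first-actions computation can be sketched as follows, assuming K* has already enumerated candidate plans:

```python
def optimal_action_set(plans):
    """Given plans enumerated by a top-k planner, keep those of
    minimal length and return the union of their first actions:
    the actions that lie on some optimal plan."""
    best = min(len(p) for p in plans)
    return {p[0] for p in plans if len(p) == best}
```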
E.6 Prompt Construction
The final prompt sent to the LLM consists of four components assembled in sequence:
1. Task description: Natural language explanation of the planning domain, valid actions, and objective.
2. Subroutine documentation: For each subroutine, we include:
- Function signature with typed arguments and return type.
- Natural language description of the subroutine's purpose.
- Accumulated L-ICL corrections as doctest examples.
3. Example traces: A small number ($k=2$ to $3$) of complete reasoning traces showing how subroutines are invoked to solve example problems.
4. Query problem: The problem instance to solve, formatted consistently with the examples, followed by instructions to produce a trace.
State Representation.
For grid-based domains, we evaluate two state representations:
- Textual: Positions as coordinates (e.g., βagent at (3,4)β) with walls listed explicitly.
- ASCII: Visual grid representation where walls are marked with # characters and open cells are spaces.
Our ablation (Section 4.3) shows that L-ICL achieves comparable peak performance with either representation, though ASCII grids accelerate early learning.
E.7 Experimental Infrastructure
Hardware.
Experiments were conducted on a Linux workstation with 32GB RAM. External planning tools (Fast Downward, VAL) were run locally. LLM inference was performed via API calls.
LLM Services.
We evaluated models through their respective APIs:
- DeepSeek V3 and V3.1 via the DeepSeek API.
- Claude Haiku 4.5 and Claude Sonnet 4.5 via the Anthropic API.
Hyperparameters.
Unless otherwise specified:
- Temperature: 1.0 (following DeepSeek recommendations).
- Maximum generation length: 32000 tokens.
- Training examples per iteration: 1 (single problem per L-ICL update).
- Total training problems: up to 240.
- Thinking tokens for Sonnet 4.5: 10k
- Thinking tokens for Haiku 4.5: 5k
Timeout Handling.
Fast Downward was given a 60-second timeout per problem. Problems exceeding this limit were marked as having unknown optimal cost and excluded from optimality statistics (but included in validity statistics if the validator succeeded).
Appendix F Representative Prompts
This appendix provides representative prompts used in all experiments. We organize prompts into two categories: (1) L-ICL prompts based on Program Trace Prompting (PTP), and (2) baseline method prompts used for comparison approaches. All prompts use template variables (denoted with curly braces, e.g., {partial_program}) that are replaced with problem-specific content at runtime.
F.1 L-ICL Prompts (Program Trace Prompting)
L-ICL prompts follow the Program Trace Prompting (PTP) format, where the LLM is asked to predict the output of a partially specified program. The key insight is that by withholding subroutine implementations (replacing them with "..." markers), the LLM must infer correct behavior from documentation and accumulated examples.
F.1.1 Base L-ICL Prompt (No Domain Visualization)
This is the minimal L-ICL prompt used for gridworld and Sokoban navigation tasks when no ASCII grid visualization is provided. The LLM must infer spatial constraints purely from accumulated L-ICL corrections.
β¬
Consider the program fragment below. This program fragment is
incomplete, with key parts of the implementation hidden, by
replacing them with "..." markers.
PROGRAM:
βββ python
{partial_program}
βββ
QUESTION: Predict what the output of the program above will be,
given the input shown below.
Respond with the FULL program output, and ONLY the expected
program output: you will be PENALIZED if you introduce any
additional explanatory text.
βββ
>>> {task_name}({input_str})
βββ
Template Variables.
- {partial_program}: The PTP-style program with subroutine signatures, documentation, doctest examples (including L-ICL corrections), and "..." placeholders for implementations.
- {task_name}: The function name to invoke (e.g., solve_gridworld).
- {input_str}: The problem specification as a string (e.g., start position, goal position, wall locations).
F.1.2 L-ICL Prompt with ASCII Grid Visualization
When ASCII grid visualization is enabled, the prompt includes a visual representation of the environment. This provides spatial scaffolding that accelerates early learning, though L-ICL achieves comparable peak performance without it.
β¬
Consider the program fragment below. This program fragment is
incomplete, with key parts of the implementation hidden, by
replacing them with "..." markers.
IMPORTANT: You are an agent navigating a {grid_size} gridworld.
The grid has {num_walls} walls that block movement.
** Grid Layout:**
βββ
1 2 3 4 5 6 7 8 9 10
+---+---+---+---+---+---+---+---+---+---+
10 | . | . | . | . | . | . | . | . | . | . |
+---+---+---+---+---+---+---+---+---+---+
9 | . | . | # | # | # | # | # | # | # | . |
+---+---+---+---+---+---+---+---+---+---+
8 | . | # | . | # | . | # | . | . | . | . |
+---+---+---+---+---+---+---+---+---+---+
7 | . | # | . | . | . | # | . | # | # | . |
+---+---+---+---+---+---+---+---+---+---+
6 | . | # | . | # | . | . | . | # | . | . |
+---+---+---+---+---+---+---+---+---+---+
5 | . | . | . | # | . | # | . | # | . | . |
+---+---+---+---+---+---+---+---+---+---+
4 | . | # | . | # | . | # | . | . | # | . |
+---+---+---+---+---+---+---+---+---+---+
3 | . | # | . | . | . | # | . | # | . | . |
+---+---+---+---+---+---+---+---+---+---+
2 | . | # | . | # | . | . | . | # | . | . |
+---+---+---+---+---+---+---+---+---+---+
1 | . | . | . | . | . | . | . | . | . | . |
+---+---+---+---+---+---+---+---+---+---+
βββ
PROGRAM:
βββ python
{partial_program}
βββ
QUESTION: Predict what the output of the program above will be,
given the input shown below.
Respond with the FULL program output, and ONLY the expected
program output: you will be PENALIZED if you introduce any
additional explanatory text.
βββ
>>> {task_name}({input_str})
βββ
Grid Symbols.
- . : Open cell (traversable)
- # : Wall (impassable)
- $ : Box (in Sokoban)
F.1.3 L-ICL BlocksWorld Prompt with UBW Algorithm
For BlocksWorld, we additionally provide algorithmic guidance based on the Universal Blocks World (UBW) algorithm (Stechly et al., 2024). This tests whether L-ICL can improve adherence to prescribed planning strategies beyond simple constraint satisfaction.
β¬
Consider the program fragment below. This program fragment
implements the Universal Blocks World (UBW) algorithm, which is
a systematic two-phase approach for solving blocks world planning
problems. The implementation is incomplete, with key parts
replaced by "..." markers.
UNIVERSAL BLOCKS WORLD ALGORITHM OVERVIEW:
The UBW algorithm works in two distinct phases to efficiently
solve any blocks world configuration:
PHASE 1: STRATEGIC UNSTACKING
- Unstack ALL blocks that are stacked on top of others
- Work from top to bottom, unstacking clear blocks first
- Move incorrectly positioned blocks to the table
PHASE 2: SYSTEMATIC REASSEMBLY
- Build goal configurations from bottom up
- Process blocks in dependency order (place supporting blocks
before supported blocks)
- Only place a block when its target is ready (clear and in
final position)
- Ensure structural integrity throughout construction
KEY HEURISTICS FOR IMPLEMENTATION:
1. STATE ANALYSIS:
- Parse predicates into on(), on-table(), and clear()
relationships
- Build dependency graphs: what should be on what
- Identify bottom blocks (blocks that should be on table
in goal)
2. UNSTACKING STRATEGY:
- Check each on(X, Y) relationship in current state
- If (X, Y) is NOT in goal relationships, consider
unstacking X
- Only unstack if X is clear (no blocks on top)
- Priority: unstack blocks that block other necessary moves
3. REASSEMBLY STRATEGY:
- For each goal on(X, Y), check if X can be placed on Y
- X must be: clear AND on-table
- Y must be: clear AND in its final position
- Y is in final position if: Y should be on table OR Y is
already correctly placed on its target
4. ACTION SELECTION LOGIC:
βββ
For unstacking: if on(X, Y) in current state AND clear(X):
return move-b-to-t(X, Y)
For assembly: if goal requires on(X, Y) AND
can_place_block(X, Y):
return move-t-to-b(X, Y)
βββ
5. CORRECTNESS VERIFICATION:
- Always verify preconditions before suggesting actions
- Check that actions don't break existing correct
configurations
- Ensure goal-directed progress in every move during
assembly phase
DETAILED TRACE GUIDANCE:
When implementing the UBW algorithm, provide step-by-step
reasoning inside reasoning() calls if required, which is your
scratchpad.
1. State the current configuration clearly
2. Identify which phase you're in (unstacking vs assembly)
3. Explain WHY each action is chosen based on UBW principles
4. Show how the action advances toward the goal
5. Verify preconditions are satisfied
6. Update state representation after each action
PROGRAM:
βββ python
{partial_program}
βββ
QUESTION: Predict what the output of the program above will be,
given the input shown below.
IMPLEMENTATION REQUIREMENTS:
- Follow the UBW algorithm phases strictly
- Provide detailed reasoning for each action selection
- Show state analysis and dependency tracking
- Explain how each move contributes to the overall strategy
- Demonstrate understanding of when to unstack vs when to build
- Verify that all actions follow UBW heuristics
Respond with the FULL program output, including detailed
algorithmic traces that demonstrate proper UBW implementation.
Your trace should show:
- Clear identification of current phase (unstacking / assembly)
- Specific reasoning for each action choice
- State updates and goal progress tracking
- Verification that actions follow UBW principles
Under no circumstance must you skip steps in the program output.
You CAN decide to go back and choose different actions if you
feel that you have made a mistake, but the FINAL PLAN must show
the COMPLETE CORRECT PATH ONLY.
βββ
>>> {task_name}({input_str})
βββ
F.1.4 Example Partial Program Structure
The {partial_program} template variable is replaced with a PTP-style program containing subroutine documentation and accumulated L-ICL corrections. Below is a representative example for gridworld navigation:
```python
import collections
from typing import Dict, List, Set, Tuple, Union, Optional, Any, FrozenSet

PlanningState = Any
Action = Any

@traced
def extract_problem(input_str: str) -> str:
    """Extract a standardized problem description from input.
    """
    ...

@traced
def extract_initial_state(problem_str: str) -> PlanningState:
    """Extract the initial state from a problem description.
    """
    ...

@traced
def extract_goal(problem_str: str) -> PlanningState:
    """Extract the goal from a problem description.
    """
    ...

@traced
def at_goal(state: PlanningState, goal: PlanningState) -> bool:
    """Check if current state satisfies goal conditions.
    """
    ...

@traced
def get_applicable_actions(state: PlanningState, goal: PlanningState) -> Set[Action]:
    """Get all applicable actions in the current state.
    """
    ...

@traced
def get_optimal_actions(state: PlanningState, applicable_actions: List[Action],
                        goal: PlanningState) -> Set[Action]:
    """Get actions that are part of an optimal plan.
    """
    ...

@traced
def apply_action(state: PlanningState, action: Action, goal: PlanningState) -> PlanningState:
    """Apply an action to a state, returning the resulting state.
    """
    ...

def pddl_grid(input_str: str):
    """Solve a planning problem described in input_str.

    This function processes a planning problem description by:
    1. Extracting the initial state and goal
    2. Iteratively applying actions until the goal is reached
    3. Returning the sequence of actions as a plan
    >>> pddl_grid('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')
    Calling extract_problem('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')...
    ... extract_problem returned 'gridworld-10x10'
    Calling extract_initial_state('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')...
    ... extract_initial_state returned (9, 5)
    Calling extract_goal('(define (problem gw-task-351)\n (:domain gridworld-10x10)\n (:init (at c9-5))\n (:goal (at c5-10))\n)\n')...
    ... extract_goal returned (5, 10)
    Calling at_goal((9, 5), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((9, 5), (5, 10))...
    ... get_applicable_actions returned ['move-north', 'move-east']
    Calling get_optimal_actions((9, 5), ['move-north', 'move-east'], (5, 10))...
    ... get_optimal_actions returned ['move-north', 'move-east']
    Calling apply_action((9, 5), 'move-north', (5, 10))...
    ... apply_action returned (9, 6)
    Calling at_goal((9, 6), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((9, 6), (5, 10))...
    ... get_applicable_actions returned ['move-south', 'move-east']
    Calling get_optimal_actions((9, 6), ['move-south', 'move-east'], (5, 10))...
    ... get_optimal_actions returned ['move-east']
    Calling apply_action((9, 6), 'move-east', (5, 10))...
    ... apply_action returned (10, 6)
    Calling at_goal((10, 6), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((10, 6), (5, 10))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
    Calling get_optimal_actions((10, 6), ['move-north', 'move-south', 'move-west'], (5, 10))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((10, 6), 'move-north', (5, 10))...
    ... apply_action returned (10, 7)
    Calling at_goal((10, 7), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((10, 7), (5, 10))...
    ... get_applicable_actions returned ['move-north', 'move-south']
    Calling get_optimal_actions((10, 7), ['move-north', 'move-south'], (5, 10))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((10, 7), 'move-north', (5, 10))...
    ... apply_action returned (10, 8)
    Calling at_goal((10, 8), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((10, 8), (5, 10))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
    Calling get_optimal_actions((10, 8), ['move-north', 'move-south', 'move-west'], (5, 10))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((10, 8), 'move-north', (5, 10))...
    ... apply_action returned (10, 9)
    Calling at_goal((10, 9), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((10, 9), (5, 10))...
    ... get_applicable_actions returned ['move-north', 'move-south']
    Calling get_optimal_actions((10, 9), ['move-north', 'move-south'], (5, 10))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((10, 9), 'move-north', (5, 10))...
    ... apply_action returned (10, 10)
    Calling at_goal((10, 10), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((10, 10), (5, 10))...
    ... get_applicable_actions returned ['move-south', 'move-west']
    Calling get_optimal_actions((10, 10), ['move-south', 'move-west'], (5, 10))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((10, 10), 'move-west', (5, 10))...
    ... apply_action returned (9, 10)
    Calling at_goal((9, 10), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((9, 10), (5, 10))...
    ... get_applicable_actions returned ['move-east', 'move-west']
    Calling get_optimal_actions((9, 10), ['move-east', 'move-west'], (5, 10))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((9, 10), 'move-west', (5, 10))...
    ... apply_action returned (8, 10)
    Calling at_goal((8, 10), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((8, 10), (5, 10))...
    ... get_applicable_actions returned ['move-east', 'move-west']
    Calling get_optimal_actions((8, 10), ['move-east', 'move-west'], (5, 10))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((8, 10), 'move-west', (5, 10))...
    ... apply_action returned (7, 10)
    Calling at_goal((7, 10), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 10), (5, 10))...
    ... get_applicable_actions returned ['move-east', 'move-west']
    Calling get_optimal_actions((7, 10), ['move-east', 'move-west'], (5, 10))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((7, 10), 'move-west', (5, 10))...
    ... apply_action returned (6, 10)
    Calling at_goal((6, 10), (5, 10))...
    ... at_goal returned False
    Calling get_applicable_actions((6, 10), (5, 10))...
    ... get_applicable_actions returned ['move-east', 'move-west']
    Calling get_optimal_actions((6, 10), ['move-east', 'move-west'], (5, 10))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((6, 10), 'move-west', (5, 10))...
    ... apply_action returned (5, 10)
    Calling at_goal((5, 10), (5, 10))...
    ... at_goal returned True
    Final answer: move-north move-east move-north move-north move-north move-north move-west move-west move-west move-west move-west
    ['move-north', 'move-east', 'move-north', 'move-north', 'move-north', 'move-north', 'move-west', 'move-west', 'move-west', 'move-west', 'move-west']
    >>> pddl_grid('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')
    Calling extract_problem('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')...
    ... extract_problem returned 'gridworld-10x10'
    Calling extract_initial_state('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')...
    ... extract_initial_state returned (9, 3)
    Calling extract_goal('(define (problem gw-task-352)\n (:domain gridworld-10x10)\n (:init (at c9-3))\n (:goal (at c7-7))\n)\n')...
    ... extract_goal returned (7, 7)
    Calling at_goal((9, 3), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((9, 3), (7, 7))...
    ... get_applicable_actions returned ['move-south', 'move-east']
    Calling get_optimal_actions((9, 3), ['move-south', 'move-east'], (7, 7))...
    ... get_optimal_actions returned ['move-south', 'move-east']
    Calling apply_action((9, 3), 'move-south', (7, 7))...
    ... apply_action returned (9, 2)
    Calling at_goal((9, 2), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((9, 2), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
    Calling get_optimal_actions((9, 2), ['move-north', 'move-south', 'move-east'], (7, 7))...
    ... get_optimal_actions returned ['move-south']
    Calling apply_action((9, 2), 'move-south', (7, 7))...
    ... apply_action returned (9, 1)
    Calling at_goal((9, 1), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((9, 1), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-east', 'move-west']
    Calling get_optimal_actions((9, 1), ['move-north', 'move-east', 'move-west'], (7, 7))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((9, 1), 'move-west', (7, 7))...
    ... apply_action returned (8, 1)
    Calling at_goal((8, 1), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((8, 1), (7, 7))...
    ... get_applicable_actions returned ['move-east', 'move-west']
    Calling get_optimal_actions((8, 1), ['move-east', 'move-west'], (7, 7))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((8, 1), 'move-west', (7, 7))...
    ... apply_action returned (7, 1)
    Calling at_goal((7, 1), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 1), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-east', 'move-west']
    Calling get_optimal_actions((7, 1), ['move-north', 'move-east', 'move-west'], (7, 7))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((7, 1), 'move-north', (7, 7))...
    ... apply_action returned (7, 2)
    Calling at_goal((7, 2), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 2), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
    Calling get_optimal_actions((7, 2), ['move-north', 'move-south', 'move-west'], (7, 7))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((7, 2), 'move-north', (7, 7))...
    ... apply_action returned (7, 3)
    Calling at_goal((7, 3), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 3), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-south']
    Calling get_optimal_actions((7, 3), ['move-north', 'move-south'], (7, 7))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((7, 3), 'move-north', (7, 7))...
    ... apply_action returned (7, 4)
    Calling at_goal((7, 4), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 4), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
    Calling get_optimal_actions((7, 4), ['move-north', 'move-south', 'move-east'], (7, 7))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((7, 4), 'move-north', (7, 7))...
    ... apply_action returned (7, 5)
    Calling at_goal((7, 5), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 5), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-south']
    Calling get_optimal_actions((7, 5), ['move-north', 'move-south'], (7, 7))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((7, 5), 'move-north', (7, 7))...
    ... apply_action returned (7, 6)
    Calling at_goal((7, 6), (7, 7))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 6), (7, 7))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
    Calling get_optimal_actions((7, 6), ['move-north', 'move-south', 'move-west'], (7, 7))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((7, 6), 'move-north', (7, 7))...
    ... apply_action returned (7, 7)
    Calling at_goal((7, 7), (7, 7))...
    ... at_goal returned True
    Final answer: move-south move-south move-west move-west move-north move-north move-north move-north move-north move-north
    ['move-south', 'move-south', 'move-west', 'move-west', 'move-north', 'move-north', 'move-north', 'move-north', 'move-north', 'move-north']
    >>> pddl_grid('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')
    Calling extract_problem('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')...
    ... extract_problem returned 'gridworld-10x10'
    Calling extract_initial_state('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')...
    ... extract_initial_state returned (7, 2)
    Calling extract_goal('(define (problem gw-task-353)\n (:domain gridworld-10x10)\n (:init (at c7-2))\n (:goal (at c2-5))\n)\n')...
    ... extract_goal returned (2, 5)
    Calling at_goal((7, 2), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((7, 2), (2, 5))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
    Calling get_optimal_actions((7, 2), ['move-north', 'move-south', 'move-west'], (2, 5))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((7, 2), 'move-west', (2, 5))...
    ... apply_action returned (6, 2)
    Calling at_goal((6, 2), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((6, 2), (2, 5))...
    ... get_applicable_actions returned ['move-south', 'move-east', 'move-west']
    Calling get_optimal_actions((6, 2), ['move-south', 'move-east', 'move-west'], (2, 5))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((6, 2), 'move-west', (2, 5))...
    ... apply_action returned (5, 2)
    Calling at_goal((5, 2), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((5, 2), (2, 5))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
    Calling get_optimal_actions((5, 2), ['move-north', 'move-south', 'move-east'], (2, 5))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((5, 2), 'move-north', (2, 5))...
    ... apply_action returned (5, 3)
    Calling at_goal((5, 3), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((5, 3), (2, 5))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
    Calling get_optimal_actions((5, 3), ['move-north', 'move-south', 'move-west'], (2, 5))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((5, 3), 'move-west', (2, 5))...
    ... apply_action returned (4, 3)
    Calling at_goal((4, 3), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((4, 3), (2, 5))...
    ... get_applicable_actions returned ['move-east', 'move-west']
    Calling get_optimal_actions((4, 3), ['move-east', 'move-west'], (2, 5))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((4, 3), 'move-west', (2, 5))...
    ... apply_action returned (3, 3)
    Calling at_goal((3, 3), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((3, 3), (2, 5))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-east']
    Calling get_optimal_actions((3, 3), ['move-north', 'move-south', 'move-east'], (2, 5))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((3, 3), 'move-north', (2, 5))...
    ... apply_action returned (3, 4)
    Calling at_goal((3, 4), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((3, 4), (2, 5))...
    ... get_applicable_actions returned ['move-north', 'move-south']
    Calling get_optimal_actions((3, 4), ['move-north', 'move-south'], (2, 5))...
    ... get_optimal_actions returned ['move-north']
    Calling apply_action((3, 4), 'move-north', (2, 5))...
    ... apply_action returned (3, 5)
    Calling at_goal((3, 5), (2, 5))...
    ... at_goal returned False
    Calling get_applicable_actions((3, 5), (2, 5))...
    ... get_applicable_actions returned ['move-north', 'move-south', 'move-west']
    Calling get_optimal_actions((3, 5), ['move-north', 'move-south', 'move-west'], (2, 5))...
    ... get_optimal_actions returned ['move-west']
    Calling apply_action((3, 5), 'move-west', (2, 5))...
    ... apply_action returned (2, 5)
    Calling at_goal((2, 5), (2, 5))...
    ... at_goal returned True
    Final answer: move-west move-west move-north move-west move-west move-north move-north move-west
    ['move-west', 'move-west', 'move-north', 'move-west', 'move-west', 'move-north', 'move-north', 'move-west']
"""
...
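The `@traced` decorator applied to the subroutines above is not defined in the listing. A minimal sketch that produces the `Calling .../ ... returned ...` trace lines (the name and exact formatting are assumptions, not the paper's implementation) might look like:

```python
import functools

def traced(fn):
    """Hypothetical decorator printing doctest-style call traces."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        arg_str = ", ".join(repr(a) for a in args)
        print(f"Calling {fn.__name__}({arg_str})...")
        result = fn(*args, **kwargs)
        print(f"... {fn.__name__} returned {result!r}")
        return result
    return wrapper

@traced
def at_goal(state, goal):
    """Check whether the current state equals the goal state."""
    return state == goal
```

Calling `at_goal((9, 5), (5, 10))` would then print the two trace lines and return `False`, matching the format of the traces above.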
F.2 Baseline Method Prompts
The following prompts are used for baseline comparison methods. All baselines receive the same problem information but use different reasoning frameworks.
F.2.1 Zero-Shot CoT / RAG-CoT Prompt
The base prompt used for Zero-Shot Chain-of-Thought and RAG-CoT baselines. For RAG-CoT, dynamically retrieved examples are inserted in the {examples} section.
```
You are an expert at navigating gridworld environments. You will
solve navigation problems where an agent must find the optimal
path from a start position to a goal position while avoiding
walls and obstacles.

IMPORTANT:
You are an agent navigating a {grid_size} gridworld.
The grid has {num_walls} walls that block movement.

**Grid Layout:**
{ascii_grid}

# Task Description
In each problem, you are given:
- A gridworld of specific dimensions
- A start position (row, column)
- A goal position (row, column)
- Wall locations that block movement

Your task is to find the shortest path from start to goal using
these actions:
- **move-north**: Move one cell north (increase row by 1)
- **move-south**: Move one cell south (decrease row by 1)
- **move-east**: Move one cell east (increase column by 1)
- **move-west**: Move one cell west (decrease column by 1)

# Solution Strategy
For each problem, follow this reasoning process:
1. **Analyze the Grid**: Identify the start position, goal
position, and obstacles
2. **Plan the Route**: Determine if a direct path exists or if
you need to navigate around obstacles
3. **Step-by-Step Reasoning**: For each move, explain why it
brings you closer to the goal
4. **Verify the Path**: Ensure the path is valid and optimal

# Example Problems
{examples}

# Problem to Solve
Start: {start_position}
Goal: {goal_position}
{deadzone_warning}

Please solve this problem step-by-step and provide your answer.

**Your Solution:**
First, provide your step-by-step reasoning:
1. Identify the start position, goal position, and any obstacles
2. Reason through each step of your path
3. Verify your path is valid and optimal

Then, provide your final answer EXACTLY in this format:
**Final Action Sequence:** move-direction1, move-direction2, ...

IMPORTANT: You MUST include the line starting with
"Final Action Sequence:" followed by your comma-separated list
of actions.
```
F.2.2 Self-Consistency Prompt
Self-Consistency uses the same base prompt as CoT, with a sample annotation appended to each independent call:
```
{base_cot_prompt}

<!-- Self-Consistency Sample {k}/{total}: Treat this run
independently and produce a complete plan -->
```
Each sample is generated with temperature $>0$ for diversity. The final answer is selected via majority voting over the k samples.
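The majority vote over the k sampled plans can be sketched as follows (a minimal illustration; the paper does not specify how ties are broken, so this version falls back to the first-seen plan):

```python
from collections import Counter

def majority_vote(plans):
    """Pick the most frequent action sequence among k sampled plans.

    Plans are lists of action strings; converting to tuples makes
    them hashable. Ties resolve to the first-encountered plan,
    since Counter preserves insertion order.
    """
    counts = Counter(tuple(p) for p in plans)
    winner, _ = counts.most_common(1)[0]
    return list(winner)

samples = [
    ["move-north", "move-east"],
    ["move-east", "move-north"],
    ["move-north", "move-east"],
]
majority_vote(samples)  # -> ["move-north", "move-east"]
```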
F.2.3 Self-Refine Refinement Prompt
After the initial attempt, if refinement rounds remain, the model receives its previous response with reflection instructions:
```
{base_cot_prompt}

### Self-Refinement Attempt {attempt_number}
You previously produced the following reasoning and plan:
{previous_response}
Proposed action sequence: {previous_actions}

Carefully re-read the task description and your earlier steps.
Without running code or simulations, check for potential
mistakes:
- Did any move leave the grid or pass through a wall?
- Does the sequence actually reach the goal cell?
- Is there a shorter valid route?

If issues are found, explain them briefly and provide a
corrected plan. If you believe the plan is correct and needs
no further refinement, explicitly state:
"**No further refinement needed.**" and then restate the
action sequence.

Always finish with a line of the form:
**Final Action Sequence:** move-*, move-*, ...

Refined solution:
```
F.2.4 ReAct (Prompt-Only) Prompt
ReAct uses an alternating Thought/Action trace format:
```
You are an expert gridworld planner. Solve using a ReAct-style
trace.

IMPORTANT:
You are an agent navigating a {grid_size} gridworld.
The grid has {num_walls} walls that block movement.

**Grid Layout:**
{ascii_grid}

## Valid Actions
- **move-north**: Move one cell up (increase y by 1)
- **move-south**: Move one cell down (decrease y by 1)
- **move-east**: Move one cell right (increase x by 1)
- **move-west**: Move one cell left (decrease x by 1)

## Movement Constraints
- You cannot move through walls
- You cannot move outside the grid boundaries
- Each action moves exactly one cell

# Example
Start: (2,1), Goal: (5,4)
Thought: I am at (2,1) and need to reach (5,4). I should move
north and east while checking for obstacles.
Action: move-north
Thought: Now at (2,2). Continue moving toward the goal.
Action: move-north
...
Final Thought: Reached the goal at (5,4).
**Final Action Sequence:** move-north, move-north, move-east, ...

# Problem to Solve
Start: {start_position}
Goal: {goal_position}

Guidelines:
- Alternate between "Thought:" and "Action:"
- Keep moves consistent with the grid layout
- Avoid illegal steps (walls, boundaries)
- End with "Final Thought:" and "**Final Action Sequence:**"
```
F.2.5 Tree-of-Thoughts Expansion Prompt
ToT uses a structured expansion prompt requesting JSON-formatted candidates:
```
{reference_examples}

Gridworld planning problem:
Start: {start_position}
Goal: {goal_position}
Current depth: {depth}/{max_depth}
Actions chosen so far: {action_prefix}
Thoughts considered so far:
{thought_history}

Generate up to 5 candidate expansions as JSON. Each must include:
- "thought": a short description of the idea
- "proposed_actions": list of up to 8 moves continuing the plan
- "confidence": integer 0-100 for promise of success
- "is_terminal": true if the plan should stop after these actions
- "final_plan": optional full action list if terminal

Moves must stay within bounds and avoid walls.
Return ONLY the JSON array; no commentary.
```
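Parsing the returned JSON array on the controller side could be sketched as below (a hypothetical helper, not the paper's code; keys match the expansion prompt's schema and malformed candidates are silently dropped):

```python
import json

# Required keys per the ToT expansion prompt's schema.
REQUIRED = {"thought", "proposed_actions", "confidence", "is_terminal"}

def parse_candidates(text, max_candidates=5):
    """Parse the model's JSON array and keep well-formed candidates."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return []
    out = []
    for cand in data if isinstance(data, list) else []:
        if isinstance(cand, dict) and REQUIRED <= cand.keys():
            # Enforce the prompt's limits: <= 8 moves, confidence 0-100.
            if (len(cand["proposed_actions"]) <= 8
                    and 0 <= cand["confidence"] <= 100):
                out.append(cand)
    return out[:max_candidates]
```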
F.2.6 ReAct (+Oracle) Feedback Prompt
When the oracle detects errors, it provides specific feedback:
```
{original_prompt}
---
Your previous response:
{previous_response}
---
ORACLE FEEDBACK: Your plan has a PROBLEM at step {step_number}.
The action '{failed_action}' at position {position} is INVALID
because {reason}.

Please find an alternative path that avoids this issue.

**Corrected Final Action Sequence:**
```
Feedback Types.
- Invalid Move: "The action 'move-X' at position $(x,y)$ is INVALID because it would move into a wall or out of bounds."
- Deadzone Entry: "The action 'move-X' leads to position $(x,y)$, which is a DEADZONE. You should avoid deadzones."
- Incomplete Path: "Your plan is INCOMPLETE. After executing all actions, you ended at $(x,y)$ but did not reach the goal."
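A minimal sketch of an oracle check that produces this kind of feedback, assuming the (x, y) convention of the ReAct prompt, a set of wall cells, and 1-indexed square grids (the paper's oracle implementation is not shown):

```python
# Assumed action deltas: north/south change y, east/west change x.
DELTAS = {"move-north": (0, 1), "move-south": (0, -1),
          "move-east": (1, 0), "move-west": (-1, 0)}

def check_plan(start, goal, actions, walls, size):
    """Simulate a plan; return (ok, feedback) in the oracle's style."""
    x, y = start
    for i, a in enumerate(actions, start=1):
        dx, dy = DELTAS[a]
        nx, ny = x + dx, y + dy
        if not (1 <= nx <= size and 1 <= ny <= size) or (nx, ny) in walls:
            return False, (f"Your plan has a PROBLEM at step {i}. The action "
                           f"'{a}' at position ({x}, {y}) is INVALID.")
        x, y = nx, ny
    if (x, y) != goal:
        return False, (f"Your plan is INCOMPLETE. After executing all "
                       f"actions, you ended at ({x}, {y}) but did not "
                       f"reach the goal.")
    return True, "OK"
```

For example, `check_plan((1, 1), (1, 3), ["move-north", "move-north"], set(), 8)` succeeds, while a first move off the grid is reported as invalid at step 1.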
F.3 L-ICL Correction Format
L-ICL corrections are formatted as doctest-style input-output examples inserted into subroutine documentation. This format leverages Pythonβs doctest convention, which is well-represented in LLM training data.
Correction Structure.
```
>>> {function_name}({input_args})
{correct_output}
```
Example Corrections by Subroutine.
Applicability Correction (when LLM proposes invalid action):
```
>>> get_applicable_actions(state=(3, 4), goal=(7, 8))
{'move_north', 'move_south', 'move_west'}
```
Optimality Correction (when LLM proposes suboptimal action):
```
>>> get_optimal_actions(state=(5, 2), goal=(8, 7))
{'move_north', 'move_east'}
```
BlocksWorld Action Correction:
```
>>> get_recommended_actions(
...     state={'on': [('A', 'B')], 'on-table': ['B', 'C'],
...            'clear': ['A', 'C']},
...     goal={'on': [('B', 'C')], 'on-table': ['A', 'C'],
...           'clear': ['A', 'B']}
... )
{'move-b-to-t(A, B)'}
```
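Rendering a correction in this doctest format is mechanical; a small hypothetical helper (names are illustrative, not from the paper) suffices:

```python
def format_correction(function_name, input_args, correct_output):
    """Render an L-ICL correction as a doctest-style example.

    input_args is an already-formatted argument string;
    correct_output is the repr of the desired return value.
    """
    return f">>> {function_name}({input_args})\n{correct_output}"

print(format_correction(
    "get_applicable_actions",
    "state=(3, 4), goal=(7, 8)",
    "{'move_north', 'move_south', 'move_west'}",
))
```

The resulting string is appended verbatim to the failing subroutine's docstring.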
F.4 Action Parsing Patterns
All methods use the same action sequence parser that handles multiple output formats:
```python
PATTERNS = [
    r'\*\*Final Action Sequence:\*\*\s*(.+?)(?:\n|$)',
    r'Final Action Sequence:\s*(.+?)(?:\n|$)',
    r'\*\*Action Sequence:\*\*\s*(.+?)(?:\n|$)',
    r'Action Sequence:\s*(.+?)(?:\n|$)',
    r'Optimal path:\s*(.+?)(?:\n|$)',
    r'Plan:\s*(.+?)(?:\n|$)',
]

VALID_ACTIONS = {
    'move-north', 'move-south', 'move-east', 'move-west',
    'move_north', 'move_south', 'move_east', 'move_west',
    'push-north', 'push-south', 'push-east', 'push-west',
}

# Normalize action format
ACTION_ALIASES = {
    'north': 'move-north', 'south': 'move-south',
    'east': 'move-east', 'west': 'move-west',
    'up': 'move-north', 'down': 'move-south',
    'right': 'move-east', 'left': 'move-west',
}
```
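Putting the patterns and aliases together, the shared parser could be sketched as follows (a self-contained illustration with abbreviated tables; the actual parser is not listed in the paper):

```python
import re

# Abbreviated copies of the tables above, for a self-contained sketch.
PATTERNS = [
    r"\*\*Final Action Sequence:\*\*\s*(.+?)(?:\n|$)",
    r"Final Action Sequence:\s*(.+?)(?:\n|$)",
]
VALID_ACTIONS = {"move-north", "move-south", "move-east", "move-west"}
ACTION_ALIASES = {"north": "move-north", "south": "move-south",
                  "east": "move-east", "west": "move-west"}

def parse_actions(response):
    """Try each pattern in order; normalize and filter matched tokens."""
    for pat in PATTERNS:
        m = re.search(pat, response)
        if m:
            tokens = [t.strip(" .*`").lower() for t in m.group(1).split(",")]
            tokens = [ACTION_ALIASES.get(t, t) for t in tokens]
            return [t for t in tokens if t in VALID_ACTIONS]
    return []

parse_actions("**Final Action Sequence:** move-north, move-east")
# -> ["move-north", "move-east"]
```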
F.5 Prompt Variations by Domain
8×8 Two-Room Gridworld.
Uses the standard L-ICL prompt with ASCII grid showing two rooms separated by a wall with a doorway.
10×10 Maze.
Uses the L-ICL prompt with ASCII grid showing procedurally generated maze corridors. Wall density is higher, creating narrow passages.
Sokoban-Style Gridworld.
Uses the standard L-ICL prompt with Sokoban-style ASCII layouts but no pushable boxes.
Full Sokoban.
Extends the action space to include push actions:
```
Valid actions:
- Movement: move_north, move_south, move_east, move_west
- Pushing: push_north, push_south, push_east, push_west
```
BlocksWorld.
Uses the UBW algorithm prompt (Section F.1.3) with relational state representation instead of spatial coordinates.
F.6 Hyperparameter Settings by Method
Table 8 summarizes the key hyperparameters used for each prompting method.
Table 8: Hyperparameter settings for each prompting method.
| Method | Temperature | Max Tokens | Samples / Iterations |
| --- | --- | --- | --- |
| Zero-Shot CoT | 1.0 | 32,000 | 1 |
| RAG-CoT | 1.0 | 32,000 | 1 |
| Self-Consistency | 1.0 | 32,000 | $k=5$ |
| Self-Refine | 1.0 | 32,000 | $N=5$ |
| ReAct (Prompt) | 1.0 | 32,000 | 1 |
| ToT (Prompt) | 1.0 | 32,000 | $b=5, d=3$ |
| ReAct (+Oracle) | 0.3 | 32,000 | 1–2 |
| L-ICL | 1.0 | 32,000 | 1 |
L-ICL Training Configuration.
- Training examples: up to 240 problems
- Corrections per problem: one (first failure only) in Sokoban and BlocksWorld; up to two in gridworld problems (the first optimality correction and the first validity correction, or just the first validity correction)
- Correction accumulation: batch update after 10 training examples
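The configuration above corresponds to a training loop of roughly the following shape (a schematic sketch; `solve`, `find_violations`, and `make_correction` are hypothetical stand-ins for running the model, checking its trace, and rendering a doctest-style correction):

```python
def run_l_icl_training(problems, solve, find_violations, make_correction,
                       batch_size=10, max_per_problem=2):
    """Schematic L-ICL loop with batched correction accumulation."""
    prompt_corrections = []  # corrections already injected into the prompt
    pending = []             # corrections collected in the current batch
    for i, problem in enumerate(problems, start=1):
        trace = solve(problem, prompt_corrections)
        # Keep at most max_per_problem violations from this trace.
        violations = find_violations(trace)[:max_per_problem]
        pending.extend(make_correction(v) for v in violations)
        if i % batch_size == 0:  # batch update (the paper uses batches of 10)
            prompt_corrections.extend(pending)
            pending = []
    return prompt_corrections + pending
```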