## DOES SPATIAL COGNITION EMERGE IN FRONTIER MODELS?
Santhosh Kumar Ramakrishnan∗ Erik Wijmans Philipp Krähenbühl Vladlen Koltun
Apple
## ABSTRACT
Not yet. We present SPACE, a benchmark that systematically evaluates spatial cognition in frontier models. Our benchmark builds on decades of research in cognitive science. It evaluates large-scale mapping abilities that are brought to bear when an organism traverses physical environments, smaller-scale reasoning about object shapes and layouts, and cognitive infrastructure such as spatial attention and memory. For many tasks, we instantiate parallel presentations via text and images, allowing us to benchmark both large language models and large multimodal models. Results suggest that contemporary frontier models fall short of the spatial intelligence of animals, performing near chance level on a number of classic tests of animal cognition. Code and data are available: https://github.com/apple/ml-space-benchmark
## 1 INTRODUCTION
Frontier models have achieved impressive performance in mathematics, coding, general knowledge, and commonsense reasoning (Hendrycks et al., 2021a;b; Chen et al., 2021; Sakaguchi et al., 2021; Yue et al., 2024). This remarkable progress has inspired characterizations of frontier models as possessing the intelligence of a smart high schooler and predictions of the imminent arrival of superintelligence (Aschenbrenner, 2024). These characterizations are often underpinned by the premise that competence (or even mastery) in some aspects of cognition is symptomatic of broad cognitive competence. This is not self-evident. To quote Brooks's first law of artificial intelligence, 'When an AI system performs a task, human observers immediately estimate its general competence in areas that seem related. Usually that estimate is wildly overinflated.' (Brooks, 2024).
Our work focuses on spatial cognition, a foundational form of intelligence that is present in a broad spectrum of animals including humans (Marshall & Fink, 2001; Waller & Nadel, 2013; Mallot, 2024). Spatial cognition refers to the ability of animals to perceive and interact with their surroundings, build mental representations of objects and environments, and draw upon these representations to support navigation and manipulation. Decades of research in animal cognition have characterized the spatial cognition of mice, rats, bats, pigeons, corvids, dogs, wolves, elephants, marmosets, tamarins, howler monkeys, baboons, chimpanzees, and humans (Tolman, 1948; Menzel, 1973; Peters, 1974; Gillner & Mallot, 1998; Marshall & Fink, 2001; Noser & Byrne, 2007; Tommasi et al., 2012; Porter & Garber, 2013; Blaser et al., 2013; Geva-Sagiv et al., 2015; Presotto et al., 2019; de Guinea et al., 2021; Payne et al., 2021; Xu et al., 2024; Xavier et al., 2024; Welklin et al., 2024). Human infants already possess rudimentary spatial cognition, which subsequently improves along developmental schedules that have been characterized (Blades & Spencer, 1994; Newcombe, 2000; Vasilyeva & Lourenco, 2012). Spatial cognition is known to underpin more advanced cognitive abilities (Kozhevnikov et al., 2007; Newcombe, 2010; Young et al., 2018).
The emergence of spatial cognition has been linked to embodiment (Smith & Gasser, 2005; Jansen & Heil, 2010; Frick & Möhring, 2016), without which the development of spatial cognition may be impaired (Foreman et al., 1990; Anderson et al., 2013). However, frontier models are typically trained in a disembodied manner on corpora of text, images, and video. Does spatial cognition emerge in disembodied frontier models? To study this question systematically, we develop SPACE,
∗ Corresponding author: sramakrishnan@apple.com
Figure 1: SPACE: Spatial Perception And Cognition Evaluation. We design a suite of spatial cognition tasks based on the cognitive science literature. These are broadly classified into large-scale and small-scale spatial cognition. Large-scale tasks require understanding space at the level of environments and evaluate spatial orientation and cognitive mapping abilities. Small-scale tasks require understanding space at the level of objects or object arrangements and evaluate skills such as spatial visualization, spatial orientation, spatial perception, selective spatial attention, and visuospatial working memory. These tasks take the form of multiple-choice question answering or interactive games. We develop multimodal as well as purely textual presentations, which support evaluation of both large language models (LLMs) and vision-language models (VLMs).
a benchmark that builds on decades of research in cognitive science. Our benchmark comprises two broad classes of tasks, covering large-scale and small-scale spatial cognition (Hegarty et al., 2006; Meneghetti et al., 2022; Newcombe, 2024). See Figure 1 for an overview.
Large-scale spatial cognition concerns a model's ability to understand its surroundings. In large-scale tasks, the model is familiarized with an environment and is then asked to estimate distances and directions to landmarks, sketch a map of the environment, retrace a known route, or discover a shortcut to a goal. Small-scale spatial cognition concerns a model's ability to perceive, imagine, and mentally transform objects in two or three dimensions. Together, large-scale and small-scale tasks evaluate core cognitive abilities such as spatial perception, visualization, orientation, selective attention, and visuospatial memory (Lacroix et al., 2021; Meneghetti et al., 2022).
We design text-based and image-based presentations to evaluate both large language-only and vision-language models (LLMs and VLMs, respectively). Our results indicate that contemporary frontier models have not yet reached competency, let alone mastery, in spatial cognition. On key large-scale spatial cognition tasks, frontier multimodal models perform near chance level, even when presented with an allocentric (map) view of the environment. The strongest models exhibit much better performance on some small-scale tasks that evaluate selective spatial attention and visuospatial working memory, especially with purely textual presentations via character arrays, but perform near chance on other tasks such as mental rotation (Vandenberg & Kuse, 1978), perspective taking (Kozhevnikov & Hegarty, 2001), maze completion (Lacroix et al., 2021), or the classic Minnesota Paper Form Board test (Likert & Quasha, 1941; 1969).
## 2 RELATED WORK
Spatial cognition. Spatial cognition is a branch of cognitive science that seeks to understand how humans and animals perceive, interpret, represent, and interact with objects and environments (Marshall & Fink, 2001; Landau, 2002; Waller & Nadel, 2013; Mallot, 2024; Newcombe, 2024). This involves the perception of object sizes, shapes, and scales, as well as the relationships between objects and landmarks in the environment (including location, distance, direction, and orientation). Spatial cognition is broadly divided into two categories: large-scale and small-scale (Hegarty et al., 2006; Jansen, 2009; Meneghetti et al., 2022; Newcombe, 2024). Large-scale spatial cognition refers to the ability to build spatial representations of environments and use them effectively for navigation and spatial reasoning. Large-scale spatial cognition tasks typically involve egocentric spatial transformations, where the viewer's perspective changes with respect to the environment while the spatial relationships between parts of the environment remain constant (Wang et al., 2014). Small-scale spatial cognition refers to the ability to perceive, imagine, and mentally transform objects or shapes in 2D or 3D. This is typically evaluated using paper-and-pencil tasks that require allocentric spatial transformations of objects and shapes (Wang et al., 2014). While large-scale spatial cognition has been demonstrated in a wide range of animals (Tolman, 1948; Menzel, 1973; Peters, 1974; O'Keefe & Nadel, 1978; Gillner & Mallot, 1998; Richardson et al., 1999; Geva-Sagiv et al., 2015; Toledo et al., 2020), the study of small-scale spatial cognition is specific to humans.
Emergent spatial representations. Several works have shown that spatial representations reminiscent of those underlying spatial cognition can emerge in neural networks (Banino et al., 2018; Cueva & Wei, 2018; Wijmans et al., 2023; Sorscher et al., 2023). These works train a neural network from scratch on path integration or navigation tasks and analyze the model weights to identify spatial representations.
Spatial reasoning in large language models. PlanBench (Valmeekam et al., 2024) and CogEval (Momennejad et al., 2023) evaluate LLMs on text-based planning tasks such as navigation, delivery logistics, and block stacking, probing cognitive mapping and planning. Yamada et al. (2024) assess spatial reasoning in LLMs via map traversals on different types of graphs and evaluate the models' self-localization ability. EWOK (Ivanova et al., 2024) studies spatial plausibility reasoning in LLMs. In comparison to these benchmarks, SPACE evaluates a broader array of cognitive abilities and implements multimodal presentations of classic animal cognition experiments.
Benchmarks for large multimodal models. The recent successes of multimodal models (OpenAI, 2024; Li et al., 2024a; Reid et al., 2024) have been facilitated by large-scale training on text and multimodal corpora (Rana, 2010; Together Computer, 2023; Chen et al., 2023; Laurençon et al., 2023; Gadre et al., 2023), followed by tuning on human preferences (Liu et al., 2023a; Awadalla et al., 2024; Ouyang et al., 2022; Rafailov et al., 2023). The remarkable advances in the capabilities of these models have inspired a variety of benchmarks that evaluate their performance. Early multimodal benchmarks consisted of single-task datasets such as visual question answering (Antol et al., 2015; Goyal et al., 2019; Marino et al., 2019) and image captioning (Chen et al., 2015). However, due to the limited scope of early datasets and concerns regarding potential test-data leakage, newer benchmarks use diverse collections of tasks (Fu et al., 2023; Yu et al., 2024; Liu et al., 2023b; Yue et al., 2024; Lu et al., 2024; Ying et al., 2024). While these datasets primarily focus on image understanding, newer datasets that emphasize spatiotemporal reasoning have been proposed for video (Li et al., 2024b; Fu et al., 2024a; Majumdar et al., 2024).
Recent studies highlight a number of shortcomings of frontier multimodal models (Moskvichev et al., 2023; Tong et al., 2024; Chen et al., 2024a; Fu et al., 2024b). One such shortcoming is that models may not perceive the image in detail, often missing fine-grained details or ignoring the image entirely (Chen et al., 2024b; Guan et al., 2024; Tong et al., 2024). HallusionBench proposes a dataset of image pairs in which small edits from one image to the other change the answer to an accompanying question (Guan et al., 2024). MMVP identifies issues with CLIP-based pretraining of visual encoders that leave current models blind to certain visual patterns, and proposes a benchmark of CLIP-blind image pairs for which the same question has opposite answers (Tong et al., 2024). MMStar shows that many questions in multimodal benchmarks can be answered correctly without the image and proposes a new split of existing benchmarks that addresses this issue (Chen et al., 2024b).
Another shortcoming of existing models is their lack of spatial perception and reasoning (Chen et al., 2024a; Cheng et al., 2024). SpatialVLM proposes a VQA dataset that requires answering questions about relative spatial arrangements and metric relationships (Chen et al., 2024a). SpatialRGPT further includes region-level understanding (Cheng et al., 2024). MOCHI evaluates the ability of vision models to identify rotated versions of procedurally-generated objects (Bonnen et al., 2025). 'Perception test' aims to overcome shortcomings of standard video datasets by creating a diagnostic dataset where participants record videos while following complex scripts depicting interesting events (Patraucean et al., 2023). It evaluates fundamental perceptual skills (memory, abstraction, intuitive physics, and semantics) and various types of reasoning.
Another line of work considers skill acquisition (the ability to learn a skill and apply it to new scenarios). Prior work has studied this using visual analogical reasoning (Chollet, 2019; Moskvichev et al., 2023; Yiu et al., 2024). The ARC dataset contains samples consisting of a few examples of abstract grids and their transformations and one or more test inputs (Chollet, 2019). The objective is to understand the transformation performed using the examples and apply it to test inputs. The transformations have been further organized into specific concepts with varying degrees of difficulty in the ConceptARC dataset (Moskvichev et al., 2023). Inspired by ARC and developmental psychology, the KiVA dataset studies visual analogies in the context of visually realistic 3D shapes with concepts like transformations in color, size, rotations, reflections, and counting (Yiu et al., 2024).
## 3 SPACE: A BENCHMARK FOR SPATIAL PERCEPTION AND COGNITION EVALUATION
We develop a benchmark for evaluating the spatial cognition of frontier models. The benchmark comprises large-scale and small-scale tasks and is designed for compatibility with both text-only and multimodal models.
## 3.1 LARGE-SCALE SPATIAL COGNITION
In large-scale spatial cognition tasks, we evaluate the ability of models to build spatial representations of their surrounding environment, and whether they can use these representations to reason about and navigate in the environment. There are two stages to these tasks. First, we familiarize the model with an environment by showing a video walkthrough.¹ The model must build a mental representation of the environment that captures the start, goal, and landmark locations and their spatial relationships. After the model is familiarized with the environment, we evaluate the model's spatial representation using five tasks derived from the cognitive science literature (Meneghetti et al., 2022). See Figure 2 (top) and Figure 3 for an overview.
- 1. Direction estimation. The goal is to determine the directions to other landmarks from a given landmark. The participant is asked to pretend that they are facing landmark A and then to estimate the direction (in degrees) to another landmark B. This is known as a pointing trial in the cognitive science literature (Allen et al., 1996; Hegarty et al., 2006; Pazzaglia & Taylor, 2007; Weisberg et al., 2014; Meneghetti et al., 2016). We formulate this as a multiple-choice QA task with four options for the direction (only one correct option).
- 2. Distance estimation. The goal is to determine the straight-line distances from one landmark to all other landmarks (Allen et al., 1996; Hegarty et al., 2006). The participant is asked to pretend that they are facing landmark A and then to estimate the Euclidean distance to each of the other landmarks. We pose this as a multiple-choice QA with four options for the list of distances. Since current models are not good at estimating metric measurements (Chen et al., 2024a; Cheng et al., 2024), we generate incorrect options such that the ratios of distances between landmarks are not preserved, making it easier to identify the correct option.
- 3. Map sketching. The goal is to draw a map of the environment that contains the start, goal, and landmark positions (Allen et al., 1996; Hegarty et al., 2006; Pazzaglia & Taylor, 2007; Weisberg et al., 2014; Meneghetti et al., 2016; 2021). We formulate this as a multiple-choice QA with four options for the map sketches. The correct option preserves the true spatial relationships between the different map elements, while the incorrect options skew the spatial relationships randomly.
- 4. Route retracing. The goal is to retrace the route shown in the video from the start to the goal (Allen et al., 1996; Pazzaglia & Taylor, 2007; Meneghetti et al., 2016; 2021). This task evaluates the model's ability to remember landmarks seen along the route and the actions required to reach the goal. We formulate this as an interactive task where the model receives the current observation, decides which action to take, and receives updated observations based on the actions taken. We measure performance using the SPL metric (success weighted by path length), which penalizes the model for taking unnecessary detours (Anderson et al., 2018); a sketch of this metric follows the list. (The demonstrated route, which the model must retrace, is always the shortest path to the goal.)
- 5. Shortcut discovery. The goal is to discover a shortcut (i.e., a route never observed before) from the start to the goal after observing a video walkthrough that takes detours to reach the goal (Tolman, 1948; Allen et al., 1996; Pazzaglia & Taylor, 2007; Meneghetti et al., 2016; 2021). The ability to discover shortcuts in familiar environments is a key indicator of cognitive mapping ability (Tolman, 1948). When designing environments and walkthrough paths, we ensured that a novel shortcut exists that the model can exploit. As in route retracing, we treat this as an interactive navigation task and measure performance using the SPL metric.
¹ For text-only models, the 'video walkthrough' is a sequence of discrete map observations presented as arrays of characters; see Figure 3 for examples.
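The SPL metric used for the two interactive tasks is defined in Anderson et al. (2018) as SPL = (1/N) Σᵢ Sᵢ · ℓᵢ / max(pᵢ, ℓᵢ), where Sᵢ indicates success, ℓᵢ is the shortest-path length, and pᵢ is the path length the agent actually traversed. A minimal sketch in Python (illustrative, not the benchmark's own code):

```python
# Minimal sketch of SPL (success weighted by path length; Anderson et al., 2018).
# Each episode is (success, shortest_path_length, agent_path_length).
def spl(episodes):
    score = 0.0
    for success, l, p in episodes:
        # A failed episode contributes 0; detours (p > l) shrink the credit.
        score += float(success) * l / max(p, l)
    return score / len(episodes)

# Example: a perfect retrace, a retrace with a 2x detour, and a failure.
print(spl([(1, 10.0, 10.0), (1, 10.0, 20.0), (0, 10.0, 35.0)]))  # 0.5
```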
Figure 2: The tasks in SPACE. For all tasks (other than the water level test), we include multimodal as well as purely textual presentations, to support evaluating both large language models (LLMs) and vision-language models (VLMs). For large-scale tasks, we visualize examples from the egocentric image presentation here and visualize alternate presentations in Figure 3. For small-scale tasks, we visualize both visual and textual presentations here. Bolding of characters in the arrays is for illustration purposes only.
Figure 3: Large-scale spatial cognition. We design ten environment layouts based on experimental protocols in cognitive science. The top row shows bird's-eye view renderings of these environments. To evaluate large-scale spatial cognition in frontier models, we implement three observation spaces: egocentric image, discrete map (DM) image, and discrete map (DM) text (see bottom row). Ego image shows a first-person view within the environment. DM image shows a quantized, allocentric bird's-eye view of the 2.5m × 2.5m region centered on the current position. Unlike ego image, DM image enables performing the large-scale tasks in a simplified setting without requiring perspective geometry. DM text depicts the DM image using text characters. We evaluate multimodal models using ego image and DM image, and large language models using DM text.
## 3.1.1 IMPLEMENTATION
3D environment generation. We create ten environment layouts based on prior work in cognitive science and artificial intelligence (Tolman, 1948; Gillner & Mallot, 1998; Richardson et al., 1999; Banino et al., 2018; Bouchekioua et al., 2021). Figure 3 shows bird's-eye view images of each layout. See the appendix for more details about the environment generation process.
Observation spaces. We create multiple observation spaces to support evaluating both text-only and vision+text models. These are egocentric images, discrete map (DM) images, and discrete map (DM) text presentations.
- Ego image. The environment is captured using a forward-facing perspective camera placed at the model's location in the environment. This is similar to the setup of an animal navigating through an immersive environment and requires understanding perspective geometry.
- DM image. This is a quantized bird's-eye view image of a 2.5m × 2.5m area of the environment surrounding the model's location. This is akin to a human using a map to navigate. The current location is always at the center of the DM image. We use a Pacman-like coloring scheme highlighting the obstacles, navigable space, current position, and landmarks. DM image simplifies the mapping process by removing the need for perspective geometry understanding.
- DM text. This is a translation of the DM image to a text array. We carefully select the text encoding for compatibility with the tokenizers of popular models, ensuring that each element of the array is encoded by the tokenizers of all evaluated models as a distinct token.
See Figure 3 (bottom) for examples of these presentations. The first two observation spaces are used for models that support visual inputs, while the last is used for text-only models. See the appendix for additional illustrations of these tasks and dataset statistics. A toy sketch of a DM text rendering follows.
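As an illustration, a DM text observation can be produced by serializing the local occupancy grid into characters, one distinct character per cell type. The sketch below uses the cell vocabulary shown in Figure 3 (0 for obstacles, 1 for navigable space, B for the current position, E for a landmark); the exact rendering used in the benchmark may differ.

```python
import numpy as np

# Illustrative sketch of a DM text rendering (cell codes follow Figure 3;
# the benchmark's exact character set and layout may differ).
CHARS = {0: "0",  # obstacle
         1: "1",  # navigable space
         2: "B",  # current position (center of the local window)
         3: "E"}  # landmark

def dm_text(grid):
    return "\n".join(" ".join(CHARS[int(cell)] for cell in row) for row in grid)

local_window = np.array([[0, 0, 0, 0, 0],
                         [0, 1, 1, 1, 0],
                         [0, 1, 2, 1, 0],
                         [0, 1, 1, 3, 0],
                         [0, 0, 0, 0, 0]])
print(dm_text(local_window))
```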
## 3.2 SMALL-SCALE SPATIAL COGNITION
In small-scale spatial cognition tasks, we evaluate the models' ability to perceive, imagine, and mentally transform objects or shapes in two and three dimensions. We build on the body of work on visuospatial abilities, which are evaluated in humans via paper-and-pencil tasks (Allen et al., 1996; Weisberg et al., 2014; Meneghetti et al., 2022). These abilities may be used to explain individual differences between participants in large-scale spatial cognition (Meneghetti et al., 2022). We define ten small-scale tasks to evaluate abilities such as spatial perception, spatial visualization, spatial orientation, selective attention, and visuospatial working memory. See Figure 2(bottom) for illustrations of each task. We summarize each task below and provide additional details in the appendix.
- 6. Mental rotation test (MRT). This is a test of spatial visualization, i.e., the ability to mentally manipulate 2D or 3D stimuli (Vandenberg & Kuse, 1978). In the visual presentation, a reference 3D shape from Shepard & Metzler (1971) is provided along with four choices. The correct choice is a rotated version of the reference, and the remaining choices are rotated versions of an alternate shape. The goal is to identify the correct choice from the distractors. The text-only version of this task uses 2D character arrays, akin to the card rotations test from French et al. (1963).
- 7. Perspective taking test (PTT). This is a test of spatial orientation, i.e., the ability to imagine being in a different position in space and seeing the surroundings from a new perspective (Kozhevnikov & Hegarty, 2001). We place N randomly-sampled objects (e.g., apples, bats, dogs, books, grapes) at random locations in an image (with no overlap between objects). The objective is to take the perspective of standing next to an object (say, a bat) facing another object (say, a book), and determine the relative orientation of a third object (say, an apple). This is a multiple-choice QA with four options (only one correct option).
- 8. Water level test (WLT). This is a test of spatial perception (Piaget et al., 1957). Originally, it was designed to evaluate children's knowledge of the horizontal nature of the surface of water in a sealed bottle regardless of its orientation. Performance on the water level test was found to be related to performance on spatial ability tests (Foltz, 1978; Wittig & Allen, 1984). We present the model with an image of a water container partially filled with water and ask it to imagine the position of the water if the container were tilted. We implement this as a four-way multiple-choice QA, where each choice is an image showing the tilted container with varying water levels. The objective is to select the one choice that shows the correct water level.
- 9. Minnesota Paper Form Board test (MPFB). This is a test of spatial visualization, where the model must perform multi-step manipulations of complex spatial information (Meneghetti et al., 2022). Specifically, we provide the model with pieces of a figure and ask it to identify how the pieces fit together (Likert & Quasha, 1941; 1969). We programmatically segment a square into five pieces and rotate the pieces randomly to generate the final segments. We generate alternate segmentations of a square as negative choices for a multiple-choice QA presentation.
- 10. Judgement of Line Orientation test (JLO). This is a test of spatial perception (Benton, 1994), where a model must determine the angle between two lines in an image. Our visual presentation shows two lines in an image along with a set of 11 reference lines. The objective is to determine the pair of reference lines that has the same angle between them as the lines in the image. This is presented as a multiple-choice QA with four choices (only one of them correct). Our text-only presentation implements the task via lines embedded in 2D integer arrays.
- 11. Selective attention task (SAtt). This is a test of selective spatial attention, i.e., the ability to selectively attend to a particular region of space while ignoring others (Serences & Kastner, 2014; Pahor et al., 2022). In particular, we use the widely used cancellation task, where the goal is to search for and mark out target stimuli embedded amidst distractors (Della Sala et al., 1992; Brickenkamp & Zillmer, 1998; Dalmaijer et al., 2015; Lacroix et al., 2021; Pahor et al., 2022; Kalina & Walgrave, 2004). We design the task as multiple-choice QA with objects as the stimuli for visual evaluation and characters as stimuli for text-only evaluation. The target stimuli and distractors are arranged on a grid. The answer must be selected from one of four options. The correct option lists the (row, column) pairs that localize the target stimuli in the grid.
- 12. Maze completion task (MCT). This is an interactive game that evaluates spatial orientation, planning, and executive functioning (Lacroix et al., 2021). We programmatically create mazes using Mazelib (Stilley, 2014) and render them using a Pacman-like color scheme for the visual presentation and a character array for the text-only presentation (similar to DM image and DM text in Figure 3). Using the maze rendering, a model must sequentially select an up/down/left/right action to reach the goal and execute a stop action to successfully complete the task. If the model does not reach the goal within 250 actions, it is considered to have failed. We measure the success rate, i.e., the percentage of mazes where the model reaches the goal within the allotted budget.
- 13. Corsi block-tapping task (CBTT). This is a test of visuospatial working memory and attention (Corsi, 1972; Claessen et al., 2015). We create a digital Corsi board with N blue-colored blocks that are randomly placed on the board with no overlap (N ∈ [5, 8]). We randomly sample a sequence of K taps, where each block is tapped at most once (K ∈ [4, N]). The taps are digitally rendered on the blocks by highlighting them in yellow when tapped, yielding a sequence of K images. After presenting the K images, we provide a rendering of the board with integer IDs assigned to each block and ask the model to reproduce the sequence of taps using these IDs. We treat this as multiple-choice QA with four choices of tap sequences, only one of which is correct.
- 14. Spatial addition task (SAdd). This is a test of visuospatial working memory, i.e., the ability to store and manipulate spatial information in memory (Wechsler, 2009). The model is presented with two 2D grids, where each grid location can be empty or contain a blue or red dot. The objective is to add the two grids together by following certain rules: if a grid location has a blue dot in exactly one of the grids, the result should be a blue dot; if a grid location has blue dots in both grids, the result should be a white dot; red dots are distractors and must be ignored (a sketch of this rule follows the list). We programmatically generate grid pairs with sizes sampled from {3, 5, 7, 9} and pseudo-randomly populate them with blue and red dots. We formulate the task as multiple-choice QA, presenting four grids as possible answers, exactly one of which is correct.
- 15. Cambridge spatial working memory test (CSWM). This is an interactive game that evaluates visuospatial working memory (Sahakian et al., 1988). The model is presented with an image containing N blue-colored boxes (N ∈ [3, 7]). A yellow 'treasure' is initially hidden in one of the boxes. The model must sequentially select boxes one at a time to find the hidden treasure. Once the treasure is found, another treasure is placed in one of the remaining boxes. The objective is to locate all the treasures via a process of elimination. We programmatically generate instances of this task by randomly sampling blue boxes, placing them at random locations (without overlap), and hiding the treasures in the boxes in random order. At each step, we assign random integer IDs to each box as a reference for selecting a box. The boxes' integer IDs are re-randomized in each step, forcing the model to remember boxes based on their spatial positions. When the model finds a treasure, the box containing the treasure turns yellow. The model must find all the treasures within a step limit T (determined based on N) to succeed.
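To make the SAdd combination rule in item 14 concrete, here is a minimal sketch; the integer cell encoding is illustrative (the benchmark presents the grids as images or character arrays).

```python
# Minimal sketch of the SAdd combination rule described above:
# blue in exactly one grid -> blue; blue in both grids -> white;
# red is a distractor and is treated as empty. Cell codes are illustrative.
EMPTY, BLUE, RED, WHITE = 0, 1, 2, 3

def add_grids(a, b):
    out = []
    for row_a, row_b in zip(a, b):
        out_row = []
        for x, y in zip(row_a, row_b):
            blue_x, blue_y = x == BLUE, y == BLUE  # red/empty count as 0
            if blue_x and blue_y:
                out_row.append(WHITE)
            elif blue_x or blue_y:
                out_row.append(BLUE)
            else:
                out_row.append(EMPTY)
        out.append(out_row)
    return out

a = [[BLUE, RED], [EMPTY, BLUE]]
b = [[BLUE, EMPTY], [BLUE, RED]]
print(add_grids(a, b))  # [[3, 0], [1, 1]], i.e., [[WHITE, EMPTY], [BLUE, BLUE]]
```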
As with large-scale spatial cognition, we also implement purely textual presentations of these tasks to support the evaluation of large language models (LLMs). Figure 2 illustrates both the multimodal and the purely textual presentations. The key idea in instantiating the textual presentations is to encode all spatial information via 2D character arrays (a toy example of generating a text-mode rotation item follows below). We did not identify a natural encoding of this kind for the water level test (WLT) and therefore did not include a text-only presentation for it. See the appendix for additional illustrations of these tasks. In some tasks, such as MRT, MPFB, and JLO, the text presentations are substantially easier than the corresponding visual presentations. However, the visual and textual presentations match closely for the remaining tasks, enabling us to identify modality-specific limitations of multimodal models by evaluating them on the two presentations.
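For example, a text-mode rotation item in the spirit of the MRT text presentation can be generated by rotating a 2D character array in 90° increments. This is a hypothetical generator for illustration, not the benchmark's actual one.

```python
import numpy as np

# Hypothetical generator for a text-mode rotation item: the correct choice is
# a 90-degree-multiple rotation of the reference array, while a distractor
# could be a mirrored version, which no in-plane rotation can reproduce.
ref = np.array([list("X.."),
                list("X.."),
                list("XXX")])  # an "L"-shaped glyph

rotated = np.rot90(ref, k=1)   # valid rotated variant (counterclockwise)
mirrored = np.fliplr(ref)      # distractor: a reflection, not a rotation

def show(array):
    print("\n".join("".join(row) for row in array), end="\n\n")

for candidate in (ref, rotated, mirrored):
    show(candidate)
```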
## 4 EXPERIMENTS
Baselines. We evaluate a number of LLMs and VLMs. Using text-only presentations, we evaluate GPT-4v and GPT-4o (OpenAI, 2023; 2024), Claude 3.5 Sonnet (Anthropic, 2024), the Llama 3 family (Dubey et al., 2024), Mistral models such as Mixtral 8x7B, Mixtral 8x22B, and Mistral 123B (Jiang et al., 2024; Mistral AI team, 2024a), and two Yi 1.5 models (Young et al., 2024). Using multimodal presentations, we evaluate GPT-4v and GPT-4o (OpenAI, 2023; 2024), Claude 3.5 Sonnet (Anthropic, 2024), LLaVA-NeXT-Interleave (Li et al., 2024a), Pixtral 12B (Mistral AI team, 2024b), and Phi-3.5-vision (Abdin et al., 2024). We use the vLLM inference engine for evaluating the open-source models (Kwon et al., 2023). For each task, we implement a prompt that provides a detailed description of the task and the expected response format (see the appendix). We also list the results of a chance baseline that selects an answer at random. For multiple-choice QA tasks, chance is at 25%. For interactive tasks, the chance baseline samples an action at random at each step. We further include human performance for reference on the multiple-choice QA tasks. See the appendix for additional implementation details.
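For reference, the chance baselines described above can be sketched as follows; the action vocabulary shown is illustrative.

```python
import random

# Sketch of the chance baselines: a uniform draw over the four options for
# multiple-choice QA, and a uniform random action for interactive tasks.
def chance_mcqa():
    return random.choice("ABCD")  # expected accuracy: 25%

def chance_interactive_step(actions=("up", "down", "left", "right", "stop")):
    return random.choice(actions)
```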
Large-scale spatial cognition results. The results are shown in Table 1, grouped by presentation modality (ego image, DM image, DM text). For image-based presentations, we evaluate Claude 3.5 Sonnet, GPT-4v and GPT-4o because they support video understanding (via a succession of images). For DM text, we evaluate both open and closed LLMs. We also list the performance of the chance baseline for calibration, as well as human performance (see the appendix for details). In the text-only modality, Claude 3.5 Sonnet attains the highest average performance. Mistral 123B is the highest-performing open model. All evaluated models struggle with large-scale spatial cognition, falling significantly below human performance on direction estimation, distance estimation,
Table 1: Large-scale spatial cognition results. The three tables show results for the three observation spaces. Results below 50% of human performance are gray. Methods are sorted by overall performance.

**Observation space: Ego image**

| Method | Direction estimation | Distance estimation | Map sketching | Route retracing | Shortcut discovery | Average |
|---|---|---|---|---|---|---|
| Human | 82.8 | 83.2 | 96.6 | - | - | - |
| GPT-4o | 32.0 ± 4.1 | 36.5 ± 5.0 | 33.3 ± 4.1 | 6.6 ± 3.6 | 6.4 ± 1.0 | 23.0 |
| Claude 3.5 Sonnet | 29.0 ± 2.9 | 34.4 ± 2.9 | 27.5 ± 8.3 | 7.4 ± 2.8 | 0.0 ± 0.0 | 19.6 |
| GPT-4v | 29.7 ± 0.3 | 31.9 ± 2.7 | 20.0 ± 11.8 | 1.6 ± 1.2 | 3.9 ± 0.9 | 17.4 |
| Chance | 25.0 | 25.0 | 25.0 | 0.0 | 0.0 | 15.0 |

**Observation space: DM image**

| Method | Direction estimation | Distance estimation | Map sketching | Route retracing | Shortcut discovery | Average |
|---|---|---|---|---|---|---|
| Human | 82.9 | 82.5 | 100.0 | - | - | - |
| GPT-4o | 29.5 ± 5.5 | 31.9 ± 1.0 | 33.3 ± 3.3 | 23.6 ± 3.1 | 25.9 ± 2.0 | 28.8 |
| Claude 3.5 Sonnet | 32.5 ± 2.3 | 40.0 ± 2.6 | 30.0 ± 4.1 | 15.4 ± 4.3 | 13.7 ± 6.4 | 26.3 |
| GPT-4v | 26.3 ± 3.0 | 29.3 ± 4.1 | 45.0 ± 5.0 | 13.7 ± 5.2 | 15.3 ± 3.0 | 25.9 |
| Chance | 25.0 | 25.0 | 25.0 | 0.0 | 0.0 | 15.0 |

**Observation space: DM text**

| Method | Direction estimation | Distance estimation | Map sketching | Route retracing | Shortcut discovery | Average |
|---|---|---|---|---|---|---|
| Human | 66.7 | 76.5 | 66.7 | - | - | - |
| Claude 3.5 Sonnet | 29.2 ± 4.4 | 40.2 ± 3.1 | 51.7 ± 5.5 | 26.5 ± 2.9 | 20.0 ± 3.0 | 33.5 |
| GPT-4o | 28.7 ± 4.1 | 33.3 ± 1.7 | 46.7 ± 4.1 | 27.5 ± 3.2 | 26.6 ± 0.1 | 32.6 |
| Mistral 123B | 30.5 ± 5.1 | 28.9 ± 5.7 | 38.3 ± 5.5 | 20.3 ± 2.8 | 19.9 ± 3.0 | 27.6 |
| GPT-4v | 30.7 ± 4.1 | 26.5 ± 2.7 | 40.8 ± 6.0 | 20.6 ± 5.8 | 15.4 ± 2.0 | 26.8 |
| Llama 3 70B | 27.0 ± 2.2 | 30.4 ± 1.9 | 35.0 ± 8.3 | 13.2 ± 9.2 | 5.3 ± 4.1 | 22.2 |
| Yi 1.5 34B | 26.2 ± 4.7 | 35.7 ± 1.4 | 35.0 ± 10.7 | 3.2 ± 0.2 | 1.1 ± 1.6 | 20.2 |
| Mixtral 8x22B | 21.3 ± 1.9 | 19.4 ± 1.4 | 39.2 ± 12.6 | 1.5 ± 1.4 | 3.9 ± 1.7 | 17.0 |
| Yi 1.5 9B | 10.8 ± 1.0 | 20.0 ± 3.7 | 35.0 ± 5.0 | 5.0 ± 2.2 | 1.3 ± 1.5 | 14.4 |
| Llama 3 8B | 22.5 ± 2.9 | 24.6 ± 2.1 | 23.3 ± 7.1 | 0.0 ± 0.0 | 1.1 ± 1.6 | 14.3 |
| Mixtral 8x7B | 15.8 ± 2.0 | 16.1 ± 1.4 | 30.0 ± 8.2 | 1.1 ± 1.6 | 1.1 ± 1.6 | 12.8 |
| Chance | 25.0 | 25.0 | 25.0 | 0.0 | 0.0 | 15.0 |
and map sketching, and achieving less than 30% SPL on route retracing and shortcut discovery, even with the allocentric presentation. With the egocentric multimodal presentation (the closest counterpart to classic experimental protocols in animal cognition), the models are near chance level on all tasks.
Human performance ranges from 80% to 100% accuracy on image-based presentations of the multiple-choice QA tasks. Since perceiving long sequences of text arrays is non-trivial for humans, performance drops to 65% to 80% for the text presentations.
Small-scale spatial cognition results. The results are shown in Table 2. With multimodal presentations, we benchmark GPT-4o, GPT-4v, Claude 3.5 Sonnet, and a number of open multimodal models. With purely textual presentations, we benchmark both open and closed models. We also list the performance of the chance baseline for calibration, as well as human performance (see the appendix for details).
Performance of some model classes (e.g., GPT-4o, GPT-4v, Claude 3.5 Sonnet) on purely textual presentations is considerably higher than on multimodal presentations. The best-performing models, Claude 3.5 Sonnet and GPT-4o, achieve 43.8% and 40.1% average accuracy in the multimodal regime and 64.5% and 65.2% average accuracy with purely textual presentations. (Chance is below 25%.) We attribute this in part to the simplified nature of the text-only implementations of tasks like MRT, MPFB, and JLO (e.g., the text-only presentation of mental rotation uses only 2D shapes and constrained 2D rotations) and in part to the relative developmental maturity of large language models versus multimodal models on the remaining tasks.
On tasks that evaluate visuospatial working memory (specifically SAtt, CBTT, SAdd, and CSWM), the strongest LLMs perform well. On selective attention (SAtt), GPT-4o, Claude 3.5 Sonnet, Mistral 123B, and GPT-4v all achieve over 95% accuracy, matching or outperforming the human performance on this task. On the other hand, all models perform poorly on maze completion (MCT), in both presentation modalities. (Note that the models operate with full visibility, as illustrated in Figure 2.) With multimodal presentation, all evaluated models are near chance on perspective taking (PTT) and the Minnesota Paper Form Board test (MPFB). On mental rotation (MRT), the best models are near chance with multimodal presentation, which uses 3D shapes, and only marginally better with purely textual presentation, which uses 2D arrays and constrained rotations.
Table 2: Small-scale spatial cognition results. The two tables show results for multimodal and text-only presentations, respectively. Results below 50% of human performance are gray, results above 90% of human performance are bold. Methods are sorted by average performance. (∗Some multimodal models ran out of memory (OOM) on the MCT and CSWM tasks; their accuracy on these is taken to be 0 when calculating the average.)

**Multimodal**

| Method | MRT | PTT | WLT | MPFB | JLO | SAtt | MCT | CBTT | SAdd | CSWM | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Human | 78.5 | 80.0 | 94.0 | 84.0 | 82.0 | 95.0 | - | 100.0 | 98.0 | - | - |
| Claude 3.5 Sonnet | 29.9 ± 3.8 | 21.8 ± 2.9 | 37.0 ± 4.6 | 35.5 ± 7.0 | 40.5 ± 3.8 | 90.5 ± 3.5 | 2.2 ± 1.8 | 56.5 ± 3.8 | 48.0 ± 6.2 | 76.7 ± 2.5 | 43.8 |
| GPT-4o | 33.3 ± 1.9 | 26.5 ± 3.6 | 59.0 ± 10.8 | 27.0 ± 2.2 | 26.5 ± 5.9 | 70.2 ± 1.8 | 10.4 ± 1.0 | 68.0 ± 2.0 | 40.5 ± 7.1 | 40.0 ± 0.0 | 40.1 |
| GPT-4v | 32.3 ± 0.3 | 28.0 ± 2.0 | 35.0 ± 7.7 | 22.5 ± 4.1 | 26.5 ± 6.8 | 59.8 ± 4.4 | 0.7 ± 1.0 | 44.5 ± 3.0 | 32.0 ± 4.5 | 26.7 ± 3.4 | 30.8 |
| Pixtral 12B | 28.3 ± 3.1 | 23.2 ± 4.9 | 43.0 ± 7.0 | 30.5 ± 7.9 | 24.5 ± 7.3 | 36.0 ± 3.9 | OOM | 39.5 ± 3.0 | 28.5 ± 6.1 | OOM | 25.4∗ |
| Phi-3.5-vision | 24.1 ± 1.0 | 27.0 ± 3.2 | 22.5 ± 7.9 | 26.0 ± 0.0 | 21.0 ± 4.1 | 44.0 ± 4.6 | OOM | 33.0 ± 4.6 | 22.0 ± 6.8 | OOM | 22.0∗ |
| LLaVA-NeXT-Interleave 7B | 25.1 ± 3.2 | 25.8 ± 5.8 | 25.0 ± 8.5 | 25.0 ± 3.3 | 24.0 ± 5.7 | 32.0 ± 4.9 | OOM | 25.5 ± 5.7 | 27.0 ± 4.1 | OOM | 20.9∗ |
| Chance | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 | 0.0 | 25.0 | 25.0 | 33.8 ± 5.4 | 23.4 |

**Text-only**

| Method | MRT | PTT | MPFB | JLO | SAtt | MCT | CBTT | SAdd | CSWM | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| Human | 90.0 | 75.0 | 92.0 | 98.0 | 96.0 | - | 100.0 | 98.0 | - | - |
| GPT-4o | 41.9 ± 6.2 | 55.5 ± 3.9 | 50.5 ± 9.6 | 66.5 ± 4.8 | 98.8 ± 0.4 | 21.5 ± 3.8 | 82.5 ± 1.7 | 93.5 ± 3.6 | 76.7 ± 2.5 | 65.2 |
| Claude 3.5 Sonnet | 37.5 ± 1.8 | 50.0 ± 7.5 | 45.0 ± 6.7 | 70.5 ± 4.3 | 97.0 ± 1.0 | 10.0 ± 1.1 | 97.5 ± 0.9 | 91.5 ± 4.3 | 82.0 ± 3.3 | 64.5 |
| Mistral 123B | 39.4 ± 6.2 | 44.8 ± 4.0 | 48.5 ± 5.2 | 57.0 ± 5.4 | 97.5 ± 0.5 | 14.8 ± 2.8 | 88.5 ± 0.9 | 92.5 ± 0.9 | 62.0 ± 2.8 | 60.5 |
| GPT-4v | 41.2 ± 7.2 | 67.5 ± 6.5 | 34.0 ± 6.0 | 62.0 ± 4.0 | 95.8 ± 1.3 | 3.7 ± 1.0 | 87.5 ± 3.6 | 79.0 ± 2.2 | 45.3 ± 2.5 | 57.3 |
| Llama 3 70B | 28.1 ± 9.2 | 29.2 ± 2.4 | 38.5 ± 3.8 | 42.5 ± 0.9 | 71.8 ± 3.8 | 1.5 ± 1.0 | 52.5 ± 5.7 | 62.5 ± 5.4 | 34.0 ± 5.9 | 40.0 |
| Mixtral 8x22B | 26.9 ± 3.2 | 24.5 ± 5.2 | 31.0 ± 5.9 | 36.0 ± 5.1 | 73.5 ± 3.6 | 1.5 ± 2.1 | 55.0 ± 3.3 | 68.0 ± 6.8 | 17.3 ± 2.5 | 37.0 |
| Yi 1.5 34B | 20.6 ± 6.0 | 28.0 ± 2.1 | 34.5 ± 4.6 | 33.5 ± 3.6 | 58.2 ± 4.3 | 0.7 ± 1.0 | 35.5 ± 8.4 | 41.5 ± 0.9 | 24.0 ± 0.0 | 30.7 |
| Yi 1.5 9B | 21.2 ± 1.2 | 23.8 ± 2.7 | 30.0 ± 3.2 | 24.5 ± 5.5 | 48.2 ± 4.0 | 0.7 ± 1.0 | 36.5 ± 4.6 | 51.5 ± 8.9 | 24.7 ± 8.4 | 29.0 |
| Llama 3 8B | 14.4 ± 1.1 | 25.8 ± 5.1 | 26.0 ± 4.2 | 27.0 ± 1.7 | 46.0 ± 3.5 | 0.0 ± 0.0 | 27.5 ± 7.1 | 30.0 ± 7.3 | 26.0 ± 6.5 | 24.7 |
| Mixtral 8x7B | 19.4 ± 4.5 | 10.5 ± 0.9 | 29.5 ± 5.7 | 27.5 | 39.0 ± 5.4 | 0.0 ± 0.0 | 22.5 ± 3.8 | 43.5 ± 3.3 | 22.7 ± 4.1 | 23.8 |
| Chance | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 | 0.0 | 25.0 | 25.0 | 33.0 ± 5.3 | 23.1 |
Humans perform well, achieving over 80% accuracy on the majority of the multiple-choice QA tasks with both text-only and multimodal presentations. Humans perform better on the textual presentations of tasks like MRT, MPFB, and JLO than on their visual counterparts due to the simplified nature of the text-only implementations.
Ecological compatibility of SPACE with frontier models. Our results indicate that current frontier models lack spatial cognition. Alternatively, these results could stem from models not understanding the inputs presented to them (i.e., the inputs are not ecologically compatible with the models). We study this in Appendix A.1 and demonstrate that this is not the case: models understand the inputs correctly and perform non-spatial cognition tasks well, yet fail to demonstrate spatial cognition.
## 5 DISCUSSION
We presented SPACE, a benchmark for spatial cognition in frontier models. Our evaluation of contemporary models raises intriguing questions and opportunities for further investigation. First, our results underscore that frontier models exhibit a fundamentally different form of intelligence from what has been observed (and studied) in humans and animals. No biological intelligence we have encountered exhibits such advanced skill in some aspects of higher cognition (Trinh et al., 2024) while failing so profoundly in basic spatial cognition. This is particularly intriguing because in biological intelligence, spatial cognition is considered a prerequisite for higher cognition, and breakdowns in spatial cognition are diagnostic of higher-level disorders (Cappa, 2008; Possin, 2010; Verghese et al., 2017; Cammisuli et al., 2024). From a scientific standpoint, the constellation of traits exhibited by frontier models is fascinating and may inspire a new cognitive science (Simon, 2019). As a precautionary stance, we can refrain from drawing analogies based on experience with biological cognition. (E.g., 'a model won the Mathematics Olympiad, therefore it possesses a cognitive repertoire comparable to a human Olympiad winner's and could be expected to have comparable skill in other domains'.)
Could deficiencies in spatial cognition be causally linked to some of the puzzling breakdowns exhibited by contemporary frontier models in higher-level tasks? What is the roadmap for bringing spatial cognition in frontier models up to the level of animal cognition (and perhaps beyond)? Is this a prerequisite for attaining some of the more far-reaching aspirations of contemporary artificial intelligence research? Does embodiment play a role, as it has in prior forms of intelligence (Smith & Gasser, 2005; Savva et al., 2019)? Or will artificial cognition continue to develop along a fundamentally different ontogenetic path? We expect further advances to increase the robustness and generality of frontier models, and to continue to broaden our understanding of the nature of intelligence.
## REFERENCES
- Marah I Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat S. Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv:2404.14219 , 2024.
- Gary L Allen, Kathleen C Kirasic, Shannon H Dobson, Richard G Long, and Sharon Beck. Predicting environmental learning from spatial abilities: An indirect route. Intelligence , 22(3), 1996.
- David I Anderson, Joseph J Campos, David C Witherington, Audun Dahl, Monica Rivera, Minxuan He, Ichiro Uchiyama, and Marianne Barbu-Roth. The role of locomotion in psychological development. Frontiers in Psychology , 4, 2013.
- Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and Amir R. Zamir. On evaluation of embodied navigation agents. arXiv:1807.06757 , 2018.
- Anthropic. Introducing Claude 3.5 Sonnet, 2024. https://www.anthropic.com/news/claude-3-5-sonnet.
- Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual question answering. In ICCV , 2015.
- Leopold Aschenbrenner. Situational awareness: The decade ahead, 2024. https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf.
- Anas Awadalla, Le Xue, Oscar Lo, Manli Shu, Hannah Lee, Etash Kumar Guha, Matt Jordan, Sheng Shen, Mohamed Awadalla, Silvio Savarese, Caiming Xiong, Ran Xu, Yejin Choi, and Ludwig Schmidt. MINT-1T: Scaling open-source multimodal data by 10x: A multimodal dataset with one trillion tokens. arXiv:2406.11271 , 2024.
- Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J Chadwick, Thomas Degris, Joseph Modayil, et al. Vector-based navigation using grid-like representations in artificial agents. Nature , 557(7705), 2018.
- Arthur Lester Benton. Contributions to neuropsychological assessment: A clinical manual . Oxford University Press, USA, 1994.
- Mark Blades and Christopher Spencer. The development of children's ability to use spatial representations. Advances in child development and behavior , 25, 1994.
- Nicole Blaser, G Dell'Omo, G Dell'Ariccia, David Paul Wolfer, and H-P Lipp. Testing cognitive navigation in unknown territories: homing pigeons choose different targets. Journal of Experimental Biology , 216(16), 2013.
- Tyler Bonnen, Stephanie Fu, Yutong Bai, Thomas O'Connell, Yoni Friedman, Nancy Kanwisher, Josh Tenenbaum, and Alexei Efros. Evaluating multiview object consistency in humans and image models. In NeurIPS , 2025.
- Youcef Bouchekioua, Aaron P Blaisdell, Yutaka Kosaki, Iku Tsutsui-Kimura, Paul Craddock, Masaru Mimura, and Shigeru Watanabe. Spatial inference without a cognitive map: the role of higher-order path integration. Biological Reviews , 96(1), 2021.
- R Brickenkamp and E Zillmer. Test d2: Concentration-endurance test. Göttingen, Germany: CJ Hogrefe, 1998.
- Rodney Brooks. Rodney Brooks' three laws of artificial intelligence, 2024. https://rodneybrooks.com/rodney-brooks-three-laws-of-artificial-intelligence/.
- Davide Maria Cammisuli, Gloria Marchesi, Virginia Bellocchio, Edoardo Nicolò Aiello, Barbara Poletti, Federico Verde, Vincenzo Silani, Nicola Ticozzi, Stefano Zago, Teresa Difonzo, et al. Behavioral disorders of spatial cognition in patients with mild cognitive impairment due to Alzheimer's disease (the BDSC-MCI project): Ecological validity of the Corsi learning suvra-span test. Journal of Personalized Medicine, 14(5), 2024.
- SF Cappa. Cognitive neurology: a clinical textbook . Oxford University Press, 2008.
- Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Danny Driess, Pete Florence, Dorsa Sadigh, Leonidas J. Guibas, and Fei Xia. Spatialvlm: Endowing vision-language models with spatial reasoning capabilities. In CVPR , 2024a.
- Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. arXiv:2311.12793 , 2023.
- Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, and Feng Zhao. Are we on the right way for evaluating large visionlanguage models? arXiv:2403.20330 , 2024b.
- Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv:2107.03374, 2021.
- Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv:1504.00325, 2015.
- An-Chieh Cheng, Hongxu Yin, Yang Fu, Qiushan Guo, Ruihan Yang, Jan Kautz, Xiaolong Wang, and Sifei Liu. Spatialrgpt: Grounded spatial reasoning in vision language model. arXiv:2406.01584 , 2024.
- François Chollet. On the measure of intelligence. arXiv:1911.01547, 2019.
- Michiel HG Claessen, Ineke JM Van Der Ham, and Martine JE Van Zandvoort. Computerization of the standard corsi block-tapping task affects its underlying cognitive concepts: a pilot study. Applied Neuropsychology: Adult , 22(3), 2015.
- Philip Michael Corsi. Human memory and the medial temporal region of the brain. PhD thesis, McGill University, 1972. https://escholarship.mcgill.ca/concern/theses/05741s554.
- Christopher J. Cueva and Xue-Xin Wei. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization. In ICLR , 2018.
- Edwin S Dalmaijer, Stefan Van der Stigchel, Tanja CW Nijboer, Tim HW Cornelissen, and Masud Husain. Cancellationtools: All-in-one software for administration and analysis of cancellation tasks. Behavior Research Methods , 47, 2015.
- Dawson-Haggerty et al. trimesh, 2019. https://trimesh.org/.
- Miguel de Guinea, Alejandro Estrada, K Anne-Isola Nekaris, and Sarie Van Belle. Cognitive maps in the wild: revealing the use of metric information in black howler monkey route navigation. Journal of Experimental Biology , 224(15), 2021.
- Sergio Della Sala, Marcella Laiacona, Hans Spinnler, and Chiara Ubezio. A cancellation test: its reliability in assessing attentional deficits in Alzheimer's disease. Psychological Medicine , 22(4), 1992.
- Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR , 2009.
- Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv:2407.21783 , 2024.
- Paul Ashby Foltz. Adult performance on Piaget's water level task and its relation to spatial orientation and visualization. Master's thesis, University of Richmond, 1978. https://scholarship.richmond.edu/cgi/viewcontent.cgi?article=1424&context=masters-theses.
- Nigel Foreman, Denny Foreman, Alison Cummings, and Sandra Owens. Locomotion, active choice, and spatial memory in children. The Journal of General Psychology , 117(2), 1990.
- John W French, Ruth B Ekstrom, and Leighton A Price. Manual for kit of reference tests for cognitive factors. 1963.
- Andrea Frick and Wenke Möhring. A matter of balance: Motor control is related to children's spatial and proportional reasoning skills. Frontiers in Psychology, 6, 2016.
- Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. MME: A comprehensive evaluation benchmark for multimodal large language models. arXiv:2306.13394 , 2023.
- Chaoyou Fu, Yuhan Dai, Yondong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv:2405.21075 , 2024a.
- Xingyu Fu, Yushi Hu, Bangzheng Li, Yu Feng, Haoyu Wang, Xudong Lin, Dan Roth, Noah A. Smith, Wei-Chiu Ma, and Ranjay Krishna. BLINK: Multimodal large language models can see but not perceive. arXiv:2404.12390 , 2024b.
- Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. In NeurIPS , 2023.
- Maya Geva-Sagiv, Liora Las, Yossi Yovel, and Nachum Ulanovsky. Spatial cognition in bats and rats: from sensory acquisition to multiscale maps and navigation. Nature Reviews Neuroscience , 16(2), 2015.
- Sabine Gillner and Hanspeter A Mallot. Navigation and acquisition of spatial knowledge in a virtual maze. Journal of Cognitive Neuroscience , 10(4), 1998.
- Yash Goyal, Tejas Khot, Aishwarya Agrawal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. International Journal of Computer Vision , 127(4), 2019.
- Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, et al. Hallusionbench: an advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In CVPR , 2024.
- Mary Hegarty and David Waller. A dissociation between mental rotation and perspective-taking spatial abilities. Intelligence , 32(2), 2004.
- Mary Hegarty, Daniel R Montello, Anthony E Richardson, Toru Ishikawa, and Kristin Lovelace. Spatial abilities at different scales: Individual differences in aptitude-test performance and spatiallayout learning. Intelligence , 34(2), 2006.
- Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In ICLR , 2021a.
- Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. In NeurIPS Datasets and Benchmarks , 2021b.
- Anna A Ivanova, Aalok Sathe, Benjamin Lipkin, Unnathi Kumar, Setayesh Radkani, Thomas H Clark, Carina Kauf, Jennifer Hu, RT Pramod, Gabriel Grand, et al. Elements of world knowledge (ewok): A cognition-inspired framework for evaluating basic world knowledge in language models. arXiv:2405.09605 , 2024.
- Petra Jansen. The dissociation of small-and large-scale spatial abilities in school-age children. Perceptual and Motor Skills , 109(2), 2009.
- Petra Jansen and Martin Heil. The relation between motor development and mental rotation ability in 5-to 6-year-old children. International Journal of Developmental Science , 4(1), 2010.
- Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv:2401.04088 , 2024.
- Ashley N Kalina and Suzie A Walgrave. Normative evaluation of a letter cancellation instrument for the assessment of sustained attention: A construct validation study. The Journal of Undergraduate Research , 2(1), 2004.
- Maria Kozhevnikov and Mary Hegarty. A dissociation between object manipulation spatial ability and spatial orientation ability. Memory & Cognition , 29, 2001.
- Maria Kozhevnikov, Michael A Motes, and Mary Hegarty. Spatial visualization in physics problem solving. Cognitive Science , 31(4), 2007.
- Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In SOSP , 2023.
- Emilie Lacroix, Stéphanie Cornet, Naima Deggouj, and Martin Gareth Edwards. The visuo-spatial abilities diagnosis (VSAD) test: Evaluating the potential cognitive difficulties of children with vestibular impairment through a new tablet-based computerized test battery. Behavior Research Methods, 53, 2021.
- Barbara Landau. Spatial cognition. In Encyclopedia of the Human Brain . Elsevier, 2002.
- Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, and Victor Sanh. OBELICS: An open web-scale filtered dataset of interleaved image-text documents. In NeurIPS, 2023.
- Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv:2407.07895 , 2024a.
- Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In CVPR , 2024b.
- Rensis Likert and WH Quasha. Minnesota Paper Form Board Test . Psychological Corporation, 1941.
- Rensis Likert and William H Quasha. Revised Minnesota paper form board test . Psychological Corporation, 1969.
- Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS , 2023a.
- Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: Is your multi-modal model an all-around player? arXiv:2307.06281 , 2023b.
- Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, KaiWei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In ICLR , 2024.
- Arjun Majumdar, Anurag Ajay, Xiaohan Zhang, Pranav Putta, Sriram Yenamandra, Mikael Henaff, Sneha Silwal, Paul Mcvay, Oleksandr Maksymets, Sergio Arnaud, et al. Openeqa: Embodied question answering in the era of foundation models. In CVPR , 2024.
- Hanspeter A Mallot. From Geometry to Behavior: An Introduction to Spatial Cognition . MIT Press, 2024.
- Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. OK-VQA: A visual question answering benchmark requiring external knowledge. In CVPR , 2019.
- Jérôme Marquet-Doléac, Régis Soppelsa, and Jean-Michel Albaret. Laby 5-12: Test des labyrinthes. Hogrefe, 2010.
- John C Marshall and Gereon R Fink. Spatial cognition: Where we were and where we are. Neuroimage , 14(1), 2001.
- Chiara Meneghetti, Clara Zancada-Men´ endez, Patricia Sampedro-Piquero, Laudino Lopez, Massimiliano Martinelli, Lucia Ronconi, and Barbara Rossi. Mental representations derived from navigation: The role of visuo-spatial abilities and working memory. Learning and Individual Differences , 49, 2016.
- Chiara Meneghetti, Laura Miola, Enrico Toffalini, Massimiliano Pastore, and Francesca Pazzaglia. Learning from navigation, and tasks assessing its accuracy: The role of visuospatial abilities and wayfinding inclinations. Journal of Environmental Psychology , 75, 2021.
- Chiara Meneghetti, Laura Miola, Tommaso Feraco, and Veronica Muffato. Individual differences in navigation: An introductory overview. In Prime Archives in Psychology . Vide Leaf, 2nd edition, 2022.
- Emil W Menzel. Chimpanzee spatial memory organization. Science , 182(4115), 1973.
- Mistral AI team. Large enough, 2024a. https://mistral.ai/news/mistral-large-2407/.
- Mistral AI team. Announcing Pixtral 12B, 2024b. https://mistral.ai/news/pixtral-12b/.
- Ida Momennejad, Hosein Hasanbeig, Felipe Vieira Frujeri, Hiteshi Sharma, Robert Osazuwa Ness, Nebojsa Jojic, Hamid Palangi, and Jonathan Larson. Evaluating cognitive maps in large language models with cogeval: No emergent planning. In NeurIPS , 2023.
- Arsenii Moskvichev, Victor Vikram Odouard, and Melanie Mitchell. The conceptarc benchmark: Evaluating understanding and generalization in the ARC domain. Transactions on Machine Learning Research , 2023.
- Nora S. Newcombe. Making space: The development of spatial representation and reasoning . MIT Press, 2000.
- Nora S. Newcombe. Picture this: Increasing math and science learning by improving spatial thinking. American Educator , 34(2), 2010.
- Nora S. Newcombe. Spatial Cognition. In Open Encyclopedia of Cognitive Science . MIT Press, 2024.
- Rahel Noser and Richard W Byrne. Mental maps in chacma baboons (papio ursinus): using intergroup encounters as a natural experiment. Animal Cognition , 10, 2007.
- John O'Keefe and Lynn Nadel. The hippocampus as a cognitive map . Oxford University Press, 1978.
- OpenAI. GPT-4 technical report. arXiv:2303.08774 , 2023.
- OpenAI. Hello GPT-4o, 2024. https://openai.com/index/hello-gpt-4o/.
- Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS , 2022.
- Anja Pahor, Randy E Mester, Audrey A Carrillo, Eunice Ghil, Jason F Reimer, Susanne M Jaeggi, and Aaron R Seitz. Ucancellation: A new mobile measure of selective attention and concentration. Behavior Research Methods , 54(5), 2022.
- Viorica Patraucean, Lucas Smaira, Ankush Gupta, Adri` a Recasens, Larisa Markeeva, Dylan Banarse, Skanda Koppula, Joseph Heyward, Mateusz Malinowski, Yi Yang, et al. Perception test: A diagnostic benchmark for multimodal video models. In NeurIPS , 2023.
- HL Payne, GF Lynch, and Dmitriy Aronov. Neural representations of space in the hippocampus of a food-caching bird. Science , 373(6552), 2021.
- Francesca Pazzaglia and Holly A Taylor. Perspective, instruction, and cognitive style in spatial representation of a virtual environment. Spatial Cognition and Computation , 7(4), 2007.
- Michael Peters, Bruno Laeng, Kerry Latham, Marla Jackson, Raghad Zaiyouna, and Chris Richardson. A redrawn vandenberg and kuse mental rotations test-different versions and factors that affect performance. Brain and Cognition , 28(1), 1995.
- Roger Paul Peters. Wolf-sign: Scents And Space In A Wide-ranging Predator. University of Michigan, 1974.
- Jean Piaget, Baerbel Inhelder, F. J. Langdon, and J. L. Lunzer. The child's conception of space. British Journal of Educational Studies , 5(2), 1957.
- Leila M Porter and Paul A Garber. Foraging and spatial memory in wild weddell's saddleback tamarins (saguinus fuscicollis weddelli) when moving between distant and out-of-sight goals. International Journal of Primatology , 34, 2013.
- Katherine L Possin. Visual spatial cognition in neurodegenerative disease. Neurocase , 16(6), 2010.
- Andrea Presotto, Richard Fayrer-Hosken, Caitlin Curry, and Marguerite Madden. Spatial mapping shows that some african elephants use cognitive maps to navigate the core but not the periphery of their home ranges. Animal Cognition , 22, 2019.
- Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In NeurIPS , 2023.
- Ahad Rana. Common Crawl - building an open web-scale crawl using Hadoop, 2010. https://www.slideshare.net/hadoopusergroup/common-crawlpresentation.
- Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy P. Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv:2403.05530, 2024.
- Anthony E Richardson, Daniel R Montello, and Mary Hegarty. Spatial knowledge acquisition from maps and from navigation in real and virtual environments. Memory & Cognition , 27(4), 1999.
- Barbara J Sahakian, Robin G Morris, John L Evenden, Andrew Heald, Raymond Levy, Michael Philpot, and Trevor W Robbins. A comparative study of visuospatial memory and learning in alzheimer-type dementia and parkinson's disease. Brain , 111(3), 1988.
- Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an adversarial winograd schema challenge at scale. Commun. ACM , 64(9), 2021.
- Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A platform for embodied AI research. In ICCV , 2019.
- John T. Serences and Sabine Kastner. A multi-level account of selective attention. The Oxford Handbook of Attention , 2014.
- Roger N Shepard and Jacqueline Metzler. Mental rotation of three-dimensional objects. Science , 171(3972), 1971.
- Herbert A. Simon. The Sciences of the Artificial. MIT Press, 3rd edition, 2019.
- Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artificial Life , 11(1-2), 2005.
- Ben Sorscher, Gabriel C Mel, Samuel A Ocko, Lisa M Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. Neuron , 111(1), 2023.
- Robert J Spencer, Carrington R Wendell, Paul P Giggey, Stephen L Seliger, Leslie I Katzel, and Shari R Waldstein. Judgment of line orientation: an examination of eight short forms. Journal of Clinical and Experimental Neuropsychology , 35(2), 2013.
- John Stilley. mazelib: A Python API for creating and solving mazes, 2014. https://github.com/john-science/mazelib.
- Together Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023. https://github.com/togethercomputer/RedPajama-Data .
- Sivan Toledo, David Shohami, Ingo Schiffner, Emmanuel Lourie, Yotam Orchan, Yoav Bartan, and Ran Nathan. Cognitive map-based navigation in wild bats revealed by a new high-throughput tracking system. Science , 369(6500), 2020.
- Edward C Tolman. Cognitive maps in rats and men. Psychological Review , 55(4), 1948.
- Luca Tommasi, Cinzia Chiandetti, Tommaso Pecchia, Valeria Anna Sovrano, and Giorgio Vallortigara. From natural geometry to spatial cognition. Neuroscience & Biobehavioral Reviews , 36(2), 2012.
- Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. Eyes wide shut? Exploring the visual shortcomings of multimodal LLMs. In CVPR , 2024.
- Trieu Trinh, Yuhuai Wu, Quoc Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature , 2024.
- Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. PlanBench: An extensible benchmark for evaluating large language models on planning and reasoning about change. In NeurIPS , 2024.
- Steven G Vandenberg and Allan R Kuse. Mental rotations, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills , 47(2), 1978.
- Marina Vasilyeva and Stella F Lourenco. Development of spatial cognition. Wiley Interdisciplinary Reviews: Cognitive Science , 3(3), 2012.
- Joe Verghese, Richard Lipton, and Emmeline Ayers. Spatial navigation and risk of cognitive impairment: A prospective cohort study. Alzheimer's & Dementia , 13(9), 2017.
- David Ed Waller and Lynn Ed Nadel. Handbook of Spatial Cognition . American Psychological Association, 2013.
- Lu Wang, Allan S Cohen, and Martha Carr. Spatial ability at two scales of representation: A metaanalysis. Learning and Individual Differences , 36, 2014.
- David Wechsler. WMS-IV: Wechsler Memory Scale . Pearson, 2009.
- Steven M Weisberg, Victor R Schinazi, Nora S Newcombe, Thomas F Shipley, and Russell A Epstein. Variations in cognitive maps: understanding individual differences in navigation. Journal of Experimental Psychology: Learning, Memory, and Cognition , 40(3), 2014.
- Joseph F Welklin, Benjamin R Sonnenberg, Carrie L Branch, Virginia K Heinen, Angela M Pitera, Lauren M Benedict, Lauren E Whitenack, Eli S Bridge, and Vladimir V Pravosudov. Spatial cognitive ability is associated with longevity in food-caching chickadees. Science , 385(6713), 2024.
- Erik Wijmans, Manolis Savva, Irfan Essa, Stefan Lee, Ari S. Morcos, and Dhruv Batra. Emergence of maps in the memories of blind navigation agents. In ICLR , 2023.
- Michele Andrisin Wittig and Mary J Allen. Measurement of adult performance on piaget's water horizontality task. Intelligence , 8(4), 1984.
- Dêverton Plácido Xavier, Filipa Abreu, Antonio Souto, and Nicola Schiel. Choosing the best way: how wild common marmosets travel to efficiently exploit resources. Animal Cognition, 27(1), 2024.
- Jiayun Xu, Mauricio Girardi-Schappo, Jean-Claude Beique, André Longtin, and Leonard Maler. Shortcutting from self-motion signals reveals a cognitive map in mice. eLife, 13, 2024.
- Yutaro Yamada, Yihan Bao, Andrew Kyle Lampinen, Jungo Kasai, and Ilker Yildirim. Evaluating spatial understanding of large language models. Transactions on Machine Learning Research , 2024.
- Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, et al. Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask AGI. In ICML , 2024.
- Eunice Yiu, Maan Qraitem, Charlie Wong, Anisa Noor Majhi, Yutong Bai, Shiry Ginosar, Alison Gopnik, and Kate Saenko. Kiva: Kid-inspired visual analogies for testing large multimodal models. arXiv:2407.17773 , 2024.
- Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. Yi: Open foundation models by 01.ai. arXiv:2403.04652 , 2024.
- Christopher J Young, Susan C Levine, and Kelly S Mix. The connection between spatial and mathematical ability across development. Frontiers in Psychology , 9, 2018.
- Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. In ICML , 2024.
- Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. MMMU: A massive multi-discipline multimodal understanding and reasoning benchmark for expert AGI. In CVPR , 2024.
## A APPENDIX
## A.1 ECOLOGICAL COMPATIBILITY OF MULTIMODAL INPUTS WITH FRONTIER MODELS
Our results in Section 4 suggest that state-of-the-art frontier models fail at the spatial cognition tasks presented in SPACE. One explanation is that these models lack spatial cognition. Alternatively, the failures could be due to models not comprehending the inputs presented to them (i.e., the inputs are not ecologically compatible with the models). To rule out this alternative possibility, we design additional tests, unrelated to spatial cognition, on the same vision / text inputs used in our benchmark. If models succeed on these tests, we can infer that the inputs are ecologically compatible, since the models can understand and perform tasks using these inputs. In each test, we pose a series of multiple-choice questions evaluating a model's fine-grained understanding of the inputs. We now describe these additional tests.
## Test 1: Given discrete map image / text inputs (see Figure 3), answer the following questions:
- Q1. What is the size of the grid (H x W)?
- Q2. What is your current (x, y) location?
- Q3. What are the (x, y) locations of all navigable cells? Include cells containing landmarks and your current position.
- Q4. What are the (x, y) locations of all obstacle cells?
- Q5. What are the landmarks visible in the image / array?
- Q6. What are the locations of the landmarks visible in the image / array?
## Test 2: Given an ego image (see Figure 3), answer the following questions:
- Q1. What is the name of the landmark visible in the image?
- Q2. Is the landmark < name > in the left half of the image?
- Q3. Is the landmark < name > in the right half of the image?
- Q4. Is the landmark < name > in the central section of the image?
## Test 3: Given two consecutive ego images from a walkthrough (see Figure 4), answer the following question:
- Q1. What is the action taken to go from image 1 to image 2 (move forward, turn left, turn right, wait/do nothing)?
## Test 4: Given a perspective taking image / text array (see Figures 9 and 10), answer the following questions:
- Q1. How many objects / non-zero locations are present in the image / array?
- Q2. What objects / non-zero locations are present in the image / array?
- Q3. Is < object / location > to the left of < object / location > in the image / array?
- Q4. Is < object / location > above < object / location > in the image / array?
## Test 5: Given water level test images (see Figure 11), answer the following questions:
- Q1. Is there water in the water container?
- Q2. From image 1 to image 2, is the water container rotated to the left, right or not rotated at all?
- Q3. From image 1 to image 2, what is the absolute rotation angle of the water container (in degrees)?
## Test 6: Given a grid of icons / characters from selective attention (see Figures 16 and 17), answer the following questions:
- Q1. How many total objects / characters are present in the image / grid (including repetitions)?
- Q2. What is the size of the grid of objects / characters (width x height)?
- Q3. How many unique objects / characters are present in the grid (ignore repetitions)?
Results discussion: We evaluate GPT-4o and GPT-4v on these tests. The results are shown in Tables 3 and 4. Both models largely understand DM image and text inputs (test 1). However, they fall short in calculating the grid size for DM images (Q1). GPT-4o understands egocentric images,
Table 3: Measuring ecological compatibility of multimodal inputs with frontier models (part 1)

## Multimodal evaluation

| | Test 1 | Test 1 | Test 1 | Test 1 | Test 1 | Test 1 | Test 1 | Test 2 | Test 2 | Test 2 | Test 2 | Test 2 | Test 3 |
|--------|--------|------|------|------|-------|------|------|----------|----------|----------|----------|----------|----------|
| Model | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Avg. | Q1 | Q2 | Q3 | Q4 | Avg. | Q1 |
| GPT-4o | 30.4 | 78.2 | 89.4 | 87.8 | 100.0 | 93.6 | 79.9 | 100.0 | 84.0 | 95.5 | 83.5 | 92.6 | 59.3 |
| GPT-4v | 55.8 | 86.2 | 89.8 | 91.2 | 99.8 | 79.2 | 83.6 | 98.0 | 45.0 | 36.5 | 56.5 | 66.8 | 48.0 |
## Text-only evaluation
| | Test 1 | Test 1 | Test 1 | Test 1 | Test 1 | Test 1 | Test 1 | Test 2 | Test 2 | Test 2 | Test 2 | Test 2 | Test 3 |
|--------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| Model | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Avg. | Q1 | Q2 | Q3 | Q4 | Avg. | Q1 |
| GPT-4o | 100.0 | 100.0 | 77.5 | 85.6 | 100.0 | 82.8 | 90.9 | - | - | - | - | - | - |
| GPT-4v | 100.0 | 100.0 | 96.8 | 91.0 | 100.0 | 77.6 | 94.2 | - | - | - | - | - | - |
Table 4: Measuring ecological compatibility of multimodal inputs with frontier models (part 2)

## Multimodal evaluation

| | Test 4 | Test 4 | Test 4 | Test 4 | Test 4 | Test 5 | Test 5 | Test 5 | Test 5 | Test 6 | Test 6 | Test 6 | Test 6 |
|---------|------|------|------|------|------|----------|----------|----------|----------|--------|--------|--------|--------|
| Model | Q1 | Q2 | Q3 | Q4 | Avg. | Q1 | Q2 | Q3 | Avg. | Q1 | Q2 | Q3 | Avg. |
| GPT-4o | 99.6 | 99.6 | 89.8 | 87.7 | 92.4 | 100.0 | 73.9 | 38.7 | 64.0 | 83.0 | 90.5 | 58.2 | 77.2 |
| GPT-4v | 78.3 | 87.4 | 78.4 | 76.0 | 79.0 | 100.0 | 56.3 | 32.4 | 55.4 | 74.5 | 88.0 | 35.8 | 66.0 |
## Text-only evaluation
| | Test 4 | Test 4 | Test 4 | Test 4 | Test 4 | Test 5 | Test 5 | Test 5 | Test 5 | Test 6 | Test 6 | Test 6 | Test 6 |
|--------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| Model | Q1 | Q2 | Q3 | Q4 | Avg. | Q1 | Q2 | Q3 | Avg. | Q1 | Q2 | Q3 | Avg. |
| GPT-4o | 97.6 | 99.6 | 99.5 | 94.2 | 97.4 | - | - | - | - | 100.0 | 100.0 | 99.5 | 99.8 |
| GPT-4v | 96.8 | 98.1 | 92.2 | 76.8 | 90.8 | - | - | - | - | 99.5 | 100.0 | 94.8 | 98.1 |
i.e., recognizes and localizes landmarks in egocentric images (test 2). GPT-4v recognizes landmarks well (Q1) but performs poorly at localization (Q2, Q3, and Q4). Both GPT-4o and GPT-4v perform poorly on action estimation (test 3) and on estimating water-container rotations (Q2 and Q3 in test 5). GPT-4o excels at understanding the perspective taking inputs in both multimodal and text-only presentations (test 4). GPT-4v also performs well on test 4, but is worse with multimodal inputs than with text-only inputs. Finally, both GPT-4o and GPT-4v perform adequately at counting objects (Q1 in test 6) and determining grid sizes (Q2 in test 6) for multimodal selective attention inputs. However, they struggle to count the number of unique objects / characters (Q3 in test 6). Both GPT-4o and GPT-4v excel at the text-only presentation of test 6.
Our results indicate that state-of-the-art models can understand the multimodal and text-only inputs provided in our benchmark. They perform well on most of the tests, with specific shortcomings (e.g., localizing landmarks in ego images for GPT-4v, understanding rotations of water containers, and counting unique characters / objects in a grid). Importantly, the average results on each test are much higher than on the corresponding SPACE tasks. For example, even though GPT-4o and GPT-4v understand DM text inputs nearly perfectly (test 1), they perform poorly on the DM text versions of the large-scale spatial cognition tasks (see Table 1). Similarly, even though GPT-4o understands the perspective taking inputs nearly perfectly in both text-only and multimodal presentations, it performs poorly on the perspective taking task in SPACE (see Table 2). Therefore, the failure of frontier models on SPACE is most likely due to their lack of spatial cognition, not an inability to understand the inputs presented to them.
## A.2 SMALL-SCALE SPATIAL COGNITION: ADDITIONAL DETAILS
We described the small-scale spatial cognition tasks from our benchmark in Section 3.2. Here, we provide additional details about the historical context and motivations behind these tasks.
Mental rotation test (MRT). This was introduced by Vandenberg & Kuse (1978) as a test of spatial visualization. The original MRT contained 20 items, where each item consisted of a criterion figure, two correct alternatives, and two distractors (Vandenberg & Kuse, 1978). The criterion figure is a perspective rendering of a 3D criterion shape from Shepard & Metzler (1971). The correct alternatives are rotated versions of the criterion shape, where the rotation is applied in the 2D image space
on the criterion figure, or along the vertical axis in 3D for the criterion shape. The distractors are rotated mirror-images of the criterion shape or renderings of other criterion shapes. The goal was to identify the two correct alternatives from the four choices. We implement a version of MRT with one correct choice and three distractors, and incorporate rotations along multiple axes (Peters et al., 1995).
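To make the construction concrete, here is a minimal sketch of item generation along the lines described above. The point-cloud stand-in for the Shepard-Metzler figures, the angle choices, and the function names are ours for illustration; the benchmark's actual items are rendered 3D meshes.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def rotate(points: np.ndarray, angles_deg) -> np.ndarray:
    """Rotate an (N, 3) point cloud by Euler angles about the x, y, z axes."""
    return points @ Rotation.from_euler("xyz", angles_deg, degrees=True).as_matrix().T

def mirror(points: np.ndarray) -> np.ndarray:
    """Reflect across the y-z plane to produce a mirror-image distractor."""
    return points * np.array([-1.0, 1.0, 1.0])

# Toy stand-in for a 3D criterion shape.
criterion = rng.integers(0, 4, size=(10, 3)).astype(float)

# One correct alternative: the criterion rotated along multiple axes.
correct = rotate(criterion, rng.uniform(0, 360, size=3))

# Three distractors: rotated mirror images of the criterion shape.
distractors = [rotate(mirror(criterion), rng.uniform(0, 360, size=3)) for _ in range(3)]
```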
Perspective taking test (PTT). This was introduced by Kozhevnikov & Hegarty (2001) as a test of spatial orientation. An arrangement of objects is shown on a piece of paper. A test participant is asked to take the perspective of standing next to an object (say, object A) facing another (say, object B), and is required to point to a third object (say, object C). This task has been used extensively in subsequent literature (Hegarty & Waller, 2004; Weisberg et al., 2014; Meneghetti et al., 2022). We implement this task by randomly sampling N icons of objects like cars, carrots, chairs, and grapes, placing them at random locations in an image (with no overlap between objects), and then randomly sampling three of the N objects as A, B, and C.
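Since the required answer is a pointing direction, the ground truth reduces to a signed angle between two vectors. A minimal sketch in 2D image coordinates (the function name and example positions are ours):

```python
import numpy as np

def pointing_angle(a, b, c) -> float:
    """Signed angle (degrees) from the facing direction (standing at A,
    facing B) to the target direction (A toward C); positive is CCW."""
    facing = np.subtract(b, a, dtype=float)
    target = np.subtract(c, a, dtype=float)
    ang = np.degrees(np.arctan2(target[1], target[0]) - np.arctan2(facing[1], facing[0]))
    return (ang + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)

# Standing at the car (A) facing the carrot (B), point to the chair (C).
print(pointing_angle((0, 0), (1, 0), (0, 1)))  # 90.0
```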
Water level test (WLT). This was introduced by Piaget et al. (1957) as a test of visuospatial perception. Originally, the test was designed to evaluate children's knowledge of the fact that the surface of water in a sealed bottle remains horizontal regardless of the bottle's orientation. Children were presented with bottles partially filled with colored water and asked to imagine the position of the water if the bottle were tilted. Children had to gesture, draw, or use cardboard cutouts to answer the question (Piaget et al., 1957; Foltz, 1978; Wittig & Allen, 1984). Performance on the water level test was found to be related to performance on spatial ability tests (Foltz, 1978; Wittig & Allen, 1984).
Judgement of Line Orientation test (JLO). This was introduced by Benton (1994) as a measure of visuospatial perception. The original implementation contained 30 samples presented in a flip-book style, where two lines are shown at the top of each page. The goal is to determine the angle between the two lines by comparing them to an array of reference lines (i.e., pick the two reference lines that have the same angle between them as the lines at the top). There have been multiple variations of JLO that use subsets of the 30 questions for faster evaluation (Spencer et al., 2013). We recreate the JLO test suite by randomly sampling pairs of lines on a 2D plane with an angle between 0 and 180 degrees (in multiples of 18 degrees) and formulate it as multiple-choice QA.
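A sketch of this sampling scheme, under our reading that the included angle is drawn from multiples of 18 degrees (distractor choices would then be other such multiples; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_jlo_item():
    """Sample one JLO-style item: two line orientations whose included
    angle is a multiple of 18 degrees in [0, 180]."""
    base = rng.uniform(0.0, 180.0)       # orientation of the first line
    angle = 18.0 * rng.integers(0, 11)   # included angle: 0, 18, ..., 180
    return base, base + angle, angle

line1_deg, line2_deg, answer_deg = sample_jlo_item()
```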
Selective attention task (SAtt). This is designed to evaluate selective spatial attention (Serences & Kastner, 2014; Pahor et al., 2022). In particular, we use the widely used cancellation task, where the goal is to search for and mark out target stimuli embedded amidst distractors (Della Sala et al., 1992; Brickenkamp & Zillmer, 1998; Dalmaijer et al., 2015; Lacroix et al., 2021; Pahor et al., 2022; Kalina & Walgrave, 2004). The stimuli may be characters (Brickenkamp & Zillmer, 1998; Dalmaijer et al., 2015; Pahor et al., 2022; Della Sala et al., 1992; Kalina & Walgrave, 2004), pictures (Lacroix et al., 2021; Pahor et al., 2022), or icons (Lacroix et al., 2021). We implement this task with objects as the stimuli for visual evaluation and characters as stimuli for textual evaluation.
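For the textual presentation, the stimulus reduces to a character grid with known target positions. A minimal sketch (the grid size, symbols, and target count are illustrative, not the benchmark's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cancellation_grid(rows=6, cols=6, target="d", distractors="bpq", n_targets=8):
    """Build a cancellation-task grid: n_targets copies of the target symbol
    scattered among distractors; the answer key is the target's cells."""
    grid = rng.choice(list(distractors), size=(rows, cols))
    cells = rng.choice(rows * cols, size=n_targets, replace=False)
    grid[np.unravel_index(cells, (rows, cols))] = target
    answer = sorted(zip(*np.where(grid == target)))
    return grid, answer
```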
Maze completion task (MCT). This task was designed to evaluate spatial orientation, planning, and executive functioning (Lacroix et al., 2021). It was used as a neuropsychological test to assess executive function disorders in children (Marquet-Dol´ eac et al., 2010).
Corsi block-tapping task (CBTT). This is designed to assess visuospatial working memory and attention in healthy participants and patients with known or suspected brain damage (Corsi, 1972; Claessen et al., 2015). An examiner demonstrates a sequence of block-tapping movements on a board containing fixed blocks placed in pseudo-random positions. Participants are required to reproduce the same sequence (forward condition) or the inverted sequence (backward condition) of block-tapping movements to succeed. We evaluate frontier models on the forward condition since prior work has not found significant differences between task performance in the forward and backward conditions (Claessen et al., 2015).
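Scoring a CBTT item is a strict sequence comparison. A sketch covering both conditions, though only the forward one is used in our evaluation (the function name is ours):

```python
def cbtt_correct(demonstrated: list[int], response: list[int], backward: bool = False) -> bool:
    """One CBTT item: the response must reproduce the demonstrated block
    sequence exactly; the backward condition scores against its reverse."""
    target = list(reversed(demonstrated)) if backward else demonstrated
    return response == target

assert cbtt_correct([3, 7, 1], [3, 7, 1])                 # forward condition
assert cbtt_correct([3, 7, 1], [1, 7, 3], backward=True)  # backward condition
```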
Spatial addition task (SAdd). This was introduced in the fourth edition of the Wechsler Memory Scale, a suite of neuropsychological tests to evaluate memory function in individuals aged 16 to 90 (Wechsler, 2009). SAdd evaluates visuospatial storage and manipulation in working memory. A test participant is shown a grid with blue and red dots for five seconds. The participant is asked to remember the locations of the blue dots and ignore the red dots. The participant is then shown another such grid. The objective is to add the two grids together by following certain rules: if a grid location has a blue dot in exactly one of the grids, the result should be blue; if a grid location has blue dots in both grids, the result should be white.
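The rules amount to a symmetric difference for blue and an intersection for white. A minimal sketch over boolean grids of blue-dot locations (red dots are assumed to have been discarded already; names are ours):

```python
import numpy as np

def spatial_addition(grid1: np.ndarray, grid2: np.ndarray) -> np.ndarray:
    """Combine two boolean grids of blue-dot locations per the SAdd rules:
    blue in exactly one grid -> 'B'; blue in both -> 'W'; otherwise '.'."""
    out = np.full(grid1.shape, ".", dtype="<U1")
    out[grid1 ^ grid2] = "B"   # blue in exactly one grid
    out[grid1 & grid2] = "W"   # blue in both grids
    return out

g1 = np.array([[1, 0], [1, 0]], dtype=bool)
g2 = np.array([[1, 1], [0, 0]], dtype=bool)
print(spatial_addition(g1, g2))  # [['W' 'B'] ['B' '.']]
```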
Cambridge spatial working memory test (CSWM). This was designed to evaluate spatial working memory in human subjects (Sahakian et al., 1988). Multiple colored boxes are shown on a screen. A yellow 'treasure' is initially hidden in one of the boxes. The participant must select boxes one at a time to open them and search for the treasure. Once the treasure is found, another treasure is placed in one of the remaining boxes. The intention is for the participant to locate all the yellow treasures via a process of elimination.
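The optimal behaviour is a pure elimination strategy. A minimal sketch of that strategy, not of the paper's evaluation harness (the treasure sequence is hypothetical):

```python
def cswm_search(n_boxes: int, treasure_sequence: list[int]) -> list[int]:
    """Error-free elimination strategy for CSWM: never reopen a box that
    already yielded a treasure, since a new treasure never reappears there."""
    opened_log, exhausted = [], set()
    for treasure in treasure_sequence:
        for box in range(n_boxes):
            if box in exhausted:
                continue  # skipping avoids a between-search error
            opened_log.append(box)
            if box == treasure:
                exhausted.add(box)
                break
    return opened_log

# Four boxes; treasures hidden in boxes 2, 0, 3, then 1:
print(cswm_search(4, [2, 0, 3, 1]))  # [0, 1, 2, 0, 1, 3, 1]
```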
## A.3 SPACE EXAMPLES
We illustrate examples for each task from our proposed SPACE benchmark.
## Large-scale spatial cognition
- Egocentric image observations: Figure 4
- DM image observations²: Figures 5 and 6
## Small-scale spatial cognition
- MRT: Figures 7 and 8
- PTT: Figures 9 and 10
- WLT: Figure 11
- MPFB: Figures 12 and 13
- JLO: Figures 14 and 15
- MCT: Figures 22 and 23
- CBTT: Figures 18 and 19
- SAdd: Figures 20 and 21
- CSWM: Figures 24 and 25
## A.4 IMPLEMENTATION DETAILS
We provide additional implementation details about our experimental setup in this section.
3D environment generation: We create ten environment layouts based on prior work in cognitive science and artificial intelligence (Tolman, 1948; Gillner & Mallot, 1998; Richardson et al., 1999; Banino et al., 2018; Bouchekioua et al., 2021). Figure 3 shows bird's-eye view images of each layout. We populate each environment with visual landmarks in the form of paintings hanging on the walls, where the painting frames are 3D meshes and the paintings are images from ImageNet (Deng et al., 2009). To create a 3D environment for a given layout, we first randomly sample textures for walls, floors, and ceilings from a database of textures to create the base 3D mesh. Next, we randomly assign ImageNet images and 3D frame meshes to predefined landmark locations in the environment. We create the 3D environment using the Trimesh library and export it in glTF format (Dawson-Haggerty et al., 2019). We simulate the environment using the Habitat simulator (Savva et al., 2019). We create 3 environments per layout, for a total of 30 environments in our benchmark.
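As a rough illustration of the geometry-assembly step, here is a minimal sketch using the Trimesh API; the extents, translations, and file name are placeholders, and texturing and landmark placement are omitted:

```python
import trimesh

scene = trimesh.Scene()

# Floor: a thin box spanning the layout footprint.
floor = trimesh.creation.box(extents=[10.0, 0.1, 10.0])
scene.add_geometry(floor, node_name="floor")

# One wall segment, translated into place.
wall = trimesh.creation.box(extents=[10.0, 3.0, 0.2])
wall.apply_translation([0.0, 1.5, -5.0])
scene.add_geometry(wall, node_name="wall_north")

# Export as binary glTF for simulation in Habitat.
scene.export("layout.glb")
```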
Randomized trials for evaluation: For multiple-choice QA, we randomize the placement of the correct answer among the four choices such that it appears in each of the four positions once, yielding four trials per question. For each trial, we evaluate the performance over all questions to obtain the average accuracy. For interactive tasks like route retracing, shortcut discovery, MCT, and CSWM, we evaluate each model in three independent trials and obtain the corresponding metrics. By performing multiple trials, we can compute means and standard deviations for each model on each task across the trials.
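Concretely, the answer-position randomization can be written as follows (a sketch with hypothetical choice strings; the actual questions are built from the prompt templates described later in this section):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trials(correct: str, distractors: list[str]) -> list[dict]:
    """Four trials per question: the correct answer occupies each of the
    four choice positions exactly once; distractors are shuffled per trial."""
    trials = []
    for pos in range(4):
        others = list(rng.permutation(distractors))
        choices = others[:pos] + [correct] + others[pos:]
        trials.append({"choices": choices, "answer": "ABCD"[pos]})
    return trials

trials = make_trials("72 degrees", ["18 degrees", "126 degrees", "168 degrees"])
```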
Human performance: We obtain human performance on SPACE tasks by evaluating 29 participants (aged 20-50). We evenly divide the questions from our benchmark across all participants. For each participant, we provide HTML files containing a subset of questions from each SPACE task and the corresponding choices. The HTML files contain formatted versions of the prompts used to evaluate
² DM text observations are obtained by simply converting the DM image to text, as illustrated in Figure 3.
frontier models. We do not provide any additional instructions or background information about how to solve the tasks. For efficiency, we group all questions corresponding to a single environment in the large-scale spatial cognition tasks. Each participant is assigned to view a video walkthrough from one environment and asked to answer a series of questions about that same environment. This is in line with classical protocols in human cognition (Allen et al., 1996; Hegarty et al., 2006; Pazzaglia & Taylor, 2007; Weisberg et al., 2014; Meneghetti et al., 2016; 2021). We further provide a CSV file where the participant is instructed to enter the answers. We instruct the participants to perform all tasks mentally without any aids like pen and paper. Each participant is estimated to have taken 60 to 90 minutes to answer all the questions. The participants send us their responses and we evaluate them collectively. We denote the collective performance of all participants as the human performance in Tables 1 and 2.
Note that we establish the human baseline only for the multiple-choice QA tasks since it was straightforward to share the test materials with the participants online and obtain their answers. The interactive tasks would require us to meet participants in person to perform evaluations and we were not equipped to do this.
Image preprocessing: For most of our experiments, we use square images. We provide the images to models as is without preprocessing. For most models (especially closed-source ones), the processing of the image beyond the input stage is outside our control. We rely on the model creators to correctly process the images. The exact image resolution and aspect ratios are task-dependent and listed in Table 5. For egocentric video inputs in the large-scale spatial cognition tasks, the number of frames varies from 61 to 240. Since GPT-4o, GPT-4v and Claude 3.5 Sonnet APIs did not permit 240+ frames as inputs, we subsample the video frames by a factor of 2 before providing them to the model. For DM video inputs, the number of frames varies from 13 to 72. We provide them as is to the model.
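In code, the frame handling reduces to a simple cap (a sketch: the limit and stride mirror the description above, and the function name is ours):

```python
def prepare_frames(frames: list, limit: int = 240) -> list:
    """Subsample an egocentric walkthrough by a factor of 2 when it would
    exceed the API frame limit; shorter videos (and DM videos) pass as is."""
    return frames[::2] if len(frames) >= limit else frames
```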
Table 5: Image resolutions and aspect ratios for images and videos in SPACE.
| Task | Image resolutions (W × H) |
|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| Large-scale spatial cognition (Ego images) | 512 × 512 |
| Large-scale spatial cognition (DM images) | 512 × 512 |
| Mental rotation (MRT) | Varies from 595 × 541 to 1133 × 1432 since we crop the redundant white space around images. |
| Perspective taking (PTT) | 640 × 480 |
| Water level (WLT) | Varies from 239 × 488 to 787 × 631 since we crop the redundant white space around images. |
| Minnesota paper form board (MPFB) | 480 × 480 for choice images (i.e., puzzle pieces put together). Varies from 831 × 578 to 1211 × 740 for the image containing the puzzle pieces. |
| Judgement of line orientation (JLO) | 512 × 512 for the input image, 1656 × 910 for the legend |
| Selective attention (SAtt) | 512 × 512 for 3 × 3 grids, 768 × 768 for 4 × 4 grids, and 1024 × 1024 for 5 × 5 and 6 × 6 grids |
| Maze completion (MCT) | 1100 × 1100 for 11 × 11 mazes, 2300 × 2300 for 23 × 23 mazes, and 3100 × 3100 for 31 × 31 mazes |
| Corsi block-tapping (CBTT) | 1024 × 1024 |
| Spatial addition (SAdd) | 300 × 300 for 3 × 3 grids, 500 × 500 for 5 × 5 grids, 700 × 700 for 7 × 7 grids, and 900 × 900 for 9 × 9 grids |
| Cambridge spatial working memory (CSWM) | 1024 × 1024 |
Prompting frontier models for SPACE tasks: We evaluate frontier models on each of the SPACE tasks using zero-shot prompting. For each task, we design a prompt that provides a detailed description of the task and the expected response format. Below, we provide the prompt templates for each of the SPACE tasks. While the prompts have been formatted for visual display in LaTeX, the content remains the same. We have replaced images and arrays (in some cases) with placeholders for brevity.
Figure 4: Large-scale spatial cognition with ego image observations
## Large-scale spatial cognition

- Direction estimation: Prompts 1, 2 and 3
- Distance estimation: Prompts 4, 5 and 6
- Map sketching: Prompts 4, 5 and 6
- Route retracing: Prompts 10, 11, 12 and 13
- Shortcut discovery: Prompts 14, 15, 16 and 17

## Small-scale spatial cognition

- Mental rotation test: Prompts 18 and 19
- Perspective taking test: Prompts 20 and 21
- Water level test: Prompt 22
- Minnesota Paper Form Board test: Prompts 23 and 24
- Judgement of Line Orientation test: Prompts 25 and 26
- Selective attention task: Prompts 27 and 28
- Maze completion task: Prompts 29 and 30
- Corsi block-tapping task: Prompts 32 and 31
- Spatial addition task: Prompts 33 and 34
- Cambridge spatial working memory test: Prompts 35 and 36
Figure 5: Large-scale spatial cognition with DM image observations. Please note that the top-down visualization on the left needs to be rotated by 90° clockwise to get the DM images.
<details>
<summary>Image 5 Details</summary>

### Visual Description
## Diagram: Spatial Navigation Task Interface
### Overview
The image depicts a technical interface for a spatial navigation experiment. It includes a grid-based environment map, task instructions, and example responses. The layout is divided into sections for different cognitive tasks (e.g., direction estimation, route retracing) with embedded diagrams, multiple-choice questions, and video walkthrough references.
### Components/Axes
1. **Main Environment Map (Left Side)**
- **Grid Layout**: A 3x3 grid with labeled landmarks:
- **N** (North, top-left corner)
- **F** (Front, bottom-left corner)
- **K** (Key, top-right corner)
- **T** (Target, bottom-right corner)
- **Path Highlighting**: Blue lines connect landmarks, indicating navigational routes.
- **Color Coding**:
- Blue lines: Pathways
- Red circles: Landmark positions
2. **Task Sections (Right Side)**
- **Direction Estimation**:
- Question: *"Pretend you are standing next to landmark F. What is the angle (in degrees) between the line connecting your location to F and your location to N?"*
- Choices: A) 72, B) 78, C) 168, D) 12
- Video Walkthrough: Grid showing blue/yellow squares with red/yellow dots.
- **Distance Estimation**:
- Question: *"You are standing on landmark T. What are the Euclidean distances (in meters) to F, K, and N?"*
- Choices: A) 7.2, 0.9, 1.5; B) 1.2, 7.2, 7.1; C) 7.1, 7.2, 11.2; D) 1.2, 7.2, 7.1
- **Map Sketching**:
- Instructions: *"Sketch the environment with start, goal, and landmarks. Choose the best option."*
- Example Responses: Two grid-based maps with labeled landmarks (F, T, K, N) and start/goal positions.
- **Route Retracing**:
- Question: *"Retrace the shortest path from start to goal in the video walkthrough."*
- Diagram: Grid with dashed blue lines showing the path.
- **Shortcut Discovery**:
- Question: *"Find a shortcut from start to goal in the video walkthrough."*
- Diagram: Grid with dotted blue lines indicating the shortcut.
### Detailed Analysis
- **Landmark Positions**:
- N (North) is at the top-left, F (Front) at the bottom-left, K (Key) at the top-right, and T (Target) at the bottom-right.
- Blue lines connect N→F→T→K, forming a loop.
- **Task Formats**:
- All tasks use grid-based diagrams with labeled landmarks.
- Multiple-choice questions test spatial reasoning (angles, distances, pathfinding).
- Video walkthroughs are referenced as visual aids for each task.
- **Color Legend**:
- Blue: Pathways/navigation routes
- Red: Landmark markers
- Yellow: Highlighted landmarks in video walkthroughs
### Key Observations
1. **Spatial Reasoning Focus**: Tasks emphasize angle estimation, distance calculation, and path optimization.
2. **Ambiguity in Choices**: Some distance options (e.g., A: 7.2, 0.9, 1.5) may conflict with grid proportions, suggesting potential errors or intentional distractors.
3. **Video Walkthrough Integration**: Tasks rely on visual demonstrations, implying a multimodal experimental design.
### Interpretation
This interface likely belongs to a study on **cognitive mapping** or **navigation skill assessment**. The tasks simulate real-world wayfinding challenges, requiring participants to:
- Infer spatial relationships from partial information (e.g., video walkthroughs).
- Compare mental maps to actual layouts (map sketching).
- Optimize routes (shortcut discovery).
The inclusion of multiple-choice formats suggests automated scoring, while the grid-based diagrams standardize responses for analysis. The experiment may explore how individuals process spatial information in dynamic environments, with applications in robotics, psychology, or urban planning.
**Note**: No numerical data trends are present, as the image focuses on task structure rather than statistical results.
</details>
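For concreteness, the direction- and distance-estimation questions above reduce to simple plane geometry once landmark positions are known. The sketch below illustrates both computations; the landmark coordinates are hypothetical stand-ins, not the benchmark's actual environment layout.

```python
import math

# Hypothetical landmark coordinates in meters; in the benchmark these come
# from the walkthrough environment, which is not reproduced here.
landmarks = {"F": (0.0, 0.0), "N": (0.0, 7.0), "K": (7.0, 7.0), "T": (1.0, 1.0)}

def euclidean(p, q):
    """Straight-line distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_at(viewer, a, b):
    """Angle in degrees, measured at `viewer`, between the rays viewer->a
    and viewer->b (the direction-estimation quantity)."""
    ua = (a[0] - viewer[0], a[1] - viewer[1])
    ub = (b[0] - viewer[0], b[1] - viewer[1])
    cos = (ua[0] * ub[0] + ua[1] * ub[1]) / (math.hypot(*ua) * math.hypot(*ub))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Distance-estimation style query: distances from T to F, K and N.
print([round(euclidean(landmarks["T"], landmarks[x]), 1) for x in "FKN"])
# Direction-estimation style query: the angle at T between the rays to F and N.
print(round(angle_at(landmarks["T"], landmarks["F"], landmarks["N"]), 1))
```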
Figure 6: Large-scale spatial cognition with DM image observations. Please note that the top-down visualization on the left needs to be rotated by 90° clockwise to get the DM images.
<details>
<summary>Image 6 Details</summary>

### Visual Description
## Diagram: Spatial Navigation Task Interface
### Overview
The image depicts a technical interface for spatial navigation tasks, combining a grid-based map with interactive questions and diagrams. The left side shows a 2D map with labeled landmarks (A, L, N, O) and a blue path connecting them. The right side contains four task types: Direction estimation, Distance estimation, Map sketching, and Shortcut discovery, each with diagrams, questions, and multiple-choice answers.
### Components/Axes
#### Map Section (Left)
- **Labels**: Red circles marked "A", "L", "N", "O" (landmarks).
- **Path**: Blue line connecting landmarks in the sequence A → L → O → N → A.
- **Background**: Grid with varying shades of brown (walls/floors) and black (obstacles).
- **Legend**: No explicit legend, but colors imply:
- Blue: Path/route.
- Brown: Floor/wall boundaries.
- Black: Obstacles.
#### Task Section (Right)
1. **Direction Estimation**
- **Question**: "Pretend you are standing next to landmark A. What is the angle (in degrees) between the line connecting your location to A and O?"
- **Diagram**: Grid with points A (red) and O (yellow).
- **Choices**:
- A) 159
- B) 141
- C) 69
- D) -171
2. **Distance Estimation**
- **Question**: "You are on landmark Z. What are the Euclidean distances (in meters) to A, N, V, O, and L?"
- **Diagram**: Grid with points A, N, V, O, L.
- **Choices**:
- A) 4.5, 11.4, 8.5, 10.0, 11.2
- B) 4.5, 8.5, 11.4, 10.0, 11.2
- C) 0.5, 2.0, 3.4, 19.2, 3.5
- D) 11.4, 8.5, 4.5, 11.2, 10.0
3. **Map Sketching**
- **Question**: "Sketch a map of the environment based on the video walkthrough. Include start, goal, and landmarks."
- **Diagram**: Grid with "Start" (blue) and "Goal" (red) markers.
- **Choices**: Four grid layouts with varying placements of Start, Goal, and landmarks.
4. **Shortcut Discovery**
- **Question**: "Find a shortcut to the goal from the start location. The route may have unnecessary detours."
- **Diagram**: Grid with a blue "Video walkthrough route" and a green "Shortcut route."
### Detailed Analysis
- **Map Layout**:
- Landmarks A, L, N, O are positioned in a non-linear sequence.
- The blue path forms a loop (A → L → O → N → A), suggesting a cyclic navigation task.
- Obstacles (black squares) block direct paths between landmarks.
- **Task-Specific Details**:
- **Direction Estimation**: The correct answer (D) -171° implies a near-opposite direction from A to O, requiring angular reasoning.
- **Distance Estimation**: Correct answer (B) lists distances in ascending order (4.5, 8.5, 10.0, 11.2, 11.4), matching the spatial hierarchy of landmarks.
- **Map Sketching**: The correct layout (top-right) aligns Start and Goal with the video’s path, emphasizing spatial memory.
- **Shortcut Discovery**: The green path bypasses detours, highlighting efficiency in route optimization.
### Key Observations
- The map’s blue path is the shortest route connecting all landmarks, but the tasks require deeper spatial analysis.
- Direction estimation involves negative angles, indicating counterclockwise orientation.
- Distance estimation tests Euclidean distance calculation, with precise decimal values.
- Map sketching and shortcut discovery emphasize visual-spatial reasoning and path optimization.
### Interpretation
This interface simulates real-world navigation challenges, such as wayfinding in complex environments. The tasks:
1. **Direction Estimation**: Assess angular reasoning and orientation skills.
2. **Distance Estimation**: Evaluate quantitative spatial awareness.
3. **Map Sketching**: Test memory and visualization of environments.
4. **Shortcut Discovery**: Measure problem-solving for efficient routing.
The interface likely serves as a training tool for robotics, autonomous systems, or human spatial cognition studies. The correct answers (e.g., D for direction, B for distances) reflect optimal solutions based on geometric principles. The map’s cyclic path and obstacle placement suggest a focus on adaptive navigation in constrained spaces.
</details>
Figure 7: Mental rotation (MRT) with visual inputs: Which choice image shows the reference shape rotated in 3D?
<details>
<summary>Image 7 Details</summary>

### Visual Description
## Diagram: 3D Cube Structure Configurations
### Overview
The image presents a reference 3D cube structure labeled "Reference shape" at the top, followed by three rows of four alternative configurations each labeled "Choice 1" through "Choice 4". All structures are composed of interconnected gray cubes with black grid lines. The configurations vary in angular orientation, cube stacking patterns, and spatial relationships between components.
### Components/Axes
- **Primary Elements**:
- Reference shape (top row, leftmost)
- Four choice configurations per row (total 12 structures)
- Green rectangular highlights around specific structures:
- Row 2: Choice 2 and Choice 3
- Row 3: Choice 1 and Choice 3
- **Visual Characteristics**:
- All structures use identical cube dimensions
- Configurations vary in:
- Angular deviation from reference shape
- Cube stacking density
- Overhang presence/absence
- Base footprint configuration
### Detailed Analysis
1. **Reference Shape**:
- L-shaped configuration with 12 cubes
- Base: 4x3 grid with 9 cubes
- Vertical extension: 3 cubes
2. **Choice Configurations**:
- **Choice 1 (Row 1)**:
- 90° rotation around vertical axis
- Creates diagonal overhang
- Base footprint: 3x4 grid
- **Choice 2 (Row 1)**:
- 45° angular deviation
- Creates stepped profile
- Base footprint: 2x5 grid
- **Choice 3 (Row 1)**:
- Mirror image of reference
- Base footprint: 4x3 grid
- **Choice 4 (Row 1)**:
- Compacted configuration
- Base footprint: 3x3 grid
*(Similar pattern analysis applies to Rows 2-3 with structural variations in cube stacking and angular relationships)*
### Key Observations
1. **Structural Diversity**:
- Configurations demonstrate 3D spatial problem-solving variations
- Base footprint variations range from 2x5 to 4x3 grids
- Overhang presence correlates with angular deviation
2. **Highlighted Choices**:
- Green-boxed structures show:
- Row 2 Choice 2: Maximum angular deviation
- Row 2 Choice 3: Most compact configuration
- Row 3 Choice 1: Extreme overhang development
- Row 3 Choice 3: Hybrid reference/compact design
### Interpretation
The diagram appears to explore spatial optimization principles through cube configuration variations. The highlighted structures suggest:
1. **Choice 2 (Row 2)** represents maximum angular efficiency
2. **Choice 3 (Row 2)** demonstrates optimal space utilization
3. **Choice 1 (Row 3)** shows potential structural instability due to overhang
4. **Choice 3 (Row 3)** balances reference shape familiarity with compact design
The progression from reference to choices indicates an exploration of:
- Angular relationships (90° → 45° → mirror)
- Footprint optimization (4x3 → 3x3)
- Vertical space utilization tradeoffs
No numerical data or quantitative metrics are present in the image. The analysis is based solely on visual spatial relationships and structural characteristics.
</details>
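The visual MRT items above ask whether a choice is a proper 3D rotation of the reference shape, with mirror images excluded. A minimal voxel-based oracle for this check, assuming the shapes are available as 3D boolean numpy arrays (an assumption; the benchmark presents rendered images), enumerates all 24 proper rotations:

```python
import numpy as np

def rotations24(vox):
    """Yield all 24 proper rotations (no mirror images) of a 3D voxel array."""
    def spins(u):
        # Four quarter-turns in the (0, 1) plane, i.e. about the last axis.
        for k in range(4):
            yield np.rot90(u, k, axes=(0, 1))
    yield from spins(vox)                                # original orientation
    yield from spins(np.rot90(vox, 2, axes=(0, 2)))      # upside down
    for k in (1, 3):
        yield from spins(np.rot90(vox, k, axes=(0, 2)))  # tipped about axis 1
        yield from spins(np.rot90(vox, k, axes=(1, 2)))  # tipped about axis 0

def is_proper_rotation(reference, candidate):
    """True iff `candidate` is a rotated (never mirrored) copy of `reference`."""
    return any(np.array_equal(r, candidate) for r in rotations24(reference))
```

For a chiral shape, a mirrored copy (e.g., `np.flip(reference, axis=0)`) matches none of the 24 candidates, which is the distractor type the task uses.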
Figure 8: Mental rotation (MRT) with text inputs: Which choice array shows the reference array rotated in 2D?
<details>
<summary>Image 8 Details</summary>

### Visual Description
## Table: Reference Array and Choice Options
### Overview
The image presents a structured table with five columns labeled "Reference array," "Choice 1," "Choice 2," "Choice 3," and "Choice 4." Each column contains numerical data organized in rows, with some rows highlighted in green. Below each column, there are textual labels or categories, such as "view,used,view" or "back,none,back." The table appears to compare numerical values across different choices, with annotations indicating specific patterns or selections.
---
### Components/Axes
- **Columns**:
- **Reference array**: Contains a sequence of numerical values (e.g., `0,0,8,1,8,0,0`).
- **Choice 1**: Numerical data with annotations like "view,used,view" and "used,year,none."
- **Choice 2**: Numerical data with annotations like "back,none,back" and "used,year,none."
- **Choice 3**: Numerical data with annotations like "view,been,none" and "used,year,back."
- **Choice 4**: Numerical data with annotations like "view,used,view" and "back,none,back."
- **Rows**: Each row contains numerical values (e.g., `0,0,8,1,8,0,0`) and corresponding textual labels below the columns.
- **Annotations**: Green boxes highlight specific entries in "Choice 3" and "Choice 4," suggesting selected or significant data points.
---
### Detailed Analysis
#### Numerical Data
- **Reference array**:
- Rows: `0,0,8,1,8,0,0`; `0,0,2,0,1,0,0`; `0,0,7,0,7,0,4`; `9,0,9,5,0,0,0`; `0,0,0,7,2,0,1`; `0,0,5,1,0,2,0`; `0,2,3,0,0,0,7`.
- **Choice 1**:
- Rows: `0,0,8,1,8,0,0`; `0,0,1,0,2,0,0`; `4,0,7,0,7,0,0`; `0,0,5,9,0,9,0`; `1,0,2,7,0,0,0`; `0,0,2,0,1,5,0`; `7,0,0,0,3,2,0`.
- **Choice 2**:
- Rows: `7,0,1,0,4,0,0`; `0,0,0,0,0,0,0`; `0,0,2,0,7,1,8`; `0,1,7,5,0,0,1`; `3,5,0,9,7,2,8`; `2,0,0,0,0,0,0`; `0,0,0,0,9,0,0`.
- **Choice 3**:
- Rows: `0,0,0,9,0,0,0`; `2,0,0,0,0,0,0`; `3,5,0,9,7,0,1`; `0,1,7,5,0,0,1`; `0,0,2,0,7,1,8`; `0,2,0,0,0,0,0`; `7,0,1,0,4,0,0`.
- **Choice 4**:
- Rows: `0,0,0,9,0,0,0`; `0,0,0,0,0,0,2`; `8,2,7,9,0,5,3`; `1,0,0,5,7,1,0`; `8,1,7,0,2,0,0`; `0,0,0,0,0,2,0`; `0,0,4,0,1,0,7`.
#### Textual Labels
- **Below columns**:
- **Choice 1**: `view,used,view`; `used,year,none`; `view,been,none`.
- **Choice 2**: `back,none,back`; `used,year,none`; `view,been,none`.
- **Choice 3**: `view,used,view`; `back,none,back`; `view,been,none`.
- **Choice 4**: `view,used,view`; `back,none,back`; `view,none,back`.
#### Green Boxes
- **Choice 3**: Highlighted rows include `0,0,0,9,0,0,0` and `7,0,1,0,4,0,0`.
- **Choice 4**: Highlighted rows include `0,0,0,9,0,0,0` and `0,0,0,0,0,0,2`.
---
### Key Observations
1. **Numerical Patterns**:
- The "Reference array" contains a mix of zeros and non-zero values, with some rows having higher concentrations of non-zero entries (e.g., `9,0,9,5,0,0,0`).
- "Choice 3" and "Choice 4" show repeated zeros in certain rows, suggesting potential default or neutral values.
2. **Annotations**:
- Textual labels like "view,used,view" and "back,none,back" may represent categorical variables or conditions influencing the numerical data.
- Green boxes in "Choice 3" and "Choice 4" likely indicate selected or optimal choices based on the data.
3. **Consistency**:
- Every array is 7x7 (seven rows of seven values), as expected if each choice is a candidate rotation of the reference array.
---
### Interpretation
- **Data Meaning**: The table likely compares performance metrics, scores, or outcomes across four choices, with the "Reference array" serving as a baseline. The annotations (e.g., "view,used,view") might represent variables or conditions affecting the choices.
- **Relationships**:
- The green boxes in "Choice 3" and "Choice 4" suggest these options are prioritized or validated by the data.
- The textual labels below each column could indicate categories (e.g., "view," "used," "back") that are being evaluated or compared.
- **Outliers/Anomalies**:
- None evident: all arrays share the same 7x7 shape, so the choices differ only in how the values are arranged.
- The repeated zeros in "Choice 2" and "Choice 4" might indicate neutral or inactive states for certain rows.
---
### Conclusion
This table provides a structured comparison of numerical data across four choices, with annotations and highlighted entries suggesting specific patterns or selections. The textual labels and green boxes indicate categorical variables and prioritized options, respectively. Further analysis would require contextual information about the variables and their significance.
</details>
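The text variant reduces to checking array equality under quarter-turn rotations. A minimal sketch, assuming the reference and choice arrays have been parsed into numpy arrays:

```python
import numpy as np

def matches_rotation_2d(reference, choice):
    """True iff `choice` equals `reference` rotated by 0, 90, 180 or 270
    degrees; mirror images (flips) are deliberately not accepted."""
    reference, choice = np.asarray(reference), np.asarray(choice)
    return any(np.array_equal(np.rot90(reference, k), choice) for k in range(4))
```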
Pretend that you are standing at the bat and facing the book. At what clockwise angle (in degrees) is the apple located relative to you?
<details>
<summary>Image 9 Details</summary>

### Visual Description
## Icon-Based Choice Diagram: Symbolic Representation of Options
### Overview
The image presents a square layout containing seven distinct icons arranged spatially, with a "Choices" section on the right listing four numerical options (A-D) with negative values. The diagram combines symbolic imagery with quantitative data, suggesting a decision-making or evaluation framework.
### Components/Axes
1. **Icon Elements**:
- **Top-left**: Brown dog (facing right)
- **Top-right**: Blue book (closed, vertical spine)
- **Center**: Red apple (with green leaf)
- **Right-center**: Black bat (flying right)
- **Lower-center**: Purple grapes (cluster with stem)
- **Left-center**: Green snake (coiled)
- **Bottom-center**: Yellow pyramid (three-tiered)
2. **Choices Section**:
- **Right-aligned text block** labeled "Choices"
- **Options**:
- A) -35
- B) -15
- C) -115
- D) -55 (highlighted in green)
### Detailed Analysis
- **Icon Placement**:
- Dog (A) and Book (B) occupy top corners, creating a bookend effect.
- Apple (C) anchors the center, with Bat (D) positioned to its right.
- Grapes (E) and Snake (F) flank the apple diagonally, while Pyramid (G) anchors the bottom.
- **Numerical Values**:
- All choices show negative values, with C (-115) being the most extreme.
- D (-55) is visually emphasized via green highlighting, despite not being the lowest value.
- **Spatial Relationships**:
- Icons form a loose circular pattern around the central apple.
- Choices are spatially separated from icons, suggesting a categorical distinction.
### Key Observations
1. **Value Anomaly**: Choice D (-55) is highlighted despite being the second-highest value (closest to zero), contradicting typical negative-value prioritization.
2. **Icon Symbolism**:
- Dog/Book (A/B): Representational of domesticity/knowledge
- Apple/Bat/Grapes (C/D/E): Natural elements with potential symbolic weight
- Snake/Pyramid (F/G): Could imply danger or ancient wisdom
3. **Negative Value Context**: All choices are negative, implying a "cost" or "penalty" system rather than positive scoring.
### Interpretation
This diagram appears to model a decision-making scenario where options are evaluated based on symbolic associations and quantitative penalties. The green highlighting of D (-55) suggests it may be a "selected" or "recommended" option despite not being the optimal numerical choice. The spatial arrangement of icons around the apple (C) might indicate it as a central reference point, with other symbols representing contextual factors. The negative values could reflect resource expenditure, risk assessment, or opportunity cost in a hypothetical scenario. The pyramid's placement at the bottom might symbolize foundational considerations in the decision process.
</details>
Pretend that you are standing at the grapes and facing the donut. At what clockwise angle (in degrees) is the book relative to you?
Pretend that you are standing at the cake and facing the desk. At what clockwise angle (in degrees) is the bat relative to you?
Figure 9: Perspective taking (PTT) with visual inputs.
<details>
<summary>Image 10 Details</summary>

### Visual Description
## List of Choices with Numerical Values
### Overview
The image displays a collection of icons (bat, dog, watermelon, carrot, basketball, grapes, donut, book) arranged in a grid, accompanied by a list of labeled choices (A–D) with numerical values. The numerical values are negative integers, with one value highlighted in green.
### Components/Axes
- **Icons**:
- Bat (top-left)
- Dog (top-center)
- Watermelon (top-right)
- Carrot (middle-left)
- Basketball (middle-center)
- Grapes (middle-right)
- Donut (bottom-left)
- Book (bottom-right)
- **Choices**:
- **A) -17** (black text)
- **B) -37** (green text)
- **C) -77** (black text)
- **D) -57** (black text)
### Detailed Analysis
- **Numerical Values**:
- Choice A: -17
- Choice B: -37 (highlighted in green)
- Choice C: -77
- Choice D: -57
- **Spatial Grounding**:
- The icons are positioned in a grid layout, with the choices listed vertically to the right of the icons.
- The green highlight for Choice B is visually distinct from the black text of other choices.
### Key Observations
- All numerical values are negative, suggesting a scoring system where lower (more negative) values may indicate higher priority or significance.
- Choice B (-37) is the only value highlighted in green, potentially indicating it is the correct answer, a target value, or a special category.
- The icons do not have explicit textual labels, but their placement may correlate with the choices (e.g., bat → A, dog → B, etc.).
### Interpretation
- The green highlight on Choice B (-37) likely signifies its importance, such as being the correct answer in a quiz or the optimal selection in a decision-making context.
- The negative values could represent a ranking system where lower (more negative) numbers are preferable, or they might reflect a deficit or penalty.
- The absence of explicit labels for the icons suggests they may serve as visual metaphors or categories for the choices, but this requires further context to confirm.
- The structured layout implies a direct relationship between the icons and the choices, though the exact mapping is not explicitly stated.
## Notes
- No chart, diagram, or data table is present. The image primarily consists of icons and a list of choices with numerical values.
- The textual information is limited to the choices (A–D) and their associated numbers.
- The green color for Choice B is the only non-default formatting, drawing attention to its value.
</details>
<details>
<summary>Image 11 Details</summary>

### Visual Description
## Diagram: Object-Choice Association Matrix
### Overview
The image depicts a diagrammatic layout of eight distinct objects arranged in a grid-like pattern, each associated with a labeled numerical value under a "Choices" section. The objects include food items, animals, furniture, and abstract symbols. Numerical values are assigned to specific objects via alphanumeric labels (A-D), with some values negative and others positive.
### Components/Axes
- **Objects**:
- Top-left: Chocolate-glazed donut (brown icing, white center hole).
- Top-right: Brown dog (profile view, collar).
- Left-middle: Purple grapes (cluster with green stem).
- Left-bottom: Yellow tiered cake (three layers, lit candle).
- Center-left: Black bat (wing spread).
- Center-right: Red apple (green leaf).
- Bottom-left: Wooden table (two drawers, brown surface).
- Bottom-right: Yellow armchair (polka-dot pattern, four legs).
- **Choices Section**:
- Right-aligned vertical list labeled "Choices" with four options:
- **A) -39**: Associated with the dog (top-right).
- **B) -19**: Associated with the apple (center-right).
- **C) 21**: Associated with the bat (center-left).
- **D) 1**: Associated with the chair (bottom-right).
### Detailed Analysis
- **Object-Choice Mapping**:
- The dog (A) is assigned the most negative value (-39).
- The apple (B) has a moderately negative value (-19).
- The bat (C) has a positive value (21), the highest among all.
- The chair (D) has the smallest positive value (1).
- The remaining objects (donut, grapes, cake, table) lack direct numerical associations.
- **Spatial Distribution**:
- Negative values (-39, -19) are assigned to objects in the top-right and center-right regions.
- Positive values (21, 1) are assigned to objects in the center-left and bottom-right regions.
- Unassociated objects (donut, grapes, cake, table) occupy the left and bottom-left regions.
### Key Observations
1. **Value Extremes**: The dog (A) has the most extreme negative value (-39), while the bat (C) has the highest positive value (21).
2. **Negative Values**: Both negative values (-39, -19) are assigned to the dog and the apple; the sign does not track an obvious category (one is an animal, the other edible).
3. **Positive Values**: The bat (C) and chair (D) have positive values, but their magnitudes differ significantly (21 vs. 1).
4. **Unassociated Objects**: Four objects (donut, grapes, cake, table) lack numerical labels, suggesting incomplete data or a filtering mechanism.
### Interpretation
The diagram appears to represent a decision-making or prioritization task where objects are scored numerically. The stark contrast between negative and positive values (-39 vs. 21) suggests a binary or polarized evaluation system. However, the absence of context (e.g., criteria for scoring, units, or purpose) limits interpretability. The unassociated objects may indicate excluded categories or incomplete data entry. The spatial arrangement does not follow a clear pattern, implying the values are not derived from positional logic but rather arbitrary or context-dependent assignments.
**Note**: No axis titles, legends, or scales are present, and the diagram lacks explanatory text beyond the "Choices" heading. The numerical values are embedded directly in the choices list, with no visual correlation to the objects’ positions or attributes.
</details>
Figure 10: Perspective taking (PTT) with text inputs. The array colors are only for illustration purposes.
<details>
<summary>Image 12 Details</summary>

### Visual Description
## Multiple Choice Clock Angle Problems
### Overview
The image contains four distinct clock angle problems, each asking for the clockwise angle (in degrees) between a specified position and a target number, given a starting position and facing direction. Each problem includes multiple-choice answers with one correct option highlighted in green.
---
### Components/Axes
1. **Problem Structure**:
- **Setup**: "Pretend that you are standing at [X] and facing [Y]. At what clockwise angle (in degrees) is [Z] relative to you?"
- **Choices**: Four options labeled A–D, with one correct answer marked in green.
- **Visual Layout**: Each problem is separated by horizontal dividers, with choices listed below the question.
2. **Clock Representation**:
- A circular clock face is depicted for each problem, with numbers 1–12 arranged clockwise.
- The starting position (e.g., "standing at 6") is marked with a bold number, and the facing direction (e.g., "facing 9") is indicated with an arrow or bold text.
---
### Detailed Analysis
#### Problem 1
- **Setup**: Standing at 6, facing 9. Find the angle to 1.
- **Choices**:
- A) -119 ✅
- B) -59
- C) -99
- D) -159
- **Clock Layout**: Numbers 1–12 arranged clockwise. Starting position (6) and facing direction (9) are highlighted.
#### Problem 2
- **Setup**: Standing at 7, facing 2. Find the angle to 1.
- **Choices**:
- A) -115
- B) 165
- C) -135 ✅
- D) -75
- **Clock Layout**: Numbers 1–12 arranged clockwise. Starting position (7) and facing direction (2) are highlighted.
#### Problem 3
- **Setup**: Standing at 4, facing 9. Find the angle to 6.
- **Choices**:
- A) 136
- B) 116 ✅
- C) 76
- D) 156
- **Clock Layout**: Numbers 1–12 arranged clockwise. Starting position (4) and facing direction (9) are highlighted.
#### Problem 4
- **Setup**: Standing at 9, facing 4. Find the angle to 2.
- **Choices**:
- A) -36 ✅
- B) -96
- C) -76
- D) 4
- **Clock Layout**: Numbers 1–12 arranged clockwise. Starting position (9) and facing direction (4) are highlighted.
---
### Key Observations
1. **Negative Angles**: Some answers are negative (e.g., -119, -135), likely indicating directionality relative to the facing orientation.
2. **Consistent Formatting**: All problems use the same structure, with choices listed in a vertical column and a clock diagram below.
3. **Correct Answers**: Highlighted in green, with checkmarks (✅) added in the transcription for clarity.
---
### Interpretation
- **Angle Calculation Logic**:
- Each hour mark on a clock represents 30° (360°/12).
- The angle is calculated clockwise from the **facing direction** to the target number, adjusted for the starting position.
- Negative angles may represent counterclockwise measurements relative to the facing direction.
- **Purpose**: These problems test spatial reasoning and understanding of rotational angles in a circular context.
- **Notable Patterns**:
- Problems with negative answers (e.g., -119, -135, -36) suggest scenarios where the target number is behind the facing direction.
- Positive answers (e.g., 116, 136) occur when the target is ahead of the facing direction.
---
### Conclusion
The image provides a structured set of clock angle problems designed to assess geometric reasoning. Each problem isolates variables (starting position, facing direction, target number) to test the solver’s ability to compute relative angles in a circular system. The consistent formatting and visual aids (clock diagrams) enhance clarity for educational or assessment purposes.
</details>
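Following the clock-angle logic described above, the perspective-taking questions reduce to a signed angle between two rays anchored at the viewer: one toward the faced object, one toward the target. A minimal sketch of that computation; the sign convention and wrap range are assumptions based on the negative answers in the examples, not a specification from the benchmark:

```python
import math

def clockwise_angle(standing, facing, target):
    """Signed angle in degrees from the facing direction to the target,
    as seen from `standing`; positive = clockwise, wrapped to [-180, 180).
    All points are (x, y) tuples with y pointing up."""
    f = math.atan2(facing[1] - standing[1], facing[0] - standing[0])
    t = math.atan2(target[1] - standing[1], target[0] - standing[0])
    return (math.degrees(f - t) + 180.0) % 360.0 - 180.0

# Facing "north" from the origin, a target due east is 90 degrees clockwise.
print(clockwise_angle((0, 0), (0, 1), (1, 0)))  # 90.0
```

Negative outputs then correspond to targets that lie counterclockwise of the facing direction, matching the interpretation given above.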
Figure 11: Water level (WLT) with vision inputs: Given a water container filled with water, predict the water level in the rotated container.
<details>
<summary>Image 13 Details</summary>

### Visual Description
## Diagram: Container Rotation and Liquid Distribution Assessment
### Overview
The image presents a comparative analysis of liquid distribution in containers before and after rotation, alongside geometric shape orientation variations. It consists of two primary sections:
1. **Circular containers** (beaker-like) with liquid levels
2. **Geometric shapes** (rectangles and diamonds) with liquid-filled regions
Each section includes:
- Original orientation
- Rotated orientation
- Four comparative choices (Choice 1–4)
One choice per section is highlighted with a green box (Choice 3 for the containers, Choice 4 for the shapes).
---
### Components/Axes
#### Labels and Structure
- **Top Row**:
- "Original container" (circular beaker with half-filled liquid)
- "Rotated container" (empty circular outline)
- "Choice 1"–"Choice 4" (variations in liquid distribution)
- **Middle Row**:
- Rectangular shapes with liquid levels
- Similar structure to the top row but with rectangular geometry
- **Bottom Row**:
- Diamond-shaped figures with liquid levels
- Mirroring the rectangular row’s structure
#### Visual Elements
- **Liquid Representation**:
- Light blue shading indicates liquid-filled regions
- No explicit scale or volume measurements provided
- **Rotation Indicators**:
- Arrows or implied orientation changes (not explicitly labeled)
- **Highlighted Choices**:
- Green boxes emphasize "Choice 3" (containers) and "Choice 4" (shapes)
---
### Detailed Analysis
#### Circular Containers
- **Original**: Half-filled liquid in a circular beaker with a spout.
- **Rotated**: Empty circular outline (no liquid).
- **Choices**:
- **Choice 1**: Minimal liquid at the base.
- **Choice 2**: Diagonal liquid division (50% fill).
- **Choice 3** (highlighted): Uniform half-fill matching the original.
- **Choice 4**: Diagonal liquid division (70% fill).
#### Rectangular Shapes
- **Original**: Small liquid layer at the base.
- **Rotated**: Empty rectangle.
- **Choices**:
- **Choice 1**: Minimal liquid at the base.
- **Choice 2**: Diagonal liquid division (30% fill).
- **Choice 3**: Diagonal liquid division (50% fill).
- **Choice 4** (highlighted): Diagonal liquid division (70% fill).
#### Diamond Shapes
- **Original**: Small liquid layer at the base.
- **Rotated**: Empty diamond.
- **Choices**:
- **Choice 1**: Minimal liquid at the base.
- **Choice 2**: Diagonal liquid division (30% fill).
- **Choice 3**: Diagonal liquid division (50% fill).
- **Choice 4** (highlighted): Diagonal liquid division (70% fill).
---
### Key Observations
1. **Consistency in Highlighted Choices**:
- Containers: Choice 3 replicates the original liquid distribution.
- Shapes: Choice 4 shows the highest liquid fill (70%).
2. **Rotation Impact**:
- Rotated containers/shapes are depicted as empty, suggesting liquid redistribution upon rotation.
3. **Geometric Variation**:
- Rectangles and diamonds show similar liquid distribution patterns despite differing shapes.
---
### Interpretation
The diagram likely assesses understanding of **liquid conservation principles** and **geometric orientation effects**. Key insights:
- **Containers**: The highlighted Choice 3 implies that rotation does not alter liquid volume, only its apparent distribution.
- **Shapes**: The 70% fill in Choice 4 (shapes) suggests a focus on maximizing liquid capacity post-rotation.
- **Educational Purpose**: The exercise may test spatial reasoning or fluid dynamics concepts, emphasizing how container geometry influences liquid behavior during rotation.
No numerical data or explicit legends are present; conclusions rely on visual comparisons and implied relationships.
</details>
Figure 12: Minnesota Paper Form Board (MPFB) with visual inputs: Which one of the four choices shows what it would be like when the puzzle pieces are put together? The puzzle pieces can be rotated but not flipped.
<details>
<summary>Image 14 Details</summary>

### Visual Description
## Puzzle Assembly Diagram: Correct Configuration Identification
### Overview
The image presents a spatial reasoning puzzle with three distinct sets of fragmented polygonal pieces arranged vertically on the left. To the right, four configuration options (Choice 1-4) are displayed in a 2x2 grid format for each puzzle set. Correct solutions are highlighted with green borders, indicating successful piece alignment.
### Components/Axes
- **Left Panel**:
- Three vertically stacked puzzle sets labeled implicitly by position
- Each set contains 5-6 irregular polygonal pieces with varying edge configurations
- Piece shapes include rectangles (various sizes), triangles, and trapezoids
- **Right Panel**:
- Four configuration options per puzzle set (Choice 1-4)
- Each choice displays a 2x2 grid of potential solutions
- Green borders highlight correct configurations
### Detailed Analysis
1. **Puzzle Set 1 (Top Row)**:
- Contains 5 pieces: 2 large rectangles, 1 medium rectangle, 1 small rectangle, 1 triangle
- Correct configuration: Choice 1's top-left grid
- Key feature: Triangle piece must occupy bottom-right corner position
2. **Puzzle Set 2 (Middle Row)**:
- Contains 6 pieces: 2 large rectangles, 2 medium rectangles, 1 small rectangle, 1 triangle
- Correct configuration: Choice 2's top-right grid
- Notable: Medium rectangles form a continuous horizontal band in correct solution
3. **Puzzle Set 3 (Bottom Row)**:
- Contains 5 pieces: 2 large rectangles, 2 medium rectangles, 1 triangle
- Correct configuration: Choice 4's bottom-right grid
- Distinctive: Triangle piece connects two medium rectangles diagonally
### Key Observations
- Correct solutions consistently position the triangle piece in corner or junction locations
- Medium rectangles serve as transitional elements between larger shapes
- Incorrect configurations (non-green) show misaligned edges or disconnected shapes
- Green borders appear only on fully contiguous configurations with matching edge patterns
### Interpretation
This diagram evaluates spatial problem-solving skills through:
1. **Pattern Recognition**: Identifying complementary edge geometries
2. **Spatial Reasoning**: Visualizing piece rotations and translations
3. **Hierarchical Assembly**: Progressing from individual pieces to complete configurations
The staggered correct answer positions (top-left, top-right, bottom-right) suggest intentional design to prevent pattern memorization, requiring fresh analysis for each puzzle set. The increasing piece complexity from top to bottom (5→6→5 pieces) may indicate progressive difficulty, though piece count alone doesn't determine solution difficulty - edge configuration complexity is equally critical.
</details>
Figure 13: Minnesota Paper Form Board (MPFB) with text inputs: Which one of the four choices shows what it would be like when the array pieces are put together? The pieces merge at the edges (1s). They can be rotated in multiples of 90 degrees but not flipped. The array colors are purely for illustration purposes.
<details>
<summary>Image 15 Details</summary>

### Visual Description
## Grid of Binary Arrays: Array Pieces and Four Choices
### Overview
The image displays a structured grid of binary arrays (1s and 0s) organized into **Array pieces** and **Four Choices** (Choice 1–4). Each "Choice" contains a 12x12 grid of 1s and 0s, while the "Array pieces" are smaller 4x4 matrices arranged in a 6x2 layout. The first row of each "Choice" grid is highlighted in green, suggesting a specific focus or selection.
### Components/Axes
- **Labels**:
- **Array pieces**: A 6x2 grid of 4x4 binary matrices.
- **Choice 1–4**: Four 12x12 grids of 1s and 0s, each with the first row highlighted in green.
- **Structure**:
- **Array pieces**: 12 total matrices (6 rows × 2 columns), each 4x4.
- **Choices**: 4 distinct 12x12 grids, each with a consistent first-row highlight.
### Detailed Analysis
#### Array Pieces
- The 6x2 grid of 4x4 matrices contains repeated patterns of 1s and 0s. For example:
- Top-left matrix: `1,1,1,1` (row 1), `1,0,0,1` (row 2), `1,0,0,1` (row 3), `1,1,1,1` (row 4).
- Other matrices show variations, such as `1,0,0,1` repeated in multiple rows.
#### Choices 1–4
- **Choice 1**:
- 12x12 grid with the first row (`1,1,1,1,1,1,1,1,1,1,1,1`) highlighted in green.
- Subsequent rows show a mix of 1s and 0s, with some rows containing all 1s (e.g., row 1, 7, 12).
- **Choice 2**:
- First row: `1,1,1,1,1,1,1,1,1,1,1,1` (green).
- Rows 2–12 alternate between 1s and 0s, with some rows having all 0s (e.g., row 2, 6).
- **Choice 3**:
- First row: `1,1,1,1,1,1,1,1,1,1,1,1` (green).
- Rows 2–12 show a mix of 1s and 0s, with some rows having all 0s (e.g., row 2, 6).
- **Choice 4**:
- First row: `1,1,1,1,1,1,1,1,1,1,1,1` (green).
- Rows 2–12 alternate between 1s and 0s, with some rows having all 0s (e.g., row 2, 6).
### Key Observations
1. **Highlighted Rows**: The first row of each "Choice" grid is consistently `1,1,1,1,1,1,1,1,1,1,1,1` (all 1s), suggesting a deliberate selection or priority.
2. **Repetition**: The "Array pieces" contain repeated 4x4 patterns (e.g., `1,0,0,1`), which may indicate modular or reusable components.
3. **Symmetry**: Choices 2–4 share similar structures, with alternating 1s and 0s in rows, while Choice 1 has more 1s in later rows.
### Interpretation
- The **highlighted first rows** across all choices likely represent a critical or foundational element (e.g., a header, key data row, or activation signal).
- The **array pieces** may serve as building blocks for constructing the larger "Choice" grids, with their 4x4 patterns potentially encoding specific functionalities or constraints.
- The **repetition of 1s and 0s** in the "Choice" grids could imply a binary encoding system, where 1s denote active states and 0s denote inactive states. The green highlights might indicate a default or primary configuration.
- **Anomalies**: No clear outliers are observed, but the uniformity of the first rows across all choices suggests a standardized design principle.
### Conclusion
The image appears to represent a structured binary system, possibly for data encoding, configuration settings, or modular design. The "Array pieces" and "Choices" likely interact to form a larger framework, with the highlighted rows acting as anchors or reference points. Further analysis would require understanding the context of the 1s/0s (e.g., binary data, logic gates, or access permissions).
</details>
Figure 14: Judgement of line orientation (JLO) with visual inputs: Which pair of lines from the legend have a matching angle to the lines in the input image?
<details>
<summary>Image 16 Details</summary>

### Visual Description
## Diagram: Line Selection Task Interface
### Overview
The image depicts a line selection task interface with a radial legend and six input images. Each input image contains two intersecting lines, with multiple-choice options provided to identify the correct pair of lines. Correct answers are highlighted in green text.
### Components/Axes
1. **Legend** (Top-left):
- Radial diagram with 11 labeled lines (1–11) arranged in a clockwise fan pattern.
- Lines originate from a central point and extend outward at equal angular intervals.
- Labels are positioned along the lines, with line 1 at the bottom-left and line 11 at the top-right.
2. **Input Images** (Arranged in two rows of three):
- Each image contains two intersecting lines.
- Labels: "Input image" above each diagram.
- Choices: Four options (A–D) listing line pairs, with correct answers in green.
### Detailed Analysis
#### Input Image 1
- **Lines**: Two diagonal lines intersecting near the center.
- **Choices**:
- A) Lines 1 and 4 (Correct)
- B) Lines 1 and 3
- C) Lines 1 and 2
- D) Lines 1 and 8
#### Input Image 2
- **Lines**: Two nearly horizontal lines with slight upward slope.
- **Choices**:
- A) Lines 1 and 3
- B) Lines 1 and 7
- C) Lines 1 and 9 (Correct)
- D) Lines 1 and 6
#### Input Image 3
- **Lines**: Two nearly vertical lines with slight rightward tilt.
- **Choices**:
- A) Lines 1 and 2
- B) Lines 1 and 5 (Correct)
- C) Lines 1 and 6
- D) Lines 1 and 9
#### Input Image 4
- **Lines**: Two diagonal lines intersecting near the top-right.
- **Choices**:
- A) Lines 1 and 9
- B) Lines 1 and 2
- C) Lines 1 and 5 (Correct)
- D) Lines 1 and 6
#### Input Image 5
- **Lines**: Two nearly horizontal lines with slight downward slope.
- **Choices**:
- A) Lines 1 and 7
- B) Lines 1 and 9 (Correct)
- C) Lines 1 and 3
- D) Lines 1 and 6
#### Input Image 6
- **Lines**: Two diagonal lines intersecting near the bottom-left.
- **Choices**:
- A) Lines 1 and 9
- B) Lines 1 and 2
- C) Lines 1 and 5
- D) Lines 1 and 6 (Correct)
### Key Observations
1. **Line 1 Consistency**: Line 1 appears in all input images and is part of every correct answer.
2. **Angular Relationships**: Correct line pairs often correspond to lines with complementary angles (e.g., lines 1 and 4 in Image 1 form a 90° intersection).
3. **Distractor Patterns**: Incorrect choices frequently include lines adjacent to the correct pair in the legend (e.g., lines 1 and 3 in Image 1, where line 3 is adjacent to line 4).
### Interpretation
The task appears to test spatial reasoning and familiarity with the legend's angular structure. Line 1 serves as a fixed reference point, while the second line in each correct pair is determined by its angular relationship to line 1. For example:
- In Image 1, line 4 is perpendicular to line 1 (90° angle).
- In Image 2, line 9 forms a shallow angle with line 1, matching the input image's lines.
- Distractors often exploit proximity in the legend (e.g., line 3 near line 4 in Image 1).
This suggests the task evaluates the ability to map 2D geometric relationships to a radial coordinate system. The consistent use of line 1 as an anchor implies it may represent a baseline or reference direction in the task's underlying logic.
</details>
Figure 15: Judgement of line orientation (JLO) with text inputs: Which choice has an angle between lines 1 and 2 that matches the angle from the input array? The array colors are purely for illustration purposes.
<details>
<summary>Image 17 Details</summary>

### Visual Description
## Grid of Numerical Arrays: Input and Four Choices
### Overview
The image displays a grid of numerical arrays organized into five columns: **Input array**, **Choice 1**, **Choice 2**, **Choice 3**, and **Choice 4**. Each array is a matrix of cells containing the values **1**, **2**, or **0**, with some cells highlighted in green. The arrays are structured in rows and columns, though exact dimensions are not specified. The green boxes emphasize specific regions within **Choice 2** and **Choice 4**.
### Components/Axes
- **Labels**:
- Columns are labeled as **Input array**, **Choice 1**, **Choice 2**, **Choice 3**, and **Choice 4**.
- No explicit axis titles or legends are present.
- **Color Coding**:
- Green boxes highlight specific sub-regions in **Choice 2** (top-left) and **Choice 4** (bottom-right).
- No legend is provided to explain the color coding.
### Detailed Analysis
#### Input Array
- Contains a mix of **1s**, **2s**, and **0s** distributed across rows and columns.
- Example values:
- Row 1: `1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 2: `1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 3: `1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 4: `2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 5: `2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
#### Choice 1
- Similar structure to the Input array but with minor modifications:
- Row 4: `1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 5: `2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
#### Choice 2
- Green box highlights the **top-left sub-region** (rows 1–3, columns 1–4).
- Example values:
- Row 1: `2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 2: `1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 3: `1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
#### Choice 3
- Green box highlights the **bottom-right sub-region** (rows 4–5, columns 1–4).
- Example values:
- Row 4: `1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 5: `2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
#### Choice 4
- Green box highlights the **bottom-right sub-region** (rows 4–5, columns 1–4).
- Example values:
- Row 4: `1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
- Row 5: `2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0`
### Key Observations
1. **Green Highlights**:
- **Choice 2** emphasizes the **top-left sub-region** (rows 1–3, columns 1–4), where values transition from **2s** to **1s**.
- **Choice 4** emphasizes the **bottom-right sub-region** (rows 4–5, columns 1–4), where values remain consistent as **1s** and **2s**.
2. **Value Distribution**:
- **1s** and **2s** dominate the arrays, with **0s** acting as placeholders or neutral values.
- **Choice 1** introduces a shift from **2s** to **1s** in rows 4–5.
3. **Pattern Consistency**:
- All choices retain the same structure as the Input array but with localized modifications.
### Interpretation
The image likely represents a decision-making or optimization process where the **Input array** is transformed into four distinct configurations (**Choices 1–4**). The **green boxes** suggest that specific sub-regions are prioritized or modified in each choice:
- **Choice 2** focuses on the **top-left** region, possibly indicating a focus on early-stage adjustments.
- **Choice 4** emphasizes the **bottom-right** region, suggesting a focus on later-stage or critical modifications.
- The consistent use of **1s** and **2s** implies a binary or categorical system (e.g., "active/inactive," "high/low priority").
The absence of a legend or explicit axis labels limits quantitative interpretation, but the visual emphasis on green-highlighted regions suggests these areas are of particular interest in the context of the choices. The data may represent iterative adjustments, trade-offs, or prioritization strategies in a structured decision framework.
</details>
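For the text variant of JLO, the quantity being matched is the angle between the two labeled segments. A minimal sketch, assuming each label's cells trace a straight segment in the array:

```python
import math
import numpy as np

def line_angle(grid, label):
    """Orientation in [0, 180) of the segment traced by `label` cells,
    taken from the first and last such cell in row-major order."""
    rows, cols = np.nonzero(np.asarray(grid) == label)
    return math.degrees(math.atan2(float(rows[-1] - rows[0]),
                                   float(cols[-1] - cols[0]))) % 180.0

def angle_between_lines(grid):
    """Undirected angle (degrees, in [0, 90]) between lines 1 and 2."""
    d = abs(line_angle(grid, 1) - line_angle(grid, 2)) % 180.0
    return min(d, 180.0 - d)
```

An input array and a choice match when `angle_between_lines` returns (approximately) the same value for both.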
Figure 16: Selective attention (SAtt) with visual inputs: What are the (row, column) grid locations of the reference object? The top-left element of the grid is (0, 0).
<details>
<summary>Image 18 Details</summary>

### Visual Description
## Diagram: Multiple Choice Grid Layouts with Coordinate Annotations
### Overview
The image presents four distinct quadrants, each featuring a "Target object" on the left and a 5x5 grid of "Choices" on the right. Each quadrant tests positional reasoning by associating a target object with coordinate-based options (A-D). Coordinates are listed in (x,y) format, with some values exceeding grid boundaries (e.g., y=5 in a 0-4 grid).
### Components/Axes
1. **Target Objects**:
- Top-left: Tree
- Top-right: Fish
- Bottom-left: Shoe
- Bottom-right: Apple
2. **Choice Grids**:
- 5x5 spatial grids (x-axis: 0-4, y-axis: 0-4)
- Each grid contains 4 labeled options (A-D) with coordinate lists
3. **Coordinate Notation**:
- Parenthetical pairs (e.g., (0,5), (1,3))
- Some coordinates exceed grid limits (e.g., (5,5))
### Detailed Analysis
#### Top-left Quadrant (Tree)
- **Target Object**: Green tree with brown trunk
- **Choices**:
- A: (0,5), (1,3), (2,1), (3,3), (4,5)
- B: (0,5), (1,3), (2,0), (3,3), (4,4)
- C: (0,5), (1,3), (2,1), (3,3), (4,4)
- D: (0,5), (1,2), (2,1), (3,3), (5,5)
#### Top-right Quadrant (Fish)
- **Target Object**: Blue-green fish with white eye
- **Choices**:
- A: (0,3), (1,1), (2,3), (3,3)
- B: (0,0), (1,3), (2,3), (3,2)
- C: (0,0), (1,3), (2,3), (3,3)
- D: (0,2), (1,3), (2,3), (3,3)
#### Bottom-left Quadrant (Shoe)
- **Target Object**: Orange sneaker with star
- **Choices**:
- A: (0,3), (1,3), (2,1), (3,1)
- B: (0,3), (1,0), (2,1), (3,0)
- C: (0,3), (1,2), (2,1), (3,3)
- D: (0,3), (1,0), (2,1), (3,1)
#### Bottom-right Quadrant (Apple)
- **Target Object**: Red apple with green leaf
- **Choices**:
- A: (0,4), (1,2), (2,2), (3,2), (4,4)
- B: (0,0), (1,2), (2,2), (3,2), (4,0)
- C: (0,4), (1,2), (2,2), (3,2), (4,3)
- D: (0,4), (1,2), (2,2), (3,2), (4,0)
### Key Observations
1. **Coordinate Anomalies**:
- Multiple choices include y=5 (e.g., Tree A, D) despite 0-4 grid limits
- Fish D and Apple D share identical coordinate sets
2. **Pattern Repetition**:
- (3,3) appears in 7/8 choice sets
- (2,1) appears in 6/8 choice sets
3. **Grid Boundaries**:
- 5x5 grids imply valid coordinates should be 0-4 for both axes
- 25 total positions per grid, but choices only list 4-5 coordinates
### Interpretation
This diagram appears to test spatial reasoning through coordinate-based selection tasks. The repeated use of (3,3) and (2,1) suggests these positions may represent common reference points or "anchors" in the grids. The inclusion of out-of-bounds coordinates (e.g., y=5) introduces ambiguity - these could be intentional distractors or transcription errors. The identical coordinate sets in Fish D and Apple D imply either a design oversight or a deliberate test of attention to detail. The grids' 5x5 structure (25 positions) contrasts with the 4-5 coordinates per choice, suggesting incomplete spatial coverage or intentional omission of certain positions.
</details>
Figure 17: Selective attention (SAtt) with text inputs: What are the (row, column) grid locations of the target character? The top-left element of the grid is (0, 0).
<details>
<summary>Image 19 Details</summary>

### Visual Description
## List of Target Characters and Choices
### Overview
The image presents a structured list of target characters (`b`, `5`, `k`, `9`) with associated "Choices" labeled A-D. Each choice contains a sequence of coordinate pairs in parentheses, some highlighted in green. The data appears to represent positional or index-based mappings for each target character.
---
### Components/Axes
- **Target Characters**: Labeled at the top of each section (`b`, `5`, `k`, `9`).
- **Choices**: Subsections under each target character, labeled A-D.
- **Coordinate Pairs**: Tuples of integers (e.g., `(0,7)`, `(1,0)`) within each choice, some highlighted in green.
---
### Detailed Analysis
#### Target Character: `b`
- **Choices**:
- **A**: `(0,7)`, `(1,0)`, `(2,8)`, `(3,1)`, `(4,5)`, `(5,5)`, `(6,8)`, `(7,1)`, `(8,2)`
- **B**: `(0,7)`, `(1,0)`, `(2,8)`, `(3,1)`, `(4,5)`, `(5,5)`, `(6,8)`, `(7,1)`, `(8,6)` *(green parentheses)*
- **C**: `(0,3)`, `(1,1)`, `(2,0)`, `(3,1)`, `(4,5)`, `(5,6)`, `(6,3)`, `(7,5)`, `(8,0)`
- **D**: `(0,7)`, `(1,0)`, `(2,5)`, `(3,1)`, `(4,5)`, `(5,5)`, `(6,8)`, `(7,1)`, `(8,6)` *(green parentheses)*
#### Target Character: `5`
- **Choices**:
- **A**: `(0,3)`, `(1,3)`, `(2,2)`, `(3,2)`, `(4,2)`
- **B**: `(0,2)`, `(1,2)`, `(2,2)`, `(3,0)`, `(4,1)`
- **C**: `(0,3)`, `(1,3)`, `(2,1)`, `(3,2)`, `(4,4)`
- **D**: `(0,3)`, `(1,3)`, `(2,2)`, `(3,2)`, `(4,4)` *(green parentheses)*
#### Target Character: `k`
- **Choices**:
- **A**: `(0,2)`, `(1,4)`, `(2,4)`, `(3,2)`, `(4,2)` *(green parentheses)*
- **B**: `(0,1)`, `(1,1)`, `(2,4)`, `(3,2)`, `(4,1)`
- **C**: `(0,2)`, `(1,4)`, `(2,4)`, `(3,2)`, `(4,4)`
- **D**: `(0,2)`, `(1,4)`, `(2,4)`, `(3,1)`, `(4,2)`
#### Target Character: `9`
- **Choices**:
- **A**: `(0,3)`, `(1,3)`, `(2,3)`, `(3,6)`, `(4,5)`, `(5,3)`, `(6,2)`
- **B**: `(0,0)`, `(1,3)`, `(2,3)`, `(3,6)`, `(4,5)`, `(5,2)`, `(6,5)` *(green parentheses)*
- **C**: `(0,0)`, `(1,3)`, `(2,3)`, `(3,6)`, `(4,5)`, `(5,2)`, `(6,3)`
- **D**: `(0,0)`, `(1,1)`, `(2,3)`, `(3,6)`, `(4,5)`, `(5,2)`, `(6,5)`
---
### Key Observations
1. **Green Highlighting**: Certain choices (e.g., B for `b`, D for `5`, A for `k`, B for `9`) have coordinate pairs in green, potentially indicating correctness or priority.
2. **Repetition**: Some coordinate pairs repeat across choices (e.g., `(0,7)` appears in multiple `b` choices).
3. **Positional Patterns**: Coordinates often increment sequentially (e.g., `(0,7)`, `(1,0)`, `(2,8)` in `b`), suggesting a positional or index-based system.
---
### Interpretation
The data likely represents a mapping of positions or indices for each target character, possibly for tasks like character recognition, alignment, or error correction. The green highlights may denote validated or optimal mappings. The repetition of certain pairs (e.g., `(0,7)` in `b`) suggests common positional relationships, while variations (e.g., `(8,6)` vs. `(8,2)` in `b`) indicate alternative mappings. The structure implies a systematic approach to encoding character-specific positional data.
</details>
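Mechanically, the text variant of the selective-attention task is an exhaustive grid search for the target character. A minimal sketch (the example grid is hypothetical):

```python
def find_target(grid, target):
    """All (row, col) locations of `target`; the top-left cell is (0, 0)."""
    return [(r, c) for r, row in enumerate(grid)
                   for c, ch in enumerate(row) if ch == target]

print(find_target(["ab9", "9kb", "bb5"], "b"))  # [(0, 1), (1, 2), (2, 0), (2, 1)]
```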
Figure 18: Corsi block tapping (CBTT) with visual inputs: Each row shows a set of blue boxes. The boxes are tapped in a sequence (highlighted in yellow). What is the sequence of taps? Use the box ids in the reference.
<details>
<summary>Image 20 Details</summary>

### Visual Description
## Grid-Based Tap Sequence Test: Cognitive Pattern Recognition
### Overview
The image presents a 3x5 grid of panels (15 total) depicting spatial tap sequences using colored squares. Each panel shows a 3x3 grid with blue squares (static elements) and yellow squares (dynamic "taps"). To the right of each panel are multiple-choice options labeled A-D, containing numerical sequences. The test appears to assess pattern recognition and sequence recall through visual-spatial tasks.
### Components/Axes
1. **Panels**:
- 15 total panels arranged in 3 rows (6, 6, 3 panels per row)
- Each panel contains a 3x3 grid with:
- Blue squares (static background elements)
- Yellow squares (representing "taps" in sequence)
2. **Reference Numbers**:
- Rightmost column contains numbered reference grids (0-6) showing possible tap positions
3. **Multiple-Choice Format**:
- Options A-D for each panel
- Each option contains a 6-number sequence (0-6)
- Correct answers highlighted in green (e.g., A), incorrect in black
### Detailed Analysis
**Panel Structure**:
- All panels use a consistent 3x3 grid layout
- Blue squares occupy fixed positions across panels
- Yellow squares vary in position and quantity (1-3 per panel)
- Taps appear to follow specific spatial patterns (diagonal, vertical, clustered)
**Sequence Patterns**:
1. **Top Row Panels**:
- Panels 1-6 show increasing complexity in tap sequences
- Example: Panel 1 has 1 yellow square at position 3 → Correct answer A) 3,4,5,0,1,2
- Reference numbers show positional relationships (e.g., 0=bottom-left, 6=top-right)
2. **Middle Row Panels**:
- Panels 7-12 introduce multiple yellow squares
- Example: Panel 8 has 2 yellow squares at positions 1 and 5 → Correct answer B) 2,0,6,4,1,5
- Sequences show non-linear spatial progression
3. **Bottom Row Panels**:
- Panels 13-15 feature complex clustered taps
- Example: Panel 15 has 3 yellow squares at positions 0, 2, 4 → Correct answer D) 1,2,4,0,3
**Color Coding**:
- Yellow squares = Taps (dynamic elements)
- Blue squares = Static background
- Green highlighting = Correct answer indicators
### Key Observations
1. **Sequence Logic**:
- Correct answers follow spatial-temporal patterns matching yellow square positions
- Numbers correspond to reference grid positions (0=bottom-left, 6=top-right)
- Example: Panel 10's correct answer D) 1,2,4,0,3 matches diagonal tap progression
2. **Difficulty Progression**:
- Top row: Single-tap sequences
- Middle row: Dual-tap sequences with spatial separation
- Bottom row: Triple-tap sequences requiring pattern recognition
3. **Distractor Patterns**:
- Incorrect options (B/C) often transpose adjacent numbers
- Distractors maintain partial sequence accuracy but disrupt spatial logic
### Interpretation
This test evaluates:
1. **Spatial Memory**: Ability to recall tap positions across panels
2. **Pattern Recognition**: Identifying underlying spatial relationships
3. **Cognitive Flexibility**: Adapting to increasing sequence complexity
The correct answers demonstrate:
- Temporal sequencing (order of taps)
- Spatial mapping (converting grid positions to numerical sequences)
- Pattern abstraction (identifying underlying rules governing tap sequences)
Notable anomalies include Panel 12's correct answer C) 0,4,1,5,3,2, which shows a non-linear progression despite clustered taps. This suggests the test measures both immediate recall and abstract pattern recognition abilities.
</details>
Figure 19: Corsi block tapping (CBTT) with text inputs: Each row shows a set of boxes (B) laid out in space. This is shown as a 2D array where 0 is empty space. The boxes are tapped in a sequence (highlighted as T). What is the sequence of taps? Use the box ids in the reference. The array colors are purely for illustration purposes.
<details>
<summary>Image 21 Details</summary>

### Visual Description
## Table: Sequence of Taps and Reference Numbers with Multiple-Choice Answers
### Overview
The image presents a structured table with three primary columns:
1. **Sequence of taps ("T")**: Rows of binary-like sequences composed of "T" (taps) and "B" (blanks).
2. **Reference numbers**: Columns of numerical values (0-6) and "B" placeholders.
3. **Choices**: Labeled options (A-D) with numerical sequences, where the correct answer is highlighted in green.
The table appears to encode a logic puzzle where sequences of "T" and "B" correspond to selecting specific numbers from the "Reference numbers" column to match one of the "Choices."
---
### Components/Axes
- **Columns**:
- **Sequence of taps ("T")**: Contains rows of 6-character sequences (e.g., `0,0,0,0,0,0`, `B,T,0,0,0,0`).
- **Reference numbers**: Columns of 6 values per row (e.g., `0,0,0,0,0,0`, `0,0,6,0,1,0`).
- **Choices**: Labeled A-D with 5-number sequences (e.g., A) 2,3,6,5,4; D) 4,2,6,5,3).
- **Labels**:
- Top row: "Sequence of taps (T)", "Reference numbers", "Choices".
- Sub-labels under "Choices": A, B, C, D with numerical options.
- **Formatting**:
- Correct answers in "Choices" are highlighted in green (e.g., D) 4,2,6,5,3).
- "B" in sequences and reference numbers likely denotes a placeholder or exclusion.
---
### Detailed Analysis
#### Sequence of Taps ("T")
- **Structure**: Each row has 6 entries (e.g., `0,0,0,0,0,0`, `B,T,0,0,0,0`).
- **Pattern**:
- "T" and "B" alternate, with "B" often appearing in specific positions (e.g., 4th or 5th).
- Example: Row 1: `B,T,0,0,0,0` (B in 1st, T in 2nd).
#### Reference Numbers
- **Structure**: 6 values per row, mixing numbers (0-6) and "B".
- **Examples**:
- Row 1: `0,0,6,0,1,0` (numbers 6 and 1 at positions 3 and 5).
- Row 2: `0,0,0,0,5,0` (number 5 at position 5).
#### Choices
- **Options**:
- A) 2,3,6,5,4
- B) 4,2,6,3,5
- C) 1,3,4,5,2
- D) 4,2,6,5,3 (correct answer, green-highlighted).
---
### Key Observations
1. **Correct Answers**:
- D) 4,2,6,5,3 is consistently the correct choice in the first two rows.
- Other rows show varying correct answers (e.g., A) 2,1,7,6 in the third section).
2. **Sequence-Reference Correlation**:
- "B" in sequences may indicate positions to exclude or prioritize in reference numbers.
- Example: In row 1, "B" in the 1st position and "T" in the 2nd might map to selecting numbers from positions 3, 4, 5, etc.
3. **Reference Number Patterns**:
- Numbers like 6, 5, and 4 appear frequently in correct answers.
- "B" in reference numbers may act as a wildcard or invalid value.
---
### Interpretation
The table likely represents a logic-based puzzle where:
- The "Sequence of taps" dictates which positions in the "Reference numbers" column to select.
- "B" in sequences may denote skipped or ignored positions, while "T" indicates active selection.
- The "Choices" column provides possible answers, with the correct one derived by mapping the sequence to the reference numbers.
For example:
- In the first row, the sequence `B,T,0,0,0,0` might instruct selecting numbers from positions 3, 4, 5, etc., in the reference numbers (`0,0,6,0,1,0`), leading to the correct answer D) 4,2,6,5,3.
This structure suggests a systematic method for encoding answers, possibly for a game or cognitive test. The use of "B" and "T" as binary indicators aligns with common logic puzzle conventions.
</details>
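Because the CBTT text variant is fully symbolic, its ground-truth answer can be recovered mechanically. Below is a minimal sketch (our illustration, not the released benchmark code), assuming the Figure 19 conventions: per-step grids in which 'B' marks a box and 'T' marks the box tapped at that step, plus a reference grid that assigns an integer id to each box position.

```python
# Minimal sketch (our illustration, not the released benchmark code):
# recover a CBTT tap sequence from per-step text grids. Conventions follow
# Figure 19: 'B' marks a box, 'T' marks the box tapped at that step, '0' is
# empty space, and a reference grid assigns an integer id to each box.

def tap_sequence(steps, reference):
    """Return box ids in the order they were tapped across time steps."""
    ids = []
    for grid in steps:                      # one grid per tap
        for r, row in enumerate(grid):
            for c, cell in enumerate(row):
                if cell == "T":             # 'T' singles out the tapped box
                    ids.append(reference[r][c])
    return ids

# Reference grid: integer box ids at box positions, None in empty space.
reference = [[None, 3, None],
             [1, None, 2]]
# Two time steps: box 3 is tapped first, then box 2.
steps = [[["0", "T", "0"], ["B", "0", "B"]],
         [["0", "B", "0"], ["B", "0", "T"]]]
print(tap_sequence(steps, reference))  # [3, 2]
```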
Figure 20: Spatial addition (SAdd) with visual inputs: What is the sum of the two arrays? Ignore red circles. Blue circles represent 1, white circles represent 2 and empty spaces represent 0.
<details>
<summary>Image 22 Details</summary>

### Visual Description
## Grid-Based Selection Diagram: Array Transformation Process
### Overview
The image displays a 3x4 grid of 12 smaller 5x5 matrices arranged in three rows. Each matrix contains colored dots (blue, red, white) with specific positional patterns. The diagram is divided into three main sections:
1. **Array 1** (top-left 3x2 grids)
2. **Array 2** (middle 3x2 grids)
3. **Choice 1-4** (rightmost 3x4 grids)
Notable elements include:
- Red dots in Array 1/2 grids
- White dots in Choice grids
- Green outline highlighting Choice 2 in the top row
### Components/Axes
**Primary Labels:**
- Top row: "Array 1" (left), "Array 2" (center), "Choice 1-4" (right)
- Each grid: 5x5 coordinate system with no explicit axis labels
- Color coding:
- Red: Array 1/2 elements
- Blue: Base elements
- White: Selected/transformed elements
- Green: Highlight indicator
**Spatial Relationships:**
- Array 1: Top-left quadrant (3 grids)
- Array 2: Middle-left quadrant (3 grids)
- Choices: Right quadrant (4 grids)
- Green outline: Top row, second column (Choice 2)
### Detailed Analysis
**Array 1 Patterns:**
1. First grid: Red dot at (1,1), blue dots at (2,2), (3,3), (4,4)
2. Second grid: Red dot at (1,3), blue dots at (2,1), (3,2), (4,4)
3. Third grid: Red dot at (3,1), blue dots at (1,2), (2,3), (4,4)
**Array 2 Patterns:**
1. First grid: Red dot at (2,2), blue dots at (1,1), (3,3), (4,4)
2. Second grid: Red dot at (3,3), blue dots at (1,1), (2,2), (4,4)
3. Third grid: Red dot at (4,4), blue dots at (1,1), (2,2), (3,3)
**Choice Grids:**
- All choices show blue base elements with white overlays
- White dot patterns vary:
- Choice 1: 3 white dots in diagonal formation
- Choice 2: 4 white dots forming square pattern (highlighted)
- Choice 3: 2 white dots in opposite corners
- Choice 4: 3 white dots in triangular formation
### Key Observations
1. **Red Dot Distribution:**
- Array 1: Red dots positioned at matrix diagonals
- Array 2: Red dots positioned at matrix anti-diagonals
- No red dots in Choice grids
2. **White Dot Patterns:**
- Choice 2 (highlighted) shows most complex pattern (4 dots)
- All choices maintain blue base elements while adding white overlays
3. **Spatial Logic:**
- Array 1/2 red dots appear to "transform" into white dots in choices
- Green outline suggests Choice 2 represents optimal transformation
### Interpretation
This diagram appears to model a selection/transformation process:
1. **Input Arrays:** Array 1 and 2 represent different input configurations with red dots as key elements
2. **Transformation Logic:** Blue elements remain constant while red elements are converted to white through selection
3. **Optimal Outcome:** The green-highlighted Choice 2 suggests a preferred transformation pattern (4 white dots forming square)
4. **Pattern Significance:** Diagonal/anti-diagonal red dot placement in arrays correlates with specific white dot configurations in choices
The diagram demonstrates a systematic approach to element selection/transformation, with the green outline serving as a visual cue for the optimal solution. The consistent blue base elements across all choices indicate preserved elements during the transformation process.
</details>
Figure 21: Spatial addition (SAdd) with text inputs: What is the sum of the two arrays? Ignore R. E is 0, B is 1 and W is 2. The array colors are purely for illustration purposes.
<details>
<summary>Image 23 Details</summary>

### Visual Description
## Grid of Arrays and Choices: Letter Pattern Configurations
### Overview
The image displays a structured grid of letter patterns organized into **Array 1**, **Array 2**, and four **Choice** configurations (Choice 1–4). Each section contains a 5x5 grid of letters (E, B, R, W), with specific cells highlighted in green. The layout suggests a comparative analysis of positional changes or substitutions across configurations.
### Components/Axes
- **Labels**:
- **Array 1** (leftmost section)
- **Array 2** (second section)
- **Choice 1–4** (rightmost sections)
- **Grid Structure**:
- Each array/choice contains a 5x5 grid of letters.
- Letters include **E** (most frequent), **B**, **R**, and **W** (rare).
- **Highlighted Cells**:
- Green boxes emphasize specific cells in **Choice 4** (top row) and **Choice 2** (middle row).
### Detailed Analysis
#### Array 1
- **Row 1**: `E, B, E, R, E`
- **Row 2**: `E, E, E, E, E`
- **Row 3**: `E, E, E, E, E`
- **Row 4**: `E, E, B, E, E`
- **Row 5**: `E, E, E, E, E`
#### Array 2
- **Row 1**: `E, B, E, E, E`
- **Row 2**: `E, B, E, E, E`
- **Row 3**: `E, E, E, E, E`
- **Row 4**: `E, R, E, E, E`
- **Row 5**: `E, E, E, E, E`
#### Choice 1
- **Row 1**: `E, W, E, E, E`
- **Row 2**: `B, E, E, E, E`
- **Row 3**: `E, E, E, E, E`
- **Row 4**: `E, E, E, B, E`
- **Row 5**: `E, E, E, E, E`
#### Choice 2
- **Row 1**: `E, B, E, E, E`
- **Row 2**: `E, E, E, E, B`
- **Row 3**: `E, E, E, B, E`
- **Row 4**: `E, E, E, E, E`
- **Row 5**: `E, E, E, E, B`
#### Choice 3
- **Row 1**: `E, W, E, E, E`
- **Row 2**: `B, E, E, E, E`
- **Row 3**: `E, E, E, B, E`
- **Row 4**: `E, E, E, E, E`
- **Row 5**: `E, E, E, E, E`
#### Choice 4
- **Row 1**: `E, W, E, E, E` (highlighted)
- **Row 2**: `E, B, E, E, E`
- **Row 3**: `E, E, E, E, E`
- **Row 4**: `E, E, B, E, E`
- **Row 5**: `E, E, E, E, E`
### Key Observations
1. **Repetition of "E"**:
- "E" dominates all grids, appearing in ~80% of cells across all configurations.
2. **Substitutions**:
- **B** and **R** are sparse, with **W** appearing only in **Choice 1**, **Choice 3**, and **Choice 4**.
3. **Highlighted Cells**:
- **Choice 4**’s top row (`E, W, E, E, E`) and **Choice 2**’s middle row (`E, E, E, B, E`) are emphasized, suggesting these positions are critical.
4. **Pattern Shifts**:
- **Choice 1** and **Choice 3** introduce **W** in the first row, while **Choice 2** and **Choice 4** retain **B** in the second row.
### Interpretation
The grid likely represents a decision-making or optimization scenario where:
- **Array 1 and 2** define baseline configurations.
- **Choices 1–4** explore variations, with **W** and **B** acting as modifiers.
- Highlighted cells in **Choice 4** and **Choice 2** may indicate priority positions for substitutions (e.g., **W** in the top row could signify a "weight" or "priority" marker).
- The absence of **R** in **Choice 1–4** suggests it is excluded from optimized configurations.
This structure could model scenarios like resource allocation, error correction, or pattern recognition, where specific letter placements (e.g., **W** in critical positions) alter outcomes. The green highlights imply these cells are focal points for analysis.
</details>
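The text-input SAdd rule in Figure 21 reduces to element-wise addition after mapping letters to values. Below is a minimal sketch (our illustration, not the released code), assuming the stated mapping E=0, B=1, W=2, with the ignored R treated as 0 and input grids containing only E/B/R so that sums never exceed 2.

```python
# Minimal sketch (our illustration, not the released code) of the
# text-input spatial addition rule in Figure 21: E = 0, B = 1, W = 2,
# and the distractor R is ignored (treated as 0). Grids are added
# element-wise and sums rendered back to letters; inputs are assumed to
# contain only E/B/R, so sums never exceed 2.
VAL = {"E": 0, "B": 1, "W": 2, "R": 0}
SYM = {0: "E", 1: "B", 2: "W"}

def spatial_add(a, b):
    return [[SYM[VAL[x] + VAL[y]] for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

a = [["E", "B", "E"],
     ["R", "E", "B"]]
b = [["E", "B", "E"],
     ["E", "E", "E"]]
print(spatial_add(a, b))  # [['E', 'W', 'E'], ['E', 'E', 'B']]
```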
Figure 22: Maze completion (MCT) with visual inputs: We illustrate examples of mazes used for the MCT task. We programmatically generate mazes of different sizes using Mazelib (Stilley, 2014). Blue cells are obstacles. Black cells are navigable space. The yellow square represents the current location. The red circle represents the goal location.
<details>
<summary>Image 24 Details</summary>

### Visual Description
## Diagram: Maze Structures with Colored Markers
### Overview
The image displays three maze diagrams of varying sizes (large, medium, small) arranged in a grid layout. Each maze is composed of black pathways on a blue background, with colored dots (yellow and red) positioned at specific nodes. No textual labels, legends, or axis markers are present.
### Components/Axes
- **Maze Layouts**:
- **Large Maze (Left)**: Dominates the left side, occupying ~60% of the image width.
- **Medium Maze (Bottom-Right)**: Positioned below the large maze, ~30% width.
- **Small Maze (Top-Right)**: Overlaps the top-right corner, ~15% width.
- **Colored Markers**:
- **Yellow Dots**: Appear at the top-left corner of the large maze and the top-left corner of the small maze.
- **Red Dots**: Located at the bottom-right corner of the large maze, bottom-right corner of the medium maze, and bottom-right corner of the small maze.
### Detailed Analysis
- **Maze Complexity**:
- The large maze exhibits the highest complexity, with dense, intertwining pathways.
- The medium and small mazes have simpler, more linear structures.
- **Marker Placement**:
- Yellow dots are consistently placed at the **top-left** of their respective mazes.
- Red dots are consistently placed at the **bottom-right** of their respective mazes.
- **No Textual Elements**:
- No labels, legends, or axis titles are visible.
### Key Observations
1. **Marker Consistency**: Yellow and red dots are positioned at fixed corners across all mazes, suggesting a potential symbolic or navigational purpose.
2. **Size Correlation**: Larger mazes have more intricate pathways, while smaller mazes are simplified.
3. **Absence of Labels**: No textual information is provided to explain the purpose or context of the mazes or markers.
### Interpretation
The image likely represents a visual puzzle or navigation challenge, with colored markers indicating start/end points or objectives. The lack of textual labels implies the mazes are designed for intuitive interpretation, possibly for a game or cognitive test. The consistent placement of markers across mazes suggests a standardized framework, though the absence of explicit instructions leaves the exact purpose ambiguous. The size variation may indicate difficulty scaling or spatial constraints in different contexts.
</details>
Figure 23: Maze completion (MCT) with text inputs: We illustrate examples of mazes used for the MCT task. We programmatically generate mazes of different sizes using Mazelib (Stilley, 2014). 0s are obstacles. 1s are navigable space. A represents the current location. G represents the goal location. The array colors are purely for illustration purposes.
<details>
<summary>Image 25 Details</summary>

### Visual Description
## Binary Matrix with Embedded Annotations
### Overview
The image displays a large grid composed of binary digits (0s and 1s) organized into three distinct sections: a dominant left section, a smaller top-right section, and a larger bottom-right section. Scattered throughout the grid are alphanumeric annotations ("A" and "G") embedded within specific cells. The grid lacks explicit axes, legends, or numerical labels beyond the binary values.
### Components/Axes
- **Grid Structure**:
- **Left Section**: 20 rows × 30 columns of binary data.
- **Top-Right Section**: 5 rows × 10 columns of binary data.
- **Bottom-Right Section**: 15 rows × 20 columns of binary data.
- **Annotations**:
- **"A"**: Appears in the first row of the left section (column 2) and the top-right section (row 5, column 10).
- **"G"**: Appears in the bottom-right section (row 15, column 20).
### Detailed Analysis
- **Binary Patterns**:
- The left section exhibits repetitive sequences of "1,1,1,1" and "0,0,0,0" in rows 2–19, suggesting structured data blocks.
- The top-right and bottom-right sections show less repetition, with sporadic "1" and "0" distributions.
- **Annotation Placement**:
- "A" and "G" are positioned at the edges of their respective sections, potentially marking boundaries or key data points.
- No other alphanumeric characters are present.
### Key Observations
1. **Structural Segmentation**: The grid is divided into three logical regions, each with distinct binary patterns.
2. **Sparse Annotations**: Only two unique letters ("A" and "G") are embedded, with no clear correlation to binary values.
3. **Repetition vs. Randomness**: The left section shows higher repetition, while the right sections appear more randomized.
### Interpretation
The grid likely represents a binary dataset with embedded markers ("A" and "G") to denote specific events, categories, or anomalies. The segmentation into sections suggests modular data organization, possibly for processing or analysis. The lack of explicit labels or legends implies the annotations are self-referential or context-dependent within the dataset. The repetition in the left section may indicate a controlled or standardized data block, while the right sections could represent variable or unstructured data. The placement of "A" and "G" at section boundaries might signify transitions or critical thresholds in the data flow.
</details>
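Under the Figure 23 text conventions, an optimal MCT solution is simply a shortest path, which breadth-first search recovers directly. Below is a minimal sketch (our illustration; the action names are assumptions, and the benchmark generates its mazes with Mazelib rather than this hand-written grid).

```python
from collections import deque

# Minimal sketch (our illustration) of an optimal MCT solver under the
# Figure 23 text conventions: '0' = obstacle, '1' = navigable,
# 'A' = current location, 'G' = goal. BFS returns a shortest action path.
def solve_maze(grid):
    find = lambda ch: next((r, c) for r, row in enumerate(grid)
                           for c, v in enumerate(row) if v == ch)
    start, goal = find("A"), find("G")
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "0" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [name]))
    return None  # goal unreachable

maze = ["0000000",
        "0A10100",
        "0010100",
        "01111G0",
        "0000000"]
print(solve_maze(maze))  # ['right', 'down', 'down', 'right', 'right', 'right']
```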
Figure 24: Cambridge spatial working memory (CSWM) with visual inputs: We illustrate two game plays of the CSWM task in the two rows. In each row, we show the initial observation followed by actions taken and the resulting observations. Note how the box identities change after each step. This is intended to force models to remember boxes by their spatial locations instead of their integer identities. As treasures get collected, they are populated in the 'Treasures collected' section of the game screen. When a treasure is collected, a new treasure is placed in one of the boxes where the treasure never appeared before.
<details>
<summary>Image 26 Details</summary>

### Visual Description
## Diagram: Treasure Collection Simulation
### Overview
The image depicts a sequence of 10 panels arranged in two rows of five, illustrating a step-by-step process of collecting treasures by opening numbered boxes. Each panel shows a grid of boxes (labeled 0–5) and a "Treasures collected" section below, which tracks the number of treasures gathered through colored slots (white for uncollected, yellow for collected). Arrows connect panels to indicate the sequence of actions.
### Components/Axes
- **Panels**: 10 panels (5 per row) representing sequential states.
- **Boxes**: Six boxes labeled 0–5 arranged in a grid in each panel.
- **Treasures collected**: Sections below each panel with slots (white/yellow) indicating collected treasures.
- **Actions**: Labeled as "Open box X" (X = 0, 2, 0, 1, 3 in the top row; 0, 2, 0, 3 in the bottom row).
### Detailed Analysis
1. **Panel 1 (Top Row, Leftmost)**:
- Boxes: 0, 1, 2, 3, 4, 5 (arranged in a grid).
- Action: "Open box 0" (highlighted in yellow).
- Treasures collected: 1 yellow slot.
2. **Panel 2 (Top Row, Second from Left)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 2" (highlighted).
- Treasures collected: 2 yellow slots.
3. **Panel 3 (Top Row, Third from Left)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 0" (highlighted).
- Treasures collected: 3 yellow slots.
4. **Panel 4 (Top Row, Fourth from Left)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 1" (highlighted).
- Treasures collected: 4 yellow slots.
5. **Panel 5 (Top Row, Rightmost)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 3" (highlighted).
- Treasures collected: 5 yellow slots.
6. **Panel 6 (Bottom Row, Leftmost)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 0" (highlighted).
- Treasures collected: 1 yellow slot.
7. **Panel 7 (Bottom Row, Second from Left)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 2" (highlighted).
- Treasures collected: 2 yellow slots.
8. **Panel 8 (Bottom Row, Third from Left)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 0" (highlighted).
- Treasures collected: 3 yellow slots.
9. **Panel 9 (Bottom Row, Fourth from Left)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 3" (highlighted).
- Treasures collected: 4 yellow slots.
10. **Panel 10 (Bottom Row, Rightmost)**:
- Boxes: 0, 1, 2, 3, 4, 5.
- Action: "Open box 3" (highlighted).
- Treasures collected: 5 yellow slots.
### Key Observations
- **Treasure Progression**: Each action increases the number of yellow slots in the "Treasures collected" section by 1, indicating a cumulative collection process.
- **Box Repetition**: Box 0 is opened multiple times (panels 1, 3, 6, 8), suggesting it may be a recurring or reset mechanism.
- **Box 3**: Opened in panels 5, 9, and 10, with the final panel showing all 5 treasures collected.
- **No Explicit Labels**: The numbers in the boxes (0–5) lack explicit meaning (e.g., treasure types, values), but their positions remain consistent across panels.
### Interpretation
The diagram simulates a treasure-collection game where players open boxes in a specific sequence to accumulate treasures. The repetition of opening box 0 and the final collection of all 5 treasures in panel 10 suggest a structured progression. The lack of explicit labels for box numbers implies they may represent positions or identifiers rather than treasure values. The cumulative nature of the "Treasures collected" section emphasizes the importance of sequential actions in achieving the goal.
</details>
Figure 25: Cambridge spatial working memory (CSWM) with text inputs: We illustrate two game plays of the CSWM task in the two rows. In each row, we show the initial observation (provided as text arrays) followed by actions taken and the resulting observations. The boxes in each array are the non-zero elements. Note how the box identities change after each step. This is intended to force models to remember boxes by their spatial locations instead of their integer identities. As treasures get collected, the 'Number of treasures collected' gets incremented. When a treasure is collected, a new treasure is placed in one of the boxes where the treasure never appeared before. The array colors are purely for illustration purposes.
<details>
<summary>Image 27 Details</summary>

### Visual Description
## Diagram: Treasure Collection Process
### Overview
The image depicts a sequential process of treasure collection across two rows of boxes. Each box contains numerical values and a "T" symbol, with arrows indicating actions (e.g., "Open box 1") that transition between states. The bottom row tracks the number of treasures collected, incrementing as boxes are opened.
### Components/Axes
- **Top Row Boxes**:
- Labeled with numbers (1–4) and "T" symbols (e.g., "T" in box 1, "T" in box 3).
- Each box contains a grid of 6 cells (e.g., "0,0,0,0,0,1" in the first box).
- Text: "Number of treasures collected: X / Y" (e.g., "0 / 4" initially).
- **Bottom Row Boxes**:
- Similar grid structure but with higher numerical values (e.g., "6,2,0,0,0,0").
- Text: "Number of treasures collected: X / 6" (e.g., "0 / 6" initially).
- **Arrows**:
- Labeled with actions (e.g., "Action: Open box 1").
- Connect top and bottom row boxes, indicating state transitions.
### Detailed Analysis
1. **Top Row Progression**:
- Initial state: All boxes have "0" in most cells, with "T" in specific positions (e.g., box 1 has "T" in the 5th cell).
- After "Open box 1": Box 1’s grid changes to "0,0,0,0,1,0" (treasure collected).
- Subsequent actions (e.g., "Open box 4") update grids and increment the "collected" count (e.g., "3 / 4").
2. **Bottom Row Progression**:
- Initial state: All boxes have "0" in most cells.
- After "Open box 1": Box 1’s grid updates to "0,0,0,0,1,0" (treasure collected).
- Subsequent actions (e.g., "Open box 4") update grids and increment the "collected" count (e.g., "2 / 6").
3. **Action Flow**:
- Arrows show a left-to-right sequence:
- Top row: Open box 1 → Open box 1 → Open box 4 → Open box 3 → Open box 4.
- Bottom row: Open box 1 → Open box 4 → Open box 2 → Open box 1 → Open box 2.
### Key Observations
- **Treasure Distribution**:
- Top row treasures are sparse (e.g., "T" in 1/4 boxes), while bottom row treasures are denser (e.g., "T" in 2/6 boxes).
- **Count Incrementation**:
- Top row progresses from 0/4 to 3/4, while bottom row progresses from 0/6 to 2/6.
- **Action Consistency**:
- Repeated actions (e.g., "Open box 1" twice) suggest iterative treasure collection.
### Interpretation
The diagram illustrates a systematic process of treasure collection, where each action (opening a box) reveals or collects treasures. The top row represents a smaller-scale collection (4 total treasures), while the bottom row involves a larger-scale process (6 total treasures). The "T" symbols likely denote successful treasure retrieval, and the numerical grids may represent hidden values or probabilities. The repeated actions suggest a trial-and-error mechanism, with each box opening refining the collection strategy. The discrepancy in collected counts (3/4 vs. 2/6) implies varying efficiency or constraints between the two processes.
</details>
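The CSWM mechanics described in Figures 24-25 are easy to simulate. Below is a minimal sketch (our illustration, not the released code): opening the treasure box collects it, after which a new treasure is hidden in a box that has never held one, so a player with perfect spatial memory never reopens a box that already paid out.

```python
import random

# Minimal sketch (our illustration, not the released code) of the CSWM
# rules in Figures 24-25: opening the treasure box collects it, and the
# next treasure is hidden in a box that never held a treasure before.
# A player with perfect spatial memory never reopens a box that already
# paid out, and never repeats a box within the same search round.
def play_cswm(n_boxes, seed=0):
    rng = random.Random(seed)
    order = list(range(n_boxes))
    rng.shuffle(order)               # order in which treasures appear
    emptied, opens = set(), 0
    for treasure in order:
        for box in range(n_boxes):   # probe boxes left to right...
            if box in emptied:       # ...skipping ones that already paid out
                continue
            opens += 1
            if box == treasure:
                emptied.add(box)
                break
    return opens

print(play_cswm(6), "openings to collect all 6 treasures")
```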
Prompt 1: Direction estimation (Ego image)
Prompt 2: Direction estimation (DM image)
USER: You are playing a game in a 2D world. Each image shows the immediate surroundings around you, with you at the center of the image (in yellow). The black cells are obstacles, i.e., you cannot move over them. The blue cells are navigable spaces that you can move over. Some blue cells have landmarks in them (red circles with a text label). These are important to remember. Here is a video taken in the 2D world as you were navigating in it. Understand the 2D world you are navigating in, build a map of the world to keep track of your position as well as locations of landmarks in the world.
<IMAGE 1>
<IMAGE 2>
.
.
.
You must now answer a question based on your understanding of the 2D world. Pretend that you are standing next to the landmark A.
See image below.
<IMAGE OF A>
What is the angle between the line connecting your current location to the landmark A and the line connecting your current location to the landmark C? Angles range from -180 to 180 degrees. Here are your choices: 1) 156 2) 96 3) 66 4) -24 Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer choice value>
}
```
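For reference, the ground truth for such direction-estimation questions is the signed angle between the two landmark directions. Below is a minimal sketch (our illustration, not the benchmark's code); the prompt fixes only the -180 to 180 range, so the sign convention here (counter-clockwise positive in standard x/y axes) is an assumption.

```python
import math

# Minimal sketch (our illustration, not the benchmark's code): the signed
# angle between the lines from the agent's position p to landmarks a and c,
# in (-180, 180]. The sign convention (counter-clockwise positive in
# standard x/y axes) is an assumption; the prompt only fixes the range.
def signed_angle(p, a, c):
    v1 = (a[0] - p[0], a[1] - p[1])   # direction to first landmark
    v2 = (c[0] - p[0], c[1] - p[1])   # direction to second landmark
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.atan2(cross, dot))

print(signed_angle((0, 0), (1, 0), (0, 1)))  # 90.0
```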
## Large-scale spatial cognition
| | | Direction est. | Distance est. | Map sketching | Route retracing | Shortcut discovery |
|-----------|-------------|------------------|-----------------|-----------------|-------------------|----------------------|
| Ego image | # questions | 150 | 135 | 30 | 30 | 30 |
| | # videos | 30 | 30 | 30 | 30 | 30 |
| DM image | # questions | 150 | 135 | 30 | 30 | 30 |
| | # videos | 30 | 30 | 30 | 30 | 30 |
| DM text | # questions | 150 | 135 | 30 | 30 | 30 |
## Small-scale spatial cognition
| | | MRT | PTT | WLT | MPFB | JLO | SAtt | MCT | CBTT | SAdd | CSWM |
|---------|-------------|-------|-------|-------|--------|-------|--------|-------|--------|--------|--------|
| Visual | # questions | 172 | 100 | 50 | 50 | 50 | 100 | 45 | 50 | 50 | 50 |
| Visual | # images | 139 | 20 | 300 | 250 | 51 | 200 | - | 297 | 300 | - |
| Textual | # questions | 40 | 100 | - | 50 | 50 | 100 | 45 | 50 | 50 | 50 |
Table 6: SPACE benchmark statistics: We show the number of questions, images, and videos for each SPACE task. For large-scale spatial cognition tasks, we have one video per environment. We generate questions and navigation tasks based on these videos. Some small-scale spatial cognition tasks have multiple images for the same question (e.g., MPFB, WLT, SAtt and CBTT), while other tasks have multiple questions for the same image (e.g., PTT, MRT). For interactive tasks like MCT, CSWM, route retracing and shortcut discovery, images are rendered conditioned on the actions taken by the agent.
Prompt 3: Direction estimation (DM text)
USER: You are playing a game in a 2D text world. The console of the game is represented as a comma-separated text array. Obstacles are represented using 0, i.e., you cannot move over them. Navigable spaces that you can move over are represented using 1. Some navigable spaces have landmarks represented as an ascii character (A - Z). These are also navigable spaces and are just labeled with an ascii character. These landmarks are important to remember. You will always be located at the center of the array with your position highlighted using the '*' character. Here is a sequence of console screen recordings taken as you were navigating in the 2D text world. Understand the world you are navigating in, build a map of the world to keep track of your position as well as locations of landmarks in the world.
You must now answer a question based on your understanding of the 2D text world. Pretend that you are standing next to the landmark B as shown below.
```
1, 1, 1, 1, 1
1, 1, 1, 1, 1
1, B, *, 1, 1
1, 0, 1, 1, 0
1, 1, 1, 1, 1
```
What is the angle between the line connecting your current location to the landmark B and the line connecting your current location to the landmark C? Angles range from -180 to 180 degrees. Note that you may not see landmark C in your immediate vicinity. You must use spatial knowledge from the sequence of screen recordings to locate your current position and both landmarks to answer this question.
Here are your choices: 1) -127 2) 53 3) 83 4) -97
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer choice value>
}
```
Prompt 4: Distance estimation (Ego image)
Prompt 5: Distance estimation (DM image)
USER: You are playing a game in a 2D world. Each image shows the immediate surroundings around you, with you at the center of the image (in yellow). The black cells are obstacles, i.e., you cannot move over them. The blue cells are navigable spaces that you can move over. Some blue cells have landmarks in them (red circles with a text label). These are important to remember. Here is a video taken in the 2D world as you were navigating in it. Understand the 2D world you are navigating in, build a map of the world to keep track of your position as well as locations of landmarks in the world.
<IMAGE>
<IMAGE>
.
.
.
You must now answer a question based on your understanding of the 2D world. Pretend that you are standing on the landmark C. What are the euclidean distances (in meters) from landmark C to each of the following landmarks: B, A, N, O, Y? Assume that each grid square (white borders) is 1m x 1m in size. Here are your choices: 1) 4.5, 8.5, 11.4, 11.2, 10.0 2) 11.4, 8.5, 4.5, 11.2, 10.0 3) 10.0, 8.5, 4.5, 11.4, 11.2 4) 12.5, 6.0, 3.4, -3.5, 7.2
Think step by step. Then answer your question in the following json format.
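Ground truth for distance estimation follows directly from landmark coordinates, since each grid square is 1m x 1m. A minimal sketch (our illustration; the coordinates below are hypothetical, not taken from any actual SPACE environment):

```python
import math

# Minimal sketch (our illustration): Euclidean distances from one landmark
# to several others, assuming each grid square is 1m x 1m as in the prompt.
# The landmark coordinates below are hypothetical, for illustration only.
landmarks = {"C": (4, 7), "B": (8, 9), "A": (12, 5), "N": (1, 3)}

def distances(src, targets):
    sx, sy = landmarks[src]
    return [round(math.hypot(x - sx, y - sy), 1)
            for x, y in (landmarks[t] for t in targets)]

print(distances("C", ["B", "A", "N"]))  # [4.5, 8.2, 5.0]
```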
Prompt 6: Distance estimation (DM text)
USER: You are playing a game in a 2D text world. The console of the game is represented as a comma-separated text array. Obstacles are represented using 0, i.e., you cannot move over them. Navigable spaces that you can move over are represented using 1. Some navigable spaces have landmarks represented as an ascii character (A - Z). These are also navigable spaces and are just labeled with an ascii character. These landmarks are important to remember. You will always be located at the center of the array with your position highlighted using the "*" character. Here is a sequence of console screen recordings taken as you were navigating in the 2D text world. Understand the world you are navigating in, build a map of the world to keep track of your position as well as locations of landmarks in the world.
```
========= Start of console screen recordings =========
Screen at time = 0
========= End of console screen recordings =========
```
```
USER: You are a sentient AI system capable of visually understanding the physical world, performing spatial reasoning, remembering landmarks in the world and navigating in it. Here is a video taken in a physical environment as you were navigating in it. Understand the environment you are navigating in, build a map of the environment to keep track of your position as well as locations of landmarks in the environment. Landmarks are paintings of objects hung on the walls.
<IMAGE 1>
<IMAGE 2>
.
.
```
Prompt 7: Map sketching (Ego image)
## Prompt 8: Map sketching (DM image)
USER: You are playing a game in a 2D world. Each image shows the immediate surroundings around you, with you at the center of the image (in yellow). The black cells are obstacles, i.e., you cannot move over them. The blue cells are navigable spaces that you can move over. Some blue cells have landmarks in them (red circles with a text label). These are important to remember. Here is a video taken in the 2D world as you were navigating in it. Understand the 2D world you are navigating in, build a map of the world to keep track of your position as well as locations of landmarks in the world.
<IMAGE 1>
<IMAGE 2>
.
.
You must now sketch a map of the environment with the locations of the start and landmark locations. Use your understanding of the 2D world. Which of these map sketches best capture the true structure of the 2D world?
Choice 1
<SKETCH IMAGE OF Choice 1>
Choice 2
<SKETCH IMAGE OF Choice 2>
Choice 3
<SKETCH IMAGE OF Choice 3>
Choice 4
<SKETCH IMAGE OF Choice 4>
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer choice value>
}
```
Prompt 9: Map sketching (DM text)
USER: You are playing a game in a 2D text world. The console screen of the game is represented as a comma-separated text array. Obstacles are represented using 0, i.e., you cannot move over them. Navigable spaces that you can move over are represented using 1. Some navigable spaces have landmarks represented as an ascii character (A - Z). These are also navigable spaces and are just labeled with an ascii character. These landmarks are important to remember. You will always be located at the center of the array with your position highlighted using the "*" character. Here is a sequence of console screen recordings taken as you were navigating in the 2D text world. Understand the world you are navigating in, build a map of the world to keep track of your position as well as locations of landmarks in the world.
========= Start of console screen recordings =========
Screen at time = 0
0,0,0,0,0
0,0,0,0,0
0,1,*,1,1
0,C,1,1,1
0,0,1,1,0
Screen at time = 1
0,0,0,0,0
0,0,0,0,0
0,0,*,1,1
0,0,C,1,1
0,0,0,1,1
.
.
========= End of console screen recordings =========
## Prompt 10: Route retracing (Ego image)
SYSTEM: You are a sentient living creature capable of navigating in environments, building internal spatial representations of environments, and finding goals in them. You will be shown a video of the shortest route from the initial position to the goal. You must look at the video and understand the environment structure and the route taken. Then, you will be placed in the environment at the same initial position. You must navigate from the initial position to the goal using the same route shown in the video, as quickly as possible. Below, you will find sections highlighting more details about the task. You can refer to these for more information.
## OBSERVATIONS:
The images are recorded from a perspective viewpoint (i.e., egocentric or first-person). This means that you are likely to see objects from different angles, resulting in a skewed appearance of the underlying 3D objects. It is important for you to look past this skew in the appearance and perceive the true shape of the object in 3D.
## GOAL:
You will be provided an object goal using a text description and an image of the object. You must find the goal object in the environment by repeating the path shown in the video walkthrough. Once you find it, move close to the location of the goal and re-orient yourself to face the object.
## ACTIONS:
You have four actions available.
move forward: move forward by 0.25m along the current heading direction. It does not change the heading angle.
turn left: decrease your heading angle by 30 degrees. It does not change the (x, y) position.
turn right: increase your heading angle by 30 degrees. It does not change the (x, y) position.
stop: ends the current task. Issue this action only if you think you have reached the goal. If you haven't reached the goal, this action will result in a navigation failure that cannot be recovered from.
## STUCK IN PLACE BEHAVIOR:
Avoid getting stuck in one place, i.e., do not alternate between left and right turns without going anywhere. You must try and move around consistently without being stuck in one place.
## STOP CRITERIA:
Before executing stop, you must ensure that you've 'reached' the goal correctly. To reach a goal, you have to move close enough to the wall where you see the goal, and see the object clearly in your observation in front of you.
## RESPONSE FORMAT:
Respond in the following format:
Reasoning: text explanation string in one or two short sentences - provide all your explanations and inner thoughts here - avoid verbosity and be concise
Intent: state your intent in one short sentence, i.e., what you are trying to achieve
Then provide the final action to take in a json formatted string.
```
{
"action": <action name -- must be one of move_forward, turn_left, turn_right, stop>
}
```
USER: Here is the sequence of frames from the walkthrough video demonstrating the route you need to take. Analyze the walkthrough to understand the movements and the maze structure. Take a note of all the details needed to help you repeat this route when navigating next. Think step by step.
```
<IMAGE 1>
<IMAGE 2>
.
.
```
## ASSISTANT: ...
USER: Now, you must navigate to the goal. Here is the goal description and the image: Painting of a Soccer Ball
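The ego-image action space in Prompts 10 and 14 implies a simple dead-reckoning model of the agent's pose. Below is a minimal sketch of the stated semantics, 0.25m forward steps and 30-degree turns (our illustration; whether increasing heading reads as clockwise on screen depends on the renderer's axes):

```python
import math

# Minimal sketch (our illustration) of the pose updates implied by the
# ego-image action space: 'move forward' advances 0.25m along the current
# heading without changing it; turns change heading by 30 degrees in place.
def step(pose, action):
    x, y, heading = pose  # heading in degrees
    if action == "move_forward":
        x += 0.25 * math.cos(math.radians(heading))
        y += 0.25 * math.sin(math.radians(heading))
    elif action == "turn_left":
        heading -= 30
    elif action == "turn_right":
        heading += 30
    return (x, y, heading)

pose = (0.0, 0.0, 0.0)
for a in ["move_forward", "turn_right", "move_forward"]:
    pose = step(pose, a)
print(pose)  # (0.4665..., 0.125, 30.0)
```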
Prompt 11: Route retracing (DM image)
SYSTEM: You are playing a game in a 2D world. You will be shown a video of the shortest route from an initial position to a goal. You must look at the video and understand the 2D world structure and the route taken. Then, you will be placed in the 2D world at the same initial position. You must navigate from the initial position to the goal using the same route shown in the video, as quickly as possible. Below, you will find sections highlighting more details about the 2D world and the task. You can refer to these for more information.
## 2D WORLD:
The world consists of the following.
* black cells: these are obstacles, i.e., you cannot move over them
* blue cells: these are navigable spaces, i.e., you can move over them
Some blue cells contain landmarks, which are red circles filled with a text character. These are important as they will allow you to better understand the world and locate yourself. Your position will be marked using a yellow square.
## OBSERVATIONS:
The images are recorded from a birds-eye view of the 2D world. The images capture a local neighborhood surrounding your current position in the world, i.e., you will always remain at the center of the image while the world changes around you.
## GOAL:
You will be asked to navigate to a goal landmark. You must find the goal in the 2D world by repeating the path shown in the video. Once you find it, move to the location of the goal till you are standing on the landmark and then execute a stop action.
## ACTIONS:
You have four actions available.
up: move up by one unit cell
down: move down by one unit cell
left: move left by one unit cell
right: move right by one unit cell
stop: ends the current task. Issue this action only if you think you have reached the goal. If you haven't reached the goal, this action will result in a navigation failure that cannot be recovered from.
## STOP CRITERIA:
Before executing stop, you must ensure that you've 'reached' the goal correctly. To reach a goal, you have to move to the cell containing the goal landmark. Then execute the stop action.
## RESPONSE FORMAT:
Respond in the following format:
Reasoning: text explanation string in one or two short sentences - provide all your explanations and inner thoughts here - avoid verbosity and be concise
Intent: state your intent in one short sentence, i.e., what you are trying to achieve
Then provide the final action to take in a json formatted string.
```
{
"action": <action name -- must be one of up, down, left, right>
}
```
USER: Here is the sequence of video frames recorded in the 2D world. This demonstrates the route you need to repeat. Analyze the video to understand the movements and the world structure. Take a note of all the details needed to help you repeat this route when navigating next. Think step by step.
```
<IMAGE 1>
<IMAGE 2>
.
.
```
## ASSISTANT: ...
USER: Now, you must navigate to the goal based on your knowledge of the 2D world you obtained from the video. Here is the goal description: landmark Y
USER: Here is the local view of your surroundings in the 2D world. You are at the center of this view.
```
<CURRENT IMAGE OBSERVATION>
```
## ASSISTANT: ...
USER: Here is the local view of your surroundings in the 2D world. You are at the center of this view.
```
<CURRENT IMAGE OBSERVATION>
```
.
.
.
## Prompt 12: Route retracing (DM text) - part 1
SYSTEM: You are playing a game in a 2D text world. The console screen of the game is represented as a comma-separated text array. You will be shown a sequence of console screen recordings that demonstrate the shortest route from an initial position to a goal. You must look at the sequence and understand the 2D text world structure and the route taken. Then, you will be placed in the 2D text world at the same initial position. You must navigate from the initial position to the goal using the same route shown in the screen recording sequence, as quickly as possible. Below, you will find sections highlighting more details about the 2D text world and the task. You can refer to these for more information.
## 2D TEXT WORLD:
The console of the game is represented as a comma-separated text array. Obstacles are represented using 0, i.e., you cannot move over them. Navigable spaces that you can move over are represented using 1. Some navigable spaces have landmarks represented as an ascii character (A - Z). These are also navigable spaces and are just labeled with an ascii character. These landmarks are important to remember. You will always be located at the center of the array with your position highlighted using the '*' character.
## OBSERVATIONS:
The images are recorded from a birds-eye view of the 2D world. The images capture a local neighborhood surrounding your current position in the world, i.e., you will always remain at the center of the image while the world changes around you.
## GOAL:
You will be asked to navigate to a goal landmark. You must find the goal in the 2D text world by repeating the path shown in the console screen recording sequence. Once you find it, move to the location of the goal till you are standing on the landmark and then execute a stop action.
## ACTIONS:
You have four actions available.
up: move up by one unit cell
down: move down by one unit cell
left: move left by one unit cell
right: move right by one unit cell
stop: ends the current task. Issue this action only if you think you have reached the goal. If you haven't reached the goal, this action will result in a navigation failure that cannot be recovered from.
## STOP CRITERIA:
Before executing stop, you must ensure that you've 'reached' the goal correctly. To reach a goal, you have to move to the cell containing the goal landmark. Then execute the stop action.
## RESPONSE FORMAT:
Respond in the following format:
Reasoning: text explanation string in one or two short sentences - provide all your explanations and inner thoughts here - avoid verbosity and be concise
Intent: state your intent in one short sentence, i.e., what you are trying to achieve
Then provide the final action to take in a json formatted string.
```
{
"action": <action name -- must be one of up, down, left, right>
}
```
USER: Here is the sequence of console screen recordings taken in the 2D text world. This demonstrates the route you need to repeat. Analyze the sequence to understand the movements and the world structure. Take a note of all the details needed to help you repeat this route when navigating next. Think step by step.
#### Console screen recorded at time = 0
```
0,0,0,0,0
0,0,0,0,0
0,1,*,1,1
0,C,1,1,1
0,0,1,1,0
```
#### Console screen recorded at time = 1
```
0,0,0,0,0
0,1,1,1,1
0,C,*,1,1
0,0,1,1,0
0,1,1,1,1
```
Prompt 13: Route retracing (DM text) - part 2
<details>
<summary>Image 28 Details</summary>

### Visual Description
## Screenshot: Text-Based Navigation Task
### Overview
The image shows a text-based dialogue between a user and an assistant in a 2D grid navigation task. The user provides a 5x5 grid representing a text world, with the goal to reach landmark "Y". The assistant's responses are redacted (represented by ellipses). The grid includes coordinates, a current position marker ("*"), visible landmarks ("C"), and navigable spaces.
### Components/Axes
- **Grid Structure**: 5x5 matrix with coordinates (row, column) labeled from (0,0) at top-left to (4,4) at bottom-right.
- **Current Position**: Denoted by "*" at coordinate (2,2).
- **Landmarks**:
- Visible landmark "C" at (3,1).
- Goal landmark "Y" (location unspecified in visible text).
- **Navigable Spaces**: Cells marked "1", "C", or "*" are navigable; "0" cells are obstacles.
### Detailed Analysis
#### Grid Layout
```
Row 0: 0, 0, 0, 0, 0
Row 1: 0, 0, 0, 0, 0
Row 2: 0, 1, *, 1, 1
Row 3: 0, C, 1, 1, 1
Row 4: 0, 0, 1, 1, 0
```
#### Key Elements
1. **Current Position**: Centered at (2,2) with "*".
2. **Visible Landmark**: "C" at (3,1), diagonally adjacent to the current position.
3. **Goal**: Landmark "Y" is the objective but its coordinates are not explicitly stated in the visible text.
4. **Navigable Spaces**: Cells marked "1", "C", or "*" are traversable; "0" cells are obstacles.
### Key Observations
- The navigable cells form a small connected region hemmed in by obstacle ("0") cells.
- The current position (2,2) is diagonally adjacent to the visible landmark "C" at (3,1).
- The goal "Y" is not visible in the provided grid, suggesting it may be outside the 5x5 view or require inference from prior context.
### Interpretation
This task appears to test the assistant's ability to:
1. Parse spatial information from a 2D grid.
2. Navigate using partial visibility (only landmark "C" is visible in the current context).
3. Infer the location of "Y" based on prior knowledge or additional context not shown in the screenshot.
The absence of the assistant's responses prevents analysis of their reasoning process. However, the user's instructions imply a multi-step navigation challenge where the assistant must:
- Use the visible landmark "C" as a reference point.
- Determine the optimal path to "Y" despite limited local context.
- Possibly request additional information if "Y" is outside the current view.
Within the visible patch, the navigable cells are contiguous, so local pathfinding is chiefly a matter of spatial reasoning, though the surrounding "0" (obstacle) cells still constrain movement.
</details>
Prompt 14: Shortcut discovery (Ego image)
SYSTEM: You are a sentient living creature capable of navigating in environments, building internal spatial representations of environments, and finding goals in them. You will be shown a video of some route from the initial position to the goal. You must look at the video and understand the environment structure, and remember the locations of the start and the goal. The video may show a long-winded route from the start to the goal with unnecessary detours. Based on the environment structure, you must identify a faster route to the goal. Then, you will be placed in the environment at the same initial position. You must navigate to the goal using your identified shortest route as quickly as possible. Below, you will find sections highlighting more details about the task. You can refer to these for more information.
## OBSERVATIONS:
The images are recorded from a perspective viewpoint (i.e., egocentric or first-person). This means that you are likely to see objects from different angles, resulting in a skewed appearance of the underlying 3D objects. It is important for you to look past this skew in the appearance and perceive the true shape of the object in 3D.
## GOAL:
You will be provided an object goal using a text description and an image of the object. You must find the goal object in the environment by identifying the shortest route based on your experience from the video. Once you find the goal, move close to its location and reorient yourself to face the object.
## ACTIONS:
You have four actions available.
move forward: move forward by 0.25m along the current heading direction. It does not change the heading angle.
turn left: decrease your heading angle by 30 degrees. It does not change the (x, y) position.
turn right: increase your heading angle by 30 degrees. It does not change the (x, y) position.
stop: ends the current task. Issue this action only if you think you have reached the goal. If you haven't reached the goal, this action will result in a navigation failure that cannot be recovered from.
## STUCK IN PLACE BEHAVIOR:
Avoid getting stuck in one place, i.e., do not alternate between left and right turns without going anywhere. You must try and move around consistently without being stuck in one place.
## STOP CRITERIA:
Before executing stop, you must ensure that you've 'reached' the goal correctly. To reach a goal, you have to move the robot close enough to the wall where you see the goal, and see the object clearly in your observation in front of you.
## RESPONSE FORMAT:
Respond in the following format:
Reasoning: text explanation string in one or two short sentences - provide all your explanations and inner thoughts here - avoid verbosity and be concise
Intent: state your intent in one short sentence, i.e., what you are trying to achieve
Then provide the final action to take in a json formatted string.
```
{
"action": <action name -- must be one of move_forward, turn_left, turn_right, stop>
}
```
USER: Here is the sequence of frames from the walkthrough video demonstrating a suboptimal route from the start to some goal location. Analyze the walkthrough to understand the movements and the environment structure. Keep track of the start and goal locations, and the current location in the environment as you watch the walkthrough. Then plan a shortcut route that takes you to the goal while avoiding unnecessary detours. Think step by step.
```
<IMAGE 1>
<IMAGE 2>
.
.
```
## ASSISTANT: ...
USER: Now, you must navigate to the goal. Here is the goal description and the image: Painting of a Soccer Ball
Prompt 15: Shortcut discovery (DM image)
SYSTEM: You are playing a game in a 2D world. You will be shown a video of some route from an initial position to a goal. You must look at the video and understand the 2D world structure and remember the locations of the start and the goal. The video may show a long-winded route from the start to the goal with unnecessary detours. Based on the world structure, you must identify a faster route to the goal. Then, you will be placed in the 2D world at the same initial position. You must navigate from the initial position to the goal using your identified shortest route as quickly as possible. Below, you will find sections highlighting more details about the 2D world and the task. You can refer to these for more information.
## 2D WORLD:
The world consists of the following.
* black cells: these are obstacles, i.e., you cannot move over them
* blue cells: these are navigable spaces, i.e., you can move over them
Some blue cells contain landmarks, which are red circles filled with a text character. These are important as they will allow you to better understand the world and locate yourself. Your position will be marked using a yellow square.
## OBSERVATIONS:
The images are recorded from a birds-eye view of the 2D world. The images capture a local neighborhood surrounding your current position in the world, i.e., you will always remain at the center of the image while the world changes around you.
## GOAL:
You will be asked to navigate to a goal landmark. You must find the goal in the 2D world by identifying the shortest path based on your experience from the video. Once you find it, move to the location of the goal till you are standing on the landmark and then execute a stop action.
## ACTIONS:
You have four actions available.
up: move up by one unit cell
down: move down by one unit cell
left: move left by one unit cell
right: move right by one unit cell
stop: ends the current task. Issue this action only if you think you have reached the goal. If you haven't reached the goal, this action will result in a navigation failure that cannot be recovered from.
## STOP CRITERIA:
Before executing stop, you must ensure that you've 'reached' the goal correctly. To reach a goal, you have to move to the cell containing the goal landmark. Then execute the stop action.
## RESPONSE FORMAT:
Respond in the following format:
Reasoning: text explanation string in one or two short sentences - provide all your explanations and inner thoughts here - avoid verbosity and be concise
Intent: state your intent in one short sentence, i.e., what you are trying to achieve
Then provide the final action to take in a json formatted string.
```
{
"action": <action name -- must be one of up, down, left, right>
}
```
USER: Here is the sequence of video frames recorded in the 2D world. This demonstrates a suboptimal route from the start to some goal location. Analyze the video to understand the movements and the world structure. Keep track of the start and goal locations, and the current location in the world as you watch the video. Then plan a shortcut route that takes you to the goal while avoiding any unnecessary detours. Think step by step.
<IMAGE 1>
<IMAGE 2>
.
.
.
ASSISTANT: ...
USER: Now, you must navigate to the goal based on your knowledge of the 2D world you obtained from the video. Here is the goal description: landmark Y
USER: Here is the local view of your surroundings in the 2D world. You are at the center of this view.
<CURRENT IMAGE OBSERVATION>
ASSISTANT: ...
USER: Here is the local view of your surroundings in the 2D world. You are at the center of this view.
<CURRENT IMAGE OBSERVATION>
ASSISTANT: ...
.
.
.
## Prompt 16: Shortcut discovery (DM text) - part 1
SYSTEM: You are playing a game in a text 2D world. The console screen of the game is represented as a comma-separated text array. You will be shown a sequence of console screen recordings that demonstrates a route from an initial position to a goal. You must look at the sequence and understand the 2D text world structure and remember the locations of the start and the goal. The recordings may show a long-winded route from the start to the goal with unnecessary detours. Based on the world structure, you must identify a faster route to the goal. Then, you will be placed in the 2D text world at the same initial position. You must navigate from the initial position to the goal using your identified shortest route as quickly as possible. Below, you will find sections highlighting more details about the 2D text world and the task. You can refer to these for more information.
## 2D TEXT WORLD:
The console of the game is represented as a comma-separated text array. Obstacles are represented using 0, i.e., you cannot move over them. Navigable spaces that you can move over are represented using 1. Some navigable spaces have landmarks represented as an ascii character (A - Z). These are also navigable spaces and are just labeled with an ascii character. These landmarks are important to remember. You will always be located at the center of the array with your position highlighted using the '*' character.
## OBSERVATIONS:
The images are recorded from a birds-eye view of the 2D world. The images capture a local neighborhood surrounding your current position in the world, i.e., you will always remain at the center of the image while the world changes around you.
## GOAL:
You will be asked to navigate to a goal landmark. You must find the goal in the 2D text world by identifying the shortest path based on your experience from the screen recording sequence. Once you find it, move to the location of the goal till you are standing on the landmark and then execute a stop action.
## ACTIONS:
You have four actions available.
up: move up by one unit cell
down: move down by one unit cell
left: move left by one unit cell
right: move right by one unit cell
stop: ends the current task. Issue this action only if you think you have reached the goal. If you haven't reached the goal, this action will result in a navigation failure that cannot be recovered from.
## STOP CRITERIA:
Before executing stop, you must ensure that you've 'reached' the goal correctly. To reach a goal, you have to move to the cell containing the goal landmark. Then execute the stop action.
## RESPONSE FORMAT:
Respond in the following format:
Reasoning: text explanation string in one or two short sentences - provide all your explanations and inner thoughts here - avoid verbosity and be concise
Intent: state your intent in one short sentence, i.e., what you are trying to achieve
Then provide the final action to take in a json formatted string.
```
{
"action": <action name -- must be one of up, down, left, right>
}
```
USER: Here is the sequence of console screen recordings taken in the 2D text world. This demonstrates a suboptimal route from the start to some goal location. Analyze the sequence to understand the movements and the world structure. Keep track of the start and goal locations, and the current location in the world as you study the sequence. Then plan a shortcut route that takes you to the goal while avoiding any unnecessary detours. Think step by step.
Prompt 17: Shortcut discovery (DM text) - part 2
ASSISTANT: ...
USER: Now, you must navigate to the goal based on your knowledge of the 2D text world you obtained from the sequence of console screen recordings. Here is the goal description: landmark Y
USER: Here is a birds-eye view of the 5x5 area surrounding your current position. You are located at the center of this view. Your position is denoted by "*".
'''
0,0,0,0,0
0,0,0,0,0
0,1,*,1,1
0,C,1,1,1
0,0,1,1,0
'''
The landmarks visible in your local context are: C. Note that the landmark locations are also navigable spaces, i.e., you can move over them. Your objective is to reach landmark: Y
ASSISTANT: ...
USER: Here is a birds-eye view of the 5x5 area surrounding your current position. You are located at the center of this view. Your position is denoted by "*".
'''
0,0,0,0,0
0,0,0,0,0
1,1,*,1,0
C,1,1,1,0
0,1,1,0,0
'''
The landmarks visible in your local context are: C. Note that the landmark locations are also navigable spaces, i.e., you can move over them. Your objective is to reach landmark: Y
ASSISTANT: ...
```
Prompt 17: Shortcut discovery (DM text) - part 2
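For readers implementing baselines, the stream of egocentric 5x5 views in this task can be integrated into a single allocentric map by dead reckoning over the demonstrated actions. The sketch below is illustrative only (hypothetical helper code, not part of the released benchmark):

```
# Minimal sketch (not benchmark code): stitch egocentric 5x5 text views
# into a global map by dead reckoning over the demonstrated actions.
from typing import Dict, List, Tuple

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def parse_view(view: str) -> List[List[str]]:
    """Parse the comma-separated 5x5 console view into a grid of cells."""
    return [row.split(",") for row in view.strip().splitlines()]

def stitch(views: List[str], actions: List[str]) -> Dict[Tuple[int, int], str]:
    """Accumulate local views into a global cell map keyed by (row, col)."""
    world: Dict[Tuple[int, int], str] = {}
    r, c = 0, 0  # dead-reckoned agent pose, starting at an arbitrary origin
    for view, action in zip(views, actions):
        grid = parse_view(view)
        for i in range(5):
            for j in range(5):
                cell = grid[i][j]
                if cell != "*":  # '*' marks the agent, not terrain
                    world[(r + i - 2, c + j - 2)] = cell  # view is centered
        dr, dc = MOVES[action]
        r, c = r + dr, c + dc  # integrate the action into the pose
    return world
```

Once the views are stitched, the shortcut itself reduces to a shortest-path search over the navigable cells (see the breadth-first search sketch after Prompt 30).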
```
USER: Here is an image of a three-dimensional shape.
<IMAGE OF REFERENCE 3D SHAPE>
Which of these images show the same object rotated in 3D?
Choice 1
<IMAGE OF CHOICE 1>
Choice 2
<IMAGE OF CHOICE 2>
Choice 3
<IMAGE OF CHOICE 3>
Choice 4
<IMAGE OF CHOICE 4>
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 18: Mental rotation (vision)
```
USER: Here is a two-dimensional array.
over,none,page
such,none,free
site,none,list
Which of these options show the same array rotated in 2D? Note: It must only be rotated, not mirrored.
Choice 1:
page,free,list
none,none,none
over,such,site
Choice 2:
list,free,page
none,none,none
site,such,over
Choice 3:
page,none,over
free,none,such
list,none,site
Choice 4:
over,such,site
none,none,none
page,free,list
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 19: Mental rotation (text)
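The text variant of mental rotation can be scored exactly: a candidate is correct if and only if it equals the reference array rotated by 90, 180, or 270 degrees, with mirrors excluded. A minimal checker, run on the arrays of Prompt 19, confirms that Choice 1 is the unique rotation:

```
# Minimal sketch: check which candidate is a pure rotation of the reference.
def rot90(a):
    """Rotate a square array 90 degrees clockwise."""
    return [list(row) for row in zip(*a[::-1])]

def is_rotation(ref, cand):
    """True iff cand equals ref rotated by 90, 180, or 270 degrees."""
    a = ref
    for _ in range(3):
        a = rot90(a)
        if a == cand:
            return True
    return False

ref = [["over", "none", "page"],
       ["such", "none", "free"],
       ["site", "none", "list"]]
choices = {
    1: [["page", "free", "list"], ["none", "none", "none"], ["over", "such", "site"]],
    2: [["list", "free", "page"], ["none", "none", "none"], ["site", "such", "over"]],
    3: [["page", "none", "over"], ["free", "none", "such"], ["list", "none", "site"]],
    4: [["over", "such", "site"], ["none", "none", "none"], ["page", "free", "list"]],
}
print([k for k, c in choices.items() if is_rotation(ref, c)])  # -> [1]
```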
```
USER: Here is an image of various objects (animate and inanimate) on a two-dimensional plane.
<IMAGE OF OBJECTS>
Pretend that you are standing at the centroid of guitar and facing the centroid of bat. Visualize the world around you. At what angle (from -180 to 180 degrees) is snake located relative to you? Clockwise rotations are positive and anti-clockwise rotations are negative.
Here are your options: 1) 45 2) 85 3) 105 4) 5
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 20: Perspective taking (vision)
```
USER: Here is an array of numbers representing the birds-eye view of a two-dimensional plane.
0,7,0,9
8,0,0,0
0,0,0,5
0,0,0,3
Empty locations are indicated using 0. Important locations are indicated with a number 1 - 9. Pretend that you are standing at the location 8 and facing the location 9. Visualize the world around you. At what angle (from -180 to 180 degrees) is the location 3 relative to you? Clockwise rotations are positive and anti-clockwise rotations are negative.
Here are your options: 1) 52 2) 32 3) 72 4) 112
Think step by step. Then answer your question in the following json format.
'''
{
"answer": <fill in one of 1/2/3/4 integer value>
}
'''
```
Prompt 21: Perspective taking (text)
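The expected answer in both perspective-taking variants is a signed, clockwise-positive bearing. Because array coordinates put the y-axis pointing down (x = column, y = row), the standard atan2 angle already increases clockwise, so the bearing can be computed directly; this sketch reproduces the Prompt 21 example (option 1, 52 degrees):

```
import math

def bearing(standing, facing, target):
    """Signed clockwise angle (degrees, in [-180, 180)) from the facing
    direction to the target; positions are (row, col) with rows going down."""
    (sr, sc), (fr, fc), (tr, tc) = standing, facing, target
    # With the y-axis pointing down, atan2 angles increase clockwise.
    face = math.atan2(fr - sr, fc - sc)
    to_t = math.atan2(tr - sr, tc - sc)
    deg = math.degrees(to_t - face)
    return (deg + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)

# Prompt 21: stand at 8 = (1, 0), face 9 = (0, 3), target 3 = (3, 3).
print(round(bearing((1, 0), (0, 3), (3, 3))))  # -> 52 (option 1)
```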
```
USER: Here is a container filled with water.
<IMAGE OF FILLED WATER CONTAINER>
What will be the water level when it is rotated as shown here?
<IMAGE OF ROTATED EMPTY WATER CONTAINER>
Here are your choices. Which of these match the expected water level in the rotated container?
Choice 1
<IMAGE OF CHOICE 1>
Choice 2
<IMAGE OF CHOICE 2>
Choice 3
<IMAGE OF CHOICE 3>
Choice 4
<IMAGE OF CHOICE 4>
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 22: Water level (vision)
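Two physical invariants determine the answer here: the water surface stays horizontal, and volume is conserved. For a rectangular container of width w tilted by angle theta (while the surface still spans the base), the surface meets the side walls at h ± (w/2)·tan(theta), where h is the upright fill height. A small sketch with hypothetical dimensions:

```
import math

def tilted_water_line(fill_h, width, height, theta_deg):
    """Heights at which the water surface meets the two side walls of a
    rectangular container tilted by theta_deg, assuming the surface still
    spans the full width (volume conservation keeps the mean at fill_h)."""
    d = (width / 2.0) * math.tan(math.radians(theta_deg))
    lo, hi = fill_h - d, fill_h + d
    if lo < 0 or hi > height:
        raise ValueError("surface no longer spans the base at this tilt")
    return lo, hi

# Hypothetical example: half-full 4x6 container tilted by 20 degrees.
print(tilted_water_line(fill_h=3.0, width=4.0, height=6.0, theta_deg=20.0))
```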
```
USER: This image shows the different pieces of a puzzle.
<IMAGE OF PUZZLE PIECES>
These pieces are put together by an oracle. Which one of these four options shows what it would look like when the pieces are put together? Pay close attention to not just the final fitted shape, but also the individual pieces contained within the shape.
Choice 1
<IMAGE OF CHOICE 1>
Choice 2
<IMAGE OF CHOICE 2>
Choice 3
<IMAGE OF CHOICE 3>
Choice 4
<IMAGE OF CHOICE 4>
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 23: Minnesota paper form board (vision)
```
USER: You are putting together a text jigsaw puzzle. Here are the pieces, where 0 represents the interiors of the puzzle pieces and 1 represents the boundary.
1,1,1,1,1,1
1,0,0,0,0,1
1,0,0,0,0,1
1,0,0,0,0,1
1,0,0,0,0,1
1,0,0,0,0,1
1,1,1,1,1,1
1,0,0,0,0,1
1,0,0,0,0,1
1,0,0,0,0,1
1,0,0,0,0,1
1,1,1,1,1,1
1,0,0,0,0,1
1,0,0,0,0,1
1,0,0,0,0,1
1,1,1,1,1,1
These pieces are now put together to solve the puzzle by an oracle. Which of these four options shows what it would look like when the pieces are put together?
```
Prompt 24: Minnesota paper form board (text)
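A coarse, automatable consistency check for the text form-board items is conservation of area: the interior cells of the pieces must sum to the interior area of the assembled shape. This is necessary but not sufficient (it ignores piece shapes); the sketch below uses hypothetical pieces in the Prompt 24 encoding:

```
def interior_area(piece):
    """Count interior cells (0s) in a text puzzle piece given as rows of 0/1."""
    return sum(row.count(0) for row in piece)

# Hypothetical pieces in the Prompt 24 encoding (1 = boundary, 0 = interior).
pieces = [
    [[1, 1, 1, 1], [1, 0, 0, 1], [1, 1, 1, 1]],
    [[1, 1, 1], [1, 0, 1], [1, 0, 1], [1, 1, 1]],
]
total = sum(interior_area(p) for p in pieces)
print(total)  # a valid assembled option must enclose exactly this many 0s
```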
```
USER: Here is an image showing two lines. Your goal is to measure the angle between the two lines.
<IMAGE OF LINES>
Here is a legend showing a set of reference lines numbered from 1 to 11.
Which of the following reference line pairs match the angle between the original lines shown in the image?
1) Lines 1 and 9
2) Lines 1 and 7
3) Lines 1 and 10
4) Lines 1 and 3
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 25: Judgement of line orientation (vision)
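The angle between two lines follows directly from their endpoints; since a line's orientation has no direction, the result is folded into [0, 90] degrees. A sketch with hypothetical endpoints:

```
import math

def line_angle(p, q):
    """Orientation of the line through p and q, in degrees in [0, 180)."""
    ang = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    return ang % 180.0

def angle_between(l1, l2):
    """Acute angle between two lines, each given as a pair of endpoints."""
    d = abs(line_angle(*l1) - line_angle(*l2))
    return min(d, 180.0 - d)

# Hypothetical endpoints: a horizontal line and one at 45 degrees.
print(angle_between(((0, 0), (4, 0)), ((0, 0), (3, 3))))  # -> 45.0
```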
Prompt 26: Judgement of line orientation (text)
```
USER: Here is an image of an apple. Let us call this the target.
<IMAGE OF TARGET>
Here is a grid of apple images. This contains multiple instances of apple, but not all of them are the target object. The grid is indexed from top-left to bottom-right, starting from row, column = (0, 0).
<IMAGE OF GRID>
Which of these options represent the true locations of the target object in the grid? Locations are represented as (row, column).
Choice 1. (0, 3), (1, 2), (2, 2), (3, 3)
Choice 2. (0, 1), (1, 0), (2, 0), (3, 3)
Choice 3. (0, 3), (1, 2), (2, 2), (3, 1)
Choice 4. (0, 3), (1, 2), (2, 2), (3, 1)
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 27: Selective attention (vision)
Prompt 28: Selective attention (text)
Prompt 29: Maze completion (vision)
```
USER: You are a sentient living creature capable of navigating mazes, planning, and spatial reasoning. You are playing a text-based maze game. You start at some random position in the maze. You must escape the maze as quickly as possible to reach the goal. You are given a 2D array representing the maze, which contains the following:
* maze structure - 0 is obstacle space, 1 is navigable space. You can only move on 1s (i.e., navigable spaces). You cannot move through 0s (i.e., obstacles).
* your current position - marked as A
* goal position - marked as G
Goal and current positions are always navigable spaces.
Actions available: You can take five possible actions.
* left - move left from your current position by one step
* right - move right from your current position by one step
* up - move up from your current position by one step
* down - move down from your current position by one step
* stop - issue this action only after you have reached the goal position. If you execute it prematurely, you will fail. If you do not execute it after reaching the goal, you will again fail.
## Response format: Respond in the following format.
```
{
"action": <action name -- must be one of up, down, left, right, stop>
}
```
ASSISTANT: ...
USER: Here is the current view of the maze.
```
```
0 represents obstacles. 1 represents free spaces. G is the goal. A is your current position in the maze. Your current location in the maze is row, column = (1, 1). The goal location is row, column = (21, 19). Think step-by-step about how to reach the goal. What action do you take next?
ASSISTANT: ...
.
.
.
```
Prompt 30: Maze completion (text)
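The text maze is exactly single-source shortest path on a 4-connected grid, so a breadth-first search yields the optimal action at every step. A minimal reference sketch (not the benchmark's evaluation code), assuming the maze is given as a 0/1 int grid with start and goal positions supplied separately:

```
from collections import deque

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def next_action(maze, start, goal):
    """BFS over navigable cells (1s); returns the first move of a shortest
    path from start to goal, or 'stop' when already at the goal."""
    if start == goal:
        return "stop"
    rows, cols = len(maze), len(maze[0])
    prev = {start: None}  # cell -> (parent cell, action used to enter)
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for name, (dr, dc) in MOVES.items():
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and maze[nxt[0]][nxt[1]] == 1 and nxt not in prev):
                prev[nxt] = ((r, c), name)
                if nxt == goal:
                    # Walk back to recover the first action out of start.
                    cell, action = prev[nxt]
                    while cell != start:
                        cell, action = prev[cell]
                    return action
                queue.append(nxt)
    return None  # goal unreachable
```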
Prompt 31: Corsi block tapping (text)
Prompt 32: Corsi block tapping (vision)
```
USER: You are playing the array addition game. You have to add two arrays by following certain rules. Each array location can be empty (i.e., fully white) or filled with colored circles. Empty locations represent zeros. The colors of the circles mean specific things.
* Blue circle is a one
* Red circle is a distraction and must be ignored (i.e., it does not contribute to the array addition)
* White circle is a two
Array addition works as follows:
* sum of zeros must be a zero (i.e., an empty array cell)
* sum of one and zero (or zero and one) must be one (i.e., a blue circle)
* sum of one and one must be two (i.e., a white circle)
Here is the first array.
<IMAGE OF ARRAY 1>
Here is the second array.
<IMAGE OF ARRAY 2>
What is the sum of the two arrays? Pick from one of these four choices.
Choice 1
<IMAGE OF CHOICE 1>
Choice 2
<IMAGE OF CHOICE 2>
Choice 3
<IMAGE OF CHOICE 3>
Choice 4
<IMAGE OF CHOICE 4>
Think step by step. Then answer your question in the following json format.
```
{
"answer": <fill in one of 1/2/3/4 integer value>
}
```
```
Prompt 33: Spatial addition (vision)
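The color-coded addition rule amounts to a value map under which red distractors contribute nothing. A small sketch using symbolic cell labels in place of the images:

```
# Minimal sketch: 'empty' = 0, 'blue' = 1, 'white' = 2, 'red' = distractor.
VALUE = {"empty": 0, "blue": 1, "white": 2, "red": 0}
SYMBOL = {0: "empty", 1: "blue", 2: "white"}

def add_arrays(a, b):
    """Cell-wise addition under the prompt's rules, ignoring red circles."""
    return [[SYMBOL[VALUE[x] + VALUE[y]] for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

a = [["blue", "empty"], ["red", "blue"]]
b = [["empty", "blue"], ["blue", "blue"]]
print(add_arrays(a, b))  # [['blue', 'blue'], ['blue', 'white']]
```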
Prompt 34: Spatial addition (text)
```
USER: You are playing the Cambridge Spatial Working Memory game. You will be shown a screen with blue boxes. A treasure is hidden in one of the blue boxes. You must identify the box containing the treasure, which is shown as a yellow square. Once you find a treasure, it will be collected and placed in the 'Treasures collected' section shown below the image. A new treasure will be hidden in one of the other boxes where the treasure did not appear before. You must again find the new treasure. This process is repeated until you find all treasures placed in each of the blue boxes once. Note: The treasure will never appear in a box where it had already been placed. Each turn, there are randomly selected numbers associated with each box. These numbers are meant to aid you with communication, i.e., specify what box you want to open in that turn. However, these numbers will change after every turn. So do NOT associate boxes with numbers over the long term. The number identity of a box can change any time. Therefore, you must remember the boxes based on their spatial positions and not the numbers.
## RESPONSE FORMAT:
Think step-by-step about where the treasure might be based on your past actions. After that, indicate the box you want to open in the following json format:
```
{
"action": <box integer index>
}
```
ASSISTANT: ...
USER: Here is the current state of the game. You must find the next treasure. Note that the numbers of the boxes have changed, but the box locations are fixed. Decide which box you want to open next, and then use the number associated with the box as the action.
<IMAGE OF GAME STATE>
ASSISTANT: ...
```
Prompt 35: Cambridge spatial working memory (vision)
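Because the integer ids are reshuffled every turn, the only stable key for a box is its (row, column) position. The simplified strategy sketch below (hypothetical code, ignoring within-trial search order) remembers positions where treasures were already found and translates an unvisited position into the current turn's id:

```
def choose_box(boxes, found):
    """boxes: maps this turn's integer id -> (row, col); found: set of
    positions where a treasure was already collected. Ids change every
    turn, so selection is done on positions, never on ids."""
    for box_id, pos in sorted(boxes.items()):
        if pos not in found:
            return box_id  # open an as-yet-unrewarded box
    return None  # all boxes have yielded a treasure: game complete

# Hypothetical turn: three boxes, one already rewarded.
boxes = {3: (0, 12), 1: (1, 9), 2: (4, 4)}
found = {(1, 9)}
print(choose_box(boxes, found))  # -> 2 (position (4, 4) not yet rewarded)
```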
Prompt 36: Cambridge spatial working memory (text)
```
USER: You are playing the Cambridge Spatial Working Memory game. You will be shown an array with integers. 0 represents empty locations. Locations numbered 1 - 9 represent boxes. A treasure is hidden in one of the boxes. You must identify the box containing the treasure. Once you find a treasure, the location will be momentarily shown as a 'T' indicating that the treasure was found. The treasure is then collected and a new treasure will be hidden in one of the other boxes where the treasure did not appear before. You must then find the new treasure. This process is repeated until you find all treasures placed in each of the boxes once. Note: The treasure will never appear in a box where it had already been placed.
While the boxes are represented using integers from 1 - 9, the true identity of the box is its location (row, column) in the array. The box location is always fixed (i.e., the boxes will not move and the number of boxes will not change). However, each turn, the integer id associated with the box will change randomly. These integer ids are meant to aid you with communication, i.e., specify what box you want to open in that turn. However, these numbers will change after every turn. So do NOT associate boxes with numbers over the long term. The number identity of a box can change any time. Therefore, you must remember the boxes based on their spatial positions and not the numbers.
## RESPONSE FORMAT:
Think step-by-step about where the treasure might be based on your past actions. After that, indicate the box you want to open in the following json format:
```
{
"action": <box integer index>
}
```
ASSISTANT: ...
USER: Here is the current view of the board. You must find the next treasure. Note that the numbers of the boxes have changed, but the box locations are fixed. Decide which box location you want to open next. Then provide the number associated with the box as the action.
```
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 2, 0, 0, 0, 0
Number of treasures collected: 0 / 3
```