# DreamPRM: Domain-Reweighted Process Reward Model for Multimodal Reasoning
**Authors**:
- Qi Cao (University of California, San Diego)
- Ruiyi Wang (University of California, San Diego)
- Ruiyi Zhang (University of California, San Diego)
- Sai Ashish Somayajula (University of California, San Diego)
- Pengtao Xie (University of California, San Diego)
## Abstract
Reasoning has substantially improved the performance of large language models (LLMs) on complicated tasks. Central to current reasoning research, Process Reward Models (PRMs) offer fine-grained evaluation of intermediate reasoning steps and guide the reasoning process. However, extending PRMs to multimodal large language models (MLLMs) introduces challenges. Since multimodal reasoning covers a wider range of tasks than text-only scenarios, the resulting distribution shift from the training to testing sets is more severe, leading to greater generalization difficulty. Training a reliable multimodal PRM therefore demands large and diverse datasets to ensure sufficient coverage. However, current multimodal reasoning datasets suffer from a marked quality imbalance, which degrades PRM performance and highlights the need for an effective data selection strategy. To address these issues, we introduce DreamPRM, a domain-reweighted training framework for multimodal PRMs that employs bi-level optimization. In the lower-level optimization, DreamPRM performs fine-tuning on multiple datasets with domain weights, allowing the PRM to prioritize high-quality reasoning signals and alleviating the impact of dataset quality imbalance. In the upper-level optimization, the PRM is evaluated on a separate meta-learning dataset; this feedback updates the domain weights through an aggregation loss function, thereby improving the generalization capability of the trained PRM. Extensive experiments on multiple multimodal reasoning benchmarks covering both mathematical and general reasoning show that test-time scaling with DreamPRM consistently improves the performance of state-of-the-art MLLMs. Further comparisons reveal that DreamPRM's domain-reweighting strategy surpasses other data selection methods and yields higher accuracy gains than existing test-time scaling approaches.
Notably, DreamPRM achieves a top-1 accuracy of 85.2% on the MathVista leaderboard using the o4-mini model, demonstrating its strong generalization in complex multimodal reasoning tasks.
Project Page: https://github.com/coder-qicao/DreamPRM
## 1 Introduction
<details>
<summary>x1.png Details</summary>

### Visual Description
## Bar Chart: DreamPRM Accuracy Improvement
### Overview
The image presents a bar chart comparing the accuracy improvement of "DreamPRM" versus "PRM w/o data selection" across four datasets: WeMath, MMVet, MathVista, and MMStarMathVision. Alongside the chart are two question-answer pairs with associated metadata about dataset difficulty, modality requirements, reasoning ability, and domain weight.
### Components/Axes
* **X-axis:** Datasets - WeMath, MMVet, MathVista, MMStarMathVision
* **Y-axis:** Accuracy Improvement (%) - Scale ranges from 0 to 7, with increments of 1.
* **Data Series:**
* DreamPRM (Blue bars)
* PRM w/o data selection (Yellow bars)
* **Legend:** Located at the top-left corner, clearly distinguishing between the two data series.
* **Average:** A horizontal line at approximately y=4.0, labeled "avg. = +4.0".
* **Question 1:** Top-right section, accompanied by an image of birds and fish.
* **Question 2:** Bottom-right section, accompanied by an image of a bird in flight and smaller images of reptiles.
### Detailed Analysis
The chart displays the accuracy improvement for each dataset.
* **WeMath:** DreamPRM shows an accuracy improvement of +5.7%, while PRM w/o data selection shows +2.5%.
* **MMVet:** DreamPRM shows an accuracy improvement of +5.5%, while PRM w/o data selection shows +3.0%.
* **MathVista:** DreamPRM shows an accuracy improvement of +3.5%, while PRM w/o data selection shows +1.8%.
* **MMStarMathVision:** DreamPRM shows an accuracy improvement of +3.4%, while PRM w/o data selection shows +0.2%.
The DreamPRM consistently outperforms PRM w/o data selection across all datasets. The largest difference in accuracy improvement is observed in the WeMath dataset.
**Question 1 & Metadata:**
* **Question:** "What does the bird feed on?"
* **Choices:**
* A. zooplankton
* B. grass
* C. predator fish
* D. none of the above
* **Answer:** C
* **Dataset:** AI2D (2016)
* **Dataset difficulty:** easy (InternVL-2.5-MPO-8B's accuracy 84.6%)
* **Unnecessary modality:** can answer without image
* **Requirements for reasoning:** do not require complicated reasoning
* **Domain weight:** 0.55 (Determined by DreamPRM)
**Question 2 & Metadata:**
* **Question:** "Determine the scientific nomenclature of the organism shown in the primary image."
* **Choices:**
* A. Hemidactylus turcicus
* B. Felis silvestris
* C. Macropus agilis
* D. None of the above
* **Answer:** D
* **Dataset:** M3CoT (2024)
* **Dataset difficulty:** hard (InternVL-2.5-MPO-8B's accuracy 62.1%)
* **Unnecessary modality:** cannot answer without image
* **Requirements for reasoning ability:** require complicated reasoning
* **Domain weight:** 1.49 (Determined by DreamPRM)
### Key Observations
* DreamPRM consistently demonstrates higher accuracy improvement than PRM w/o data selection.
* The accuracy improvement varies across datasets, suggesting the effectiveness of DreamPRM is dataset-dependent.
* Question 1 is considered "easy" and doesn't require the image, while Question 2 is "hard" and requires the image.
* The domain weight is lower for the easier question (0.55) and higher for the harder question (1.49).
### Interpretation
The data suggests that DreamPRM, with its data selection mechanism, significantly enhances accuracy compared to the PRM method without data selection. The varying degree of improvement across datasets indicates that the benefits of DreamPRM are more pronounced in certain contexts. The metadata associated with the question-answer pairs highlights a correlation between dataset difficulty, the necessity of image modality, reasoning complexity, and the domain weight assigned by DreamPRM. A higher domain weight seems to correspond to tasks requiring more complex reasoning and image understanding. The fact that the "easy" question can be answered without the image suggests that the image is not crucial for solving that particular problem, while the "hard" question explicitly requires image information. This demonstrates DreamPRM's ability to assess the relevance of visual information for different reasoning tasks.
</details>
Figure 1: DreamPRM improves multimodal reasoning by mitigating the dataset quality imbalance problem. Left: On five benchmarks, DreamPRM outperforms the base model (InternVL-2.5-8B-MPO [67]) by an average of +4.0%. DreamPRM also consistently surpasses Vanilla PRM trained without data selection. Right: Easy AI2D [23] questions (weight 0.55) vs. hard M3CoT [6] questions (weight 1.49) show how DreamPRM prioritizes data that demand deeper reasoning: samples requiring knowledge from both textual and visual modalities for step-by-step logical deduction.
Reasoning [55] has significantly enhanced the logical and critical thinking capabilities of large language models (LLMs) [2, 8, 59, 49]. Post-training [45, 10] and test-time scaling strategies [44] enable sophisticated reasoning behaviors in LLMs and extend the length of Chain-of-Thoughts (CoTs) [71], thereby achieving strong results on challenging benchmarks [80, 47]. A key component of these advances is Process Reward Models (PRMs) [29, 27], which provide fine-grained, step-wise supervision of the reasoning process and reliable selection of high-quality reasoning trajectories. These developments have proven highly effective for improving the performance of LLMs on complex tasks [38, 61].
Given the success with LLMs, a natural extension is to apply PRMs to multimodal large language models (MLLMs) [72, 28] to enhance their reasoning abilities. Early studies of multimodal PRMs demonstrate promising results, yet substantial challenges persist. Unlike the text-only inputs of LLMs, MLLMs must combine diverse visual and language signals: a high-dimensional, continuous image space coupled with discrete language tokens. This fusion dramatically broadens the input manifold and leads to more severe distribution shifts [56] between the training and testing distributions. Consequently, directly applying PRM training strategies from the text domain [69, 37] underperforms, mainly due to the reduced generalizability [11] caused by insufficient coverage of the multimodal input space.
A straightforward solution to this problem is to combine multiple datasets that emphasize different multimodal reasoning skills, thereby enlarging the sampling space. However, quality imbalance among existing multimodal reasoning datasets is more severe than in text-only settings: many contain noisy inputs such as unnecessary modalities [78] or questions of negligible difficulty [33], as illustrated in Fig. 1. Since these easy datasets contribute little to effective sampling, overweighting them can substantially degrade PRM performance. Therefore, an effective data selection strategy that filters out unreliable datasets and instances is crucial for training a high-quality multimodal PRM.
To overcome these challenges, we propose DreamPRM, a domain-reweighted training framework for multimodal PRMs. Inspired by domain-reweighting techniques [53, 12, 57], DreamPRM dynamically learns appropriate weights for each multimodal reasoning dataset, allowing them to contribute unequally during training. Datasets that contain many noisy samples tend to receive lower domain weights, reducing their influence on PRM parameter updates. Conversely, high-quality datasets are assigned higher weights and thus play a more important role in optimization. This domain-reweighting strategy alleviates the issue of dataset quality imbalance. DreamPRM adopts a bi-level optimization (BLO) framework [14, 31] to jointly learn the domain weights and PRM parameters. At the lower level, the PRM parameters are optimized with Monte Carlo signals on multiple training domains under different domain weights. At the upper level, the optimized PRM is evaluated on a separate meta domain to compute a novel aggregation function loss, which is used to optimize the domain weights. Extensive experiments on a wide range of multimodal reasoning benchmarks verify the effectiveness of DreamPRM.
Our contributions are summarized as follows:
- We propose DreamPRM, a domain-reweighted multimodal process reward model training framework that dynamically adjusts the importance of different training domains. We formulate the training process of DreamPRM as a bi-level optimization (BLO) problem, where the lower level optimizes the PRM via domain-reweighted fine-tuning, and the upper level optimizes domain weights with an aggregation function loss. Our method helps address the dataset quality imbalance issue in multimodal reasoning and improves the generalization ability of the PRM.
- We conduct extensive experiments using DreamPRM on a wide range of multimodal reasoning benchmarks. Results indicate that DreamPRM consistently surpasses PRM baselines with other data selection strategies, confirming the effectiveness of its bi-level optimization based domain-reweighting strategy. Notably, DreamPRM achieves a top-1 accuracy of 85.2% on the MathVista leaderboard using the o4-mini model, demonstrating its strong generalization in complex multimodal reasoning tasks. Carefully designed evaluations further demonstrate that DreamPRM possesses both scaling capability and generalization ability to stronger models.
## 2 Related Works
#### Multimodal reasoning
Recent studies have demonstrated that incorporating Chain-of-Thought (CoT) reasoning [70, 25, 81] into LLMs encourages a step-by-step approach, thereby significantly enhancing question-answering performance. However, it has been reported that CoT prompting cannot be easily extended to MLLMs, mainly due to hallucinated outputs during the reasoning process [67, 82, 19]. Therefore, several post-training methods have been proposed for enhancing the reasoning capability of MLLMs. InternVL-MPO [67] proposes a mixed preference optimization that jointly optimizes preference ranking, response quality, and response generation loss to improve reasoning abilities. Llava-CoT [74] creates a structured thinking fine-tuning dataset that enables the MLLM to perform systematic step-by-step reasoning. Some efforts have also been made toward inference-time scaling. RLAIF-V [77] proposes a novel self-feedback guidance for inference-time scaling and devises a simple length-normalization strategy to counter the bias toward shorter responses. AR-MCTS [11] combines Monte Carlo Tree Search (MCTS) and Retrieval-Augmented Generation (RAG) to guide the MLLM's step-by-step search and explore the answer space.
#### Process reward model
The Process Reward Model (PRM) [29, 27, 38, 61] provides finer-grained verification than the Outcome Reward Model (ORM) [9, 52], scoring each step of the reasoning trajectory. However, a central challenge in designing PRMs is obtaining process supervision signals, which require supervised labels for each reasoning step. Current approaches typically depend on costly, labor-intensive human annotation [29], highlighting the need for automated methods to improve scalability and efficiency. Math-Shepherd [64] proposes a method utilizing Monte Carlo estimation to provide hard and soft labels for automatic process supervision. OmegaPRM [37] proposes a Monte Carlo Tree Search (MCTS) procedure for finer-grained exploration and automatic labeling. MiPS [69] further explores the Monte Carlo estimation method and studies the aggregation of PRM signals.
#### Domain-reweighting
Domain reweighting methodologies are developed to modulate the influence of individual data domains, thereby enabling models to achieve robust generalization. Recently, domain reweighting has emerged as a key component in large language model pre-training, where corpora are drawn from heterogeneous sources. DoReMi [73] trains a lightweight proxy model with group distributionally robust optimization to assign domain weights that maximize excess loss relative to a reference model. DOGE [13] proposes a first-order bi-level optimization framework, using gradient alignment between source and target domains to update mixture weights online during training. Complementary to these optimization-based approaches, Data Mixing Laws [76] derives scaling laws that could predict performance under different domain mixtures, enabling low-cost searches for near-optimal weights without proxy models. In this paper, we extend these ideas to process supervision and introduce a novel bi-level domain-reweighting framework.
## 3 Problem Setting and Preliminaries
#### Notations.
Let $\mathcal{I}$ , $\mathcal{T}$ , and $\mathcal{Y}$ denote the multimodal input space (images), textual instruction space, and response space, respectively. A multimodal large language model (MLLM) is formalized as a parametric mapping $M_{\theta}:\mathcal{T}\times\mathcal{I}\to\Delta(\mathcal{Y})$ , where $\hat{y}\sim M_{\theta}(\cdot|x)$ represents the stochastic generation of responses conditioned on input pair $x=(t,I)$ including visual input $I\in\mathcal{I}$ and textual instruction $t\in\mathcal{T}$ , with $\Delta(\mathcal{Y})$ denoting the probability simplex over the response space. We use $y\in\mathcal{Y}$ to denote the ground truth label from a dataset.
The process reward model (PRM) constitutes a sequence classification function $\mathcal{V}_{\phi}:\mathcal{T}\times\mathcal{I}\times\mathcal{Y}\to[0,1]$ , parameterized by $\phi$ , which quantifies the epistemic value of partial reasoning state $\hat{y}_{i}$ through scalar reward $p_{i}=\mathcal{V}_{\phi}(x,\hat{y}_{i})$ , modeling incremental utility toward solving instruction $t$ under visual grounding $I$ . Specifically, $\hat{y}_{i}$ represents the first $i$ steps of a complete reasoning trajectory $\hat{y}$ .
#### PRM training with Monte Carlo signals.
Due to the lack of ground truth epistemic value for each partial reasoning state $\hat{y}_{i}$ , training a PRM requires automatic generation of approximated supervision signals. An effective approach to obtaining these signals is the Monte Carlo method [69, 65]. We first feed the input question-image pair $x=(t,I)$ and the prefix solution $\hat{y}_{i}$ into the MLLM, and let it complete the remaining steps until reaching the final answer. We randomly sample multiple completions, compare their final answers to the gold answer $y$ , and thereby obtain multiple correctness labels. The PRM is trained as a sequence classification task to predict these labels. The ratio of correct completions at the $i$ -th step estimates the "correctness level" up to step $i$ , which is used as the approximated supervision signal $p_{i}$ to train the PRM. Formally,
$$
p_{i}=\texttt{MonteCarlo}(x,\hat{y}_{i},y)=\frac{\texttt{num(correct completions from }\hat{y}_{i})}{\texttt{num(total completions from }\hat{y}_{i})} \tag{1}
$$
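A minimal sketch of this Monte Carlo estimation, where `mllm_complete` and `check_answer` are hypothetical stand-ins for stochastic MLLM decoding and final-answer matching (neither is an API from the paper):

```python
def monte_carlo_signal(x, prefix, gold_answer, mllm_complete, check_answer,
                       num_samples=8):
    """Estimate p_i (Eq. 1): sample completions from the reasoning prefix
    and return the fraction whose final answer matches the gold answer."""
    correct = 0
    for _ in range(num_samples):
        completion = mllm_complete(x, prefix)  # stochastic decoding from prefix
        correct += int(check_answer(completion, gold_answer))
    return correct / num_samples
```

Each prefix $\hat{y}_{i}$ of a trajectory gets its own estimate, so a single sampled solution yields one supervision signal per step.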
#### PRM-based inference with aggregation function.
Figure 2: General flow of training PRM and using PRM for inference. Training phase: Train PRM with Monte Carlo signals from intermediate steps of Chain-of-Thoughts (CoTs). Inference phase: Use the trained PRM to verify CoTs step by step and select the best CoT. Conventional training of PRM has poor generalization capability due to distribution shift between training set and testing set.
After training a PRM, a typical way of conducting PRM-based MLLM inference is to use an aggregation function [69]. Specifically, for each candidate solution $\hat{y}$ from the MLLM, the PRM generates a list of predicted probabilities ${p}=\{{p_{1}},{p_{2}},...,{p_{n}}\}$ , one for each step $\hat{y}_{i}$ in the solution. The list of predicted probabilities is then aggregated using the following function:
$$
\mathcal{A}({p})=\sum_{i=1}^{n}\log\frac{{p_{i}}}{1-{p_{i}}}. \tag{2}
$$
The aggregated value corresponds to the score of a specific prediction $\hat{y}$ , and the final PRM-based solution is the one with the highest aggregated score.
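This best-of-N selection can be sketched in a few lines; the epsilon clamp below is an implementation detail added here (not stated in the paper) to keep the log-odds finite when a step score hits exactly 0 or 1:

```python
import math

def aggregate(step_probs, eps=1e-6):
    """Eq. 2: sum of per-step log-odds. Higher totals mean the PRM
    considers every step of the trajectory more likely to be correct."""
    total = 0.0
    for p in step_probs:
        p = min(max(p, eps), 1.0 - eps)  # clamp away from {0, 1}
        total += math.log(p / (1.0 - p))
    return total

def select_best(candidates):
    """candidates: list of (solution, [p_1, ..., p_n]) pairs scored by the PRM.
    Returns the solution with the highest aggregated score."""
    return max(candidates, key=lambda c: aggregate(c[1]))[0]
```

Because the log-odds of 0.5 is zero, a trajectory of uniformly uncertain steps aggregates to exactly 0, giving a natural neutral baseline.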
#### Bi-level optimization.
Bi-level optimization (BLO) has been widely used in meta-learning [14], neural architecture search [31], and data reweighting [54]. A BLO problem is usually formulated as:
$$
\min_{\alpha}\ \mathcal{U}(\alpha,\phi^{*}(\alpha))\quad\text{s.t.}\quad\phi^{*}(\alpha)=\underset{\phi}{\arg\min}\ \mathcal{L}(\phi,\alpha) \tag{3}
$$
where $\mathcal{U}$ is the upper-level optimization problem (OP) with parameter $\alpha$ , and $\mathcal{L}$ is the lower-level OP with parameter $\phi$ . The lower-level OP is nested within the upper-level one, and the two OPs are mutually dependent.
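To make the nested structure concrete, here is a toy bi-level problem (illustrative only, not from the paper) in which the lower-level problem has a closed-form solution, so the hypergradient through $\phi^{*}(\alpha)$ is exact:

```python
# Toy BLO instance:
#   upper:  min_alpha  U(alpha, phi*) = (phi*(alpha) - 1)^2
#   lower:  phi*(alpha) = argmin_phi (phi - alpha)^2  =>  phi* = alpha
# Substituting the lower-level solution gives dU/dalpha = 2 * (alpha - 1),
# so gradient descent on alpha converges to 1.

alpha = 5.0
lr = 0.1
for _ in range(200):
    phi_star = alpha                  # solve the lower-level problem exactly
    grad = 2.0 * (phi_star - 1.0)     # upper-level gradient through phi*(alpha)
    alpha -= lr * grad
```

In practice the lower-level problem has no closed form, which is why gradient-based BLO methods approximate $\phi^{*}(\alpha)$ with one or a few inner optimization steps.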
## 4 The Proposed Domain-reweighting Method
<details>
<summary>x3.png Details</summary>

### Visual Description
## Diagram: Multi-Level Optimization Process
### Overview
The image depicts a diagram illustrating a multi-level optimization process, likely within a machine learning or AI context. It showcases two levels of optimization: a lower-level optimization performed across multiple domains, and an upper-level optimization that adjusts domain weights. The diagram highlights the interaction between "MLLM" modules, "DreamPRM" components, and a "BLO" (bi-level optimization) process. The diagram also indicates which parameters are activated and frozen.
### Components/Axes
The diagram is structured into three main sections:
* **Lower-level Optimization:** This section shows optimization happening across multiple domains (Domain 1 to Domain k). Each domain has an input image and a question.
* **Quality Imbalance:** This section highlights the difference in quality between domains.
* **Upper-level Optimization:** This section shows the optimization of domain weights.
Key components include:
* **Domain 1…k:** Representing different data domains.
* **MLLM:** A module (likely Multi-modal Large Language Model) processing input images and questions.
* **DreamPRM:** A component receiving outputs from the lower-level optimization.
* **Domain weights:** Weights assigned to each domain.
* **BLO:** The bi-level optimization process.
* **Activated parameters:** Represented by pink color.
* **Frozen parameters:** Represented by blue color.
### Detailed Analysis or Content Details
The diagram illustrates the flow of information and optimization signals.
**Lower-level Optimization:**
* **Domain 1:** Input is a grayscale image with a yellow square. The question is "What is the area of yellow region?". The output of the MLLM is passed through a series of circles (representing processing steps) to the DreamPRM. The connection is represented by a yellow arrow.
* **Domain k:** Input is a circular image with a pie chart. The question is "What is the largest pie area?". The output of the MLLM is passed through a series of circles to the DreamPRM. The connection is represented by an orange arrow.
* The diagram indicates a "Quality imbalance" between Domain 1 and Domain k.
**Upper-level Optimization:**
* **Domain k+1:** Input is an image with the equation "2x+6=13". The question is "What is the value of x?". The output of the MLLM is passed through a series of circles to the DreamPRM. The connection is represented by a series of green arrows.
* The DreamPRM receives input from the lower-level optimization and the upper-level optimization.
* The BLO connects the DreamPRM to the Domain weights.
* The Domain weights are then fed back into the lower-level optimization.
**Parameter Status:**
* The DreamPRM and PRM components have activated parameters (pink) and frozen parameters (blue).
* The activated parameters are located in the bottom-right corner of the DreamPRM and PRM components.
### Key Observations
* The diagram highlights a two-level optimization process, suggesting a hierarchical approach to learning or adaptation.
* The "Quality imbalance" suggests that different domains may have varying levels of difficulty or data quality.
* The BLO indicates a feedback loop, allowing the system to refine its domain weights based on performance.
* The distinction between activated and frozen parameters suggests a fine-tuning or transfer learning strategy.
### Interpretation
The diagram illustrates a system designed to optimize performance across multiple data domains. The lower-level optimization focuses on solving tasks within each domain, while the upper-level optimization adjusts the importance (weights) of each domain. The BLO provides a mechanism for continuous improvement by feeding back performance information. The quality imbalance suggests that the system is aware of the varying difficulty of different domains and may be attempting to compensate for this. The use of activated and frozen parameters suggests that the system is leveraging pre-trained knowledge (frozen parameters) while adapting to specific domains (activated parameters).
The diagram suggests a sophisticated approach to multi-domain learning, potentially addressing challenges such as domain adaptation and transfer learning. The system appears to be designed to learn from diverse data sources and optimize performance based on the specific characteristics of each domain. The overall architecture suggests a focus on robustness and adaptability. The diagram does not provide any numerical data or specific performance metrics, but it offers a clear conceptual overview of the optimization process.
</details>
Figure 3: The proposed bi-level optimization based domain-reweighting method. Lower-level optimization: In this stage, the PRM's parameters are updated on multiple datasets with domain weights, allowing the PRM to prioritize domains with better quality. Upper-level optimization: In this stage, the PRM is evaluated on a separate meta dataset to compute an aggregation function loss and optimize the domain weights. DreamPRM helps address dataset quality imbalance problems and leads to stronger and more generalizable reasoning performance.
#### Overview.
Training process reward models (PRMs) for MLLMs is challenging for two reasons: (1) dataset (domain) quality imbalance, and (2) the discrepancy between training and inference procedures. To address these two challenges, we propose DreamPRM, which automatically searches for domain importance using a novel aggregation function loss that better simulates the inference process of the PRM. Under a bi-level optimization framework, it optimizes PRM parameters with Monte Carlo signals at the lower level, and optimizes trainable domain importance weights with the aggregation function loss at the upper level. An overview of the DreamPRM method is shown in Fig. 3.
#### Datasets.
We begin with $K{+}1$ datasets, each from a distinct domain (e.g., science, geometry). The first $K$ datasets form the training pool $\mathcal{D}_{\mathrm{tr}}=\{\mathcal{D}_{1},\dots,\mathcal{D}_{K}\}$ , while the remaining dataset, $\mathcal{D}_{\mathrm{meta}}=\mathcal{D}_{K+1}$ , is a meta (validation) dataset of higher quality.
#### Lower-level optimization: domain-reweighted training of PRM.
In lower-level optimization, we aim to update the weights $\phi$ of PRM with domain-reweighted training. We first define the typical PRM training loss $\mathcal{L}_{tr}$ on a single domain $\mathcal{D}_{k}$ , given PRM parameters $\phi$ , as follows:
$$
\displaystyle\mathcal{L}_{tr}(\mathcal{D}_{k},\phi)=\sum_{(x,y)\in\mathcal{D}_{k}}\sum_{i=1}^{n}\mathcal{L}_{MSE}(\mathcal{V}_{\phi}(x,\hat{y}_{i}),p_{i}) \tag{5}
$$
where $\hat{y}_{i}$ is the prefix of MLLM generated text $\hat{y}=M_{\theta}(x)$ given input pair $x=(t,I)$ , and $p_{i}$ is the process supervision signal value obtained by Monte Carlo estimation given input pair $x$ , prefix $\hat{y}_{i}$ and ground truth label $y$ , as previously defined in Equation 1. The PRM is optimized by minimizing the mean squared error (MSE) between supervision signal and PRM predicted score $\mathcal{V}_{\phi}(x,\hat{y}_{i})$ . With the PRM training loss on a single domain $\mathcal{D}_{k}$ above, we next define the domain-reweighted training objective of PRM on multiple training domains $\mathcal{D}=\{\mathcal{D}_{k}\}_{k=1}^{K}$ . The overall objective is a weighted sum of the single-domain PRM training losses, allowing the contribution of each domain to be adjusted during the learning process:
$$
\displaystyle\mathcal{L}_{tr}(\mathcal{D}_{tr},\phi,\alpha)=\sum_{k=1}^{K}\alpha_{k}\mathcal{L}_{tr}(\mathcal{D}_{k},\phi) \tag{6}
$$
Here, $\alpha=\{\alpha_{k}\}_{k=1}^{K}$ represents the trainable domain weight parameters, indicating the importance of each domain. By optimizing this objective, we obtain the optimal value of PRM parameters $\phi^{*}$ :
$$
\displaystyle\phi^{*}(\alpha)=\underset{\phi}{\arg\min}\ \mathcal{L}_{tr}(\mathcal{D}_{tr},\phi,\alpha) \tag{7}
$$
It is worth mentioning that only $\phi$ is optimized at this level, while $\alpha$ remains fixed.
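A minimal sketch of Eqs. (5) and (6) on plain Python lists; `per_domain_batches` is a hypothetical container pairing each domain's PRM step scores with its Monte Carlo signals, not a structure defined in the paper:

```python
def domain_loss(predictions, targets):
    """Eq. 5: summed squared error between PRM step scores V_phi(x, y_i)
    and Monte Carlo supervision signals p_i for one domain."""
    return sum((v - p) ** 2 for v, p in zip(predictions, targets))

def weighted_train_loss(per_domain_batches, alpha):
    """Eq. 6: weighted sum of per-domain losses; alpha_k scales how much
    domain k contributes to the PRM parameter update."""
    return sum(a * domain_loss(preds, signals)
               for a, (preds, signals) in zip(alpha, per_domain_batches))
```

With this objective, driving some $\alpha_{k}$ toward zero effectively removes domain $k$'s gradient from the update, which is how noisy domains are down-weighted.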
#### Upper-level optimization: learning domain reweighting parameters.
In the upper-level optimization, we optimize the domain reweighting parameter $\alpha$ on the meta dataset $\mathcal{D}_{meta}$ given the optimal PRM weights $\phi^{*}(\alpha)$ obtained from the lower level. To make the meta-learning target more closely reflect the actual PRM-based inference process, we propose a novel meta loss function $\mathcal{L}_{meta}$ , different from the training loss $\mathcal{L}_{tr}$ . Specifically, we first obtain an aggregated score $\mathcal{A}({p})$ for each generated solution $\hat{y}$ from the MLLM given input pair $x=(t,I)$ , following the process in Section 3. We then create a ground truth signal $r(\hat{y},y)$ by assigning it a value of 1 if the generated $\hat{y}$ contains the ground truth $y$ , and 0 otherwise. The meta loss is defined as the mean squared error between the aggregated score and the ground truth signal:
$$
\displaystyle\mathcal{L}_{meta}(\mathcal{D}_{meta},\phi^{*}(\alpha))=\sum_{(x,y)\in\mathcal{D}_{meta}}\mathcal{L}_{MSE}(\sigma(\mathcal{A}(\mathcal{V}_{\phi^{*}(\alpha)}(x,\hat{y}))),r(\hat{y},y)) \tag{8}
$$
where $\mathcal{A}$ represents the aggregation function as previously defined in Equation 2, and $\sigma$ denotes the sigmoid function to map the aggregated score to a probability. Accordingly, the optimization problem at the upper level is formulated as follows:
$$
\displaystyle\underset{\alpha}{\min}\mathcal{L}_{meta}(\mathcal{D}_{meta},\phi^{*}(\alpha)) \tag{9}
$$
To solve this optimization problem, we propose an efficient gradient-based algorithm, which is detailed in Appendix A.
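The meta loss of Equation 8 can be sketched as below. The aggregation function $\mathcal{A}$ is assumed here to be the minimum step score; the paper's actual choice is defined in Equation 2, so this is an illustrative stand-in.

```python
import torch

def meta_loss(step_scores_per_sample, labels):
    """Upper-level meta loss (Eq. 8), sketched.

    step_scores_per_sample: list of (n_i,) tensors of PRM step scores,
        one tensor per generated solution y_hat.
    labels: (B,) tensor with r(y_hat, y) = 1 if y_hat contains the
        ground truth, 0 otherwise.
    """
    # A(V_phi(x, y_hat)): minimum step score is an assumed aggregation
    agg = torch.stack([s.min() for s in step_scores_per_sample])
    probs = torch.sigmoid(agg)               # sigma maps scores to probabilities
    return torch.mean((probs - labels) ** 2) # MSE against the ground-truth signal

# Toy example: one correct and one incorrect solution
scores = [torch.tensor([2.0, 3.0]), torch.tensor([-2.0, -1.0])]
labels = torch.tensor([1.0, 0.0])
loss = meta_loss(scores, labels)
```

Because the aggregated score depends on $\phi^{*}(\alpha)$, differentiating this loss with respect to $\alpha$ requires the hypergradient machinery detailed in Appendix A.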
## 5 Experimental Results
### 5.1 Experimental settings
#### Multistage reasoning.
To elicit consistent, well-structured reasoning responses from current MLLMs, we draw on the LLaVA-CoT approach [75], which fosters structured thinking prior to answer generation. Specifically, we prompt MLLMs to follow five reasoning steps: (1) Restate the question. (2) Gather evidence from the image. (3) Identify any background knowledge needed. (4) Reason with the current evidence. (5) Summarize and conclude with all the information. We also explore zero-shot prompting settings in conjunction with structural reasoning; see Appendix C. We use 8 different chain-of-thought reasoning trajectories for all test-time scaling methods, unless otherwise stated.
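A five-step prompt of this kind might be assembled as in the sketch below. The exact wording sent to the MLLMs is not given in the text, so the phrasing here is illustrative.

```python
# The five reasoning steps listed above; the surrounding instruction text
# is an assumption, not the paper's exact prompt.
REASONING_STEPS = [
    "Restate the question.",
    "Gather evidence from the image.",
    "Identify any background knowledge needed.",
    "Reason with the current evidence.",
    "Summarize and conclude with all the information.",
]

def build_structured_prompt(question: str) -> str:
    """Compose a structured-reasoning prompt around a visual question."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(REASONING_STEPS, 1))
    return (
        f"Question: {question}\n"
        "Answer by reasoning through the following steps:\n"
        f"{steps}"
    )

prompt = build_structured_prompt("What is the total value shown in the chart?")
```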
Table 1: Comparative evaluation of DreamPRM and baselines on multimodal reasoning benchmarks. Bold numbers indicate the best performance, while underlined numbers indicate the second best. The table reports accuracy (%) on three mathematical reasoning benchmarks (WeMath, MathVista, MathVision) and two general reasoning benchmarks (MMVet, MMStar).
| | WeMath (loose) | MathVista (testmini) | MathVision (test) | MMVet (v1) | MMStar (test) |
| --- | --- | --- | --- | --- | --- |
| Zero-shot Methods | | | | | |
| Gemini-1.5-Pro [50] | 46.0 | 63.9 | 19.2 | 64.0 | 59.1 |
| GPT-4V [46] | 51.4 | 49.9 | 21.7 | 67.7 | 62.0 |
| LLaVA-OneVision-7B [26] | 44.8 | 63.2 | 18.4 | 57.5 | 61.7 |
| Qwen2-VL-7B [66] | 42.9 | 58.2 | 16.3 | 62.0 | 60.7 |
| InternVL-2.5-8B-MPO [67] | 51.7 | 65.4 | 20.4 | 55.9 | 58.9 |
| Test-time Scaling Methods (InternVL-2.5-8B-MPO based) | | | | | |
| Self-consistency [68] | 56.4 | 67.1 | 20.7 | 57.4 | 59.6 |
| Self-correction [17] | 54.0 | 63.8 | 21.6 | 54.9 | 59.7 |
| ORM [52] | 56.9 | 65.3 | 20.5 | 55.9 | 60.1 |
| Vanilla PRM [29] | 54.2 | 67.2 | 20.6 | 58.9 | 60.8 |
| CaR-PRM [16] | 54.7 | 67.5 | 21.0 | 60.6 | 61.1 |
| s1-PRM [44] | 57.1 | 65.8 | 20.2 | 60.1 | 60.4 |
| DreamPRM (ours) | 57.4 | 68.9 | 22.1 | 61.4 | 62.3 |
#### Base models.
For inference, we use InternVL-2.5-8B-MPO [67] as the base MLLM, which has undergone post-training to enhance its reasoning abilities and is well-suited for our experiment. For fine-tuning PRM, we adopt Qwen2-VL-2B-Instruct [66]. Qwen2-VL is a state-of-the-art multimodal model pretrained for general vision-language understanding tasks. This pretrained model serves as the initialization for our fine-tuning process.
#### Training hyperparameters.
In the lower-level optimization, we perform 5 inner gradient steps per outer update (unroll steps = 5) using the AdamW [32] optimizer with the learning rate set to $5\times 10^{-7}$ . In the upper-level optimization, we use the AdamW optimizer ( $\mathrm{lr}=0.01$ , weight decay $=10^{-3}$ ) and a StepLR scheduler (step size = 5000, $\gamma=0.5$ ). In total, DreamPRM is fine-tuned for 10000 iterations. Our method is implemented with Betty [7], and the fine-tuning process takes approximately 10 hours on a single NVIDIA A100 GPU.
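The optimizer configuration above can be sketched as follows. The bi-level coupling itself (unrolling and hypergradients) is handled by Betty, so the placeholder tensors below merely stand in for the PRM weights $\phi$ and the domain weights $\alpha$.

```python
import torch

phi   = [torch.nn.Parameter(torch.zeros(4))]  # placeholder for PRM weights phi
alpha = torch.nn.Parameter(torch.ones(15))    # one weight per training domain

# Lower level: AdamW on phi with lr 5e-7 (5 unrolled steps per outer update)
inner_opt = torch.optim.AdamW(phi, lr=5e-7)

# Upper level: AdamW on alpha with weight decay 1e-3, plus a StepLR
# scheduler that halves the learning rate every 5000 iterations
outer_opt = torch.optim.AdamW([alpha], lr=1e-2, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(outer_opt, step_size=5000, gamma=0.5)
```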
#### Baselines.
We use three major categories of baselines: (1) State-of-the-art models on public leaderboards, including Gemini-1.5-Pro [50], GPT-4V [46], LLaVA-OneVision-7B [26], and Qwen2-VL-7B [66]. We also carefully reproduce the results of InternVL-2.5-8B-MPO with structural thinking. (2) Test-time scaling methods (excluding PRMs) based on the InternVL-2.5-8B-MPO model, including: (i) Self-consistency [68], which selects the most consistent reasoning chain via majority voting over multiple responses; (ii) Self-correction [17], which prompts the model to critically reflect on and revise its initial answers; and (iii) Outcome Reward Model (ORM) [52], which evaluates and scores the final response to select the most promising one. (3) PRM-based methods, including: (i) Vanilla PRM trained without any data selection, as commonly used in LLM settings [29]; (ii) s1-PRM, which selects high-quality reasoning responses based on three criteria (difficulty, quality, and diversity), following the s1 strategy [44]; and (iii) CaR-PRM, which filters high-quality visual questions using clustering and ranking techniques, as proposed in CaR [16].
#### Datasets and benchmarks.
We use 15 multimodal datasets for lower-level optimization ( $\mathcal{D}_{tr}$ ), covering four domains: science, chart, geometry, and commonsense, as listed in Appendix Table 2. For upper-level optimization ( $\mathcal{D}_{meta}$ ), we adopt the MMMU [79] dataset. Evaluation is conducted on five multimodal reasoning benchmarks: WeMath [48], MathVista [33], MathVision [63], MMVet [78], and MMStar [5]. Details are provided in Appendix B.
### 5.2 Benchmark evaluation of DreamPRM
Tab. 1 presents the primary experimental results. We observe the following. (1) DreamPRM outperforms other PRM-based methods, highlighting the effectiveness of our domain-reweighting strategy. Compared to the vanilla PRM trained without any data selection, DreamPRM achieves a consistent performance gain of 2%-3% across all five datasets, suggesting that effective data selection is crucial for training high-quality multimodal PRMs. Moreover, DreamPRM also outperforms s1-PRM and CaR-PRM, which rely on manually designed heuristic rules for data selection. These results indicate that selecting suitable reasoning datasets for PRM training is a complex task, and handcrafted rules are often suboptimal. In contrast, our automatic domain-reweighting approach enables the model to adaptively optimize its learning process, illustrating how data-driven optimization offers a scalable solution to dataset selection challenges. (2) DreamPRM outperforms state-of-the-art MLLMs with far fewer parameters. For example, DreamPRM significantly surpasses two trillion-scale closed-source models (GPT-4V and Gemini-1.5-Pro) on 4 out of 5 datasets. In addition, it consistently improves the performance of the base model, InternVL-2.5-8B-MPO, achieving an average gain of 4% on the five datasets. These results confirm that DreamPRM yields a high-quality PRM capable of enhancing multimodal reasoning across a wide range of benchmarks. (3) DreamPRM outperforms other test-time scaling methods, primarily because it enables the training of a high-quality PRM that conducts fine-grained, step-level evaluation. While most test-time scaling methods yield moderate improvements, DreamPRM leads to the most substantial gains, suggesting that the quality of the reward model is critical for effective test-time scaling.
We further provide case studies in Appendix D, which intuitively illustrate how DreamPRM assigns higher scores to coherent and high-quality reasoning trajectories.
Figure 4: Leaderboard on MathVista (as of October 15, 2025). The first column ("o4-mini + DreamPRM") reports our own evaluation, while the remaining results are taken from the official MathVista leaderboard. The compared models include VL-Rethinker [62], Step R1-V-Mini [58], Kimi-k1.6-preview [43], Kimi-k1.5 [24], Doubao-pro-1.5 [60], Ovis2-34B [1], OpenAI o1 [45], Llama 4 Maverick [41, 42], and Vision-R1-7B [18].
### 5.3 Leaderboard performance of DreamPRM
As shown in Fig. 4, DreamPRM achieves top-1 accuracy of 85.2% on the MathVista leaderboard (as of October 15, 2025). The result (o4-mini + DreamPRM) has been officially verified through the MathVista evaluation. Compared with a series of strong multimodal reasoning baselines, including VL-Rethinker [62], Step R1-V-Mini [58], Kimi-k1.6-preview [43], Doubao-pro-1.5 [60], Ovis2-34B [1], OpenAI o1 [45], Llama 4 Maverick [41, 42], and Vision-R1-7B [18], DreamPRM demonstrates clearly superior multimodal reasoning capability.
Table 5 in the Appendix provides a detailed comparison among various Process Reward Model (PRM) variants built on the same o4-mini backbone. DreamPRM surpasses all counterparts, improving the base o4-mini model from 80.6% (pass@1) and 81.7% (self-consistency@8) to 85.2%. This consistent gain verifies the effectiveness of DreamPRM in enhancing reasoning accuracy through process-level supervision and reliable consensus across multiple chains of thought.
Figure 5: Comparative evaluation of DreamPRM on multimodal reasoning benchmarks. Radar charts report accuracy (%) on five datasets (WeMath, MathVista, MathVision, MMVet, and MMStar). (a) Impact of different data selection strategies. (b) Comparison with existing test-time scaling methods. (c) Ablation study of three key components: w/o aggregation function loss (AFL), w/o bi-level optimization (BLO), and w/o structural thinking (ST).
Figure 6: Scaling ability and cross-model generalization. (a) Radar chart of five multimodal reasoning benchmarks shows that DreamPRM delivers monotonic accuracy gains as the number of selected chains-of-thought increases (@2, @4, @8) over the pass@1 baseline. (b) Best-of-$N$ accuracy curves for InternVL-2.5-8B-MPO (blue), GPT-4.1-mini (red), and o4-mini (green) on MathVista confirm that the same DreamPRM-ranked CoTs generalize across models, consistently outperforming pass@1 performance (dashed lines) as $k$ grows.
### 5.4 Scaling and generalization analysis of DreamPRM
DreamPRM scales reliably with more CoT candidates. As shown in the left panel of Fig. 6, the accuracy of DreamPRM consistently improves on all five benchmarks as the number of CoTs increases from $k{=}2$ to $k{=}8$ , expanding the radar plot outward. Intuitively, a larger set of candidates increases the likelihood of including high-quality reasoning trajectories, but it also makes identifying the best ones more challenging. The consistent performance gains indicate that DreamPRM effectively verifies and ranks CoTs, demonstrating its robustness in selecting high-quality reasoning trajectories under more complex candidate pools.
DreamPRM transfers seamlessly to stronger base MLLMs. The right panel of Fig. 6 shows the MathVista accuracy when applying DreamPRM to recent MLLMs, GPT-4.1-mini (2025-04-14) [46] and o4-mini (2025-04-16) [45]. For the o4-mini model, the pass@1 score of 80.6% steadily increases to 85.2% at $k{=}8$ , surpassing the previous state-of-the-art performance. This best-of-$N$ trend, previously observed with InternVL, also holds for GPT-4.1-mini and o4-mini, demonstrating the generalization ability of DreamPRM. Full results of these experiments are provided in Tab. 3.
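The best-of-$N$ selection used in these experiments can be sketched as below. The minimum step score is assumed as the aggregation function here as an illustrative choice; the paper's actual aggregation is defined in Equation 2.

```python
import torch

def best_of_n(candidates, step_scores):
    """Select the best of N chains-of-thought by aggregated PRM score, sketched.

    candidates:  list of N generated CoT strings
    step_scores: list of N (n_i,) tensors of PRM step scores for each CoT
    """
    # Aggregate each CoT's step scores (minimum step score, assumed)
    agg = torch.tensor([s.min() for s in step_scores])
    return candidates[int(agg.argmax())]

# Toy example: chain B has the highest minimum step score
cots = ["chain A", "chain B", "chain C"]
scores = [
    torch.tensor([0.9, 0.2]),
    torch.tensor([0.7, 0.6]),
    torch.tensor([0.5, 0.4]),
]
best = best_of_n(cots, scores)
```

Because scoring is separate from generation, the same ranking procedure applies unchanged to CoTs produced by any base MLLM, which is what enables the cross-model transfer observed above.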
### 5.5 Ablation study
In this section, we investigate the importance of three components of DreamPRM: (1) bi-level optimization, (2) the aggregation function loss in the upper level, and (3) the structural thinking prompt (detailed in Section 5.1). As shown in the rightmost panel of Fig. 5, the complete DreamPRM achieves the best results across all five benchmarks compared to the three ablation baselines. Eliminating bi-level optimization causes a large performance drop (e.g., -3.5% on MathVista and -3.4% on MMStar). Removing the aggregation function loss leads to a consistent 1%-2% decline (e.g., 57.4% $\rightarrow$ 56.3% on WeMath). Excluding structural thinking also degrades performance (e.g., -1.8% on MathVision). These results indicate that all three components are critical for DreamPRM to achieve the best performance. More detailed results are shown in Appendix Tab. 4.
### 5.6 Analysis of learned domain weights
Figure 7: Learned domain weights after the convergence of the DreamPRM training process.
The final domain weights (Fig. 7) range from 0.55 to 1.49: M3CoT [6] and FigureQA [21] receive the highest weights (approximately 1.5), while AI2D [23] and IconQA [36] are assigned lower weights (less than 0.8). This learned weighting pattern contributes to improved PRM performance, indicating that the quality imbalance problem across reasoning datasets is real and consequential. Additionally, as shown in Fig. 9 in Appendix, all domain weights are initialized to 1.0 and eventually converge during the training process of DreamPRM.
## 6 Conclusions
We propose DreamPRM, the first domain-reweighted PRM framework for multimodal reasoning. By automatically searching for domain weights using a bi-level optimization framework, DreamPRM effectively mitigates issues caused by dataset quality imbalance and significantly enhances the generalizability of multimodal PRMs. Extensive experiments on five diverse benchmarks confirm that DreamPRM outperforms both vanilla PRMs without domain reweighting and PRMs using heuristic data selection methods. We also observe that the domain weights learned by DreamPRM correlate with dataset quality, effectively separating challenging, informative sources from overly simplistic or noisy ones. These results highlight the effectiveness of our proposed automatic domain reweighting strategy.
## Acknowledgments
This work was supported by the National Science Foundation (IIS2405974 and IIS2339216) and the National Institutes of Health (R35GM157217).
## References
- [1] AIDC-AI. Ovis2-34b (model card). https://huggingface.co/AIDC-AI/Ovis2-34B, 2025. Related paper: arXiv:2405.20797; Accessed 2025-10-15.
- [2] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.
- [3] Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier, and Ningchuan Xiao. Mapqa: A dataset for question answering on choropleth maps, 2022.
- [4] Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, Chongyu Chen, and Xiaodan Liang. Unigeo: Unifying geometry logical reasoning via reformulating mathematical expression, 2022.
- [5] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, and Feng Zhao. Are we on the right way for evaluating large vision-language models?, 2024.
- [6] Qiguang Chen, Libo Qin, Jin Zhang, Zhi Chen, Xiao Xu, and Wanxiang Che. M3CoT: A novel benchmark for multi-domain multi-step multi-modal chain-of-thought, 2024.
- [7] Sang Keun Choe, Willie Neiswanger, Pengtao Xie, and Eric Xing. Betty: An automatic differentiation library for multilevel optimization. In The Eleventh International Conference on Learning Representations, 2023.
- [8] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022.
- [9] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021.
- [10] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y. K. Li, Y. Q. Wang, Y. X. 
Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y. X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025.
- [11] Guanting Dong, Chenghao Zhang, Mengjie Deng, Yutao Zhu, Zhicheng Dou, and Ji-Rong Wen. Progressive multimodal reasoning via active retrieval, 2024.
- [12] Simin Fan, Matteo Pagliardini, and Martin Jaggi. Doge: Domain reweighting with generalization estimation, 2024.
- [13] Simin Fan, Matteo Pagliardini, and Martin Jaggi. DOGE: Domain reweighting with generalization estimation. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, editors, Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 12895–12915. PMLR, 21–27 Jul 2024.
- [14] Chelsea Finn, P. Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, 2017.
- [15] Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, and Lingpeng Kong. G-llava: Solving geometric problem with multi-modal large language model, 2023.
- [16] Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Hongxia Ma, Li Zhang, Boxing Chen, Hao Yang, Bei Li, Tong Xiao, and Jingbo Zhu. Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation, 2024.
- [17] Jiayi He, Hehai Lin, Qingyun Wang, Yi Fung, and Heng Ji. Self-correction is more than refinement: A learning framework for visual and language reasoning tasks, 2024.
- [18] Wenxuan Huang, Bohan Jia, Zijie Zhai, et al. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025.
- [19] Dongzhi Jiang, Renrui Zhang, Ziyu Guo, Yanwei Li, Yu Qi, Xinyan Chen, Liuhui Wang, Jianhan Jin, Claire Guo, Shen Yan, Bo Zhang, Chaoyou Fu, Peng Gao, and Hongsheng Li. Mme-cot: Benchmarking chain-of-thought in large multimodal models for reasoning quality, robustness, and efficiency, 2025.
- [20] Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. Dvqa: Understanding data visualizations via question answering, 2018.
- [21] Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Akos Kadar, Adam Trischler, and Yoshua Bengio. Figureqa: An annotated figure dataset for visual reasoning, 2018.
- [22] Mehran Kazemi, Hamidreza Alvari, Ankit Anand, Jialin Wu, Xi Chen, and Radu Soricut. Geomverse: A systematic evaluation of large models for geometric reasoning, 2023.
- [23] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images, 2016.
- [24] Kimi Team. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599, 2025.
- [25] Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems, volume 35, pages 22199–22213, 2022.
- [26] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: Easy visual task transfer, 2024.
- [27] Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making large language models better reasoners with step-aware verifier, 2023.
- [28] Zongxia Li, Xiyang Wu, Hongyang Du, Fuxiao Liu, Huy Nghiem, and Guangyao Shi. A survey of state of the art large vision language models: Alignment, benchmark, evaluations and challenges. arXiv preprint arXiv:2501.02189, 2025.
- [29] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2024.
- [30] Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning, 2022.
- [31] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019.
- [32] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2019.
- [33] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In International Conference on Learning Representations (ICLR), 2024.
- [34] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
- [35] Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering, 2022.
- [36] Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning, 2022.
- [37] Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. Improve mathematical reasoning in language models by automated process supervision, 2024.
- [38] Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang. Let's reward step by step: Step-level reward model as the navigators for reasoning, 2023.
- [39] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning, 2022.
- [40] Minesh Mathew, Viraj Bagal, Rubèn Pérez Tito, Dimosthenis Karatzas, Ernest Valveny, and C. V. Jawahar. Infographicvqa, 2021.
- [41] Meta AI. The llama 4 herd: The beginning of a new era of natively multimodal intelligence. https://ai.meta.com/blog/llama-4-multimodal-intelligence/, 2025. Llama 4 Maverick announcement; Accessed 2025-10-15.
- [42] Meta Llama. Llama-4-maverick-17b-128e-instruct (model card). https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct, 2025. Accessed 2025-10-15.
- [43] Moonshot AI / Kimi. Kimi-k1.6-preview-20250308 (preview announcement). https://x.com/RotekSong/status/1900061355945926672, 2025. Accessed 2025-10-15; preview model announcement.
- [44] Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling, 2025.
- [45] OpenAI, :, Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, Ally Bennett, Ananya Kumar, Andre Saraiva, Andrea Vallone, Andrew Duberstein, Andrew Kondrich, Andrey Mishchenko, Andy Applebaum, Angela Jiang, Ashvin Nair, Barret Zoph, Behrooz Ghorbani, Ben Rossen, Benjamin Sokolowsky, Boaz Barak, Bob McGrew, Borys Minaiev, Botao Hao, Bowen Baker, Brandon Houghton, Brandon McKinzie, Brydon Eastman, Camillo Lugaresi, Cary Bassin, Cary Hudson, Chak Ming Li, Charles de Bourcy, Chelsea Voss, Chen Shen, Chong Zhang, Chris Koch, Chris Orsinger, Christopher Hesse, Claudia Fischer, Clive Chan, Dan Roberts, Daniel Kappler, Daniel Levy, Daniel Selsam, David Dohan, David Farhi, David Mely, David Robinson, Dimitris Tsipras, Doug Li, Dragos Oprica, Eben Freeman, Eddie Zhang, Edmund Wong, Elizabeth Proehl, Enoch Cheung, Eric Mitchell, Eric Wallace, Erik Ritter, Evan Mays, Fan Wang, Felipe Petroski Such, Filippo Raso, Florencia Leoni, Foivos Tsimpourlas, Francis Song, Fred von Lohmann, Freddie Sulit, Geoff Salmon, Giambattista Parascandolo, Gildas Chabot, Grace Zhao, Greg Brockman, Guillaume Leclerc, Hadi Salman, Haiming Bao, Hao Sheng, Hart Andrin, Hessam Bagherinezhad, Hongyu Ren, Hunter Lightman, Hyung Won Chung, Ian Kivlichan, Ian OâConnell, Ian Osband, Ignasi Clavera Gilaberte, Ilge Akkaya, Ilya Kostrikov, Ilya Sutskever, Irina Kofman, Jakub Pachocki, James Lennon, Jason Wei, Jean Harb, Jerry Twore, Jiacheng Feng, Jiahui Yu, Jiayi Weng, Jie Tang, Jieqi Yu, Joaquin Quiñonero Candela, Joe Palermo, Joel Parish, Johannes Heidecke, John Hallman, John Rizzo, Jonathan Gordon, Jonathan Uesato, Jonathan Ward, Joost Huizinga, Julie Wang, Kai Chen, Kai Xiao, Karan Singhal, Karina Nguyen, Karl Cobbe, Katy Shi, Kayla Wood, Kendra Rimbach, Keren Gu-Lemberg, Kevin Liu, Kevin Lu, Kevin 
Stone, Kevin Yu, Lama Ahmad, Lauren Yang, Leo Liu, Leon Maksin, Leyton Ho, Liam Fedus, Lilian Weng, Linden Li, Lindsay McCallum, Lindsey Held, Lorenz Kuhn, Lukas Kondraciuk, Lukasz Kaiser, Luke Metz, Madelaine Boyd, Maja Trebacz, Manas Joglekar, Mark Chen, Marko Tintor, Mason Meyer, Matt Jones, Matt Kaufer, Max Schwarzer, Meghan Shah, Mehmet Yatbaz, Melody Y. Guan, Mengyuan Xu, Mengyuan Yan, Mia Glaese, Mianna Chen, Michael Lampe, Michael Malek, Michele Wang, Michelle Fradin, Mike McClay, Mikhail Pavlov, Miles Wang, Mingxuan Wang, Mira Murati, Mo Bavarian, Mostafa Rohaninejad, Nat McAleese, Neil Chowdhury, Neil Chowdhury, Nick Ryder, Nikolas Tezak, Noam Brown, Ofir Nachum, Oleg Boiko, Oleg Murk, Olivia Watkins, Patrick Chao, Paul Ashbourne, Pavel Izmailov, Peter Zhokhov, Rachel Dias, Rahul Arora, Randall Lin, Rapha Gontijo Lopes, Raz Gaon, Reah Miyara, Reimar Leike, Renny Hwang, Rhythm Garg, Robin Brown, Roshan James, Rui Shu, Ryan Cheu, Ryan Greene, Saachi Jain, Sam Altman, Sam Toizer, Sam Toyer, Samuel Miserendino, Sandhini Agarwal, Santiago Hernandez, Sasha Baker, Scott McKinney, Scottie Yan, Shengjia Zhao, Shengli Hu, Shibani Santurkar, Shraman Ray Chaudhuri, Shuyuan Zhang, Siyuan Fu, Spencer Papay, Steph Lin, Suchir Balaji, Suvansh Sanjeev, Szymon Sidor, Tal Broda, Aidan Clark, Tao Wang, Taylor Gordon, Ted Sanders, Tejal Patwardhan, Thibault Sottiaux, Thomas Degry, Thomas Dimson, Tianhao Zheng, Timur Garipov, Tom Stasi, Trapit Bansal, Trevor Creech, Troy Peterson, Tyna Eloundou, Valerie Qi, Vineet Kosaraju, Vinnie Monaco, Vitchyr Pong, Vlad Fomenko, Weiyi Zheng, Wenda Zhou, Wes McCabe, Wojciech Zaremba, Yann Dubois, Yinghai Lu, Yining Chen, Young Cha, Yu Bai, Yuchen He, Yuchen Zhang, Yunyun Wang, Zheng Shao, and Zhuohan Li. Openai o1 system card, 2024.
- [46] OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, SimĂłn Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Ćukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Ćukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, 
Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David MĂ©ly, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen OâKeefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. 
Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe CerĂłn Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. Gpt-4 technical report, 2024.
- [47] Guilherme Penedo, Anton Lozhkov, Hynek Kydlíček, Loubna Ben Allal, Edward Beeching, Agustín Piqueres Lajarín, Quentin Gallouédec, Nathan Habib, Lewis Tunstall, and Leandro von Werra. Codeforces. https://huggingface.co/datasets/open-r1/codeforces, 2025.
- [48] Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, Runfeng Qiao, Yifan Zhang, Xiao Zong, Yida Xu, Muxi Diao, Zhimin Bao, Chen Li, and Honggang Zhang. We-math: Does your large multimodal model achieve human-like mathematical reasoning?, 2024.
- [49] Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025.
- [50] Machel Reid et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context, 2024.
- [51] Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. Solving geometry problems: Combining text and diagram interpretation. In Lluís Màrquez, Chris Callison-Burch, and Jian Su, editors, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1466–1476, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
- [52] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models, 2024.
- [53] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting, 2019.
- [54] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
- [55] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters, 2024.
- [56] Shezheng Song, Xiaopeng Li, Shasha Li, Shan Zhao, Jie Yu, Jun Ma, Xiaoguang Mao, and Weimin Zhang. How to bridge the gap between modalities: Survey on multimodal large language model, 2025.
- [57] Daouda Sow, Herbert WoisetschlÀger, Saikiran Bulusu, Shiqiang Wang, Hans-Arno Jacobsen, and Yingbin Liang. Dynamic loss-based sample reweighting for improved large language model pretraining, 2025.
- [58] StepFun. Step-r1-v-mini: A lightweight yet powerful multimodal reasoning model. https://www.stepfun.com/docs/en/step-r1-v-mini, 2025. Accessed 2025-10-15.
- [59] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
- [60] Volcengine / ByteDance. Doubao large models (product page). https://www.volcengine.com/product/doubao, 2025. Accessed 2025-10-15.
- [61] Chaojie Wang, Yanchen Deng, Zhiyi Lyu, Liang Zeng, Jujie He, Shuicheng Yan, and Bo An. Q*: Improving multi-step reasoning for llms with deliberative planning, 2024.
- [62] Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, and Wenhu Chen. Vl-rethinker: Incentivizing self-reflection of vision-language models with reinforcement learning. arXiv preprint arXiv:2504.08837, 2025.
- [63] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset, 2024.
- [64] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce LLMs step-by-step without human annotations. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426–9439, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
- [65] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce LLMs step-by-step without human annotations. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426–9439, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
- [66] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: Enhancing vision-language modelâs perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.
- [67] Weiyun Wang, Zhe Chen, Wenhai Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Jinguo Zhu, Xizhou Zhu, Lewei Lu, Yu Qiao, and Jifeng Dai. Enhancing the reasoning ability of multimodal large language models via mixed preference optimization. arXiv preprint arXiv:2411.10442, 2024.
- [68] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023.
- [69] Zihan Wang, Yunxuan Li, Yuexin Wu, Liangchen Luo, Le Hou, Hongkun Yu, and Jingbo Shang. Multi-step problem solving through a verifier: An empirical analysis on model-induced process supervision, 2024.
- [70] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.
- [71] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824â24837. Curran Associates, Inc., 2022.
- [72] Jiayang Wu, Wensheng Gan, Zefeng Chen, Shicheng Wan, and Philip S. Yu. Multimodal large language models: A survey, 2023.
- [73] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V Le, Tengyu Ma, and Adams Wei Yu. Doremi: Optimizing data mixtures speeds up language model pretraining. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
- [74] Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2024.
- [75] Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2025.
- [76] Jiasheng Ye, Peiju Liu, Tianxiang Sun, Jun Zhan, Yunhua Zhou, and Xipeng Qiu. Data mixing laws: Optimizing data mixtures by predicting language modeling performance. In The Thirteenth International Conference on Learning Representations, 2025.
- [77] Tianyu Yu, Haoye Zhang, Qiming Li, Qixin Xu, Yuan Yao, Da Chen, Xiaoman Lu, Ganqu Cui, Yunkai Dang, Taiwen He, Xiaocheng Feng, Jun Song, Bo Zheng, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. Rlaif-v: Open-source ai feedback leads to super gpt-4v trustworthiness. arXiv preprint arXiv:2405.17220, 2024.
- [78] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities, 2024.
- [79] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi, 2024.
- [80] Di Zhang. Aime_1983_2024 (revision 6283828), 2025.
- [81] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations, 2023.
- [82] Haojie Zheng, Tianyang Xu, Hanchi Sun, Shu Pu, Ruoxi Chen, and Lichao Sun. Thinking before looking: Improving multimodal llm reasoning via mitigating visual hallucination, 2024.
## NeurIPS Paper Checklist
1. Claims
   - Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
   - Answer: [Yes]
   - Justification: The abstract and introduction faithfully present the contributions and scope of the paper.
   - Guidelines:
     - The answer NA means that the abstract and introduction do not include the claims made in the paper.
     - The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
     - The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
     - It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
2. Limitations
   - Question: Does the paper discuss the limitations of the work performed by the authors?
   - Answer: [Yes]
   - Justification: We include the limitations of our work in Section E.
   - Guidelines:
     - The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
     - The authors are encouraged to create a separate "Limitations" section in their paper.
     - The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
     - The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
     - The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
     - The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
     - If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
     - While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
1. Theory assumptions and proofs
1. Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
1. Answer: [N/A]
1. Justification: This paper does not include theoretical results.
1. Guidelines:
- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.
1. Experimental result reproducibility
1. Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
1. Answer: [Yes]
1. Justification: All the information needed to reproduce the main experimental results is provided in Sections 3, 4, and 5. We will release the implementation if the paper is accepted.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
1. If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
1. If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
1. If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
1. We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
1. Open access to data and code
1. Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
1. Answer: [Yes]
1. Justification: We will release the code if the paper is accepted or through an anonymous link per reviewer's request.
1. Guidelines:
- The answer NA means that paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
1. Experimental setting/details
1. Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
1. Answer: [Yes]
1. Justification: The detailed experimental settings are included in Section 5.1 and Appendices B and C.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.
1. Experiment statistical significance
1. Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
1. Answer: [No]
1. Justification: Due to resource limitations, we do not report error bars. However, we conduct experiments on diverse datasets and follow the protocol used by previous works for fair comparison.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
1. Experiments compute resources
1. Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
1. Answer: [Yes]
1. Justification: Compute resources used in the experiments are reported in Section 5.1.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
1. Code of ethics
1. Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
1. Answer: [Yes]
1. Justification: Our paper follows the NeurIPS Code of Ethics.
1. Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
1. Broader impacts
1. Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
1. Answer: [Yes]
1. Justification: Our work helps to enhance multimodal reasoning with DreamPRM. Although the models could still produce errors, we advise against relying entirely on LLM outputs and do not perceive this as a major negative societal impact.
1. Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
1. Safeguards
1. Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
1. Answer: [N/A]
1. Justification: This paper poses no such risks.
1. Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
1. Licenses for existing assets
1. Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
1. Answer: [Yes]
1. Justification: We have properly cited papers and models used in our paper.
1. Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the assetâs creators.
1. New assets
1. Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
1. Answer: [Yes]
1. Justification: We will release our code with detailed readme files and instructions.
1. Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
1. Crowdsourcing and research with human subjects
1. Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
1. Answer: [N/A]
1. Justification: This work does not involve crowdsourcing nor research with human subjects.
1. Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
1. Institutional review board (IRB) approvals or equivalent for research with human subjects
1. Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
1. Answer: [N/A]
1. Justification: This work does not involve crowdsourcing nor research with human subjects.
1. Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
1. Declaration of LLM usage
1. Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
1. Answer: [Yes]
1. Justification: LLMs, specifically MLLMs, are used in the experiments as the paper is about multimodal reasoning. The usage is described in Sections 3 and 4. In terms of writing, LLMs are only used for checking grammar, spelling, and word choices.
1. Guidelines:
- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
## Appendix
## Appendix A Optimization algorithm
Directly solving the bi-level optimization problem in Equation 9 can be computationally prohibitive due to its nested structure. Following previous work [7], we use an approximate algorithm with a few unrolling steps. For example, under one-step unrolling, the update of the PRM's weights can be expressed as:
$$
\phi^{(t+1)}=\phi^{(t)}-\beta_{1}\nabla_{\phi}\mathcal{L}_{tr}(\mathcal{D}_{tr},\phi,\alpha) \tag{10}
$$
where $\beta_{1}$ is the learning rate for the lower-level optimization. After obtaining the updated PRM parameter $\phi^{(t+1)}$ from Equation 10, the domain-reweighting parameter $\alpha$ is updated as follows:
$$
\alpha^{(t+1)}=\alpha^{(t)}-\beta_{2}\nabla_{\alpha}\mathcal{L}_{meta}(\mathcal{D}_{meta},\phi^{*}(\alpha)) \tag{11}
$$
where $\beta_{2}$ is the learning rate for the upper-level optimization. The two optimization steps in Equation 10 and Equation 11 are performed iteratively until convergence, yielding the optimal PRM weights $\phi^{*}$ and the optimal domain-reweighting parameter $\alpha^{*}$.
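The alternating updates in Equations 10 and 11 can be sketched on a toy scalar problem. The following is a minimal NumPy sketch, not the paper's implementation: the quadratic per-domain losses, the targets `c` and `c_meta`, and the learning rates are all illustrative assumptions standing in for the real PRM training and meta losses.

```python
import numpy as np

# Toy stand-ins (illustrative assumptions, not the paper's setup):
# lower-level loss: L_tr(phi, alpha) = sum_d alpha_d * (phi - c_d)^2
# upper-level loss: L_meta(phi)      = (phi - c_meta)^2
c = np.array([0.0, 1.0, 2.0])  # per-domain targets (stand-ins for domain data)
c_meta = 1.5                   # meta target (stand-in for the meta dataset)
alpha = np.ones_like(c)        # domain weights, initialized to 1.0
phi = 0.0                      # scalar stand-in for the PRM parameters
b1, b2 = 0.05, 0.05            # lower-/upper-level learning rates

for _ in range(2000):
    # Eq. (10): one lower-level gradient step on the weighted training loss
    g_phi = np.sum(2.0 * alpha * (phi - c))
    phi_new = phi - b1 * g_phi
    # Eq. (11): gradient of the meta loss w.r.t. alpha through the one-step
    # unroll, using d(phi_new)/d(alpha_d) = -b1 * 2 * (phi - c_d)
    g_alpha = 2.0 * (phi_new - c_meta) * (-b1 * 2.0 * (phi - c))
    alpha = np.clip(alpha - b2 * g_alpha, 0.0, None)  # keep weights non-negative
    phi = phi_new

print(round(float(phi), 2))  # phi is pulled toward the meta target
```

The key point the sketch illustrates is that the upper level never optimizes the training loss directly: it differentiates the meta loss through the unrolled lower-level step, so domains whose gradients pull $\phi$ away from the meta objective have their weights reduced.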
## Appendix B Datasets and benchmarks
Table 2: Multimodal datasets involved in the fine-tuning of DreamPRM, organized by task category.
| Category | Datasets |
| --- | --- |
| Science | AI2D [23], ScienceQA [35], M3CoT [6] |
| Chart | ChartQA [39], DVQA [20], MapQA [3], FigureQA [21] |
| Geometry | Geo170k [15], Geometry3K [34], UniGeo [4], GeomVerse [22], GeoS [51] |
| Commonsense | IconQA [36], InfographicsVQA [40], CLEVR-Math [30] |
For the datasets used in lower-level optimization ( $\mathcal{D}_{tr}$ in Section 4), our study utilizes a diverse set of datasets spanning multiple domains to ensure comprehensive coverage of multimodal reasoning tasks, as reported in Tab. 2. The selected 15 multimodal datasets cover four major categories, namely science, chart, geometry, and commonsense, with a wide range of task types (QA, OCR, spatial understanding). Additionally, we observe that for some questions, given the current structural thinking prompts, MLLMs consistently produce either correct or incorrect answers. Continuing to sample such questions wastes computational resources. Inspired by the dynamic sampling strategy in DAPO [78], we propose a similar dynamic sampling technique for Monte Carlo estimation that focuses on prompts with varied outcomes to improve efficiency. After processing and sampling, the lower-level training set $\mathcal{D}_{tr}$ contains around 15k examples (about 1k per each of the 15 domains), while the upper-level meta dataset $\mathcal{D}_{meta}$ contains around 1k validation examples from the MMMU [79] dataset.
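The dynamic sampling idea can be sketched as a simple filter. This is a hypothetical sketch, with `sample_answer` standing in for running the MLLM on a question and checking whether its final answer is correct; it keeps only questions whose rollouts mix correct and incorrect outcomes.

```python
def varied_outcome_filter(questions, sample_answer, n_rollouts=8):
    """Sketch of dynamic sampling for Monte Carlo estimation (inspired by
    DAPO): discard questions whose rollouts are all correct or all
    incorrect, since those carry no signal for step-level reward labels.
    `sample_answer(q)` is a hypothetical helper returning True/False for
    one rollout's final-answer correctness."""
    kept = []
    for q in questions:
        outcomes = [sample_answer(q) for _ in range(n_rollouts)]
        if 0 < sum(outcomes) < n_rollouts:  # varied outcomes only
            kept.append(q)
    return kept
```

In practice such a filter would wrap the Monte Carlo rollout loop, so compute is concentrated on prompts near the model's capability boundary.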
For the dataset used in upper-level optimization ( $\mathcal{D}_{meta}$ in Section 4), we select data from MMMU [79] to simulate a realistic and diverse reasoning scenario. MMMU focuses on advanced perception and reasoning with domain-specific knowledge. Its questions span 30 subjects and 183 subfields, comprising 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures.
At evaluation time, we use five multimodal reasoning benchmarks to test the capability of DreamPRM. WeMath [48], MathVista [33], and MathVision [63] focus on math-related reasoning, logic, and critical thinking, while MMVet [78] and MMStar [5] focus on real-life tasks that require common knowledge and general reasoning abilities.
## Appendix C Structural Thinking Prompt
The detailed structural thinking prompt applied in our experiments is reported in Fig. 8. We carefully design five reasoning steps to boost the reasoning capabilities of MLLMs and enable process supervision.
<details>
<summary>figures/7-1.png Details</summary>

### Visual Description
## Chart: Function Plots & Monotonicity Question
### Overview
The image presents a chart displaying three function plots alongside a text block outlining a 5-step structural thinking process for multimodal reasoning, and a question regarding monotonicity. The chart is positioned on the right side of the image, while the text is on the left. The question asks which function is monotonic in the range [0, pi].
### Components/Axes
* **Chart Title:** None explicitly stated, but the chart depicts function plots.
* **X-axis Label:** "x"
* **Y-axis Label:** "y"
* **X-axis Scale:** Ranges from approximately -3.5 to 4.5.
* **Y-axis Scale:** Ranges from approximately -1.0 to 0.8.
* **Legend:** Located in the top-right corner of the chart.
* Red Line: Labeled as "red"
* Blue Line: Labeled as "blue"
* Green Line: Labeled as "green"
* **Question:** "Which function is monotonic in range [0, pi]?"
* **Choices:**
* (A) the red one
* (B) the blue one
* (C) both (A) and (B)
* (D) none of them
* **Answer:** (B) the blue one
* **Metadata:**
* Category: Math-targeted
* Task: Textbook question answering
* Context: Function plot
* Grade: College
* Math: Algebraic reasoning
### Detailed Analysis or Content Details
The chart displays three curves:
* **Red Line:** This line exhibits a sinusoidal pattern. It starts at approximately y=0 at x=-3.5, reaches a maximum of approximately y=0.7 at x=-1.5, crosses the x-axis at x=-0.5, reaches a minimum of approximately y=-0.7 at x=1.5, and returns to approximately y=0 at x=3.5. It is *not* monotonic.
* **Blue Line:** This line is a cubic function. It starts at approximately y=-1 at x=-3.5, increases monotonically to approximately y=0.8 at x=4.5. It is monotonic.
* **Green Line:** This line also exhibits a sinusoidal pattern, but is phase-shifted compared to the red line. It starts at approximately y=0.7 at x=-2.5, crosses the x-axis at x=-0.5, reaches a minimum of approximately y=-0.7 at x=1.5, and returns to approximately y=0.7 at x=3.5. It is *not* monotonic.
### Key Observations
* The blue line is the only function that consistently increases within the visible range, indicating monotonicity.
* The red and green lines oscillate, demonstrating non-monotonic behavior.
* The question specifically asks about monotonicity in the range [0, pi]. Since pi is approximately 3.14, the relevant portion of the x-axis is from 0 to 3.14. Within this range, the blue line continues to increase monotonically.
### Interpretation
The data demonstrates the concept of monotonicity in functions. A monotonic function either consistently increases or consistently decreases over a given interval. The blue line fulfills this condition within the specified range [0, pi], while the red and green lines do not due to their oscillatory nature. The question is designed to test understanding of this mathematical concept, and the chart provides the visual evidence to support the correct answer. The 5-step structural thinking process outlined in the text block is a method for approaching multimodal reasoning problems, such as this one, by breaking down the task into smaller, manageable steps. The metadata indicates this is a college-level algebraic reasoning problem.
</details>
Figure 8: Zero-shot prompting for structural thinking.
Table 3: Accuracy on MathVista using DreamPRM with varying numbers $k$ of CoTs.
| Model | | | | | |
| --- | --- | --- | --- | --- | --- |
| InternVL-2.5-8B-MPO [67] | 65.4 | 65.3 | 66.5 | 67.8 | 68.9 |
| GPT-4.1-mini (4-14-25) [46] | 71.5 | 71.8 | 72.5 | 73.2 | 74.4 |
Table 4: Ablation study evaluating the impact of individual components of DreamPRM
| Method | | | | | |
| --- | --- | --- | --- | --- | --- |
| DreamPRM (original) | 57.4 | 68.9 | 22.1 | 61.4 | 62.3 |
| w/o aggregation function loss | 56.3 (-1.1) | 66.1 (-2.8) | 20.1 (-2.0) | 60.0 (-1.4) | 59.6 (-2.7) |
| w/o bi-level optimization | 55.0 (-2.4) | 65.4 (-3.5) | 19.9 (-2.2) | 61.2 (-0.2) | 58.9 (-3.4) |
| w/o structural thinking | 54.6 (-2.8) | 65.7 (-3.2) | 20.3 (-1.8) | 57.5 (-3.9) | 61.6 (-0.7) |
## Appendix D Additional Experimental Results
Leaderboard performance details. Table 5 presents a comprehensive comparison of different PRM variants built upon the same o4-mini backbone. DreamPRM consistently outperforms all baselines, elevating the base o4-mini performance from 80.6 to 85.2. These steady improvements demonstrate the effectiveness of DreamPRM in enhancing reasoning accuracy through process-level supervision and promoting more reliable consensus across multiple chains of thought.
Best-of-N results. Tab. 3 reports the accuracy of two state-of-the-art models on the MathVista dataset using DreamPRM with varying numbers $k$ of CoTs. The results indicate that performance scales well with the number of CoTs.
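A PRM-guided best-of-N selection of this kind can be sketched as follows. Mean aggregation over step scores is an assumption here (the paper's aggregation function may differ), and the helper names are hypothetical.

```python
def best_of_n(cots, step_scores):
    """Sketch of PRM-guided best-of-N: among N candidate chains of thought,
    return the one whose per-step PRM scores aggregate highest.
    `step_scores[i]` is the list of PRM scores for the steps of `cots[i]`;
    mean aggregation is an illustrative choice."""
    def aggregate(scores):
        return sum(scores) / len(scores)
    best_idx = max(range(len(cots)), key=lambda i: aggregate(step_scores[i]))
    return cots[best_idx]
```

For example, given three candidate CoTs with step scores `[0.9, 0.2]`, `[0.8, 0.9]`, and `[0.5, 0.5]`, the second candidate wins under mean aggregation; increasing $k$ simply widens the candidate pool this selection draws from.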
Table 5: Comparison of different PRM variants on the o4-mini model (evaluated on eight CoTs).
| Method | Accuracy |
| --- | --- |
| o4-mini | 80.6 |
| + Self-consistency | 81.7 |
| + ORM | 80.8 |
| + Vanilla-PRM | 84.2 |
| + DreamPRM | 85.2 |
Ablation studies. The exact results of ablation experiments in the main paper are included in Tab. 4, which emphasizes the importance of all the components in DreamPRM.
Loss curves and domain weights. The loss curves and domain weights during the fine-tuning of DreamPRM are illustrated in Fig. 9. The learned weight distribution emphasizes informative mathematical figure domains while attenuating less relevant sources. Domain weights start at 1.0, quickly diverge, and stabilize after roughly half of training; the inner and outer losses decrease steadily and plateau, indicating stable convergence of the bi-level training procedure.
Case study. A complete case study illustrating DreamPRM's step-wise evaluation is reported in Fig. 10. DreamPRM assigns higher scores to high-quality, coherent reasoning steps while penalizing flawed or unsupported steps.
<details>
<summary>figures/6-3.png Details</summary>

### Visual Description
## Charts: Optimization Loss and Domain Weights during Training
### Overview
The image presents two line charts visualizing the training process of a model. The left chart shows the Upper and Lower Optimization Loss over Training Progress. The right chart displays the Domain Weights for various datasets over the same Training Progress. Both charts share a common x-axis representing Training Progress from 0.0 to 1.0.
### Components/Axes
**Left Chart (Upper & Lower Optimization Loss):**
* **X-axis:** Training Progress (0.0 to 1.0)
* **Y-axis:** Loss (approximately 0.21 to 0.26)
* **Legend:**
* Upper Optimization Loss (Orange)
* Lower Optimization Loss (Yellow)
**Right Chart (Domain Weights):**
* **X-axis:** Training Progress (0.0 to 1.0)
* **Y-axis:** Domain Weight (approximately 0.4 to 1.6)
* **Legend:**
* ai2d (Red)
* chartqa (Orange)
* m3cot (Green)
* scienceqa (Light Blue)
* mapqa (Yellow)
* unigeo (Purple)
* geomverse (Dark Blue)
* iconqa (Pink)
* dvqa (Brown)
* figureqa (Teal)
* infographics (Gray)
* geos (Cyan)
### Detailed Analysis or Content Details
**Left Chart (Optimization Loss):**
* **Upper Optimization Loss (Orange):** The line starts at approximately 0.255 at Training Progress 0.0, fluctuates significantly, reaching a peak around 0.258 at approximately 0.15, and generally decreases to around 0.235 at Training Progress 1.0. There are numerous oscillations throughout the training process.
* **Lower Optimization Loss (Yellow):** The line begins at approximately 0.225 at Training Progress 0.0, exhibits fluctuations, reaching a minimum around 0.218 at approximately 0.35, and increases to around 0.228 at Training Progress 1.0. It generally remains lower than the Upper Optimization Loss.
**Right Chart (Domain Weights):**
* **ai2d (Red):** Starts at approximately 1.1, decreases to around 0.8 at Training Progress 0.2, then increases to approximately 1.2 at Training Progress 1.0.
* **chartqa (Orange):** Starts at approximately 0.9, decreases to around 0.6 at Training Progress 0.2, then increases to approximately 0.9 at Training Progress 1.0.
* **m3cot (Green):** Starts at approximately 0.75, remains relatively stable around 0.7-0.8 throughout the training process.
* **scienceqa (Light Blue):** Starts at approximately 0.8, decreases to around 0.5 at Training Progress 0.2, then increases to approximately 0.7 at Training Progress 1.0.
* **mapqa (Yellow):** Starts at approximately 0.8, decreases to around 0.6 at Training Progress 0.2, then increases to approximately 0.8 at Training Progress 1.0.
* **unigeo (Purple):** Starts at approximately 1.3, decreases to around 1.0 at Training Progress 0.2, then increases to approximately 1.3 at Training Progress 1.0.
* **geomverse (Dark Blue):** Starts at approximately 1.1, decreases to around 0.8 at Training Progress 0.2, then increases to approximately 1.1 at Training Progress 1.0.
* **iconqa (Pink):** Starts at approximately 1.2, decreases to around 0.9 at Training Progress 0.2, then increases to approximately 1.2 at Training Progress 1.0.
* **dvqa (Brown):** Starts at approximately 0.9, decreases to around 0.6 at Training Progress 0.2, then increases to approximately 0.9 at Training Progress 1.0.
* **figureqa (Teal):** Starts at approximately 0.8, decreases to around 0.6 at Training Progress 0.2, then increases to approximately 0.8 at Training Progress 1.0.
* **infographics (Gray):** Starts at approximately 0.7, remains relatively stable around 0.7-0.8 throughout the training process.
* **geos (Cyan):** Starts at approximately 0.5, remains relatively stable around 0.5-0.6 throughout the training process.
### Key Observations
* The Upper Optimization Loss is consistently higher than the Lower Optimization Loss, suggesting a potential imbalance in the optimization process.
* The Domain Weights exhibit varying degrees of fluctuation during training. Some datasets (e.g., m3cot, infographics, geos) maintain relatively stable weights, while others (e.g., ai2d, chartqa) show more significant changes.
* Several datasets (ai2d, chartqa, unigeo, geomverse, iconqa, dvqa, figureqa, mapqa, scienceqa) show a dip in weight around Training Progress 0.2, followed by an increase.
### Interpretation
The charts illustrate the training dynamics of a model likely being trained with a domain adaptation or multi-domain learning approach. The optimization loss curves suggest that the training process is not entirely smooth, with oscillations indicating potential challenges in convergence. The domain weights reveal how the model's attention shifts between different datasets during training. The initial decrease in weights for several datasets around Training Progress 0.2 could indicate a period of initial adjustment or forgetting, followed by a recovery as the model learns to balance the contributions of different domains. The relatively stable weights for datasets like m3ot, infographics, and geoss suggest that the model is consistently attending to these domains throughout the training process. The difference between the upper and lower optimization loss could indicate a need for adjustments to the learning rate or optimization algorithm to achieve better convergence and balance. The overall trend suggests the model is learning, but further analysis might be needed to understand the specific reasons for the observed fluctuations and imbalances.
</details>
Figure 9: Optimization loss curves and dynamic domain weights throughout DreamPRM fine-tuning.
<details>
<summary>x9.png Details</summary>

### Visual Description
## Diagram: Problem Solving Steps with Function Graph
### Overview
The image presents a step-by-step solution to a calculus problem involving the derivative of the function f(x) = |2x - 3| + 1. The problem asks to compare the derivative of the function at x=2 with that at x=5. The solution is broken down into five steps, each with supporting reasoning and a "DreamPRM" score. A graph of the function is displayed alongside the steps.
### Components/Axes
* **Function Graph:** The graph of f(x) = |2x - 3| + 1 is plotted on a coordinate plane.
* X-axis: Ranges from approximately -5 to 10, labeled with numerical markers.
* Y-axis: Ranges from approximately 0 to 10, labeled with numerical markers.
* **Problem Statement:** "Question: The derivative of f(x) at x=2 is ____ that at x=5"
* **Choices:** (A) larger than (B) equal to (C) smaller than
* **Ground Truth:** B
* **Benchmark:** MathVista
* **Step-by-Step Solution:** Five steps are outlined, each with a description and a "DreamPRM" score.
### Detailed Analysis or Content Details
**Function Graph Details:**
The graph is a V-shaped absolute value function.
* The vertex of the V is at approximately (1.5, 1).
* The graph is linear with a slope of 2 for x > 1.5.
* The graph is linear with a slope of -2 for x < 1.5.
**Step 1:**
* Text: "The question is asking whether the derivative of the function f(x) = |2x - 3| + 1 at x = 2 is larger than, equal to, or smaller than the derivative at x = 5."
* DreamPRM: 0.628
**Step 2:**
* Text: "The image shows the graph of the function f(x) = |2x - 3| + 1."
* Text: "The graph has a V-shape, indicating it is an absolute value function."
* DreamPRM: 0.575
**Step 3:**
* Text: "Identify any background knowledge needed."
* Text: "The derivative of an absolute value function changes at the point where the expression inside the absolute value is zero."
* Text: "For f(x) = |2x - 3| + 1, the expression inside the absolute value is (2x - 3)."
* Text: "The derivative of |2x - 3| + 1 is (2) for (x > 1.5) and (-2) for (x < 1.5)."
* DreamPRM: 0.598
**Step 4:**
* Text: "Reason with the current evidence."
* Text: "The point where the derivative changes is at (x = 1.5)."
* Text: "For (x = 2), the function is in the interval where (x > 1.5), so the derivative is (2)."
* Text: "For (x = 5), the function is also in the interval where (x > 1.5), so the derivative is (2)."
* DreamPRM: 0.748
**Step 5:**
* Text: "Summarize and conclude with all the information."
* Text: "Both the derivatives at x = 2 and x = 5 are 2, so they are equal."
* DreamPRM: 0.812
**Final Answer:** B
**Second Problem Instance:**
**Step 1:**
* Text: "The question is asking how the derivative of f(x) at x = 2 compares to the derivative of f(x) at x = 5."
* DreamPRM: 0.676
**Step 2:**
* Text: "The image shows the graph of the function f(x) = |2x - 3| + 1."
* Text: "There are points marked on the graph at x = 2 and x = 5."
* DreamPRM: 0.499
**Step 3:**
* Text: "Identify any background knowledge needed."
* Text: "The derivative of an absolute value function changes at the point where the expression inside the absolute value is zero."
* Text: "For f(x) = |2x - 3| + 1, the expression inside the absolute value is (2x - 3)."
* Text: "The derivative of |2x - 3| + 1 is (2) for (x > 1.5) and (-2) for (x < 1.5)."
* DreamPRM: 0.633
**Step 4:**
* Text: "Reason with the current evidence."
* Text: "The point where the derivative changes is at (x = 1.5)."
* Text: "For (x = 2), the function is in the interval where (x > 1.5), so the derivative is (2)."
* Text: "For (x = 5), the function is also in the interval where (x > 1.5), so the derivative is (2)."
* DreamPRM: 0.801
**Step 5:**
* Text: "Summarize and conclude with all the information."
* Text: "Both the derivatives at x = 2 and x = 5 are 2, so they are equal."
* DreamPRM: 0.888
**Final Answer:** B
### Key Observations
* The "DreamPRM" score generally increases over the course of the solution, dipping at Step 2 in both instances before rising monotonically through Step 5, indicating growing confidence in the later reasoning steps.
* The solution correctly identifies the key property of absolute value functions: the derivative changes at the point where the expression inside the absolute value is zero.
* Both instances of the problem lead to the same answer (B), confirming the consistency of the solution.
### Interpretation
The diagram demonstrates a problem-solving approach to a calculus problem, specifically finding and comparing derivatives of an absolute value function. The step-by-step breakdown, coupled with the visual aid of the graph, provides a clear and logical explanation. The "DreamPRM" scores suggest a confidence level associated with each step, potentially indicating the system's internal assessment of its reasoning. The consistent results across two similar problem instances reinforce the validity of the approach. The diagram highlights the importance of understanding the properties of absolute value functions and applying them to derivative calculations. The use of a benchmark (MathVista) suggests this is part of a larger system for evaluating and improving problem-solving capabilities.
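Under best-of-N test-time scaling, per-step scores like those in the case study above are combined into a single solution-level score, and the candidate chain with the highest aggregate is selected. The aggregation rules below (mean, product, minimum) are common illustrative choices, not necessarily DreamPRM's exact rule; the step scores are taken from the two chains in Figure 10.

```python
import math

def aggregate(step_scores, mode="mean"):
    """Combine per-step PRM scores into one solution-level score.
    'mean' averages; 'prod' multiplies (penalizing any weak step);
    'min' scores a chain by its weakest step. Illustrative choices only.
    """
    if mode == "prod":
        return math.prod(step_scores)
    if mode == "min":
        return min(step_scores)
    return sum(step_scores) / len(step_scores)

# Per-step DreamPRM scores from the two solution chains in Figure 10
chain_a = [0.628, 0.575, 0.598, 0.748, 0.812]
chain_b = [0.676, 0.499, 0.633, 0.801, 0.888]

# Best-of-N selection: keep the chain with the highest aggregate score
best = max([chain_a, chain_b], key=lambda s: aggregate(s, "mean"))
```

Here the second chain wins on the mean (about 0.699 vs. 0.672), even though it contains the single weakest step (0.499), which is why the choice of aggregation rule matters: a `min` aggregator would rank the chains the other way.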
</details>
Figure 10: A case study of DreamPRM's step-wise evaluation.
## Appendix E Limitations & Future Work
DreamPRM currently assumes a fixed set of domains and requires Monte Carlo sampling, which can be computationally expensive. Future work could explore instance-level reweighting, adaptive sampling strategies, and integration with retrieval-augmented generation to further reduce compute while broadening coverage. We will release code, trained weights, and evaluation scripts to facilitate reproducibility and community adoption.