# Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models
**Authors**: Xihe Qiu, Haoyu Wang, Xiaoyu Tan, Chao Qu, Yujie Xiong, Yuan Cheng, Yinghui Xu, Wei Chu, Yuan Qi
## Abstract
Effective collaboration in multi-agent systems requires communicating goals and intentions between agents. Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication, frequently leading to suboptimal multi-agent reinforcement learning (MARL) policies and inadequate task coordination. To address these challenges, we present a framework for training large language models (LLMs) as collaborative agents to enable coordinated behaviors in cooperative MARL. Each agent maintains a private intention consisting of its current goal and associated sub-tasks. Agents broadcast their intentions periodically, allowing other agents to infer coordination tasks. A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates. The architecture of our framework is structured into planning, grounding, and execution modules. During execution, multiple agents interact in a downstream environment and communicate intentions to enable coordinated behaviors. The grounding module dynamically adapts comprehension strategies based on emerging coordination patterns, while feedback from execution agents influences the planning module, enabling the dynamic re-planning of sub-tasks. Results in collaborative environment simulation demonstrate that intention propagation reduces miscoordination errors by aligning sub-task dependencies between agents. Agents learn when to communicate intentions and which teammates require task details, resulting in emergent coordinated behaviors. This demonstrates the efficacy of intention sharing for cooperative multi-agent RL based on LLMs.
## 1 Introduction
With the recent advancements of large language models (LLMs), developing intelligent agents that can perform complex reasoning and long-horizon planning has attracted increasing research attention Sharan et al. (2023); Huang et al. (2022). A variety of agent frameworks have been proposed, such as ReAct Yao et al. (2022), LUMOS Yin et al. (2023), Chameleon Lu et al. (2023) and BOLT Chiu et al. (2024). These frameworks typically consist of modules for high-level planning, grounding plans into executable actions, and interacting with environments or tools to execute actions Rana et al. (2023).
Despite their initial success, existing agent frameworks face several limitations. First, most rely on a single agent for execution Song et al. (2023); Hartmann et al. (2022). As tasks grow more complex, the action space can expand exponentially, which poses significant challenges for a single agent handling all execution functionalities Chebotar et al. (2023); Wen et al. (2023). Second, existing frameworks lack inter-module communication mechanisms: execution results are typically fed directly into the planning module without further analysis or coordination Zeng et al. (2023); Wang et al. (2024b), so when execution failures occur, the agent may fail to adjust its strategies accordingly Chaka (2023). Third, the grounding module in existing frameworks operates statically, without interaction with downstream modules: it grounds plans independently, ignoring feedback and the state of the execution module Xi et al. (2023). As a result, LLMs struggle to handle emergent coordination behaviors and lack common grounding on shared tasks. Moreover, existing multi-agent reinforcement learning (MARL) methods often converge to suboptimal policies that fail to exhibit meaningful cooperation Gao et al. (2023); Yu et al. (2023).
How can agents built on LLMs effectively communicate and collaborate with each other? We propose a novel approach, Recursive Multi-Agent Learning with Intention Sharing (ReMALIS; the code can be accessed at https://github.com/AnonymousBoy123/ReMALIS), to address the limitations of existing cooperative artificial intelligence (AI) multi-agent frameworks with LLMs. ReMALIS employs intention propagation between LLM agents to enable a shared understanding of goals and tasks. This common grounding allows agents to align intentions and reduce miscoordination. Additionally, we introduce bidirectional feedback loops between downstream execution agents and upstream planning and grounding modules. This enables execution coordination patterns to guide adjustments in grounding strategies and planning policies, resulting in more flexible emergent behaviors Topsakal and Akinci (2023). By integrating these mechanisms, ReMALIS significantly improves the contextual reasoning and adaptive learning capabilities of LLM agents during complex collaborative tasks. The execution module utilizes specialized agents that collaboratively execute actions, exchange information, and propagate intentions via intention networks. These propagated intentions reduce miscoordination errors and guide grounding module adjustments to enhance LLM comprehension based on coordination patterns Dong et al. (2023). Furthermore, execution agents can provide feedback to prompt collaborative re-planning in the planning module when necessary.
Compared to single-agent frameworks, the synergistic work of multiple specialized agents enhances ReMALIS ’s collective intelligence and leads to emerging team-level behaviors Wang et al. (2023). The collaborative design allows for dealing with more complex tasks that require distributed knowledge and skills. We demonstrate that:
- Intention propagation between execution agents enables emergent coordination behaviors and reduces misaligned sub-tasks.
- Grounding module strategies adjusted by intention sharing improve LLM scene comprehension.
- Planning module re-planning guided by execution feedback increases goal-oriented coordination.
Compared to various single-agent baselines and existing state-of-the-art MARL Hu and Sadigh (2023); Zou et al. (2023) methods using LLMs, our ReMALIS framework demonstrates improved performance on complex collaborative tasks, utilizing the publicly available large-scale traffic flow prediction (TFP) dataset and web-based activities dataset. This demonstrates its effectiveness in deploying LLMs as collaborative agents capable of intention communication, strategic adjustments, and collaborative re-planning Du et al. (2023).
## 2 Preliminary
In this section, we introduce the methods of the proposed ReMALIS framework in detail. As illustrated in Figure 1, ReMALIS consists of four key components:
Figure 1: This framework introduces a multi-agent learning strategy designed to enhance the capabilities of LLMs through cooperative coordination. It enables agents to collaborate and share intentions for effective coordination, and utilizes recursive reasoning to model and adapt to each other’s strategies.
Planning Module $p_{\theta}$ predicts the next pending sub-goal $s_{t+1}$ given the current sub-goal $s_{t}$ and other inputs: $s_{t+1}=p_{\theta}(s_{t},I_{t},e_{t},f_{t})$, where $I_{t}$ is the current intention, $e_{t}$ is the grounded embedding, and $f_{t}$ is agent feedback. $p_{\theta}$ first encodes its inputs through encoding layers, $h_{t}=\text{Encoder}(s_{t},I_{t},e_{t},f_{t})$, and subsequently predicts the sub-goal via $s_{t+1}=\text{Softmax}(T_{\theta}(h_{t}))$, where $T_{\theta}$ utilizes a graph neural network (GNN) architecture.
The module is trained to maximize the likelihood of all sub-goals along the decision sequences given the current information on time step $t$ . This allows the dynamic re-planning of sub-task dependencies based on agent feedback.
$$
\theta^{*}=\arg\max_{\theta}\prod_{t=1}^{T}p_{\theta}(s_{t+1}|s_{t},I_{t},e_{t},f_{t}). \tag{1}
$$
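To make the prediction step concrete, here is a minimal pure-Python sketch of the planning module's sub-goal distribution and the negative log-likelihood minimized by Eq. (1); the linear encoder and scalar features are illustrative stand-ins for the paper's Encoder and GNN $T_{\theta}$, not the actual architecture:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def encode(state, intention, embedding, feedback, weights):
    """Toy linear encoder standing in for Encoder + T_theta.
    Each input is a scalar feature; `weights` maps the 4 features
    to one logit per candidate sub-goal."""
    features = [state, intention, embedding, feedback]
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

def predict_subgoal(state, intention, embedding, feedback, weights):
    """p_theta(s_{t+1} | s_t, I_t, e_t, f_t) as a softmax over sub-goals."""
    return softmax(encode(state, intention, embedding, feedback, weights))

def nll(trajectory, weights):
    """Negative log-likelihood of observed next sub-goals along a decision
    sequence, i.e. the quantity minimized by the arg-max objective in Eq. (1)."""
    loss = 0.0
    for (s, i, e, f), next_goal in trajectory:
        probs = predict_subgoal(s, i, e, f, weights)
        loss -= math.log(probs[next_goal])
    return loss
```

Maximizing the product in Eq. (1) is equivalent to minimizing this summed negative log-likelihood over trajectories.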
Grounding Module $g_{\phi}$ contextualizes symbol embeddings $e_{t}=g_{\phi}(s_{t},I_{t},f_{1:t})$ , where $s_{t}$ , $I_{t}$ , and $f_{1:t}$ represent the states, intention, and feedback up to time step $t$ , respectively. These embeddings are processed by encoders $h_{t}=\text{Encoder}(s_{t},I_{t},f_{1:t})$ and then by cross-attention layers and convolutional feature extractors: $e_{t}=\text{Conv}(\text{Attn}(h_{t},V))+P_{t}$ over vocabulary $V$ . Here, $P_{t}$ incorporates agent feedback so that coordination signals improve grounding accuracy and contextual understanding. The module maps language symbols to physical environment representations through:
$$
g(x)=f_{\theta}\left(\sum_{i=1}^{N}w_{i}g(x_{i})\right), \tag{2}
$$
where $g(x)$ is the grounded embedding of policy set $x$ , $g(x_{i})$ is the individual action embedding of agent $i$ , and $w_{i}$ are learnable weights. The grounding function $f_{\theta}$ utilizes a GNN architecture for structural composition. Additionally, we employ an uncertainty modeling module that represents ambiguities in grounding:
$$
q_{\phi}(z|x)=\text{Normal}\big{(}z;\mu_{\phi}(x),\sigma^{2}_{\phi}(x)\big{)}, \tag{3}
$$
where $z$ is a latent variable modeled as a normal distribution, enabling the capture of multimodal uncertainties in grounding.
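A toy sketch of this uncertainty model, with scalar weights `w_mu` and `w_sigma` standing in for the networks $\mu_{\phi}$ and $\sigma_{\phi}$ (our illustration, not the paper's parameterization):

```python
import math
import random

def gaussian_params(x, w_mu, w_sigma):
    """Toy amortized encoder for q_phi(z|x): mean and (positive) std."""
    mu = w_mu * x
    sigma = math.exp(w_sigma * x)  # exponential keeps sigma > 0
    return mu, sigma

def sample_latent(x, w_mu, w_sigma, rng=random):
    """Draw z ~ Normal(mu_phi(x), sigma_phi(x)^2)."""
    mu, sigma = gaussian_params(x, w_mu, w_sigma)
    return rng.gauss(mu, sigma)

def grounding_entropy(sigma):
    """Entropy of a 1-D Gaussian; higher entropy = more ambiguous grounding."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
```

The entropy of $q_{\phi}(z|x)$ gives one scalar proxy for how ambiguous a grounding is, which the feedback signals in Section 3.2 build on.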
Cooperative Execution Module comprises $N$ specialized agents $\{A_{1},...,A_{N}\}$ . This architecture avoids using a single agent to handle all tasks. Instead, each agent is dedicated to a distinct semantic domain, cultivating expertise specific to that domain. For instance, agents $A_{1},A_{2},$ and $A_{3}$ may be dedicated to query processing, information retrieval, and arithmetic operations, respectively. This specialization promotes an efficient distribution of tasks and reduces overlap in capabilities.
Decomposing skills into specialized agents risks creating isolated capabilities that lack coordination. To address this, it is essential that agents not only excel individually but also comprehend the capacities and limitations of their peers. We propose an integrated training approach where specialized agents are trained simultaneously to foster collaboration and collective intelligence. We represent the parameters of agent $A_{i}$ as $\theta_{i}$ . Each agent’s policy, denoted as $y_{i}\sim\pi_{\theta_{i}}(\cdot|s)$ , samples an output $y_{i}$ from a given input state $s$ . The training objective for our system is defined by the following equation:
$$
L_{exe}=\sum_{i=1}^{N}\mathbb{E}_{(s,y^{\star})\sim\mathcal{D}}{\ell(\pi_{\theta_{i}}(y_{i}|s),y^{\star})}, \tag{4}
$$
where $\ell(\cdot)$ represents the task-specific loss function, comparing the agent-generated output $y_{i}$ with the ground-truth label $y^{\star}$ . $\mathcal{D}$ denotes the distribution of training data. By optimizing this objective collectively across all agents, each agent not only improves its own output accuracy but also enhances the overall team’s ability to produce coherent and well-coordinated results.
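The joint objective of Eq. (4) can be sketched as a plain summation over agents; the toy policies and squared-error `loss_fn` below are hypothetical illustrations:

```python
def joint_execution_loss(policies, batch, loss_fn):
    """L_exe: sum over agents of the expected per-example loss (Eq. 4).
    `policies` is a list of callables s -> y_i, one per specialized agent;
    `batch` is a list of (state, ground_truth) pairs drawn from D."""
    total = 0.0
    for policy in policies:
        total += sum(loss_fn(policy(s), y_star) for s, y_star in batch) / len(batch)
    return total

# Usage: two toy agents, squared-error loss.
agents = [lambda s: s, lambda s: 2 * s]
loss = joint_execution_loss(agents, [(1.0, 2.0)], lambda y, t: (y - t) ** 2)
```

Because the loss is summed across all agents before optimization, gradients from each agent's errors shape the shared training signal, which is what the text means by optimizing "collectively".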
During training, we adjust the decomposition of grounding tasks to enhance collaboration, represented by the soft module weights $\{w_{1},...,w_{N}\}$ . These weights indicate how the distribution of grounding commands can be optimized to better utilize the capabilities of different agents. The objective of this training is defined by the loss function $L_{com}=\ell(d,w^{\star})$ , where $\ell$ is the loss function, $d$ denotes sub-goal task instruction data, and $w^{\star}$ is the optimal set of weights.
Figure 2: Overview of the proposed ReMALIS: This framework comprises a planning module, grounding module, cooperative execution module, and intention coordination channels.
## 3 Approach
The collaborative MARL of ReMALIS focuses on three key points: intention propagation for grounding, bidirectional coordination channels, and integration with recursive reasoning agents. Detailed parameter supplements and pseudocode details can be found in Appendix C and Appendix F.
### 3.1 Planning with Intention Propagation
We formulate a decentralized, partially observable Markov game for multi-agent collaboration. Each agent $i$ maintains a private intention $\mathcal{I}_{i}$ encoded as a tuple $\mathcal{I}_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ , where $\gamma_{i}$ is the current goal, $\Sigma_{i}=\{\sigma_{i1},\sigma_{i2},\ldots\}$ is a set of related sub-goals, $\pi_{i}(\sigma)$ is a probability distribution over possible next sub-goals, and $\delta_{i}(\sigma)$ is the desired teammate assignment for sub-goal $\sigma$ .
Intentions are propagated through a communication channel $f_{\Lambda}$ parameterized by $\Lambda$ . For a received message $m_{ij}$ from agent $j$ , agent $i$ infers a belief over teammate $j$ 's intention $b_{i}(\mathcal{I}_{j}|m_{ij})=f_{\Lambda}(m_{ij})$ , where $f_{\Lambda}$ is a recurrent neural network. The channel $f_{\Lambda}$ is trained end-to-end to maximize the coordination reward function $R_{c}$ . This propagates relevant sub-task dependencies to enhance common grounding on collaborative goals.
$$
\Lambda^{*}=\arg\max_{\Lambda}\mathbb{E}_{\mathcal{I},m\sim f_{\Lambda}}[R_{c}(\mathcal{I},m)]. \tag{5}
$$
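A toy stand-in for the belief-inference step $b_{i}(\mathcal{I}_{j}|m_{ij})=f_{\Lambda}(m_{ij})$: score each candidate teammate goal against the received message and normalize. The dot-product scoring is our simplification, not the paper's recurrent architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def infer_belief(message, goal_embeddings):
    """b_i(I_j | m_ij): a distribution over teammate j's candidate goals.
    Each goal embedding is scored by dot product with the message vector."""
    scores = [sum(m * g for m, g in zip(message, goal))
              for goal in goal_embeddings]
    return softmax(scores)
```

A message aligned with one goal embedding concentrates the belief on that goal, which is how propagated intentions let agents anticipate teammates' sub-task choices.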
At each time step $t$ , the LLM processes inputs comprising the agent's state $s_{t}$ , the intention $\mathcal{I}_{t}$ , and the feedback $f_{1:t}$ .
### 3.2 Grounding with Bidirectional Coordination Channels
The execution agent policies, denoted by $\pi_{\xi}(a_{i}|s_{i},\mathcal{I}_{i})$ , are parameterized by $\xi$ and conditioned on the agent’s state $s_{i}$ and intention $\mathcal{I}_{i}$ . Emergent coordination patterns are encoded in a summary statistic $c_{t}$ and passed to upstream modules to guide planning and grounding adjustments. For example, frequent miscoordination on sub-goal $\sigma$ indicates the necessity to re-plan $\sigma$ dependencies in $\mathcal{I}$ .
This bidirectional feedback aligns low-level execution with high-level comprehension strategies. In addition to the downstream propagation of intentions, execution layers provide bidirectional feedback signals $\psi(t)$ to upstream modules, $\psi(t)=\Phi(h^{\text{exec}}_{t})$:
$$
h^{\text{exec}}_{t}=[\phi_{1}(o_{1}),\ldots,\phi_{N}(o_{N})], \tag{6}
$$
where $\Phi(\cdot)$ aggregates agent encodings to summarize emergent coordination, and $\phi_{i}(\cdot)$ encodes the observation $o_{i}$ for agent $i$ .
Execution agents generate feedback $f_{t}$ to guide upstream LLM modules through: $f_{t}=g_{\theta}(\tau_{1:t})$ , where $g_{\theta}$ processes the action-observation history $\tau_{1:t}$ . These signals include coordination errors $\mathcal{E}_{t}$ which indicate misalignment of sub-tasks; grounding uncertainty $\mathcal{U}_{t}$ , measured as entropy over grounded symbol embeddings; and re-planning triggers $\mathcal{R}_{t}$ , which flag the need for sub-task reordering. These signals can reflect inconsistencies between sub-task objectives, the ambiguity of symbols in different contexts, and the need to adjust previous sub-task sequencing.
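The three feedback signals can be illustrated with simple proxies: entropy for $\mathcal{U}_{t}$, assignment-mismatch rate for $\mathcal{E}_{t}$, and thresholding for $\mathcal{R}_{t}$. The threshold values below are our illustrative choices, not values from the paper:

```python
import math

def grounding_uncertainty(probs):
    """U_t: Shannon entropy over a grounded symbol's candidate meanings."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def coordination_error(assigned, executed):
    """E_t: fraction of sub-tasks whose executing agent differs from the
    teammate assignment delta_i(sigma) carried in the broadcast intention."""
    mismatches = sum(1 for a, e in zip(assigned, executed) if a != e)
    return mismatches / len(assigned)

def replan_trigger(error, uncertainty, err_thresh=0.5, unc_thresh=1.0):
    """R_t: flag re-planning when miscoordination or ambiguity is high.
    Thresholds are illustrative hyperparameters."""
    return error > err_thresh or uncertainty > unc_thresh
```

In ReMALIS these signals flow upstream: $\mathcal{E}_{t}$ and $\mathcal{U}_{t}$ steer grounding adjustments, while $\mathcal{R}_{t}$ prompts the planning module to reorder sub-tasks.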
Algorithm 1 ReMALIS: Recursive Multi-Agent Learning with Intention Sharing
1: Initialize LLM parameters $\theta,\phi,\omega$
2: Initialize agent policies $\pi_{\xi}$ , communication channel $f_{\Lambda}$
3: Initialize grounding confusion matrix $C$ , memory $M$
4: for each episode do
5: for each time step $t$ do
6: Observe states $s_{t}$ and feedback $f_{1:t}$ for all agents
7: Infer intentions $\mathcal{I}_{t}$ from $s_{t},f_{1:t}$ using $\text{LLM}_{\theta}$
8: Propagate intentions $\mathcal{I}_{t}$ through channel $f_{\Lambda}$
9: Compute grounded embeddings $e_{t}=g_{\phi}(s_{t},\mathcal{I}_{t},f_{1:t})$
10: Predict sub-tasks $\Sigma_{t+1}=p_{\theta}(\mathcal{I}_{t},e_{t},f_{1:t})$
11: Generate actions $a_{t}=a_{\omega}(e_{t},\Sigma_{t+1},f_{1:t})$
12: Execute actions $a_{t}$ and observe rewards $r_{t}$ , new states $s_{t+1}$
13: Encode coordination patterns $c_{t}=\Phi(h^{\text{exec}}_{t})$
14: Update grounding confusion $C_{t},M_{t}$ using $c_{t}$
15: Update policies $\pi_{\xi}$ using $R$ and auxiliary loss $\mathcal{L}_{\text{aux}}$
16: Update LLM $\theta,\phi,\omega$ using $\mathcal{L}_{\text{RL}},\mathcal{L}_{\text{confusion}}$
17: end for
18: end for
### 3.3 Execution: Integration with Reasoning Agents
#### 3.3.1 Agent Policy Generation
We parameterize agent policies $\pi_{\theta}(a_{t}|s_{t},\mathcal{I}_{t},c_{1:t})$ using an LLM with weights $\theta$ . At each time step, the LLM takes as input the agent’s state $s_{t}$ , intention $\mathcal{I}_{t}$ , and coordination feedback $c_{1:t}$ . The output is a distribution over the next actions $a_{t}$ :
$$
\pi_{\theta}(a_{t}|s_{t},\mathcal{I}_{t},c_{1:t})=\text{LLM}_{\theta}(s_{t},\mathcal{I}_{t},c_{1:t}). \tag{7}
$$
To leverage agent feedback $f_{1:t}$ , we employ an auxiliary regularization model $\hat{\pi}_{\phi}(a_{t}|s_{t},f_{1:t})$ :
$$
\mathcal{L}_{\text{aux}}(\theta;s_{t},f_{1:t})=\text{MSE}(\pi_{\theta}(s_{t}),\hat{\pi}_{\phi}(s_{t},f_{1:t})), \tag{8}
$$
where $\hat{\pi}_{\phi}$ is a feedback-conditioned policy approximation. The training loss to optimize $\theta$ is:
$$
\mathcal{L}(\theta)=\mathcal{L}_{\text{RL}}(\theta)+\lambda\mathcal{L}_{\text{aux}}(\theta), \tag{9}
$$
where $\mathcal{L}_{\text{RL}}$ is the reinforcement learning objective and $\lambda$ a weighting factor.
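Eqs. (8) and (9) combine as follows in a minimal sketch, with list-valued policy outputs standing in for action distributions (the $\lambda$ value is illustrative):

```python
def mse(p, q):
    """Mean squared error between two equal-length action-score vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)

def total_loss(rl_loss, policy_out, feedback_policy_out, lam=0.1):
    """L(theta) = L_RL + lambda * L_aux (Eqs. 8-9): the auxiliary MSE term
    pulls the main policy toward the feedback-conditioned approximation
    pi_hat_phi, injecting agent feedback into the gradient signal."""
    return rl_loss + lam * mse(policy_out, feedback_policy_out)
```

Larger `lam` makes the policy track the feedback-conditioned approximation more tightly at the possible expense of the pure RL objective.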
#### 3.3.2 Grounding Strategy Adjustment
We model action dependencies using a graph neural policy module $h_{t}^{a}=\text{GNN}(s_{t},a)$ , where $h_{t}^{a}$ models interactions between action $a$ and the state $s_{t}$ . The policy is then given by $\pi_{\theta}(a_{t}|s_{t})=\prod_{i=1}^{|A|}h_{t}^{a_{i}}$ . This captures the relational structure in the action space, enabling coordinated action generation conditioned on agent communication.
The coordination feedback $c_{t}$ is used to guide adjustments in the grounding module’s strategies. We define a grounding confusion matrix $C_{t}$ , where $C_{t}(i,j)$ represents grounding errors between concepts $i$ and $j$ . The confusion matrix constrains LLM grounding as:
$$
f_{\phi}(s_{t},\mathcal{I}_{t})=\text{LLM}_{\phi}(s_{t},\mathcal{I}_{t})\odot\lambda C_{t} \tag{10}
$$
where $\odot$ is element-wise multiplication and $\lambda$ controls the influence of $C_{t}$ , reducing uncertainty on error-prone concept pairs.
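Eq. (10)'s element-wise modulation, sketched over plain nested lists (the matrix shapes and $\lambda$ value are illustrative):

```python
def modulate_grounding(llm_scores, confusion, lam=0.1):
    """Eq. (10): element-wise (Hadamard) product of the LLM's grounding
    scores with lambda * C_t. `llm_scores` and `confusion` are
    equal-shape matrices given as lists of lists."""
    return [[s * (lam * c) for s, c in zip(srow, crow)]
            for srow, crow in zip(llm_scores, confusion)]
```

The modulation rescales each concept-pair score in proportion to its recorded confusion, letting the grounding module treat error-prone pairs differently from reliable ones.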
We propose a modular regularization approach, with the grounding module $g_{\phi}$ regularized by a coordination confusion estimator:
$$
\mathcal{L}_{\text{confusion}}=\frac{1}{N}\sum_{i,j}A_{\psi}(c_{i},c_{j})\cdot\text{Conf}(c_{i},c_{j}) \tag{11}
$$
where $\text{Conf}(c_{i},c_{j})$ measures confusion between concepts $c_{i}$ and $c_{j}$ , and $A_{\psi}(c_{i},c_{j})$ are attention weights assigning importance based on grounding sensitivity.
An episodic confusion memory $M_{t}$ accumulates long-term grounding uncertainty statistics:
$$
M_{t}(i,j)=M_{t-1}(i,j)+\mathbb{I}(\text{Confuse}(c_{i},c_{j})_{t}), \tag{12}
$$
where $\mathbb{I}(\cdot)$ are indicator functions tracking confusion events. By regularizing with a coordination-focused confusion estimator and episodic memory, the grounding module adapts to avoid miscoordination.
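As a concrete illustration, the attention-weighted confusion loss of Eq. (11) and the episodic memory update of Eq. (12) can be sketched in pure Python; the dict-based representation of concept pairs is our choice, not the paper's:

```python
def confusion_loss(pairs, attn, conf):
    """Eq. (11): average attention-weighted confusion over concept pairs.
    `attn` holds A_psi(c_i, c_j); `conf` holds Conf(c_i, c_j)."""
    return sum(attn[p] * conf[p] for p in pairs) / len(pairs)

def update_memory(memory, confused_pairs):
    """Eq. (12): accumulate indicator counts of confusion events, giving
    long-term grounding-uncertainty statistics per concept pair (i, j)."""
    for pair in confused_pairs:
        memory[pair] = memory.get(pair, 0) + 1
    return memory
```

Pairs that accumulate high counts in the memory receive more regularization pressure, steering the grounding module away from repeated miscoordination.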
### 3.4 Collective Learning and Adaptation
The coordination feedback signals $c_{t}$ and interpretability signals $\mathcal{E}_{t},\mathcal{U}_{t},\mathcal{R}_{t}$ play a crucial role in enabling the LLM agents to adapt and learn collectively. By incorporating these signals into the training process, the agents can adjust their strategies and policies to better align with the emerging coordination patterns and requirements of the collaborative tasks.
The collective learning process can be formalized as an optimization problem, where the goal is to minimize the following objective function $\mathcal{L}(\eta,\gamma,\zeta,\xi)=\mathbb{E}_{s_{t},\mathcal{I}_{t},f_{1:t}} \left[\alpha\mathcal{U}_{t}+\beta\mathcal{E}_{t}-\mathcal{R}\right]+\Omega( \eta,\gamma,\zeta,\xi)$ . Here, $\alpha$ and $\beta$ are weighting factors that balance the contributions of the grounding uncertainty $\mathcal{U}_{t}$ and coordination errors $\mathcal{E}_{t}$ , respectively. The team reward $\mathcal{R}$ is maximized to encourage collaborative behavior. The term $\Omega(\eta,\gamma,\zeta,\xi)$ represents regularization terms or constraints on the model parameters to ensure stable and robust learning.
The objective function $\mathcal{L}$ is defined over the current state $s_{t}$ , the interpretability signals $\mathcal{I}_{t}=\{\mathcal{E}_{t},\mathcal{U}_{t},\mathcal{R}_{t}\}$ , and the trajectory of feedback signals $f_{1:t}=\{c_{1},\mathcal{I}_{1},\ldots,c_{t},\mathcal{I}_{t}\}$ up to the current time step $t$ . The expectation $\mathbb{E}_{s_{t},\mathcal{I}_{t},f_{1:t}}[\cdot]$ is taken over the distribution of states, interpretability signals, and feedback signal trajectories encountered during training.
| Method | Web Easy | Web Medium | Web Hard | Web All | TFP Easy | TFP Medium | TFP Hard | TFP Hell |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5-Turbo | | | | | | | | |
| CoT | 65.77 | 51.62 | 32.45 | 17.36 | 81.27 | 68.92 | 59.81 | 41.27 |
| Zero-Shot Plan | 57.61 | 52.73 | 28.92 | 14.58 | 82.29 | 63.77 | 55.39 | 42.38 |
| Llama2-7B | | | | | | | | |
| CoT | 59.83 | 54.92 | 30.38 | 15.62 | 82.73 | 65.81 | 57.19 | 44.58 |
| ReAct | 56.95 | 41.86 | 27.59 | 13.48 | 81.15 | 61.65 | 53.97 | 43.25 |
| ART | 62.51 | 52.34 | 33.81 | 18.53 | 81.98 | 63.23 | 51.78 | 46.83 |
| ReWOO | 63.92 | 53.17 | 34.95 | 19.37 | 82.12 | 71.38 | 61.23 | 47.06 |
| AgentLM | 62.14 | 46.75 | 30.84 | 15.98 | 82.96 | 66.03 | 57.16 | 43.91 |
| FireAct | 64.03 | 50.68 | 32.78 | 17.49 | 83.78 | 68.19 | 58.94 | 45.06 |
| LUMOS | 66.27 | 53.81 | 35.37 | 19.53 | 84.03 | 71.75 | 62.57 | 51.49 |
| Llama3-8B | | | | | | | | |
| Code-Llama (PoT) | 64.85 | 49.49 | 32.16 | 17.03 | 83.34 | 68.47 | 59.15 | 52.64 |
| AgentLM | 66.77 | 51.45 | 31.59 | 16.58 | 85.26 | 71.81 | 58.68 | 53.39 |
| FireAct | 68.92 | 53.27 | 32.95 | 17.64 | 84.11 | 72.15 | 58.63 | 51.65 |
| DGN | 69.15 | 54.78 | 33.63 | 18.17 | 83.42 | 71.08 | 62.34 | 53.57 |
| LToS | 68.48 | 55.03 | 33.06 | 17.71 | 85.77 | 74.61 | 59.37 | 54.81 |
| AUTOACT | 67.62 | 56.25 | 31.84 | 16.79 | 87.89 | 76.29 | 58.94 | 52.87 |
| ReMALIS(Ours) | 73.92 | 58.64 | 38.37 | 21.42 | 89.15 | 77.62 | 64.53 | 55.37 |
Table 1: Comparative analysis of the ReMALIS framework against single-agent baselines and contemporary methods across two datasets
## 4 Experiments
### 4.1 Datasets
To assess the performance of our models, we conducted evaluations using two large-scale real-world datasets: the traffic flow prediction (TFP) dataset and the web-based activities dataset.
TFP dataset comprises 100,000 traffic scenarios, each accompanied by corresponding flow outcomes. Each example is detailed with descriptions of road conditions, vehicle count, weather, and traffic control measures, and is classified as traffic flow: smooth, congested, or jammed. The raw data was sourced from traffic cameras, incident reports, and simulations, and underwent preprocessing to normalize entities and eliminate duplicates.
Web activities dataset contains over 500,000 examples of structured web interactions such as booking flights, scheduling appointments, and making reservations. Each activity follows a template with multiple steps like searching, selecting, filling forms, and confirming. User utterances and system responses were extracted to form the input-output pairs across 150 domains, originating from real anonymized interactions with chatbots, virtual assistants, and website frontends.
### 4.2 Implementation Details
To handle the computational demands of training our framework with LLMs, we employ 8 Nvidia A800-80G GPUs Chen et al. (2024) under the DeepSpeed Aminabadi et al. (2022) training framework, which can effectively accommodate the extensive parameter spaces and activations required by our framework’s LLM components and multi-agent architecture Rasley et al. (2020).
For the TFP dataset, we classified the examples into four difficulty levels: “Easy”, “Medium”, “Hard”, and “Hell”. The “Easy” level comprises small grid networks with low, stable vehicle arrival rates. The “Medium” level includes larger grids with variable arrival rates. “Hard” tasks feature large, irregular networks with highly dynamic arrival rates and complex intersection configurations. The “Hell” level introduces challenges such as partially observable states, changing road conditions, and fully decentralized environments.
For the web activities dataset, we divided the tasks into “Easy”, “Medium”, “Hard”, and “All” levels. “Easy” tasks required basic single-click or short phrase interactions. “Medium” involved complex multi-page sequences like form submissions. “Hard” tasks demanded significant reasoning through ambiguous, dense websites. The “All” level combined tasks across the full difficulty spectrum.
The dataset was divided into 80% for training, 10% for validation, and 10% for testing, with examples shuffled. These large-scale datasets offer a challenging and naturalistic benchmark to evaluate our multi-agent framework on complex, real-world prediction and interaction tasks.
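The shuffled 80/10/10 split can be sketched as a small utility (a generic implementation, not the authors' code):

```python
import random

def split_dataset(examples, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle, then split into 80% train / 10% validation / 10% test."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)  # deterministic shuffle for reproducibility
    n = len(examples)
    n_train, n_val = int(n * train_frac), int(n * val_frac)
    return (examples[:n_train],
            examples[n_train:n_train + n_val],
            examples[n_train + n_val:])

train, val, test = split_dataset(range(100_000))
print(len(train), len(val), len(test))  # 80000 10000 10000
```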
### 4.3 Results and Analysis
Table 1 displays the principal experimental results of our ReMALIS framework in comparison with various single-agent baselines and contemporary methods using the web activities dataset. We evaluated the models across four levels of task difficulty: “Easy”, “Medium”, “Hard”, and “All”.
The results of our comparative analysis indicate that ReMALIS (7B), equipped with a 7B-parameter LLM backbone, significantly outperforms competing methods. On the comprehensive “All” difficulty level, which aggregates tasks across a range of complexities, ReMALIS achieved 55.37%, surpassing the second-highest scoring method, LUMOS, at 51.49%. ReMALIS (7B) also outperformed AUTOACT, which uses a larger 13B-parameter model and scored 52.87%, by roughly 2.5 percentage points. These findings highlight the efficacy of ReMALIS's parameter-efficient design and its multi-agent collaborative training approach, which allow it to significantly outperform larger single-agent LLMs.
Notably, ReMALIS (7B) also exceeded the performance of GPT-3.5 (Turbo), a substantially larger foundation model, across all difficulty levels. On “Hard” tasks, ReMALIS's 21.42% surpassed GPT-3.5's 17.36% by over 4 points. This indicates that ReMALIS's coordination mechanisms transform relatively modest LLMs into highly capable collaborative agents.
Despite their larger sizes, single-agent approaches such as GPT-3.5 CoT, ReAct, and AgentLM underperformed significantly. Notably, even the advanced single-agent method LUMOS (13B) could not rival ReMALIS (7B). This superiority stems from ReMALIS's specialized multi-agent design and its novel features: intention propagation, bidirectional feedback, and recursive reasoning. On complex “Hard” tasks that require extensive reasoning, ReMALIS achieved 21.42%, surpassing LUMOS by nearly 2 percentage points and highlighting the benefits of its multi-agent architecture and collaborative learning mechanisms.
ReMALIS also performs strongly on the TFP dataset. On the “Easy” difficulty level, it achieved 89.15% accuracy, outperforming the second-best method, AUTOACT, by 1.26 points. In the “Medium” category, ReMALIS secured 77.62%, surpassing AUTOACT's 76.29% by 1.33 points. Even at the most challenging “Hard” and “Hell” levels, ReMALIS maintained its lead with accuracies of 64.53% and 55.37%, respectively, outperforming the next best methods, DGN (62.34%) and LToS (54.81%), by 2.19 and 0.56 points.
### 4.4 Ablation Studies
1) The Impact on Improving Multi-Agent Coordination Accuracy. We conduct ablation studies to evaluate the impact of each component within the ReMALIS framework; the results are reported in Table 2. Excluding intention propagation decreases accuracy by over 6% across both datasets, revealing the difficulty of establishing common ground among agents without shared local beliefs. This underscores the importance of intention sharing for emergent team behaviors.
The absence of bidirectional coordination channels leads to a 4.37% decline in performance across various metrics, illustrating the importance of execution-level signals in shaping planning and grounding strategies. Without feedback coordination, agents become less responsive to new scenarios that require re-planning.
Table 2: Ablation studies on Traffic and Web datasets
| Dataset | Configuration | Accuracy | Metric 1 | Metric 2 |
| --- | --- | --- | --- | --- |
| Traffic | Single Agent Baseline | 42.5% | 0.217 | 0.384 |
| Traffic | Intention Propagation | 47.3% | 0.251 | 0.425 |
| Traffic | Bidirectional Feedback | 49.8% | 0.278 | 0.461 |
| Traffic | Recursive Reasoning | 53.2% | 0.311 | 0.503 |
| Traffic | ReMALIS (Full) | 58.7% | 0.342 | 0.538 |
| Web | Single Agent Baseline | 38.9% | 0.255 | 0.416 |
| Web | Intention Propagation | 42.7% | 0.283 | 0.453 |
| Web | Bidirectional Feedback | 46.3% | 0.311 | 0.492 |
| Web | Recursive Reasoning | 50.6% | 0.345 | 0.531 |
| Web | ReMALIS (Full) | 55.4% | 0.379 | 0.567 |
Substituting recursive reasoning with convolutional and recurrent neural networks reduces contextual inference accuracy by 5.86%. Non-recursive agents display short-sighted behavior compared to the holistic reasoning enabled by recursive transformer modeling. This emphasizes that recursive architectures are vital for complex temporal dependencies.
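The ablations above isolate the bidirectional feedback channel; its core loop, in which execution feedback flows back into re-planning, can be caricatured in a few lines. The `Planner` and `Agent` classes below are toy stand-ins, not the paper's implementation:

```python
class Planner:
    """Toy planning module: splits a goal into sub-tasks, re-plans on failure."""
    def plan(self, goal):
        return [f"{goal}:step{i}" for i in range(3)]

    def replan(self, subtasks, feedback):
        # Bidirectional channel: keep only the sub-tasks whose execution failed.
        return [s for s, ok in zip(subtasks, feedback) if not ok]

class Agent:
    """Toy execution agent: each sub-task succeeds on its second attempt."""
    def __init__(self):
        self.tries = {}

    def execute(self, subtask):
        self.tries[subtask] = self.tries.get(subtask, 0) + 1
        return self.tries[subtask] >= 2

def run_with_replanning(planner, agent, goal, max_rounds=5):
    """Execute sub-tasks, feeding failure signals back into re-planning."""
    subtasks = planner.plan(goal)
    for round_ in range(max_rounds):
        feedback = [agent.execute(s) for s in subtasks]
        if all(feedback):
            return round_ + 1  # rounds needed until every sub-task succeeds
        subtasks = planner.replan(subtasks, feedback)
    return None

assert run_with_replanning(Planner(), Agent(), "book_flight") == 2
```

Without the `replan` call, the loop would retry the full original plan blindly; the feedback signal is what lets the planner focus on the failing sub-tasks.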
Figure 3: Comparative performance evaluation across varying task difficulty levels for the web activities dataset, showing the accuracy scores achieved by ReMALIS and several state-of-the-art baselines.
Table 3: Ablation on agent coordination capabilities
| Method | Aligned Sub-tasks (Easy) | Aligned Sub-tasks (Medium) | Aligned Sub-tasks (Hard) | Coord. Time, ms (Easy) | Coord. Time, ms (Medium) | Coord. Time, ms (Hard) |
| --- | --- | --- | --- | --- | --- | --- |
| No Communication | 31% | 23% | 17% | 592 | 873 | 1198 |
| ReAct | 42% | 34% | 29% | 497 | 732 | 984 |
| AgentLM | 48% | 39% | 32% | 438 | 691 | 876 |
| FireAct | 58% | 47% | 37% | 382 | 569 | 745 |
| Basic Propagation | 68% | 53% | 41% | 314 | 512 | 691 |
| Selective Propagation | 79% | 62% | 51% | 279 | 438 | 602 |
| Full Intention Sharing | 91% | 71% | 62% | 248 | 386 | 521 |
2) The Impact on Improving Multi-Agent Coordination Capability. As presented in Table 3, on aligned sub-task percentage, the proposed Basic Propagation, Selective Propagation, and Full Intention Sharing methods consistently outperform baselines such as ReAct and AgentLM across the “Easy”, “Medium”, and “Hard” difficulty levels. For example, Full Intention Sharing achieves alignment of 91%, 71%, and 62% across these levels, respectively, substantially higher than the no-communication setting (31%, 23%, and 17%).
Similarly, the coordination time metrics show major efficiency gains from intention propagation. On “Hard” tasks, Full Intention Sharing reduces coordination time to 521 ms, a 57% reduction from the 1198 ms required without communication. As task complexity increases from easy to hard, the coordination time savings over the baselines grow from 138 ms to 677 ms. This indicates that intention sharing mitigates the growing coordination delays in difficult scenarios.
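The efficiency figure above is a simple ratio of the Table 3 timings, reproducible in two lines:

```python
# Coordination-time reduction on "Hard" tasks (values taken from Table 3, in ms).
no_comm = 1198        # No Communication
full_sharing = 521    # Full Intention Sharing
reduction = (no_comm - full_sharing) / no_comm
print(f"{reduction:.1%}")  # 56.5% -- roughly the 57% quoted in the text
```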
The highlighted propagation mechanisms also demonstrate clear incremental improvements as information sharing becomes more selective. As agents propagate more precise intentions to the relevant teammates, both sub-task alignment and coordination efficiency improve: moving from Basic to Selective to Full sharing yields cumulative gains.
## 5 Conclusion
In this paper, we introduce a novel framework, ReMALIS, designed to enhance collaborative capabilities within multi-agent systems using LLMs. Our approach incorporates three principal innovations: intention propagation for establishing a shared understanding among agents, bidirectional coordination channels to adapt reasoning processes in response to team dynamics, and recursive reasoning architectures that provide agents with advanced contextual grounding and planning capabilities necessary for complex coordination tasks. Experimental results indicate that ReMALIS significantly outperforms several baseline methods, underscoring the efficacy of cooperative multi-agent AI systems. By developing frameworks that enable LLMs to acquire cooperative skills analogous to human team members, we advance the potential for LLM agents to manage flexible coordination in complex collaborative environments effectively.
## 6 Limitations
While ReMALIS demonstrates promising results in collaborative multi-agent tasks, our framework relies on a centralized training paradigm, which may hinder scalability in fully decentralized environments. The current implementation does not explicitly handle agents dynamically joining or leaving during execution, which could impact coordination in real-world applications. In addition, the recursive reasoning component may struggle with long-term dependencies and planning horizons beyond a certain time frame.
## References
- Aminabadi et al. (2022) Reza Yazdani Aminabadi et al. 2022. Deepspeed-inference: Enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis.
- Chaka (2023) Chaka Chaka. 2023. Generative ai chatbots-chatgpt versus youchat versus chatsonic: Use cases of selected areas of applied english language studies. International Journal of Learning, Teaching and Educational Research, 22(6):1–19.
- Chebotar et al. (2023) Yevgen Chebotar et al. 2023. Q-transformer: Scalable offline reinforcement learning via autoregressive q-functions. In Conference on Robot Learning. PMLR.
- Chen et al. (2023) Baian Chen et al. 2023. Fireact: Toward language agent fine-tuning. arXiv preprint arXiv:2310.05915.
- Chen et al. (2024) Yushuo Chen et al. 2024. Towards coarse-to-fine evaluation of inference efficiency for large language models. arXiv preprint arXiv:2404.11502.
- Chiu et al. (2024) Yu Ying Chiu et al. 2024. A computational framework for behavioral assessment of llm therapists. arXiv preprint arXiv:2401.00820.
- Dong et al. (2023) Yihong Dong et al. 2023. Codescore: Evaluating code generation by learning code execution. arXiv preprint arXiv:2301.09043.
- Du et al. (2023) Yali Du et al. 2023. A review of cooperation in multi-agent learning. arXiv preprint arXiv:2312.05162.
- Fan et al. (2020) Cheng Fan et al. 2020. Statistical investigations of transfer learning-based methodology for short-term building energy predictions. Applied Energy, 262:114499.
- Foerster et al. (2018) Jakob Foerster et al. 2018. Counterfactual multi-agent policy gradients. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
- Gao et al. (2023) Yunfan Gao et al. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997.
- Hartmann et al. (2022) Valentin N. Hartmann et al. 2022. Long-horizon multi-robot rearrangement planning for construction assembly. IEEE Transactions on Robotics, 39(1):239–252.
- He et al. (2021) Junxian He et al. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366.
- Hu and Sadigh (2023) Hengyuan Hu and Dorsa Sadigh. 2023. Language instructed reinforcement learning for human-ai coordination. arXiv preprint arXiv:2304.07297.
- Huang et al. (2022) Baichuan Huang, Abdeslam Boularias, and Jingjin Yu. 2022. Parallel monte carlo tree search with batched rigid-body simulations for speeding up long-horizon episodic robot planning. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
- Khamparia et al. (2021) Aditya Khamparia et al. 2021. An internet of health things-driven deep learning framework for detection and classification of skin cancer using transfer learning. Transactions on Emerging Telecommunications Technologies, 32(7):e3963.
- Lee and Perret (2022) Irene Lee and Beatriz Perret. 2022. Preparing high school teachers to integrate ai methods into stem classrooms. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36.
- Li et al. (2020) Chuan Li et al. 2020. A systematic review of deep transfer learning for machinery fault diagnosis. Neurocomputing, 407:121–135.
- Li et al. (2022) Weihua Li et al. 2022. A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges. Mechanical Systems and Signal Processing, 167:108487.
- Loey et al. (2021) Mohamed Loey et al. 2021. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the covid-19 pandemic. Measurement, 167:108288.
- Lotfollahi et al. (2022) Mohammad Lotfollahi et al. 2022. Mapping single-cell data to reference atlases by transfer learning. Nature biotechnology, 40(1):121–130.
- Lu et al. (2023) Pan Lu et al. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842.
- Lyu et al. (2021) Xueguang Lyu et al. 2021. Contrasting centralized and decentralized critics in multi-agent reinforcement learning. arXiv preprint arXiv:2102.04402.
- Mao et al. (2022) Weichao Mao et al. 2022. On improving model-free algorithms for decentralized multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Martini et al. (2021) Franziska Martini et al. 2021. Bot, or not? comparing three methods for detecting social bots in five political discourses. Big data & society, 8(2):20539517211033566.
- Miao et al. (2023) Ning Miao, Yee Whye Teh, and Tom Rainforth. 2023. Selfcheck: Using llms to zero-shot check their own step-by-step reasoning. arXiv preprint arXiv:2308.00436.
- Qiu et al. (2024) Xihe Qiu et al. 2024. Chain-of-lora: Enhancing the instruction fine-tuning performance of low-rank adaptation on diverse instruction set. IEEE Signal Processing Letters.
- Raman et al. (2022) Shreyas Sundara Raman et al. 2022. Planning with large language models via corrective re-prompting. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
- Rana et al. (2023) Krishan Rana et al. 2023. Sayplan: Grounding large language models using 3d scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135.
- Rashid et al. (2020) Tabish Rashid et al. 2020. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. In Advances in neural information processing systems 33, pages 10199–10210.
- Rasley et al. (2020) Jeff Rasley et al. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
- Saber et al. (2021) Abeer Saber et al. 2021. A novel deep-learning model for automatic detection and classification of breast cancer using the transfer-learning technique. IEEE Access, 9:71194–71209.
- Schroeder de Witt et al. (2019) Christian Schroeder de Witt et al. 2019. Multi-agent common knowledge reinforcement learning. In Advances in Neural Information Processing Systems 32.
- Schuchard and Crooks (2021) Ross J. Schuchard and Andrew T. Crooks. 2021. Insights into elections: An ensemble bot detection coverage framework applied to the 2018 us midterm elections. Plos one, 16(1):e0244309.
- Schumann et al. (2024) Raphael Schumann et al. 2024. Velma: Verbalization embodiment of llm agents for vision and language navigation in street view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38.
- Shanahan et al. (2023) Murray Shanahan, Kyle McDonell, and Laria Reynolds. 2023. Role play with large language models. Nature, 623(7987):493–498.
- Sharan et al. (2023) S. P. Sharan, Francesco Pittaluga, and Manmohan Chandraker. 2023. Llm-assist: Enhancing closed-loop planning with language-based reasoning. arXiv preprint arXiv:2401.00125.
- Shen et al. (2020) Sheng Shen et al. 2020. Deep convolutional neural networks with ensemble learning and transfer learning for capacity estimation of lithium-ion batteries. Applied Energy, 260:114296.
- Singh et al. (2023) Ishika Singh et al. 2023. Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE.
- Song et al. (2023) Chan Hee Song et al. 2023. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
- Topsakal and Akinci (2023) Oguzhan Topsakal and Tahir Cetin Akinci. 2023. Creating large language model applications utilizing langchain: A primer on developing llm apps fast. In International Conference on Applied Engineering and Natural Sciences, volume 1.
- Valmeekam et al. (2022) Karthik Valmeekam et al. 2022. Large language models still can’t plan (a benchmark for llms on planning and reasoning about change). arXiv preprint arXiv:2206.10498.
- Wang et al. (2024a) Haoyu Wang et al. 2024a. Carbon-based molecular properties efficiently predicted by deep learning-based quantum chemical simulation with large language models. Computers in Biology and Medicine, page 108531.
- Wang et al. (2024b) Haoyu Wang et al. 2024b. Subequivariant reinforcement learning framework for coordinated motion control. arXiv preprint arXiv:2403.15100.
- Wang et al. (2023) Lei Wang et al. 2023. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432.
- Wang et al. (2020) Tonghan Wang et al. 2020. Roma: Multi-agent reinforcement learning with emergent roles. arXiv preprint arXiv:2003.08039.
- Wen et al. (2023) Hao Wen et al. 2023. Empowering llm to use smartphone for intelligent task automation. arXiv preprint arXiv:2308.15272.
- Xi et al. (2023) Zhiheng Xi et al. 2023. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864.
- Yao et al. (2022) Shunyu Yao et al. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
- Yin et al. (2023) Da Yin et al. 2023. Lumos: Learning agents with unified data, modular design, and open-source llms. arXiv preprint arXiv:2311.05657.
- Yu et al. (2023) Shengcheng Yu et al. 2023. Llm for test script generation and migration: Challenges, capabilities, and opportunities. In 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS). IEEE.
- Zeng et al. (2023) Fanlong Zeng et al. 2023. Large language models for robotics: A survey. arXiv preprint arXiv:2311.07226.
- Zhang and Gao (2023) Xuan Zhang and Wei Gao. 2023. Towards llm-based fact verification on news claims with a hierarchical step-by-step prompting method. arXiv preprint arXiv:2310.00305.
- Zhao et al. (2024) Andrew Zhao et al. 2024. Expel: Llm agents are experiential learners. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38.
- Zhu et al. (2023) Zhuangdi Zhu et al. 2023. Transfer learning in deep reinforcement learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Zhuang et al. (2020) Fuzhen Zhuang et al. 2020. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43–76.
- Zimmer et al. (2021a) Matthieu Zimmer et al. 2021a. Learning fair policies in decentralized cooperative multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Zimmer et al. (2021b) Matthieu Zimmer et al. 2021b. Learning fair policies in decentralized cooperative multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Zou et al. (2023) Hang Zou et al. 2023. Wireless multi-agent generative ai: From connected intelligence to collective intelligence. arXiv preprint arXiv:2307.02757.
## Appendix A Related Work
### A.1 Single Agent Frameworks
Early agent frameworks such as Progprompt Singh et al. (2023) directly prompt large language models (LLMs) to plan, execute actions, and process feedback in a chained manner within one model Song et al. (2023). Despite its conceptual simplicity Valmeekam et al. (2022), such an integrated framework imposes a substantial burden on a single LLM, leading to challenges in managing complex tasks Raman et al. (2022); Wang et al. (2024a).
To reduce the reasoning burden, recent works explore modular designs by separating high-level planning and low-level execution into different modules. For example, LUMOS Yin et al. (2023) consists of a planning module, a grounding module, and an execution module. The planning and grounding modules break down complex tasks into interpretable sub-goals and executable actions. FireAct Chen et al. (2023) introduces a similar hierarchical structure, with a focus on providing step-by-step explanations Zhang and Gao (2023). Although partitioning into modules specializing in different skills is reasonable, existing modular frameworks still rely on a single agent for final action execution Miao et al. (2023); Qiu et al. (2024). Our work pushes this idea further by replacing the single execution agent with a cooperative team of multiple agents.
### A.2 Multi-Agent Reinforcement Learning
Collaborative multi-agent reinforcement learning has been studied to solve complex control or game-playing tasks. Representative algorithms include COMA Foerster et al. (2018), QMIX Rashid et al. (2020) and ROMA Wang et al. (2020). These methods enable decentralized execution of different agents but allow centralized training by sharing experiences or parameters Lyu et al. (2021). Drawing on this concept, our ReMALIS framework places greater emphasis on integrating modular LLMs to address complex language tasks. In ReMALIS, each execution agent specializes in specific semantic domains such as query, computation, or retrieval, and is coordinated through a communication module Mao et al. (2022).
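The routing role of the communication module described above can be reduced to its essence: dispatch each sub-task to the agent specializing in its semantic domain. The domains mirror the paper's examples (query, computation, retrieval), but the code is an illustrative stand-in, not ReMALIS itself:

```python
# Toy router dispatching sub-tasks to domain-specialized execution agents.
# The lambdas stand in for full LLM-backed agents; everything is illustrative.
AGENTS = {
    "query":       lambda payload: f"searched: {payload}",
    "computation": lambda payload: f"computed: {payload}",
    "retrieval":   lambda payload: f"retrieved: {payload}",
}

def route(subtask):
    """The communication module's dispatching role, reduced to a dict lookup."""
    return AGENTS[subtask["domain"]](subtask["payload"])

print(route({"domain": "computation", "payload": "35 * 17"}))
# computed: 35 * 17
```

In the full framework, routing is learned rather than hard-coded, and messages flow bidirectionally instead of being one-way dispatches.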
The concept of multi-agent RL has recently influenced the design of conversational agents Zimmer et al. (2021a); Schumann et al. (2024). EnsembleBot Schuchard and Crooks (2021) utilizes multiple bots trained on distinct topics, coordinated by a routing model. However, this approach primarily employs a divide-and-conquer strategy with independent skills Martini et al. (2021), and communication within EnsembleBot predominantly involves one-way dispatching rather than bidirectional coordination. In contrast, our work focuses on fostering a more tightly integrated collaborative system for addressing complex problems Schroeder de Witt et al. (2019); Zimmer et al. (2021b).
### A.3 Integrated & Collaborative Learning
Integrated learning techniques originate from transfer learning Zhuang et al. (2020); Zhu et al. (2023), aiming to improve a target model by incorporating additional signals from other modalities Lotfollahi et al. (2022); Shanahan et al. (2023). For multi-agent systems, Li et al. (2022); Zhao et al. (2024) find joint training of multiple agents simultaneously boosts performance over separately trained independent agents Lee and Perret (2022). Recently, integrated learning has been used in single agent frameworks like Shen et al. (2020) and Loey et al. (2021), where auxiliary losses of interpretable outputs facilitate main model training through multi-tasking Khamparia et al. (2021); Saber et al. (2021).
Our work adopts integrated learning to train specialized execution agents that are semantically consistent. At the team level, a communication module learns to attentively aggregate and propagate messages across agents, which indirectly coordinates their strategies and behaviors Fan et al. (2020). The integrated and collaborative learning synergizes individual skills and leads to emerged collective intelligence, enhancing the overall reasoning and planning capabilities when dealing with complex tasks He et al. (2021); Li et al. (2020).
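The auxiliary-loss idea behind integrated learning can be written as a weighted multi-task objective; the weights and loss values below are made up for illustration:

```python
def integrated_loss(main_loss, aux_losses, weights):
    """Multi-task objective: main-task loss plus weighted auxiliary losses
    from interpretable outputs (all numbers here are illustrative)."""
    return main_loss + sum(w * l for w, l in zip(weights, aux_losses))

total = integrated_loss(main_loss=0.90, aux_losses=[0.40, 0.25], weights=[0.5, 0.1])
print(round(total, 3))  # 1.125
```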
## Appendix B Methodology and Contributions
Based on the motivations and inspirations above, we propose the Recursive Multi-Agent Learning with Intention Sharing (ReMALIS) framework, an innovative multi-agent framework empowered by integrated learning for communication and collaboration. The main contributions are:
1. We design a cooperative execution module with multiple agents trained by integrated learning. Different execution agents specialize in different semantic domains while understanding peer abilities, which reduces redundant capacities and improves efficient division of labor.
2. We propose an attentive communication module that propagates informative cues across specialized agents. The module coordinates agent execution strategies without explicit supervision, effectively acting as a team leader.
3. The collaborative design allows ReMALIS to handle more complex tasks compared to single-agent counterparts. Specialized agents focus on their specialized domain knowledge while collaborating closely through communicative coordination, leading to strong emergent team intelligence.
4. We enable dynamic feedback loops from communication to the grounding module and re-planning of the planning module, increasing adaptability when execution difficulties arise.
We expect the idea of integrating specialized collaborative agents with dynamic coordination mechanisms to inspire more future research toward developing intelligent collaborative systems beyond conversational agents.
## Appendix C Key variables and symbols
Table 4: Key variables and symbols in the proposed recursive multi-agent learning framework.
| Symbol | Description |
| --- | --- |
| $p_{\theta}$ | Planning module parameterized by $\theta$ |
| $s_{t}$ | Current sub-goal at time $t$ |
| $I_{t}$ | Current intention at time $t$ |
| $e_{t}$ | Grounded embedding at time $t$ |
| $f_{t}$ | Agent feedback at time $t$ |
| $g_{\phi}$ | Grounding module parameterized by $\phi$ |
| $\pi_{\xi_{i}}$ | Execution policy of agent $i$ parameterized by $\xi_{i}$ |
| $f_{\Lambda}$ | Intention propagation channel parameterized by $\Lambda$ |
| $m_{ij}$ | Message sent from agent $j$ to agent $i$ |
| $b_{i}(I_{j}|m_{ij})$ | Agent $i$ ’s belief over teammate $j$ ’s intention $I_{j}$ given message $m_{ij}$ |
| $R_{c}$ | Coordination reward |
| $\pi_{\xi}(a_{i}|s_{i},I_{i})$ | Execution agent policy conditioned on state $s_{i}$ and intention $I_{i}$ |
| $a_{i}$ | Action of agent $i$ |
| $s_{i}$ | State of agent $i$ |
| $I_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ | Intention of agent $i$ |
| $\gamma_{i}$ | Current goal of agent $i$ |
| $\Sigma_{i}=\{\sigma_{i1},\sigma_{i2},\ldots\}$ | Set of sub-goals for agent $i$ |
| $\pi_{i}(\sigma)$ | Probability distribution over possible next sub-goals for agent $i$ |
| $\delta_{i}(\sigma)$ | Desired teammate assignment for sub-goal $\sigma$ of agent $i$ |
Table 4 summarizes the key variables and symbols used in the proposed recursive multi-agent learning framework called ReMALIS. It includes symbols representing various components like the planning module, grounding module, execution policies, intentions, goals, sub-goals, and the intention propagation channel.
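The intention tuple $I_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ from Table 4 can be rendered as a small data structure. This is a toy sketch of the notation, not the framework's actual representation; the concrete goal and agent names are invented:

```python
from dataclasses import dataclass

@dataclass
class Intention:
    """I_i = (gamma_i, Sigma_i, pi_i, delta_i), per Table 4 (toy rendering)."""
    goal: str         # gamma_i: current goal of agent i
    sub_goals: list   # Sigma_i: set of sub-goals {sigma_i1, sigma_i2, ...}
    next_dist: dict   # pi_i(sigma): distribution over possible next sub-goals
    assignment: dict  # delta_i(sigma): desired teammate per sub-goal

# A hypothetical intention broadcast by a traffic-control agent.
I_1 = Intention(
    goal="clear_intersection_4",
    sub_goals=["sense_flow", "set_phase"],
    next_dist={"sense_flow": 0.7, "set_phase": 0.3},
    assignment={"set_phase": "agent_2"},
)

# pi_i(sigma) should be a proper probability distribution.
assert abs(sum(I_1.next_dist.values()) - 1.0) < 1e-9
```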
Table 5: Comparison of Traffic Network Complexity Levels
| Difficulty Level | Grid Size | Intersections | Arrival Rates | Phases per Intersection |
| --- | --- | --- | --- | --- |
| Easy | 3x3 | 9 | Low and stable (0.5 vehicles/s) | Less than 10 |
| Medium | 5x5 | 25 | Fluctuating (0.5-2 vehicles/s) | 10-15 |
| Hard | 8x8 | 64 | Highly dynamic (0.1 to 3 vehicles/s) | More than 15 |
| Hell | Irregular | 100+ | Extremely dynamic with spikes | More than 25 |
Table 6: Training hyperparameters and configurations
| Hyperparameter/Configuration | ReMALIS | LUMOS | AgentLM | GPT-3.5 |
| --- | --- | --- | --- | --- |
| Language Model Size | 7B | 13B | 6B | 175B |
| Optimizer | AdamW | Adam | AdamW | Adam |
| Learning Rate | 1e-4 | 2e-5 | 1e-4 | 2e-5 |
| Batch Size | 32 | 64 | 32 | 64 |
| Dropout | 0 | 0.1 | 0 | 0.1 |
| Number of Layers | 12 | 8 | 6 | 48 |
| Model Dimension | 768 | 512 | 768 | 1024 |
| Number of Heads | 12 | 8 | 12 | 16 |
| Training Epochs | 15 | 20 | 10 | 20 |
| Warmup Epochs | 1 | 2 | 1 | 2 |
| Weight Decay | 0.01 | 0.001 | 0.01 | 0.001 |
| Network Architecture | GNN | Transformer | Transformer | Transformer |
| Planning Module | GNN, 4 layers, 512 hidden size | 2-layer GNN, 1024 hidden size | - | - |
| Grounding Module | 6-layer Transformer, $d_{\text{model}}=768$ | 4-layer Transformer, $d_{\text{model}}=512$ | - | - |
| Execution Agents | 7 specialized, integrated training | Single agent | 8 agents | 4 agents |
| Intention Propagation | 4-layer GRU, 256 hidden size | - | - | - |
| Coordination Feedback | GAT, 2 heads, $\alpha=0.2$ | - | - | - |
| Trainable Parameters | 5.37B | 6.65B | 4.61B | 17.75B |
## Appendix D Tasks Setup
### D.1 Traffic Control
We define four levels of difficulty for our traffic control tasks: Easy, Medium, Hard, and Hell, as summarized in Table 5.
### D.2 Web Tasks
Similarly, we categorize the web tasks in our dataset into four levels of difficulty: Easy, Medium, Hard, and All.
Easy: The easy web tasks involve basic interactions like clicking on a single link or typing a short phrase. They require navigating simple interfaces with clear options to reach the goal.
Medium: The medium-difficulty tasks demand more complex sequences of actions across multiple pages, such as selecting filters or submitting forms. They test the agent’s ability to understand the site structure and flow.
Hard: The hard web tasks feature more open-ended exploration through dense sites with ambiguity. Significant reasoning is needed to chain obscure links and controls to achieve aims.
All: The all-level combines tasks across the spectrum of difficulty, blending simple and complex interactions to assess generalized web-agent skills. Performance at this level correlates with readiness for real-world web use cases.
## Appendix E Experimental Setups
In this study, we compare the performance of several state-of-the-art language models: ReMALIS, LUMOS, AgentLM, and GPT-3.5. These models vary in size, architecture, and training configuration, reflecting the diversity of approaches in the field; Table 6 summarizes their settings.
ReMALIS is a 7 billion parameter model trained using the AdamW optimizer with a learning rate of 1e-4, a batch size of 32, and no dropout. It has 12 layers, a model dimension of 768, and 12 attention heads. The model was trained for 15 epochs with a warmup period of 1 epoch and a weight decay of 0.01. ReMALIS employs a Graph Neural Network (GNN) architecture, which is particularly suited for modeling complex relationships and structures.
LUMOS, a larger model with 13 billion parameters, was trained using the Adam optimizer with a learning rate of 2e-5, a batch size of 64, and a dropout rate of 0.1. It has 8 layers, a model dimension of 512, and 8 attention heads. The model was trained for 20 epochs with a warmup period of 2 epochs and a weight decay of 0.001. LUMOS follows a Transformer architecture, which has proven effective in capturing long-range dependencies in sequential data.
AgentLM, a 6 billion parameter model, was trained using the AdamW optimizer with a learning rate of 1e-4, a batch size of 32, and no dropout. It has 6 layers, a model dimension of 768, and 12 attention heads. The model was trained for 10 epochs with a warmup period of 1 epoch and a weight decay of 0.01. AgentLM also uses a Transformer architecture.
GPT-3.5, the largest model in this study with 175 billion parameters, was trained using the Adam optimizer with a learning rate of 2e-5, a batch size of 64, and a dropout rate of 0.1. It has 48 layers, a model dimension of 1024, and 16 attention heads. The model was trained for 20 epochs with a warmup period of 2 epochs and a weight decay of 0.001. GPT-3.5 follows the Transformer architecture, which has been widely adopted for large language models.
In addition to the base language models, the table provides details on the specialized modules and configurations employed by ReMALIS and LUMOS. ReMALIS incorporates a planning module with a 4-layer GNN and a 512 hidden size, a grounding module with a 6-layer Transformer and a model dimension of 768, 7 specialized and integrated execution agents, a 4-layer Gated Recurrent Unit (GRU) with a 256 hidden size for intention propagation, and a Graph Attention Network (GAT) with 2 heads and an alpha value of 0.2 for coordination feedback.
LUMOS, on the other hand, employs a 2-layer GNN with a 1024 hidden size for planning, a 4-layer Transformer with a model dimension of 512 for grounding, and a single integrated execution agent.
## Appendix F Pseudo-code
Algorithm 2 presents the hierarchical planning and grounding processes in the proposed recursive multi-agent learning framework. The planning module $p_{\theta}$ takes the current sub-goal $s_{t}$ , intention $I_{t}$ , grounded embedding $e_{t}$ , and feedback $f_{t}$ as inputs, and predicts the next sub-goal $s_{t+1}$ . It first encodes the inputs using an encoder, and then passes the encoded representation through a graph neural network $T_{\theta}$ parameterized by $\theta$ . The output of $T_{\theta}$ is passed through a softmax layer to obtain the probability distribution over the next sub-goal.
The grounding module $g_{\phi}$ takes the current state $s_{t}$ , intention $I_{t}$ , and feedback trajectory $f_{1:t}$ as inputs, and produces the grounded embedding $e_{t}$ . It encodes the inputs using an encoder, and then applies cross-attention over the vocabulary $V$ , followed by a convolutional feature extractor. The output is combined with agent feedback $P_{t}$ to enhance the grounding accuracy. The grounding module is parameterized by $\phi$ .
Algorithm 3 describes the intention propagation mechanism in the proposed recursive multi-agent learning framework. The goal is for each agent $i$ to infer a belief $b_{i}(I_{j}|m_{ij})$ over the intention $I_{j}$ of a teammate $j$ , given a message $m_{ij}$ received from $j$ .
Algorithm 2 Hierarchical Planning and Grounding
1: Input: Current sub-goal $s_{t}$ , intention $I_{t}$ , grounded embedding $e_{t}$ , feedback $f_{t}$
2: Output: Next sub-goal $s_{t+1}$
3: $h_{t}=\text{Encoder}(s_{t},I_{t},e_{t},f_{t})$ {Encode inputs}
4: $s_{t+1}=\text{Softmax}(T_{\theta}(h_{t}))$ {Predict next sub-goal}
5: $T_{\theta}$ is a graph neural network parameterized by $\theta$ {Planning module $p_{\theta}$ }
6: Input: Current state $s_{t}$ , intention $I_{t}$ , feedback $f_{1:t}$
7: Output: Grounded embedding $e_{t}$
8: $h_{t}=\text{Encoder}(s_{t},I_{t},f_{1:t})$ {Encode inputs}
9: $e_{t}=\text{Conv}(\text{Attn}(h_{t},V))+P_{t}$ {Grounded embedding}
10: $\text{Attn}(\cdot,\cdot)$ is a cross-attention layer over vocabulary $V$
11: $\text{Conv}(\cdot)$ is a convolutional feature extractor
12: $P_{t}$ includes agent feedback to enhance grounding accuracy
13: $g_{\phi}$ is the grounding module parameterized by $\phi$
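The two modules of Algorithm 2 can be sketched in NumPy as below. A single linear-plus-tanh projection stands in for the encoder, one message-passing step over a toy adjacency matrix stands in for the GNN $T_{\theta}$, and all weights are random; dimensions and helper names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_subgoals, vocab, n_nodes = 16, 5, 32, 4

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def encoder(*parts, W):
    # Concatenate the inputs and project them to a hidden vector.
    return np.tanh(np.concatenate(parts) @ W)

# --- Planning module p_theta: (s_t, I_t, e_t, f_t) -> distribution over next sub-goal ---
W_enc = rng.normal(size=(4 * d, d))
A = np.ones((n_nodes, n_nodes)) / n_nodes   # toy sub-goal graph (uniform adjacency)
W_gnn = rng.normal(size=(d, n_subgoals))

def plan(s_t, I_t, e_t, f_t):
    h = encoder(s_t, I_t, e_t, f_t, W=W_enc)   # h_t = Encoder(s_t, I_t, e_t, f_t)
    H = np.tile(h, (n_nodes, 1))               # broadcast to graph nodes
    msg = (A @ H @ W_gnn).mean(axis=0)         # one message-passing step (T_theta)
    return softmax(msg)                        # p(s_{t+1}) = Softmax(T_theta(h_t))

# --- Grounding module g_phi: (s_t, I_t, f_{1:t}) -> grounded embedding e_t ---
W_g = rng.normal(size=(3 * d, d))
V = rng.normal(size=(vocab, d))                # vocabulary embeddings

def ground(s_t, I_t, f_hist, P_t):
    h = encoder(s_t, I_t, f_hist, W=W_g)
    attn = softmax(h @ V.T / np.sqrt(d)) @ V             # cross-attention over vocabulary V
    e = np.convolve(attn, np.ones(3) / 3, mode="same")   # 1-D convolutional feature extractor
    return e + P_t                                       # e_t = Conv(Attn(h_t, V)) + P_t

s, I = rng.normal(size=d), rng.normal(size=d)
e = ground(s, I, rng.normal(size=d), P_t=np.zeros(d))
p = plan(s, I, e, rng.normal(size=d))
```

With random weights the outputs are meaningless, but the dataflow mirrors lines 3-4 and 8-9 of the listing.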
It initializes an intention propagation channel $f_{\Lambda}$ , parameterized by $\Lambda$ , which is implemented as a recurrent neural network.
The intention inference process works as follows:
1. The received message $m_{ij}$ is encoded using an encoder to obtain a representation $h_{ij}$ .
2. The encoded message $h_{ij}$ is passed through the propagation channel $f_{\Lambda}$ to infer the belief $b_{i}(I_{j}|m_{ij})$ over teammate $j$ ’s intention $I_{j}$ .
The objective is to train the parameters $\Lambda$ of the propagation channel $f_{\Lambda}$ to maximize the coordination reward $R_{c}$ over sampled intentions $I$ and messages $m$ from the distribution defined by $f_{\Lambda}$ .
Algorithm 3 Intention Propagation Mechanism
0: Current intention $I_{i}$ of agent $i$ , message $m_{ij}$ from teammate $j$
0: Belief $b_{i}(I_{j}|m_{ij})$ over teammate $j$ ’s intention $I_{j}$
1: Initialization:
2: Intention propagation channel $f_{\Lambda}$ parameterized by $\Lambda$
3: $f_{\Lambda}$ is a recurrent neural network
4: Intention Inference:
5: Encode message: $h_{ij}\leftarrow\text{Encoder}(m_{ij})$
6: Infer intention belief: $b_{i}(I_{j}|m_{ij})\leftarrow f_{\Lambda}(h_{ij})$
7: Objective:
8: Sample intentions $I$ and messages $m$ from $f_{\Lambda}$
9: Maximize coordination reward $R_{c}$ over intentions and messages:
10: $\Lambda^{*}\leftarrow\arg\max_{\Lambda}\mathbb{E}_{I,m\sim f_{\Lambda}}[R_{c}(I,m)]$
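The inference step of Algorithm 3 can be sketched as follows. A single hand-rolled GRU cell stands in for the recurrent propagation channel $f_{\Lambda}$, and the belief is a softmax over a discrete set of candidate teammate intentions; all sizes and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d_msg, d_hid, n_intents = 8, 16, 4

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random parameters of the propagation channel f_Lambda (a GRU cell) and the encoder.
Wz, Uz = rng.normal(size=(d_msg, d_hid)), rng.normal(size=(d_hid, d_hid))
Wr, Ur = rng.normal(size=(d_msg, d_hid)), rng.normal(size=(d_hid, d_hid))
Wh, Uh = rng.normal(size=(d_msg, d_hid)), rng.normal(size=(d_hid, d_hid))
W_out = rng.normal(size=(d_hid, n_intents))
W_enc = rng.normal(size=(d_msg, d_msg))

def encode(m_ij):
    return np.tanh(m_ij @ W_enc)              # h_ij = Encoder(m_ij)

def gru_step(x, h):
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

def infer_belief(messages):
    # Run f_Lambda over a sequence of messages received from teammate j.
    h = np.zeros(d_hid)
    for m in messages:
        h = gru_step(encode(m), h)
    logits = h @ W_out
    z = np.exp(logits - logits.max())
    return z / z.sum()                        # b_i(I_j | m_ij)

belief = infer_belief([rng.normal(size=d_msg) for _ in range(3)])
```

Training would then adjust $\Lambda$ (the GRU and output weights) by gradient ascent on the coordination reward $R_{c}$, which this sketch omits.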
Algorithm 4 Bidirectional Coordination
0: Experience tuples $(s_{t},a_{t},r_{t},s_{t+1})$ for all agents
0: Execution policies $\pi_{\xi_{i}}(a_{i}|s_{i},I_{i})$ and coordination feedback $c_{t}$
1: Execution Policy:
2: for each agent $i$ do
3: Get agent state $s_{i,t}$ and intention $I_{i,t}$
4: $a_{i,t}\sim\pi_{\xi_{i}}(a_{i}|s_{i,t},I_{i,t})$ {Execution policy}
5: end for
6: Coordination Feedback:
7: Collect execution encodings $h^{exec}_{t}=[\phi_{1}(o_{1}),\ldots,\phi_{N}(o_{N})]$ {Encode observations}
8: $c_{t}\leftarrow\Phi(h^{exec}_{t})$ {Summarize coordination patterns}
9: Objective:
10: Maximize team reward $R$ and auxiliary loss $L_{aux}$ :
11: $\xi^{*}\leftarrow\arg\max_{\xi}\mathbb{E}_{(s,a)\sim\pi_{\xi}}[R+\lambda L_{aux}]$
Algorithm 4 describes the bidirectional coordination mechanism in the proposed recursive multi-agent learning framework. It involves executing actions based on the agents’ policies and generating coordination feedback from the execution experiences.
Our algorithm takes experience tuples $(s_{t},a_{t},r_{t},s_{t+1})$ for all agents as input, where $s_{t}$ is the state, $a_{t}$ is the action taken, $r_{t}$ is the reward received, and $s_{t+1}$ is the next state.
The execution policy part works as follows:
1. For each agent $i$ , get the agent’s state $s_{i,t}$ and intention $I_{i,t}$ .
2. Sample an action $a_{i,t}$ from the execution policy $\pi_{\xi_{i}}(a_{i}|s_{i,t},I_{i,t})$ , parameterized by $\xi_{i}$ .
The coordination feedback part works as follows:
1. Collect execution encodings $h^{exec}_{t}=[\phi_{1}(o_{1}),\ldots,\phi_{N}(o_{N})]$ by encoding the observations $o_{i}$ of each agent $i$ using an encoder $\phi_{i}$ .
2. Summarize the coordination patterns $c_{t}$ from the execution encodings $h^{exec}_{t}$ using a function $\Phi$ .
The objective is to maximize the team reward $R$ together with a weighted auxiliary objective $L_{aux}$ by optimizing the execution policy parameters $\xi$ . The auxiliary term incorporates additional regularization or coordination constraints.
The bidirectional coordination mechanism allows execution agents to act based on their policies and intentions, while also generating coordination feedback $c_{t}$ that summarizes the emerging coordination patterns. This feedback can be used to guide the planning and grounding modules in the recursive multi-agent learning framework.
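The two halves of Algorithm 4 can be sketched as below. The softmax policy, the linear observation encoders $\phi_{i}$, and a mean-pooling summary standing in for $\Phi$ are illustrative stand-ins, not the framework's actual networks:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, d_obs, d_enc, n_actions = 3, 6, 8, 4

W_pi = rng.normal(size=(2 * d_obs, n_actions))             # shared policy weights (toy)
W_phi = [rng.normal(size=(d_obs, d_enc)) for _ in range(n_agents)]

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def act(s_i, I_i):
    # a_i ~ pi_xi(a | s_i, I_i): softmax policy over the concatenated state and intention.
    p = softmax(np.concatenate([s_i, I_i]) @ W_pi)
    return rng.choice(n_actions, p=p)

def coordination_feedback(observations):
    # h_exec = [phi_1(o_1), ..., phi_N(o_N)];  c_t = Phi(h_exec), here mean pooling.
    h_exec = np.stack([np.tanh(o @ W) for o, W in zip(observations, W_phi)])
    return h_exec.mean(axis=0)

states = [rng.normal(size=d_obs) for _ in range(n_agents)]
intents = [rng.normal(size=d_obs) for _ in range(n_agents)]
actions = [act(s, I) for s, I in zip(states, intents)]
c_t = coordination_feedback(states)
```

In the full framework, $c_{t}$ would be routed back to the planning and grounding modules, and $\xi$ would be trained against $R+\lambda L_{aux}$.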
## Appendix G Discussion
The results demonstrate the efficacy of the proposed ReMALIS framework in enabling coordinated multi-agent collaboration for complex tasks. By propagating intentions between agents, establishing bidirectional feedback channels, and integrating recursive reasoning architectures, ReMALIS outperformed single-agent baselines and concurrent methods across difficulty levels on both the traffic flow prediction and web activities datasets.
The performance gains highlight the importance of fostering a shared understanding of goals and sub-tasks among agents through intention propagation. Communicating local beliefs allows agents to align their actions towards common objectives, leading to emergent coordinated behaviors that reduce misaligned sub-tasks and miscoordination errors. Furthermore, the bidirectional feedback channels play a crucial role in shaping the reasoning strategies of the planning and grounding modules based on the coordination patterns observed during execution. This adaptability enables the agents to adjust their comprehension and planning policies dynamically, resulting in more flexible and responsive behaviors.
The integration of recursive reasoning architectures also contributes to the superior performance of ReMALIS. By modeling the intentions and strategies of other agents, the execution agents can engage in more contextual and holistic reasoning, enhancing their ability to handle complex temporal dependencies and long-term planning horizons. This recursive reasoning capability further amplifies the benefits of intention propagation and bidirectional feedback, as agents can better interpret and leverage the shared information and coordination signals.
It is important to note that while ReMALIS demonstrates substantial improvements over single-agent frameworks, there are still limitations and potential areas for further research. For instance, the current implementation relies on a centralized training paradigm, which may hinder scalability in fully decentralized environments. Additionally, the framework does not explicitly handle dynamic agent arrival or departure during execution, which could impact coordination in real-world applications with fluid team compositions.
Future work could explore decentralized training approaches that maintain the benefits of multi-agent collaboration while addressing scalability concerns. Moreover, developing mechanisms to adaptively handle changes in the agent team during execution could enhance the robustness and flexibility of the framework in dynamic environments.
## Appendix H Supplementary application description of the overall framework
To further illustrate the practical applicability and versatility of our proposed ReMALIS framework, we present a supplementary application scenario. Figure 2 depicts a high-level overview of how ReMALIS can be employed in a real-world setting to tackle complex, multi-step tasks that require orchestrating multiple agents with diverse capabilities. This exemplary use case demonstrates the framework’s ability to decompose intricate problems into manageable sub-tasks, dynamically allocate appropriate agents, and seamlessly coordinate their actions to achieve the overarching goal efficiently and effectively.
Planning Module (Figure 4):
1. Analyze the current traffic conditions, including vehicle counts, road incidents, and construction zones.
2. Identify intersections experiencing congestion and potential bottlenecks.
3. Formulate high-level goals to alleviate congestion and optimize traffic flow.
4. Break down the goals into a sequence of sub-goals and sub-tasks.
5. Determine the dependencies and coordination needs between sub-tasks.
6. Plan the assignment of sub-tasks to specialized execution agents based on their expertise.
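The final assignment step above, matching sub-tasks to specialized agents by expertise, might be sketched as a greedy skill match. The agent names, expertise tags, and scoring rule here are hypothetical:

```python
# Hypothetical expertise tags per specialized execution agent.
AGENT_EXPERTISE = {
    "signal_agent":  {"signal_timing", "phase_control"},
    "routing_agent": {"rerouting", "incident_response"},
    "sensing_agent": {"vehicle_counts", "congestion_detection"},
}

def assign(subtasks):
    """Greedily assign each sub-task to the agent with the most matching skills."""
    plan = {}
    for task, needs in subtasks.items():
        plan[task] = max(AGENT_EXPERTISE,
                         key=lambda a: len(AGENT_EXPERTISE[a] & needs))
    return plan

plan = assign({
    "retime_main_street": {"signal_timing"},
    "detour_accident":    {"rerouting", "incident_response"},
})
```

A real planner would also respect the dependency and coordination constraints identified in the earlier steps, which this greedy sketch ignores.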
Figure 4: Overview of the proposed ReMALIS Planning Module for predicting sub-goals based on current goals, intentions, grounded embeddings, and agent feedback.
Figure 5: Framework of the proposed ReMALIS Grounding Module that contextualizes symbol embeddings using the current state, intentions, and feedback signals.
Grounding Module (Figure 5):
1. Contextualize the abstract traffic concepts and symbols into grounded representations.
2. Map entities like intersections, vehicles, and signal phases to their physical counterparts.
3. Resolve ambiguities and uncertainties in grounding based on the current traffic context.
4. Adjust grounding strategies based on feedback from execution agents and emerging coordination patterns.
5. Provide grounded embeddings to inform the execution agents’ decision-making.
Figure 6: Overview of our ReMALIS Cooperative Execution Module consisting of specialized agents that collaboratively execute actions and propagate intentions.
Execution Module (Figures 6 and 7):
1. Specialized agents monitor their respective domains (vehicle counts, road conditions, signal timings, etc.).
2. Agents communicate their local intentions and goals to relevant teammates.
3. Agents align their actions based on shared intentions and the coordinated plans.
4. Agents execute their assigned sub-tasks (adjusting signal phases, routing emergency vehicles, etc.).
5. Agents observe the impact of their actions and provide feedback on emerging coordination patterns.
6. Agents adapt their strategies dynamically based on the feedback and changing traffic conditions.
7. Agents continuously monitor and respond to fluctuations in vehicle arrival rates and traffic patterns.
8. Agents collaborate and coordinate their efforts to collectively alleviate congestion and optimize traffic flow.
Figure 7: Overview of the collaborative evaluation setup in the proposed ReMALIS framework.