# Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models
**Authors**: Xihe Qiu, Haoyu Wang, Xiaoyu Tan, Chao Qu, Yujie Xiong, Yuan Cheng, Yinghui Xu, Wei Chu, Yuan Qi
Abstract
Effective collaboration in multi-agent systems requires communicating goals and intentions between agents. Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication, frequently leading to suboptimal multi-agent reinforcement learning (MARL) policies and inadequate task coordination. To address these challenges, we present a framework for training large language models (LLMs) as collaborative agents to enable coordinated behaviors in cooperative MARL. Each agent maintains a private intention consisting of its current goal and associated sub-tasks. Agents broadcast their intentions periodically, allowing other agents to infer coordination tasks. A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates. The architecture of our framework is structured into planning, grounding, and execution modules. During execution, multiple agents interact in a downstream environment and communicate intentions to enable coordinated behaviors. The grounding module dynamically adapts comprehension strategies based on emerging coordination patterns, while feedback from execution agents influences the planning module, enabling the dynamic re-planning of sub-tasks. Results in collaborative environment simulations demonstrate that intention propagation reduces miscoordination errors by aligning sub-task dependencies between agents. Agents learn when to communicate intentions and which teammates require task details, resulting in emergent coordinated behaviors. This demonstrates the efficacy of intention sharing for cooperative multi-agent RL based on LLMs.
1 Introduction
With the recent advancements of large language models (LLMs), developing intelligent agents that can perform complex reasoning and long-horizon planning has attracted increasing research attention Sharan et al. (2023); Huang et al. (2022). A variety of agent frameworks have been proposed, such as ReAct Yao et al. (2022), LUMOS Yin et al. (2023), Chameleon Lu et al. (2023) and BOLT Chiu et al. (2024). These frameworks typically consist of modules for high-level planning, grounding plans into executable actions, and interacting with environments or tools to execute actions Rana et al. (2023).
Despite their initial success, existing agent frameworks exhibit several limitations. Firstly, most of them rely on a single agent for execution Song et al. (2023); Hartmann et al. (2022). As tasks become more complex, the action space can grow exponentially, posing significant challenges for a single agent that must handle all execution functionalities Chebotar et al. (2023); Wen et al. (2023). Secondly, existing frameworks lack inter-module communication mechanisms. Typically, execution results are fed directly into the planning module without further analysis or coordination Zeng et al. (2023); Wang et al. (2024b); when execution failures occur, the agent may fail to adjust its strategies accordingly Chaka (2023). Thirdly, the grounding module in existing frameworks operates statically, without interactions with downstream modules: it grounds plans independently, without considering feedback or states of the execution module Xi et al. (2023). As a result, LLMs struggle to handle emergent coordination behaviors and lack common grounding on shared tasks. Moreover, existing multi-agent reinforcement learning (MARL) methods often converge on suboptimal policies that fail to exhibit meaningful cooperation Gao et al. (2023); Yu et al. (2023).
How can agents built on LLMs effectively communicate and collaborate with each other? We propose a novel approach, Recursive Multi-Agent Learning with Intention Sharing (ReMALIS; the code is available at https://github.com/AnonymousBoy123/ReMALIS), to address the limitations of existing cooperative artificial intelligence (AI) multi-agent frameworks with LLMs. ReMALIS employs intention propagation between LLM agents to enable a shared understanding of goals and tasks. This common grounding allows agents to align intentions and reduce miscoordination. Additionally, we introduce bidirectional feedback loops between downstream execution agents and upstream planning and grounding modules. This enables execution coordination patterns to guide adjustments in grounding strategies and planning policies, resulting in more flexible emergent behaviors Topsakal and Akinci (2023). By integrating these mechanisms, ReMALIS significantly improves the contextual reasoning and adaptive learning capabilities of LLM agents during complex collaborative tasks. The execution module utilizes specialized agents that collaboratively execute actions, exchange information, and propagate intentions via intention networks. These propagated intentions reduce miscoordination errors and guide grounding module adjustments to enhance LLM comprehension based on coordination patterns Dong et al. (2023). Furthermore, execution agents can provide feedback to prompt collaborative re-planning in the planning module when necessary.
Compared to single-agent frameworks, the synergistic work of multiple specialized agents enhances ReMALIS's collective intelligence and leads to emergent team-level behaviors Wang et al. (2023). The collaborative design allows for dealing with more complex tasks that require distributed knowledge and skills. We demonstrate that:
- Intention propagation between execution agents enables emergent coordination behaviors and reduces misaligned sub-tasks.
- Grounding module strategies adjusted by intention sharing improve LLM scene comprehension.
- Planning module re-planning guided by execution feedback increases goal-oriented coordination.
Compared to various single-agent baselines and existing state-of-the-art MARL Hu and Sadigh (2023); Zou et al. (2023) methods using LLMs, our ReMALIS framework demonstrates improved performance on complex collaborative tasks, utilizing the publicly available large-scale traffic flow prediction (TFP) dataset and web-based activities dataset. This demonstrates its effectiveness in deploying LLMs as collaborative agents capable of intention communication, strategic adjustments, and collaborative re-planning Du et al. (2023).
2 Preliminary
In this section, we describe the proposed ReMALIS framework in detail. As illustrated in Figure 1, ReMALIS consists of four key components:
Figure 1: This framework introduces a multi-agent learning strategy designed to enhance the capabilities of LLMs through cooperative coordination. It enables agents to collaborate and share intentions for effective coordination, and utilizes recursive reasoning to model and adapt to each other’s strategies.
Planning Module $p_{\theta}$ predicts the next pending sub-goal $s_{t+1}$, given the current sub-goal $s_{t}$ and other inputs: $s_{t+1}=p_{\theta}(s_{t},I_{t},e_{t},f_{t})$, where $I_{t}$ is the current intention, $e_{t}$ is the grounded embedding, and $f_{t}$ is agent feedback. $p_{\theta}$ first encodes the information through encoding layers, $h_{t}=\text{Encoder}(s_{t},I_{t},e_{t},f_{t})$, and subsequently predicts the sub-goal through $s_{t+1}=\text{Softmax}(T_{\theta}(h_{t}))$, where $T_{\theta}$ utilizes a graph neural network (GNN) architecture.
The module is trained to maximize the likelihood of all sub-goals along the decision sequence, given the information available at each time step $t$. This allows dynamic re-planning of sub-task dependencies based on agent feedback:
$$
\theta^{*}=\arg\max_{\theta}\prod_{t=1}^{T}p_{\theta}(s_{t+1}|s_{t},I_{t},e_{t},f_{t}). \tag{1}
$$
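To make the planning step concrete, the following is a minimal PyTorch sketch of $p_{\theta}$: an encoder over the concatenated inputs, followed by a classification head standing in for the GNN $T_{\theta}$, trained with the negative log-likelihood corresponding to Eq. 1. All dimensions, the two-layer architecture, and the toy batch are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PlanningModule(nn.Module):
    """Sketch of p_theta: encode (s_t, I_t, e_t, f_t), predict the next sub-goal."""

    def __init__(self, d: int, d_hidden: int, n_subgoals: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(4 * d, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_subgoals)  # stand-in for the GNN T_theta

    def forward(self, s_t, I_t, e_t, f_t):
        h_t = self.encoder(torch.cat([s_t, I_t, e_t, f_t], dim=-1))
        return torch.log_softmax(self.head(h_t), dim=-1)  # log p(s_{t+1} | .)

# Training minimizes the summed negative log-likelihood, the log-space
# equivalent of maximizing the product of sub-goal likelihoods in Eq. 1.
model = PlanningModule(d=32, d_hidden=64, n_subgoals=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s, I, e, f = (torch.randn(8, 32) for _ in range(4))  # toy batch of 8 steps
target = torch.randint(0, 10, (8,))                  # ground-truth next sub-goals
loss = nn.functional.nll_loss(model(s, I, e, f), target)
loss.backward()
opt.step()
```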
Grounding Module $g_{\phi}$ contextualizes symbol embeddings $e_{t}=g_{\phi}(s_{t},I_{t},f_{1:t})$, where $s_{t}$, $I_{t}$, and $f_{1:t}$ represent the states, intention, and feedback up to time step $t$, respectively. These embeddings are processed by encoders, $h_{t}=\text{Encoder}(s_{t},I_{t},f_{1:t})$, and then by cross-attention layers and convolutional feature extractors, $e_{t}=\text{Conv}(\text{Attn}(h_{t},V))+P_{t}$, over vocabulary $V$. Here, $P_{t}$ incorporates agent feedback so that grounding can exploit coordination signals for more accurate contextual understanding. The module maps language symbols to physical environment representations through:
$$
g(x)=f_{\theta}\left(\sum_{i=1}^{N}w_{i}g(x_{i})\right), \tag{2}
$$
where $g(x)$ is the grounded embedding of the policy set $x$, $g(x_{i})$ is the individual action embedding of agent $i$, and $w_{i}$ are learnable weights. The grounding function $f_{\theta}$ utilizes a GNN architecture for structural composition. Additionally, we employ an uncertainty modeling module that represents ambiguities in grounding:
$$
q_{\phi}(z|x)=\text{Normal}\big(z;\mu_{\phi}(x),\sigma^{2}_{\phi}(x)\big), \tag{3}
$$
where $z$ is a latent variable modeled as a normal distribution, enabling the capture of multimodal uncertainties in grounding.
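Below is a minimal sketch of the grounding computation together with the uncertainty head of Eq. 3, assuming PyTorch and illustrative layer sizes. The cross-attention over an embedded vocabulary, the single-token convolution, and the reparameterized Gaussian sample are stand-ins for the unspecified details of $g_{\phi}$.

```python
import torch
import torch.nn as nn

class GroundingModule(nn.Module):
    """Sketch of g_phi with the Gaussian uncertainty head of Eq. 3."""

    def __init__(self, d: int, vocab_size: int):
        super().__init__()
        self.encoder = nn.Linear(3 * d, d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.conv = nn.Conv1d(d, d, kernel_size=3, padding=1)
        self.vocab = nn.Parameter(torch.randn(vocab_size, d))  # embedded vocabulary V
        self.mu = nn.Linear(d, d)
        self.log_var = nn.Linear(d, d)

    def forward(self, s_t, I_t, f_t, P_t):
        h = self.encoder(torch.cat([s_t, I_t, f_t], dim=-1)).unsqueeze(1)  # (B, 1, d)
        V = self.vocab.unsqueeze(0).expand(h.size(0), -1, -1)
        a, _ = self.attn(h, V, V)                                  # Attn(h_t, V)
        e_t = self.conv(a.transpose(1, 2)).transpose(1, 2).squeeze(1) + P_t
        mu, log_var = self.mu(e_t), self.log_var(e_t)              # q_phi(z|x) parameters
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterized sample
        return e_t, z, mu, log_var

g = GroundingModule(d=32, vocab_size=50)
inputs = [torch.randn(4, 32) for _ in range(4)]  # toy s_t, I_t, f_t, P_t
e_t, z, mu, log_var = g(*inputs)
```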
Cooperative Execution Module comprises $N$ specialized agents $\{A_{1},...,A_{N}\}$ . This architecture avoids using a single agent to handle all tasks. Instead, each agent is dedicated to a distinct semantic domain, cultivating expertise specific to that domain. For instance, agents $A_{1},A_{2},$ and $A_{3}$ may be dedicated to query processing, information retrieval, and arithmetic operations, respectively. This specialization promotes an efficient distribution of tasks and reduces overlap in capabilities.
Decomposing skills into specialized agents risks creating isolated capabilities that lack coordination. To address this, it is essential that agents not only excel individually but also comprehend the capacities and limitations of their peers. We propose an integrated training approach where specialized agents are trained simultaneously to foster collaboration and collective intelligence. We represent the parameters of agent $A_{i}$ as $\theta_{i}$ . Each agent’s policy, denoted as $y_{i}\sim\pi_{\theta_{i}}(·|s)$ , samples an output $y_{i}$ from a given input state $s$ . The training objective for our system is defined by the following equation:
$$
L_{exe}=\sum_{i=1}^{N}\mathbb{E}_{(s,y^{\star})\sim\mathcal{D}}\,\ell(\pi_{\theta_{i}}(y_{i}|s),y^{\star}), \tag{4}
$$
where $\ell(·)$ represents the task-specific loss function, comparing the agent-generated output $y_{i}$ with the ground-truth label $y^{\star}$ . $\mathcal{D}$ denotes the distribution of training data. By optimizing this objective collectively across all agents, each agent not only improves its own output accuracy but also enhances the overall team’s ability to produce coherent and well-coordinated results.
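A sketch of the joint objective in Eq. 4, summing per-agent task losses so that all specialists are optimized together. The domain names, the per-domain routing of batches, and cross-entropy as the task loss $\ell$ are assumptions.

```python
import torch
import torch.nn as nn

def execution_loss(agents, batch):
    """Sketch of Eq. 4: sum task losses over the N specialized agents so that
    all specialists are optimized jointly rather than in isolation."""
    loss = torch.tensor(0.0)
    for name, policy in agents.items():
        s, y_star = batch[name]                              # per-domain (state, label)
        loss = loss + nn.functional.cross_entropy(policy(s), y_star)
    return loss

# Toy usage: three specialists for query processing, retrieval, and arithmetic.
agents = {k: nn.Linear(16, 5) for k in ["query", "retrieval", "arithmetic"]}
batch = {k: (torch.randn(4, 16), torch.randint(0, 5, (4,))) for k in agents}
execution_loss(agents, batch).backward()
```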
During training, we adjust the decomposition of grounding tasks to enhance collaboration, represented by the soft module weights $\{w_{1},...,w_{N}\}$. These weights indicate how the distribution of grounding commands can be optimized to better utilize the capabilities of different agents. The objective of this training is defined by the loss function $L_{com}=\ell(d,w^{\star})$, where $\ell$ is the loss function, $d$ denotes the sub-goal task instruction data, and $w^{\star}$ signifies the optimal set of weights.
Figure 2: Overview of the proposed ReMALIS: This framework comprises a planning module, grounding module, cooperative execution module, and intention coordination channels.
3 Approach
The collaborative MARL of ReMALIS focuses on three key points: intention propagation for grounding, bidirectional coordination channels, and integration with recursive reasoning agents. Detailed parameter supplements and pseudocode details can be found in Appendix C and Appendix F.
3.1 Planning with Intention Propagation
We formulate a decentralized, partially observable Markov game for multi-agent collaboration. Each agent $i$ maintains a private intention $\mathcal{I}_{i}$ encoded as a tuple $\mathcal{I}_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ , where $\gamma_{i}$ is the current goal, $\Sigma_{i}=\{\sigma_{i1},\sigma_{i2},...\}$ is a set of related sub-goals, $\pi_{i}(\sigma)$ is a probability distribution over possible next sub-goals, and $\delta_{i}(\sigma)$ is the desired teammate assignment for sub-goal $\sigma$ .
Intentions are propagated through a communication channel $f_{\Lambda}$ parameterized by $\Lambda$, implemented as a recurrent neural network. For a received message $m_{ij}$ from agent $j$, agent $i$ infers a belief over teammate $j$'s intention: $b_{i}(\mathcal{I}_{j}|m_{ij})=f_{\Lambda}(m_{ij})$. The channel $f_{\Lambda}$ is trained end-to-end to maximize the coordination reward function $R_{c}$, which propagates relevant sub-task dependencies to enhance common grounding on collaborative goals:
$$
\Lambda^{*}=\arg\max_{\Lambda}\mathbb{E}_{\mathcal{I},m\sim f_{\Lambda}}[R_{c}(\mathcal{I},m)]. \tag{5}
$$
At each time step $t$, the LLM processes inputs comprising the agent's state $s_{t}$, the intention $\mathcal{I}_{t}$, and the feedback $f_{1:t}$.
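The following sketch illustrates one way to realize the channel $f_{\Lambda}$: a GRU encodes a broadcast message and a softmax head produces a belief over the teammate's candidate goals. The tokenized message format, the GRU encoder, and the categorical belief head are assumptions; the paper specifies only that the channel is recurrent.

```python
import torch
import torch.nn as nn

class IntentionChannel(nn.Module):
    """Sketch of f_Lambda: encode a broadcast message m_ij with a GRU and
    output a belief b_i(I_j | m_ij) over the teammate's candidate goals."""

    def __init__(self, vocab_size: int, d: int, n_goals: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.belief_head = nn.Linear(d, n_goals)

    def forward(self, m_ij):                       # m_ij: (B, T) token ids
        _, h = self.rnn(self.embed(m_ij))          # final hidden state summarizes m_ij
        return torch.softmax(self.belief_head(h[-1]), dim=-1)

channel = IntentionChannel(vocab_size=100, d=32, n_goals=8)
belief = channel(torch.randint(0, 100, (4, 12)))   # (4, 8) beliefs over goals
```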
3.2 Grounding with Bidirectional Coordination Channels
The execution agent policies, denoted by $\pi_{\xi}(a_{i}|s_{i},\mathcal{I}_{i})$ , are parameterized by $\xi$ and conditioned on the agent’s state $s_{i}$ and intention $\mathcal{I}_{i}$ . Emergent coordination patterns are encoded in a summary statistic $c_{t}$ and passed to upstream modules to guide planning and grounding adjustments. For example, frequent miscoordination on sub-goal $\sigma$ indicates the necessity to re-plan $\sigma$ dependencies in $\mathcal{I}$ .
This bidirectional feedback aligns low-level execution with high-level comprehension strategies. In addition to the downstream propagation of intentions, execution layers provide feedback signals $\psi(t)=\Phi(h^{\text{exec}}_{t})$ to upstream modules:
$$
h^{\text{exec}}_{t}=[\phi_{1}(o_{1}),\ldots,\phi_{N}(o_{N})], \tag{6}
$$
where $\Phi(·)$ aggregates agent encodings to summarize emergent coordination, and $\phi_{i}(·)$ encodes the observation $o_{i}$ for agent $i$ .
Execution agents generate feedback $f_{t}$ to guide upstream LLM modules through: $f_{t}=g_{\theta}(\tau_{1:t})$ , where $g_{\theta}$ processes the action-observation history $\tau_{1:t}$ . These signals include coordination errors $\mathcal{E}_{t}$ which indicate misalignment of sub-tasks; grounding uncertainty $\mathcal{U}_{t}$ , measured as entropy over grounded symbol embeddings; and re-planning triggers $\mathcal{R}_{t}$ , which flag the need for sub-task reordering. These signals can reflect inconsistencies between sub-task objectives, the ambiguity of symbols in different contexts, and the need to adjust previous sub-task sequencing.
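A sketch of how the three feedback signals could be computed from execution-level quantities; the mean-pool aggregator $\Phi$, the entropy estimate of $\mathcal{U}_{t}$, and the threshold rule for the re-planning trigger $\mathcal{R}_{t}$ are all illustrative assumptions.

```python
import torch

def feedback_signals(agent_encodings, grounded_probs, error_count,
                     err_threshold=3, entropy_threshold=1.5):
    """Sketch of the upstream feedback f_t = (c_t, E_t, U_t, R_t)."""
    # Phi: aggregate per-agent encodings [phi_1(o_1), ..., phi_N(o_N)] by mean pooling.
    c_t = torch.stack(agent_encodings).mean(dim=0)
    # U_t: mean entropy of the grounded symbol distributions.
    U_t = -(grounded_probs * grounded_probs.clamp_min(1e-9).log()).sum(-1).mean()
    # E_t: count of misaligned sub-tasks observed this step.
    E_t = float(error_count)
    # R_t: re-planning trigger fires when errors or ambiguity exceed thresholds.
    R_t = error_count >= err_threshold or U_t.item() >= entropy_threshold
    return c_t, E_t, U_t, R_t

encs = [torch.randn(32) for _ in range(3)]         # one encoding per agent
probs = torch.softmax(torch.randn(5, 10), dim=-1)  # grounded symbol distributions
c_t, E_t, U_t, R_t = feedback_signals(encs, probs, error_count=2)
```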
Algorithm 1 ReMALIS: Recursive Multi-Agent Learning with Intention Sharing
1: Initialize LLM parameters $\theta,\phi,\omega$
2: Initialize agent policies $\pi_{\xi}$, communication channel $f_{\Lambda}$
3: Initialize grounding confusion matrix $C$ , memory $M$
4: for each episode do
5: for each time step $t$ do
6: Observe states $s_{t}$ and feedback $f_{1:t}$ for all agents
7: Infer intentions $\mathcal{I}_{t}$ from $s_{t},f_{1:t}$ using $\text{LLM}_{\theta}$
8: Propagate intentions $\mathcal{I}_{t}$ through channel $f_{\Lambda}$
9: Compute grounded embeddings $e_{t}=g_{\phi}(s_{t},\mathcal{I}_{t},f_{1:t})$
10: Predict sub-tasks $\Sigma_{t+1}=p_{\theta}(\mathcal{I}_{t},e_{t},f_{1:t})$
11: Generate actions $a_{t}=a_{\omega}(e_{t},\Sigma_{t+1},f_{1:t})$
12: Execute actions $a_{t}$ and observe rewards $r_{t}$ , new states $s_{t+1}$
13: Encode coordination patterns $c_{t}=\Phi(h^{\text{exec}}_{t})$
14: Update grounding confusion $C_{t},M_{t}$ using $c_{t}$
15: Update policies $\pi_{\xi}$ using $R$ and auxiliary loss $\mathcal{L}_{\text{aux}}$
16: Update LLM $\theta,\phi,\omega$ using $\mathcal{L}_{\text{RL}},\mathcal{L}_{\text{confusion}}$
17: end for
18: end for
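Below is a runnable skeleton of Algorithm 1's inner loop, with every module replaced by a stub so the data flow between lines 7-16 is visible. All shapes, the stub implementations, and the toy memory update are placeholders, not the actual LLM components.

```python
import torch

# Stubs standing in for the LLM and policy components of Algorithm 1.
infer_intentions = lambda s, f: torch.randn(4, 8)                 # line 7: LLM_theta
propagate        = lambda I: I + 0.01 * torch.randn_like(I)       # line 8: channel f_Lambda
ground           = lambda s, I, f: torch.randn(4, 8)              # line 9: g_phi
plan             = lambda I, e, f: torch.randn(4, 8)              # line 10: p_theta
act              = lambda e, Sig, f: torch.randn(4, 2)            # line 11: a_omega
env_step         = lambda a: (torch.randn(4), torch.randn(4, 8))  # line 12: environment

s_t, f_hist = torch.randn(4, 8), torch.zeros(4, 8)
M = torch.zeros(8, 8)                                             # episodic confusion memory
for t in range(10):                                               # one toy episode
    I_t = propagate(infer_intentions(s_t, f_hist))                # lines 7-8
    e_t = ground(s_t, I_t, f_hist)                                # line 9
    Sigma_next = plan(I_t, e_t, f_hist)                           # line 10
    a_t = act(e_t, Sigma_next, f_hist)                            # line 11
    r_t, s_t = env_step(a_t)                                      # line 12
    c_t = e_t.mean(dim=0)                                         # line 13: Phi(h_t^exec)
    M += torch.outer(c_t, c_t).relu()                             # line 14: toy memory update
    # Lines 15-16 would backpropagate L_RL + lambda * L_aux + L_confusion here.
```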
3.3 Execution: Integration with Reasoning Agents
3.3.1 Agent Policy Generation
We parameterize agent policies $\pi_{\theta}(a_{t}|s_{t},\mathcal{I}_{t},c_{1:t})$ using an LLM with weights $\theta$ . At each time step, the LLM takes as input the agent’s state $s_{t}$ , intention $\mathcal{I}_{t}$ , and coordination feedback $c_{1:t}$ . The output is a distribution over the next actions $a_{t}$ :
$$
\pi_{\theta}(a_{t}|s_{t},\mathcal{I}_{t},c_{1:t})=\text{LLM}_{\theta}(s_{t},\mathcal{I}_{t},c_{1:t}). \tag{7}
$$
To leverage agent feedback $f_{1:t}$ , we employ an auxiliary regularization model $\hat{\pi}_{\phi}(a_{t}|s_{t},f_{1:t})$ :
$$
\mathcal{L}_{\text{aux}}(\theta;s_{t},f_{1:t})=\text{MSE}(\pi_{\theta}(s_{t}),\hat{\pi}_{\phi}(s_{t},f_{1:t})), \tag{8}
$$
where $\hat{\pi}_{\phi}$ is a feedback-conditioned policy approximation. The training loss to optimize $\theta$ is:
$$
\mathcal{L}(\theta)=\mathcal{L}_{\text{RL}}(\theta)+\lambda\mathcal{L}_{\text{aux}}(\theta), \tag{9}
$$
where $\mathcal{L}_{\text{RL}}$ is the reinforcement learning objective and $\lambda$ is a weighting factor.
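A compact sketch of Eqs. 8-9 follows, assuming small linear stand-ins for $\pi_{\theta}$ and $\hat{\pi}_{\phi}$; detaching the feedback-conditioned target is a design choice we assume here, so that only the policy is pulled toward the approximation.

```python
import torch
import torch.nn as nn

pi_theta = nn.Linear(16, 4)      # stand-in policy head over 4 actions
pi_hat = nn.Linear(16 + 8, 4)    # feedback-conditioned approximation pi_hat_phi

def total_loss(s_t, f_t, rl_loss, lam=0.1):
    p = torch.softmax(pi_theta(s_t), dim=-1)
    # Detach the target so only the policy is pulled toward the approximation.
    q = torch.softmax(pi_hat(torch.cat([s_t, f_t], dim=-1)), dim=-1).detach()
    l_aux = nn.functional.mse_loss(p, q)     # Eq. 8
    return rl_loss + lam * l_aux             # Eq. 9

loss = total_loss(torch.randn(4, 16), torch.randn(4, 8), rl_loss=torch.tensor(0.5))
loss.backward()
```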
3.3.2 Grounding Strategy Adjustment
We model action dependencies using a graph neural policy module $h_{t}^{a}=\text{GNN}(s_{t},a)$ , where $h_{t}^{a}$ models interactions between action $a$ and the state $s_{t}$ . The policy is then given by $\pi_{\theta}(a_{t}|s_{t})=\prod_{i=1}^{|A|}h_{t}^{a_{i}}$ . This captures the relational structure in the action space, enabling coordinated action generation conditioned on agent communication.
The coordination feedback $c_{t}$ is used to guide adjustments in the grounding module’s strategies. We define a grounding confusion matrix $C_{t}$ , where $C_{t}(i,j)$ represents grounding errors between concepts $i$ and $j$ . The confusion matrix constrains LLM grounding as:
$$
f_{\phi}(s_{t},\mathcal{I}_{t})=\text{LLM}_{\phi}(s_{t},\mathcal{I}_{t})\odot\lambda C_{t}, \tag{10}
$$
where $\odot$ is element-wise multiplication and $\lambda$ controls the influence of $C_{t}$ , reducing uncertainty on error-prone concept pairs.
We propose a modular regularization approach, with the grounding module $g_{\phi}$ regularized by a coordination confusion estimator:
$$
\mathcal{L}_{\text{confusion}}=\frac{1}{N}\sum_{i,j}A_{\psi}(c_{i},c_{j})\cdot\text{Conf}(c_{i},c_{j}), \tag{11}
$$
where $\text{Conf}(c_{i},c_{j})$ measures confusion between concepts $c_{i}$ and $c_{j}$, and $A_{\psi}(c_{i},c_{j})$ are attention weights assigning importance based on grounding sensitivity.
An episodic confusion memory $M_{t}$ accumulates long-term grounding uncertainty statistics:
$$
M_{t}(i,j)=M_{t-1}(i,j)+\mathbb{I}(\text{Confuse}(c_{i},c_{j})_{t}), \tag{12}
$$
where $\mathbb{I}(·)$ are indicator functions tracking confusion events. By regularizing with a coordination-focused confusion estimator and episodic memory, the grounding module adapts to avoid miscoordination.
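A sketch tying Eqs. 10-12 together: element-wise modulation of grounding scores by $C_{t}$, indicator-based accumulation into the episodic memory $M_{t}$, and an attention-weighted confusion penalty. The shapes, the normalization of $M_{t}$ into a confusion estimate, and the modulation coefficient are assumptions.

```python
import torch

n = 6                                            # number of grounded concepts
C_t = torch.rand(n, n)                           # grounding confusion matrix
M_t = torch.zeros(n, n)                          # episodic confusion memory
A_psi = torch.softmax(torch.rand(n, n), dim=-1)  # attention weights A_psi

def constrained_grounding(llm_scores, lam=0.5):
    return llm_scores * (lam * C_t)              # Eq. 10: element-wise modulation

def update_memory(confused_pairs):
    for i, j in confused_pairs:                  # Eq. 12: indicator-based accumulation
        M_t[i, j] += 1.0

def confusion_loss():
    conf = M_t / M_t.sum().clamp_min(1.0)        # empirical Conf(c_i, c_j) estimate
    return (A_psi * conf).sum() / n              # Eq. 11

update_memory([(0, 1), (2, 3)])
scores = constrained_grounding(torch.rand(n, n))
l_conf = confusion_loss()
```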
3.4 Collective Learning and Adaptation
The coordination feedback signals $c_{t}$ and interpretability signals $\mathcal{E}_{t},\mathcal{U}_{t},\mathcal{R}_{t}$ play a crucial role in enabling the LLM agents to adapt and learn collectively. By incorporating these signals into the training process, the agents can adjust their strategies and policies to better align with the emerging coordination patterns and requirements of the collaborative tasks.
The collective learning process can be formalized as an optimization problem whose goal is to minimize the objective function
$$
\mathcal{L}(\eta,\gamma,\zeta,\xi)=\mathbb{E}_{s_{t},\mathcal{I}_{t},f_{1:t}}\left[\alpha\,\mathcal{U}_{t}+\beta\,\mathcal{E}_{t}-\mathcal{R}\right]+\Omega(\eta,\gamma,\zeta,\xi).
$$
Here, $\alpha$ and $\beta$ are weighting factors that balance the contributions of the grounding uncertainty $\mathcal{U}_{t}$ and coordination errors $\mathcal{E}_{t}$, respectively. The team reward $\mathcal{R}$ is maximized to encourage collaborative behavior. The term $\Omega(\eta,\gamma,\zeta,\xi)$ represents regularization terms or constraints on the model parameters to ensure stable and robust learning.
The objective function $\mathcal{L}$ is defined over the current state $s_{t}$ , the interpretability signals $\mathcal{I}_{t}=\{\mathcal{E}_{t},\mathcal{U}_{t},\mathcal{R}_{t}\}$ , and the trajectory of feedback signals $f_{1:t}=\{c_{1},\mathcal{I}_{1},...,c_{t},\mathcal{I}_{t}\}$ up to the current time step $t$ . The expectation $\mathbb{E}_{s_{t},\mathcal{I}_{t},f_{1:t}}[·]$ is taken over the distribution of states, interpretability signals, and feedback signal trajectories encountered during training.
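A minimal sketch of the collective objective, combining weighted uncertainty and error terms, the (negated) team reward, and an L2 instance of the regularizer $\Omega$; the coefficients and the L2 form are assumptions.

```python
import torch

def collective_objective(U_t, E_t, R, params, alpha=1.0, beta=1.0, omega=1e-4):
    """Weighted uncertainty and error terms, negated team reward, L2 regularizer."""
    reg = omega * sum(p.pow(2).sum() for p in params)   # one instance of Omega(.)
    return alpha * U_t + beta * E_t - R + reg

params = [torch.randn(8, 8, requires_grad=True)]
loss = collective_objective(U_t=torch.tensor(1.2), E_t=torch.tensor(0.3),
                            R=torch.tensor(2.0), params=params)
loss.backward()
```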
| Method | Web Easy | Web Medium | Web Hard | Web All | TFP Easy | TFP Medium | TFP Hard | TFP Hell |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5-Turbo | | | | | | | | |
| CoT | 65.77 | 51.62 | 32.45 | 17.36 | 81.27 | 68.92 | 59.81 | 41.27 |
| Zero-Shot Plan | 57.61 | 52.73 | 28.92 | 14.58 | 82.29 | 63.77 | 55.39 | 42.38 |
| Llama2-7B | | | | | | | | |
| CoT | 59.83 | 54.92 | 30.38 | 15.62 | 82.73 | 65.81 | 57.19 | 44.58 |
| ReAct | 56.95 | 41.86 | 27.59 | 13.48 | 81.15 | 61.65 | 53.97 | 43.25 |
| ART | 62.51 | 52.34 | 33.81 | 18.53 | 81.98 | 63.23 | 51.78 | 46.83 |
| ReWOO | 63.92 | 53.17 | 34.95 | 19.37 | 82.12 | 71.38 | 61.23 | 47.06 |
| AgentLM | 62.14 | 46.75 | 30.84 | 15.98 | 82.96 | 66.03 | 57.16 | 43.91 |
| FireAct | 64.03 | 50.68 | 32.78 | 17.49 | 83.78 | 68.19 | 58.94 | 45.06 |
| LUMOS | 66.27 | 53.81 | 35.37 | 19.53 | 84.03 | 71.75 | 62.57 | 51.49 |
| Llama3-8B | | | | | | | | |
| Code-Llama (PoT) | 64.85 | 49.49 | 32.16 | 17.03 | 83.34 | 68.47 | 59.15 | 52.64 |
| AgentLM | 66.77 | 51.45 | 31.59 | 16.58 | 85.26 | 71.81 | 58.68 | 53.39 |
| FireAct | 68.92 | 53.27 | 32.95 | 17.64 | 84.11 | 72.15 | 58.63 | 51.65 |
| DGN | 69.15 | 54.78 | 33.63 | 18.17 | 83.42 | 71.08 | 62.34 | 53.57 |
| LToS | 68.48 | 55.03 | 33.06 | 17.71 | 85.77 | 74.61 | 59.37 | 54.81 |
| AUTOACT | 67.62 | 56.25 | 31.84 | 16.79 | 87.89 | 76.29 | 58.94 | 52.87 |
| ReMALIS(Ours) | 73.92 | 58.64 | 38.37 | 21.42 | 89.15 | 77.62 | 64.53 | 55.37 |
Table 1: Comparative analysis of the ReMALIS framework against single-agent baselines and contemporary methods across two datasets
4 Experiments
4.1 Datasets
To assess the performance of our models, we conducted evaluations using two large-scale real-world datasets: the traffic flow prediction (TFP) dataset and the web-based activities dataset.
TFP dataset comprises 100,000 traffic scenarios, each accompanied by its corresponding flow outcome. Each example includes descriptions of road conditions, vehicle count, weather, and traffic control measures, and is labeled with one of three traffic flow classes: smooth, congested, or jammed. The raw data was sourced from traffic cameras, incident reports, and simulations, and underwent preprocessing to normalize entities and eliminate duplicates.
Web activities dataset contains over 500,000 examples of structured web interactions such as booking flights, scheduling appointments, and making reservations. Each activity follows a template with multiple steps like searching, selecting, filling forms, and confirming. User utterances and system responses were extracted to form the input-output pairs across 150 domains, originating from real anonymized interactions with chatbots, virtual assistants, and website frontends.
4.2 Implementation Details
To handle the computational demands of training our framework with LLMs, we employ 8 Nvidia A800-80G GPUs Chen et al. (2024) under the DeepSpeed Aminabadi et al. (2022) training framework, which can effectively accommodate the extensive parameter spaces and activations required by our framework’s LLM components and multi-agent architecture Rasley et al. (2020).
For the TFP dataset, we classified the examples into four difficulty levels: “Easy”, “Medium”, “Hard”, and “Hell”. The “Easy” level comprises small grid networks with low, stable vehicle arrival rates. The “Medium” level includes larger grids with variable arrival rates. “Hard” tasks feature large, irregular networks with highly dynamic arrival rates and complex intersection configurations. The “Hell” level introduces challenges such as partially observable states, changing road conditions, and fully decentralized environments.
For the web activities dataset, we divided the tasks into “Easy”, “Medium”, “Hard”, and “All” levels. “Easy” tasks required basic single-click or short phrase interactions. “Medium” involved complex multi-page sequences like form submissions. “Hard” tasks demanded significant reasoning through ambiguous, dense websites. The “All” level combined tasks across the full difficulty spectrum.
The dataset was divided into 80% for training, 10% for validation, and 10% for testing, with examples shuffled. These large-scale datasets offer a challenging and naturalistic benchmark to evaluate our multi-agent framework on complex, real-world prediction and interaction tasks.
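For concreteness, a sketch of the shuffled 80/10/10 split described above; the fixed seed and the toy record format are assumptions.

```python
import random

examples = [{"id": i} for i in range(100_000)]  # placeholder records
random.Random(0).shuffle(examples)              # fixed seed for reproducibility
n = len(examples)
train = examples[: int(0.8 * n)]                # 80% training
val = examples[int(0.8 * n): int(0.9 * n)]      # 10% validation
test = examples[int(0.9 * n):]                  # 10% testing
```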
4.3 Results and Analysis
Table 1 displays the principal experimental results of our ReMALIS framework in comparison with various single-agent baselines and contemporary methods on the web activities and TFP datasets. We evaluated the models across four difficulty levels per dataset: “Easy”, “Medium”, “Hard”, and “All” for the web tasks, and “Easy”, “Medium”, “Hard”, and “Hell” for TFP.
The results of our comparative analysis indicate that ReMALIS, equipped with a 7B-parameter LLM backbone, significantly outperforms competing methods. On the most demanding TFP “Hell” level, ReMALIS achieved a score of 55.37%, ahead of LUMOS at 51.49%, and edged out AUTOACT (52.87%) by roughly 2.5 percentage points. These findings highlight the efficacy of ReMALIS's parameter-efficient design and its advanced multi-agent collaborative training approach, which allow it to outperform larger single-agent LLMs.
Notably, ReMALIS also exceeded the performance of GPT-3.5 (Turbo), a substantially larger foundation model, across all difficulty levels. On the aggregate web “All” level, ReMALIS's 21.42% surpassed GPT-3.5's 17.36% by over 4 points. This indicates that ReMALIS's coordination mechanisms transform relatively modest LLMs into highly capable collaborative agents.
Despite their larger sizes, single-agent approaches such as GPT-3.5 CoT, ReAct, and AgentLM significantly underperformed; even the advanced single-agent method LUMOS could not rival ReMALIS. This superiority is attributable to ReMALIS's specialized multi-agent design and its novel features, including intention propagation, bidirectional feedback, and recursive reasoning. On the aggregate web “All” tasks, which demand extensive reasoning across the full difficulty spectrum, ReMALIS achieved 21.42%, surpassing LUMOS (19.53%) by nearly 2 percentage points, highlighting the benefits of its multi-agent architecture and collaborative learning mechanisms.
ReMALIS also performs strongly on the Traffic Flow Prediction (TFP) dataset. On the “Easy” difficulty level, ReMALIS achieved an accuracy of 89.15%, outperforming the second-best method, AUTOACT, by 1.26%. In the “Medium” category, ReMALIS secured an accuracy of 77.62%, surpassing AUTOACT's 76.29% by 1.33%. Even in the most challenging “Hard” and “Hell” levels, ReMALIS maintained its lead with accuracies of 64.53% and 55.37%, respectively, outperforming the next-best methods, DGN (62.34%) and LToS (54.81%), by 2.19% and 0.56%.
4.4 Ablation Studies
1) The Impact on Improving Multi-Agent Coordination Accuracy. We conduct ablation studies to evaluate the impact of each component within the ReMALIS framework; the results are shown in Table 2. Excluding intention propagation decreases accuracy by over 6% across both datasets, reflecting the difficulty of achieving common grounding among agents without shared local beliefs. This highlights the importance of intention sharing for emergent team behaviors.
The absence of bidirectional coordination channels leads to a 4.37% decline in performance across various metrics, illustrating the importance of execution-level signals in shaping planning and grounding strategies. Without feedback coordination, agents become less responsive to new scenarios that require re-planning.
Table 2: Ablation studies on Traffic and Web datasets
| Dataset | Configuration | Accuracy | Metric 1 | Metric 2 |
| --- | --- | --- | --- | --- |
| Traffic | Single Agent Baseline | 42.5% | 0.217 | 0.384 |
| Traffic | Intention Propagation | 47.3% | 0.251 | 0.425 |
| Traffic | Bidirectional Feedback | 49.8% | 0.278 | 0.461 |
| Traffic | Recursive Reasoning | 53.2% | 0.311 | 0.503 |
| Traffic | ReMALIS (Full) | 58.7% | 0.342 | 0.538 |
| Web | Single Agent Baseline | 38.9% | 0.255 | 0.416 |
| Web | Intention Propagation | 42.7% | 0.283 | 0.453 |
| Web | Bidirectional Feedback | 46.3% | 0.311 | 0.492 |
| Web | Recursive Reasoning | 50.6% | 0.345 | 0.531 |
| Web | ReMALIS (Full) | 55.4% | 0.379 | 0.567 |
Substituting recursive reasoning with convolutional and recurrent neural networks reduces contextual inference accuracy by 5.86%. Non-recursive agents display short-sighted behavior compared to the holistic reasoning enabled by recursive transformer modeling. This emphasizes that recursive architectures are vital for complex temporal dependencies.
Figure 3: Comparative performance evaluation across varying task difficulty levels for the web activities dataset, which indicates the accuracy scores achieved by ReMALIS and several state-of-the-art baselines.
Table 3: Ablation on agent coordination capabilities
| Method | Aligned Sub-Tasks (Easy) | Aligned Sub-Tasks (Medium) | Aligned Sub-Tasks (Hard) | Coordination Time, ms (Easy) | Coordination Time, ms (Medium) | Coordination Time, ms (Hard) |
| --- | --- | --- | --- | --- | --- | --- |
| No Communication | 31% | 23% | 17% | 592 | 873 | 1198 |
| ReAct | 42% | 34% | 29% | 497 | 732 | 984 |
| AgentLM | 48% | 39% | 32% | 438 | 691 | 876 |
| FireAct | 58% | 47% | 37% | 382 | 569 | 745 |
| Basic Propagation | 68% | 53% | 41% | 314 | 512 | 691 |
| Selective Propagation | 79% | 62% | 51% | 279 | 438 | 602 |
| Full Intention Sharing | 91% | 71% | 62% | 248 | 386 | 521 |
2) The Impact on Improving Multi-Agent Coordination Capability. As presented in Table 3, on aligned sub-task percentage, the proposed Basic Propagation, Selective Propagation, and Full Intention Sharing methods consistently outperform baseline models such as ReAct and AgentLM across varying difficulty levels (“Easy”, “Medium”, and “Hard”). For example, Full Intention Sharing achieves alignment of 91%, 71%, and 62% across these levels, respectively, substantially higher than the no-communication scenario (31%, 23%, and 17%).
Similarly, coordination time metrics exhibit major efficiency gains from intention propagation. On “Hard” tasks, Full Intention Sharing reduces coordination time to 521 ms, 57% faster than the 1198 ms required with No Communication. As task complexity increases from easy to hard, the coordination time savings over the baselines grow from 138 ms to 677 ms. This reveals that intention sharing mitigates growing coordination delays in difficult scenarios.
The propagation mechanisms also deliver clear incremental improvements as information sharing becomes more targeted. As agents propagate more precise intentions to the relevant teammates, both sub-task alignment and coordination efficiency improve, with each step from Basic to Selective to Full sharing adding further gains.
5 Conclusion
In this paper, we introduce a novel framework, ReMALIS, designed to enhance collaborative capabilities within multi-agent systems using LLMs. Our approach incorporates three principal innovations: intention propagation for establishing a shared understanding among agents, bidirectional coordination channels to adapt reasoning processes in response to team dynamics, and recursive reasoning architectures that provide agents with advanced contextual grounding and planning capabilities necessary for complex coordination tasks. Experimental results indicate that ReMALIS significantly outperforms several baseline methods, underscoring the efficacy of cooperative multi-agent AI systems. By developing frameworks that enable LLMs to acquire cooperative skills analogous to human team members, we advance the potential for LLM agents to manage flexible coordination in complex collaborative environments effectively.
6 Limitations
While ReMALIS demonstrates promising results in collaborative multi-agent tasks, our framework relies on a centralized training paradigm, which may hinder scalability in fully decentralized environments. The current implementation does not explicitly handle dynamic agent arrival or departure during execution, which could impact coordination in real-world applications. In addition, the recursive reasoning component may struggle with long-term dependencies and planning horizons beyond a certain time frame.
References
- Aminabadi et al. (2022) Reza Yazdani Aminabadi et al. 2022. Deepspeed-inference: Enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis.
- Chaka (2023) Chaka Chaka. 2023. Generative ai chatbots-chatgpt versus youchat versus chatsonic: Use cases of selected areas of applied english language studies. International Journal of Learning, Teaching and Educational Research, 22(6):1–19.
- Chebotar et al. (2023) Yevgen Chebotar et al. 2023. Q-transformer: Scalable offline reinforcement learning via autoregressive q-functions. In Conference on Robot Learning. PMLR.
- Chen et al. (2023) Baian Chen et al. 2023. Fireact: Toward language agent fine-tuning. arXiv preprint arXiv:2310.05915.
- Chen et al. (2024) Yushuo Chen et al. 2024. Towards coarse-to-fine evaluation of inference efficiency for large language models. arXiv preprint arXiv:2404.11502.
- Chiu et al. (2024) Yu Ying Chiu et al. 2024. A computational framework for behavioral assessment of llm therapists. arXiv preprint arXiv:2401.00820.
- Dong et al. (2023) Yihong Dong et al. 2023. Codescore: Evaluating code generation by learning code execution. arXiv preprint arXiv:2301.09043.
- Du et al. (2023) Yali Du et al. 2023. A review of cooperation in multi-agent learning. arXiv preprint arXiv:2312.05162.
- Fan et al. (2020) Cheng Fan et al. 2020. Statistical investigations of transfer learning-based methodology for short-term building energy predictions. Applied Energy, 262:114499.
- Foerster et al. (2018) Jakob Foerster et al. 2018. Counterfactual multi-agent policy gradients. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
- Gao et al. (2023) Yunfan Gao et al. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997.
- Hartmann et al. (2022) Valentin N. Hartmann et al. 2022. Long-horizon multi-robot rearrangement planning for construction assembly. IEEE Transactions on Robotics, 39(1):239–252.
- He et al. (2021) Junxian He et al. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366.
- Hu and Sadigh (2023) Hengyuan Hu and Dorsa Sadigh. 2023. Language instructed reinforcement learning for human-ai coordination. arXiv preprint arXiv:2304.07297.
- Huang et al. (2022) Baichuan Huang, Abdeslam Boularias, and Jingjin Yu. 2022. Parallel monte carlo tree search with batched rigid-body simulations for speeding up long-horizon episodic robot planning. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
- Khamparia et al. (2021) Aditya Khamparia et al. 2021. An internet of health things-driven deep learning framework for detection and classification of skin cancer using transfer learning. Transactions on Emerging Telecommunications Technologies, 32(7):e3963.
- Lee and Perret (2022) Irene Lee and Beatriz Perret. 2022. Preparing high school teachers to integrate ai methods into stem classrooms. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36.
- Li et al. (2020) Chuan Li et al. 2020. A systematic review of deep transfer learning for machinery fault diagnosis. Neurocomputing, 407:121–135.
- Li et al. (2022) Weihua Li et al. 2022. A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges. Mechanical Systems and Signal Processing, 167:108487.
- Loey et al. (2021) Mohamed Loey et al. 2021. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the covid-19 pandemic. Measurement, 167:108288.
- Lotfollahi et al. (2022) Mohammad Lotfollahi et al. 2022. Mapping single-cell data to reference atlases by transfer learning. Nature biotechnology, 40(1):121–130.
- Lu et al. (2023) Pan Lu et al. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842.
- Lyu et al. (2021) Xueguang Lyu et al. 2021. Contrasting centralized and decentralized critics in multi-agent reinforcement learning. arXiv preprint arXiv:2102.04402.
- Mao et al. (2022) Weichao Mao et al. 2022. On improving model-free algorithms for decentralized multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Martini et al. (2021) Franziska Martini et al. 2021. Bot, or not? comparing three methods for detecting social bots in five political discourses. Big data & society, 8(2):20539517211033566.
- Miao et al. (2023) Ning Miao, Yee Whye Teh, and Tom Rainforth. 2023. Selfcheck: Using llms to zero-shot check their own step-by-step reasoning. arXiv preprint arXiv:2308.00436.
- Qiu et al. (2024) Xihe Qiu et al. 2024. Chain-of-lora: Enhancing the instruction fine-tuning performance of low-rank adaptation on diverse instruction set. IEEE Signal Processing Letters.
- Raman et al. (2022) Shreyas Sundara Raman et al. 2022. Planning with large language models via corrective re-prompting. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
- Rana et al. (2023) Krishan Rana et al. 2023. Sayplan: Grounding large language models using 3d scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135.
- Rashid et al. (2020) Tabish Rashid et al. 2020. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. In Advances in neural information processing systems 33, pages 10199–10210.
- Rasley et al. (2020) Jeff Rasley et al. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
- Saber et al. (2021) Abeer Saber et al. 2021. A novel deep-learning model for automatic detection and classification of breast cancer using the transfer-learning technique. IEEE Access, 9:71194–71209.
- Schroeder de Witt et al. (2019) Christian Schroeder de Witt et al. 2019. Multi-agent common knowledge reinforcement learning. In Advances in Neural Information Processing Systems 32.
- Schuchard and Crooks (2021) Ross J. Schuchard and Andrew T. Crooks. 2021. Insights into elections: An ensemble bot detection coverage framework applied to the 2018 us midterm elections. Plos one, 16(1):e0244309.
- Schumann et al. (2024) Raphael Schumann et al. 2024. Velma: Verbalization embodiment of llm agents for vision and language navigation in street view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38.
- Shanahan et al. (2023) Murray Shanahan, Kyle McDonell, and Laria Reynolds. 2023. Role play with large language models. Nature, 623(7987):493–498.
- Sharan et al. (2023) S. P. Sharan, Francesco Pittaluga, and Manmohan Chandraker. 2023. Llm-assist: Enhancing closed-loop planning with language-based reasoning. arXiv preprint arXiv:2401.00125.
- Shen et al. (2020) Sheng Shen et al. 2020. Deep convolutional neural networks with ensemble learning and transfer learning for capacity estimation of lithium-ion batteries. Applied Energy, 260:114296.
- Singh et al. (2023) Ishika Singh et al. 2023. Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE.
- Song et al. (2023) Chan Hee Song et al. 2023. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
- Topsakal and Akinci (2023) Oguzhan Topsakal and Tahir Cetin Akinci. 2023. Creating large language model applications utilizing langchain: A primer on developing llm apps fast. In International Conference on Applied Engineering and Natural Sciences, volume 1.
- Valmeekam et al. (2022) Karthik Valmeekam et al. 2022. Large language models still can’t plan (a benchmark for llms on planning and reasoning about change). arXiv preprint arXiv:2206.10498.
- Wang et al. (2024a) Haoyu Wang et al. 2024a. Carbon-based molecular properties efficiently predicted by deep learning-based quantum chemical simulation with large language models. Computers in Biology and Medicine, page 108531.
- Wang et al. (2024b) Haoyu Wang et al. 2024b. Subequivariant reinforcement learning framework for coordinated motion control. arXiv preprint arXiv:2403.15100.
- Wang et al. (2023) Lei Wang et al. 2023. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432.
- Wang et al. (2020) Tonghan Wang et al. 2020. Roma: Multi-agent reinforcement learning with emergent roles. arXiv preprint arXiv:2003.08039.
- Wen et al. (2023) Hao Wen et al. 2023. Empowering llm to use smartphone for intelligent task automation. arXiv preprint arXiv:2308.15272.
- Xi et al. (2023) Zhiheng Xi et al. 2023. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864.
- Yao et al. (2022) Shunyu Yao et al. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
- Yin et al. (2023) Da Yin et al. 2023. Lumos: Learning agents with unified data, modular design, and open-source llms. arXiv preprint arXiv:2311.05657.
- Yu et al. (2023) Shengcheng Yu et al. 2023. Llm for test script generation and migration: Challenges, capabilities, and opportunities. In 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS). IEEE.
- Zeng et al. (2023) Fanlong Zeng et al. 2023. Large language models for robotics: A survey. arXiv preprint arXiv:2311.07226.
- Zhang and Gao (2023) Xuan Zhang and Wei Gao. 2023. Towards llm-based fact verification on news claims with a hierarchical step-by-step prompting method. arXiv preprint arXiv:2310.00305.
- Zhao et al. (2024) Andrew Zhao et al. 2024. Expel: Llm agents are experiential learners. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38.
- Zhu et al. (2023) Zhuangdi Zhu et al. 2023. Transfer learning in deep reinforcement learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Zhuang et al. (2020) Fuzhen Zhuang et al. 2020. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43–76.
- Zimmer et al. (2021) Matthieu Zimmer et al. 2021. Learning fair policies in decentralized cooperative multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Zou et al. (2023) Hang Zou et al. 2023. Wireless multi-agent generative ai: From connected intelligence to collective intelligence. arXiv preprint arXiv:2307.02757.
Appendix A Related Work
A.1 Single Agent Frameworks
Early agent frameworks such as ProgPrompt Singh et al. (2023) directly prompt a large language model (LLM) to plan, execute actions, and process feedback in a chained manner within a single model Song et al. (2023). Despite its conceptual simplicity Valmeekam et al. (2022), such an integrated design places a substantial burden on one LLM, leading to difficulties in managing complex tasks Raman et al. (2022); Wang et al. (2024a).
To reduce the reasoning burden, recent works explore modular designs that separate high-level planning and low-level execution into different modules. For example, LUMOS Yin et al. (2023) consists of a planning module, a grounding module, and an execution module; the planning and grounding modules break down complex tasks into interpretable sub-goals and executable actions. FireAct Chen et al. (2023) introduces a similar hierarchical structure, with a focus on providing step-by-step explanations Zhang and Gao (2023). Although partitioning into modules specialized for different skills is reasonable, existing modular frameworks still rely on a single agent for final action execution Miao et al. (2023); Qiu et al. (2024). Our work pushes this idea further by replacing the single execution agent with a cooperative team of multiple agents.
A.2 Multi-Agent Reinforcement Learning
Collaborative multi-agent reinforcement learning has been studied for solving complex control and game-playing tasks. Representative algorithms include COMA Foerster et al. (2018), QMIX Rashid et al. (2020), and ROMA Wang et al. (2020). These methods allow decentralized execution across agents while enabling centralized training through shared experiences or parameters Lyu et al. (2021). Drawing on this concept, our ReMALIS framework places greater emphasis on integrating modular LLMs to address complex language tasks: each execution agent specializes in a specific semantic domain, such as querying, computation, or retrieval, and is coordinated through a communication module Mao et al. (2022).
The concept of multi-agent RL has recently influenced the design of conversational agents Zimmer et al. (2021); Schumann et al. (2024). EnsembleBot Schuchard and Crooks (2021) utilizes multiple bots trained on distinct topics, coordinated by a routing model. However, this approach primarily employs a divide-and-conquer strategy with independent skills Martini et al. (2021), and communication within EnsembleBot predominantly involves one-way dispatching rather than bidirectional coordination. In contrast, our work focuses on building a more tightly integrated collaborative system for addressing complex problems Schroeder de Witt et al. (2019); Zimmer et al. (2021).
A.3 Integrated & Collaborative Learning
Integrated learning techniques originate from transfer learning Zhuang et al. (2020); Zhu et al. (2023) and aim to improve a target model by incorporating additional signals from other modalities Lotfollahi et al. (2022); Shanahan et al. (2023). For multi-agent systems, Li et al. (2022); Zhao et al. (2024) find that jointly training multiple agents boosts performance over separately trained, independent agents Lee and Perret (2022). Recently, integrated learning has also been applied in single-agent frameworks such as Shen et al. (2020) and Loey et al. (2021), where auxiliary losses on interpretable outputs facilitate training of the main model through multi-tasking Khamparia et al. (2021); Saber et al. (2021).
Our work adopts integrated learning to train specialized execution agents that remain semantically consistent. At the team level, a communication module learns to attentively aggregate and propagate messages across agents, which indirectly coordinates their strategies and behaviors Fan et al. (2020). This integrated and collaborative learning synergizes individual skills and leads to emergent collective intelligence, enhancing the overall reasoning and planning capabilities on complex tasks He et al. (2021); Li et al. (2020).
Appendix B Methodology and Contributions
Based on the motivations and inspirations above, we propose the Recursive Multi-Agent Learning with Intention Sharing (ReMALIS) framework, a multi-agent framework empowered by integrated learning for communication and collaboration. The main contributions are:
1. We design a cooperative execution module with multiple agents trained by integrated learning. Different execution agents specialize in different semantic domains while understanding their peers' abilities, which reduces redundant capacity and enables an efficient division of labor.
2. We propose an attentive communication module that propagates informative cues across specialized agents. The module coordinates agent execution strategies without explicit supervision, acting as a team leader.
3. The collaborative design allows ReMALIS to handle more complex tasks than single-agent counterparts. Each agent focuses on its own domain knowledge while collaborating closely through communicative coordination, yielding strong emergent team intelligence.
4. We enable dynamic feedback loops from the communication module to the grounding module, as well as re-planning in the planning module, increasing adaptability when execution difficulties arise.
We expect the idea of integrating specialized collaborative agents with dynamic coordination mechanisms to inspire further research toward intelligent collaborative systems beyond conversational agents.
Appendix C Key variables and symbols
Table 4: Key variables and symbols in the proposed recursive multi-agent learning framework.
| Symbol | Description |
| --- | --- |
| $p_{\theta}$ | Planning module parameterized by $\theta$ |
| $s_{t}$ | Current sub-goal at time $t$ |
| $I_{t}$ | Current intention at time $t$ |
| $e_{t}$ | Grounded embedding at time $t$ |
| $f_{t}$ | Agent feedback at time $t$ |
| $g_{\phi}$ | Grounding module parameterized by $\phi$ |
| $\pi_{\xi_{i}}$ | Execution policy of agent $i$ parameterized by $\xi_{i}$ |
| $f_{\Lambda}$ | Intention propagation channel parameterized by $\Lambda$ |
| $m_{ij}$ | Message sent from agent $j$ to agent $i$ |
| $b_{i}(I_{j}|m_{ij})$ | Agent $i$'s belief over teammate $j$'s intention $I_{j}$ given message $m_{ij}$ |
| $R_{c}$ | Coordination reward |
| $\pi_{\xi}(a_{i}|s_{i},I_{i})$ | Execution agent policy conditioned on state $s_{i}$ and intention $I_{i}$ |
| $a_{i}$ | Action of agent $i$ |
| $s_{i}$ | State of agent $i$ |
| $I_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ | Intention of agent $i$ |
| $\gamma_{i}$ | Current goal of agent $i$ |
| $\Sigma_{i}=\{\sigma_{i1},\sigma_{i2},...\}$ | Set of sub-goals for agent $i$ |
| $\pi_{i}(\sigma)$ | Probability distribution over possible next sub-goals for agent $i$ |
| $\delta_{i}(\sigma)$ | Desired teammate assignment for sub-goal $\sigma$ of agent $i$ |
Table 4 summarizes the key variables and symbols used in the proposed recursive multi-agent learning framework, ReMALIS, covering the planning module, grounding module, execution policies, intentions, goals, sub-goals, and the intention propagation channel.
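For concreteness, the intention tuple $I_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ can be represented as a lightweight container. The sketch below is a minimal Python illustration; the class and field names are our own and mirror Table 4 rather than any released implementation.

```python
from dataclasses import dataclass
from typing import Dict, Set

@dataclass
class Intention:
    """Intention I_i = (gamma_i, Sigma_i, pi_i, delta_i) of agent i.

    Field names are illustrative; they mirror Table 4 rather than a
    released implementation.
    """
    goal: str                              # gamma_i: current goal of agent i
    sub_goals: Set[str]                    # Sigma_i: set of sub-goals
    next_sub_goal_probs: Dict[str, float]  # pi_i(sigma): distribution over next sub-goals
    teammate_assignment: Dict[str, int]    # delta_i(sigma): desired teammate per sub-goal

# Example: an agent whose goal is to relieve congestion at one intersection.
intent = Intention(
    goal="relieve_congestion",
    sub_goals={"extend_green_phase", "reroute_traffic"},
    next_sub_goal_probs={"extend_green_phase": 0.7, "reroute_traffic": 0.3},
    teammate_assignment={"reroute_traffic": 2},  # delegate rerouting to agent 2
)
```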
Table 5: Comparison of Traffic Network Complexity Levels
| Difficulty Level | Grid Size | Intersections | Arrival Rate | Phases per Intersection |
| --- | --- | --- | --- | --- |
| Easy | 3x3 | 9 | Low and stable (0.5 vehicles/s) | <10 |
| Medium | 5x5 | 25 | Fluctuating (0.5–2 vehicles/s) | 10–15 |
| Hard | 8x8 | 64 | Highly dynamic (0.1–3 vehicles/s) | >15 |
| Hell | Irregular | 100+ | Extremely dynamic with spikes | >25 |
Table 6: Training hyperparameters and configurations
| Hyperparameter/Configuration | ReMALIS | LUMOS | AgentLM | GPT-3.5 |
| --- | --- | --- | --- | --- |
| Language Model Size | 7B | 13B | 6B | 175B |
| Optimizer | AdamW | Adam | AdamW | Adam |
| Learning Rate | 1e-4 | 2e-5 | 1e-4 | 2e-5 |
| Batch Size | 32 | 64 | 32 | 64 |
| Dropout | 0 | 0.1 | 0 | 0.1 |
| Number of Layers | 12 | 8 | 6 | 48 |
| Model Dimension | 768 | 512 | 768 | 1024 |
| Number of Heads | 12 | 8 | 12 | 16 |
| Training Epochs | 15 | 20 | 10 | 20 |
| Warmup Epochs | 1 | 2 | 1 | 2 |
| Weight Decay | 0.01 | 0.001 | 0.01 | 0.001 |
| Network Architecture | GNN | Transformer | Transformer | Transformer |
| Planning Module | 4-layer GNN, 512 hidden size | 2-layer GNN, 1024 hidden size | - | - |
| Grounding Module | 6-layer Transformer, $d_{\text{model}}=768$ | 4-layer Transformer, $d_{\text{model}}=512$ | - | - |
| Execution Agents | 7 specialized, integrated training | Single agent | 8 agents | 4 agents |
| Intention Propagation | 4-layer GRU, 256 hidden size | - | - | - |
| Coordination Feedback | GAT, 2 heads, $\alpha=0.2$ | - | - | - |
| Trainable Parameters | 5.37B | 6.65B | 4.61B | 17.75B |
Appendix D Tasks Setup
D.1 Traffic Control
We define four difficulty levels for our traffic control tasks, Easy, Medium, Hard, and Hell, summarized in Table 5.
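For scripting experiments, these levels can be captured as a plain configuration table. The dictionary below is a hypothetical encoding of Table 5; the key names are ours and not those of any released simulator.

```python
# Hypothetical encoding of Table 5; key names are ours, not the simulator's.
TRAFFIC_DIFFICULTY = {
    "easy":   {"grid": "3x3",       "intersections": 9,      "arrival_veh_per_s": (0.5, 0.5), "phases": "<10"},
    "medium": {"grid": "5x5",       "intersections": 25,     "arrival_veh_per_s": (0.5, 2.0), "phases": "10-15"},
    "hard":   {"grid": "8x8",       "intersections": 64,     "arrival_veh_per_s": (0.1, 3.0), "phases": ">15"},
    # Hell uses an irregular topology with extremely dynamic, spiky demand.
    "hell":   {"grid": "irregular", "intersections": "100+", "arrival_veh_per_s": "spiky",    "phases": ">25"},
}
```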
D.2 Web Tasks
Similarly, we categorize the web tasks in our dataset into four levels of difficulty: Easy, Medium, Hard, and All.
Easy: The easy web tasks involve basic interactions like clicking on a single link or typing a short phrase. They require navigating simple interfaces with clear options to reach the goal.
Medium: The medium-difficulty tasks demand more complex sequences of actions across multiple pages, such as selecting filters or submitting forms. They test the agent’s ability to understand the site structure and flow.
Hard: The hard web tasks feature open-ended exploration through dense sites with ambiguous structure. Significant reasoning is needed to chain together obscure links and controls to achieve the goal.
All: This level combines tasks across the full spectrum of difficulty, blending simple and complex interactions to assess generalized web-agent skills. Performance here correlates with readiness for real-world web use cases.
Appendix E Experimental Setups
In this study, we compare the performance of several state-of-the-art language models, including ReMALIS, LUMOS, AgentLM, and GPT-3.5. These models vary in size, architecture, and training configuration, reflecting the diversity of approaches in the field; Table 6 summarizes the details.
ReMALIS is a 7 billion parameter model trained using the AdamW optimizer with a learning rate of 1e-4, a batch size of 32, and no dropout. It has 12 layers, a model dimension of 768, and 12 attention heads. The model was trained for 15 epochs with a warmup period of 1 epoch and a weight decay of 0.01. ReMALIS employs a Graph Neural Network (GNN) architecture, which is particularly suited for modeling complex relationships and structures.
LUMOS, a larger model with 13 billion parameters, was trained using the Adam optimizer with a learning rate of 2e-5, a batch size of 64, and a dropout rate of 0.1. It has 8 layers, a model dimension of 512, and 8 attention heads. The model was trained for 20 epochs with a warmup period of 2 epochs and a weight decay of 0.001. LUMOS follows a Transformer architecture, which has proven effective in capturing long-range dependencies in sequential data.
AgentLM, a 6 billion parameter model, was trained using the AdamW optimizer with a learning rate of 1e-4, a batch size of 32, and no dropout. It has 6 layers, a model dimension of 768, and 12 attention heads. The model was trained for 10 epochs with a warmup period of 1 epoch and a weight decay of 0.01. AgentLM also uses a Transformer architecture.
GPT-3.5, the largest model in this study with 175 billion parameters, was trained using the Adam optimizer with a learning rate of 2e-5, a batch size of 64, and a dropout rate of 0.1. It has 48 layers, a model dimension of 1024, and 16 attention heads. The model was trained for 20 epochs with a warmup period of 2 epochs and a weight decay of 0.001. GPT-3.5 follows the Transformer architecture, which has been widely adopted for large language models.
In addition to the base language models, the table provides details on the specialized modules and configurations employed by ReMALIS and LUMOS. ReMALIS incorporates a planning module with a 4-layer GNN and a 512 hidden size, a grounding module with a 6-layer Transformer and a model dimension of 768, 7 specialized and integrated execution agents, a 4-layer Gated Recurrent Unit (GRU) with a 256 hidden size for intention propagation, and a Graph Attention Network (GAT) with 2 heads and an alpha value of 0.2 for coordination feedback.
LUMOS, on the other hand, employs a 2-layer GNN with a 1024 hidden size for planning, a 4-layer Transformer with a model dimension of 512 for grounding, and a single integrated execution agent.
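As a compact reference, the ReMALIS column of Table 6 can be collected into a single configuration object. This is a minimal bookkeeping sketch; the class and field names are illustrative and not part of any released training code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReMALISTrainingConfig:
    """ReMALIS column of Table 6; field names are illustrative."""
    model_size: str = "7B"
    optimizer: str = "AdamW"
    learning_rate: float = 1e-4
    batch_size: int = 32
    dropout: float = 0.0
    num_layers: int = 12
    d_model: int = 768
    num_heads: int = 12
    epochs: int = 15
    warmup_epochs: int = 1
    weight_decay: float = 0.01
    backbone: str = "GNN"
    planning_module: str = "GNN, 4 layers, 512 hidden"
    grounding_module: str = "Transformer, 6 layers, d_model=768"
    execution_agents: int = 7  # specialized, trained with integrated learning
    intention_propagation: str = "GRU, 4 layers, 256 hidden"
    coordination_feedback: str = "GAT, 2 heads, alpha=0.2"
```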
Appendix F Pseudo-code
Algorithm 2 presents the hierarchical planning and grounding processes in the proposed recursive multi-agent learning framework. The planning module $p_{\theta}$ takes the current sub-goal $s_{t}$, intention $I_{t}$, grounded embedding $e_{t}$, and feedback $f_{t}$ as inputs and predicts the next sub-goal $s_{t+1}$. It first encodes the inputs with an encoder and then passes the encoded representation through a graph neural network $T_{\theta}$ parameterized by $\theta$; the output of $T_{\theta}$ is passed through a softmax layer to obtain a probability distribution over the next sub-goal.
The grounding module $g_{\phi}$, parameterized by $\phi$, takes the current state $s_{t}$, intention $I_{t}$, and feedback trajectory $f_{1:t}$ as inputs and produces the grounded embedding $e_{t}$. It encodes the inputs with an encoder, applies cross-attention over the vocabulary $V$, and then a convolutional feature extractor. The output is combined with the agent feedback term $P_{t}$ to enhance grounding accuracy.
Algorithm 3 describes the intention propagation mechanism in the proposed recursive multi-agent learning framework. The goal is for each agent $i$ to infer a belief $b_{i}(I_{j}|m_{ij})$ over the intention $I_{j}$ of a teammate $j$, given a message $m_{ij}$ received from $j$.
Algorithm 2 Hierarchical Planning and Grounding
1: Input: Current sub-goal $s_{t}$ , intention $I_{t}$ , grounded embedding $e_{t}$ , feedback $f_{t}$
2: Output: Next sub-goal $s_{t+1}$
3: $h_{t}=\text{Encoder}(s_{t},I_{t},e_{t},f_{t})$ {Encode inputs}
4: $s_{t+1}=\text{Softmax}(T_{\theta}(h_{t}))$ {Predict next sub-goal}
5: $T_{\theta}$ is a graph neural network parameterized by $\theta$ {Planning module $p_{\theta}$ }
6: Input: Current state $s_{t}$ , intention $I_{t}$ , feedback $f_{1:t}$
7: Output: Grounded embedding $e_{t}$
8: $h_{t}=\text{Encoder}(s_{t},I_{t},f_{1:t})$ {Encode inputs}
9: $e_{t}=\text{Conv}(\text{Attn}(h_{t},V))+P_{t}$ {Grounded embedding}
10: $\text{Attn}(·,·)$ is a cross-attention layer over vocabulary $V$
11: $\text{Conv}(·)$ is a convolutional feature extractor
12: $P_{t}$ includes agent feedback to enhance grounding accuracy
13: $g_{\phi}$ is the grounding module parameterized by $\phi$
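A minimal PyTorch-style rendering of Algorithm 2 is sketched below. The encoder, the MLP standing in for the graph network $T_{\theta}$, and all dimensions are placeholder assumptions; the paper fixes only the high-level structure (encode, transform, softmax for planning; encode, cross-attend over $V$, convolve, add $P_{t}$ for grounding).

```python
import torch
import torch.nn as nn

class PlanningModule(nn.Module):
    """p_theta: predicts s_{t+1} from (s_t, I_t, e_t, f_t); see Algorithm 2."""
    def __init__(self, d_in: int, d_hidden: int, n_subgoals: int):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)   # Encoder(s_t, I_t, e_t, f_t)
        self.T_theta = nn.Sequential(              # stand-in for the GNN T_theta
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_subgoals))

    def forward(self, s_t, I_t, e_t, f_t):
        h_t = torch.relu(self.encoder(torch.cat([s_t, I_t, e_t, f_t], dim=-1)))
        return torch.softmax(self.T_theta(h_t), dim=-1)  # distribution over s_{t+1}

class GroundingModule(nn.Module):
    """g_phi: e_t = Conv(Attn(h_t, V)) + P_t; see Algorithm 2."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        # Assumes s_t, I_t, and the pooled feedback f_{1:t} are each d_model wide.
        self.encoder = nn.Linear(3 * d_model, d_model)
        self.V = nn.Parameter(torch.randn(vocab_size, d_model))  # vocabulary V
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)

    def forward(self, s_t, I_t, f_hist, P_t):
        h_t = self.encoder(torch.cat([s_t, I_t, f_hist], dim=-1)).unsqueeze(1)
        V = self.V.unsqueeze(0).expand(h_t.size(0), -1, -1)
        a, _ = self.attn(h_t, V, V)                    # cross-attention over V
        e_t = self.conv(a.transpose(1, 2)).transpose(1, 2).squeeze(1)
        return e_t + P_t                               # add agent feedback P_t
```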
Algorithm 3 initializes an intention propagation channel $f_{\Lambda}$, parameterized by $\Lambda$, which is implemented as a recurrent neural network.
The intention inference process works as follows:
1. The received message $m_{ij}$ is encoded using an encoder to obtain a representation $h_{ij}$.
2. The encoded message $h_{ij}$ is passed through the propagation channel $f_{\Lambda}$ to infer the belief $b_{i}(I_{j}|m_{ij})$ over teammate $j$'s intention $I_{j}$.
The objective is to train the parameters $\Lambda$ of the propagation channel $f_{\Lambda}$ to maximize the coordination reward $R_{c}$ over sampled intentions $I$ and messages $m$ from the distribution defined by $f_{\Lambda}$ .
Algorithm 3 Intention Propagation Mechanism
Input: Current intention $I_{i}$ of agent $i$, message $m_{ij}$ from teammate $j$
Output: Belief $b_{i}(I_{j}|m_{ij})$ over teammate $j$'s intention $I_{j}$
1: Initialization:
2: Intention propagation channel $f_{\Lambda}$ parameterized by $\Lambda$
3: $f_{\Lambda}$ is a recurrent neural network
4: Intention Inference:
5: Encode message: $h_{ij}←\text{Encoder}(m_{ij})$
6: Infer intention belief: $b_{i}(I_{j}|m_{ij})← f_{\Lambda}(h_{ij})$
7: Objective:
8: Sample intentions $I$ and messages $m$ from $f_{\Lambda}$
9: Maximize coordination reward $R_{c}$ over intentions and messages:
10: $\Lambda^{*}←\arg\max_{\Lambda}\mathbb{E}_{I,m\sim f_{\Lambda}}[R_{c}(I,m)]$
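The sketch below implements the inference path of Algorithm 3, with the 4-layer, 256-unit GRU taken from Table 6; the message encoder, the dimensions, and the discretization of intentions into $n$ classes are our own assumptions.

```python
import torch
import torch.nn as nn

class IntentionPropagation(nn.Module):
    """f_Lambda: infers the belief b_i(I_j | m_ij) from a received message.

    GRU depth/width follow Table 6; everything else is an assumption.
    """
    def __init__(self, d_msg: int, d_hidden: int = 256, n_intentions: int = 16):
        super().__init__()
        self.encoder = nn.Linear(d_msg, d_hidden)  # Encoder(m_ij) -> h_ij
        self.f_Lambda = nn.GRU(d_hidden, d_hidden, num_layers=4, batch_first=True)
        self.head = nn.Linear(d_hidden, n_intentions)

    def forward(self, m_ij: torch.Tensor) -> torch.Tensor:
        # m_ij: (batch, msg_len, d_msg) sequence of message tokens.
        h_ij = torch.relu(self.encoder(m_ij))
        out, _ = self.f_Lambda(h_ij)
        # Last step summarizes the message; softmax yields b_i(I_j | m_ij).
        return torch.softmax(self.head(out[:, -1]), dim=-1)

# Lambda is then trained to maximize the expected coordination reward
# E_{I,m ~ f_Lambda}[R_c(I, m)], e.g. with a policy-gradient estimator,
# since sampling intentions and messages is non-differentiable.
```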
Algorithm 4 Bidirectional Coordination
Input: Experience tuples $(s_{t},a_{t},r_{t},s_{t+1})$ for all agents
Output: Execution policies $\pi_{\xi_{i}}(a_{i}|s_{i},I_{i})$ and coordination feedback $c_{t}$
1: Execution Policy:
2: for each agent $i$ do
3: Get agent state $s_{i,t}$ and intention $I_{i,t}$
4: $a_{i,t}\sim\pi_{\xi_{i}}(a_{i}|s_{i,t},I_{i,t})$ {Execution policy}
5: end for
6: Coordination Feedback:
7: Collect execution encodings $h^{exec}_{t}=[\phi_{1}(o_{1}),...,\phi_{N}(o_{N})]$ {Encode observations}
8: $c_{t}←\Phi(h^{exec}_{t})$ {Summarize coordination patterns}
9: Objective:
10: Maximize team reward $R$ and auxiliary loss $L_{aux}$ :
11: $\xi^{*}←\arg\max_{\xi}\mathbb{E}_{(s,a)\sim\pi_{\xi}}[R+\lambda L_{aux}]$
Algorithm 4 describes the bidirectional coordination mechanism in the proposed recursive multi-agent learning framework. It involves executing actions based on the agents' policies and generating coordination feedback from the execution experiences.
Our algorithm takes experience tuples $(s_{t},a_{t},r_{t},s_{t+1})$ for all agents as input, where $s_{t}$ is the state, $a_{t}$ is the action taken, $r_{t}$ is the reward received, and $s_{t+1}$ is the next state.
The execution policy part works as follows:
1. For each agent $i$, get the agent's state $s_{i,t}$ and intention $I_{i,t}$.
2. Sample an action $a_{i,t}$ from the execution policy $\pi_{\xi_{i}}(a_{i}|s_{i,t},I_{i,t})$, parameterized by $\xi_{i}$.
The coordination feedback part works as follows:
1. Collect execution encodings $h^{exec}_{t}=[\phi_{1}(o_{1}),...,\phi_{N}(o_{N})]$ by encoding the observations $o_{i}$ of each agent $i$ with an encoder $\phi_{i}$.
2. Summarize the coordination patterns $c_{t}$ from the execution encodings $h^{exec}_{t}$ using a function $\Phi$.
The objective is to maximize the team reward $R$ and an auxiliary loss $L_{aux}$ by optimizing the execution policy parameters $\xi$ . The auxiliary loss $L_{aux}$ is used to incorporate additional regularization or constraints.
The bidirectional coordination mechanism allows execution agents to act based on their policies and intentions, while also generating coordination feedback $c_{t}$ that summarizes the emerging coordination patterns. This feedback can be used to guide the planning and grounding modules in the recursive multi-agent learning framework.
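Put together, one environment step of Algorithm 4 might look like the schematic below. Every API here (`agent.policy.sample`, `agent.encode`, `env.step`, and the summarizer `Phi`) is hypothetical; the sketch only fixes the data flow: sample actions from intention-conditioned policies, encode the resulting observations, and summarize them into the feedback $c_{t}$ routed back to planning and grounding.

```python
import torch

def coordination_step(agents, Phi, env):
    """One interaction step of Algorithm 4 (schematic; all APIs hypothetical)."""
    # Execution policy: each agent i samples a_i ~ pi_{xi_i}(a | s_i, I_i).
    actions = [ag.policy.sample(ag.state, ag.intention) for ag in agents]
    observations, rewards, done = env.step(actions)

    # Coordination feedback: h_exec = [phi_1(o_1), ..., phi_N(o_N)], c_t = Phi(h_exec).
    h_exec = torch.stack([ag.encode(o) for ag, o in zip(agents, observations)])
    c_t = Phi(h_exec)

    # The policy parameters xi are optimized separately to maximize
    # E[R + lambda * L_aux] over trajectories (e.g. with policy gradients).
    return c_t, sum(rewards), done
```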
Appendix G Discussion
The results demonstrate the efficacy of the proposed ReMALIS framework in enabling coordinated multi-agent collaboration for complex tasks. By propagating intentions between agents, establishing bidirectional feedback channels, and integrating recursive reasoning architectures, ReMALIS outperformed single-agent baselines and concurrent methods across difficulty levels on both the traffic flow prediction and web activities datasets.
The performance gains highlight the importance of fostering a shared understanding of goals and sub-tasks among agents through intention propagation. Communicating local beliefs allows agents to align their actions towards common objectives, leading to emergent coordinated behaviors that reduce misaligned sub-tasks and miscoordination errors. Furthermore, the bidirectional feedback channels play a crucial role in shaping the reasoning strategies of the planning and grounding modules based on the coordination patterns observed during execution. This adaptability enables the agents to adjust their comprehension and planning policies dynamically, resulting in more flexible and responsive behaviors.
The integration of recursive reasoning architectures also contributes to the superior performance of ReMALIS. By modeling the intentions and strategies of other agents, the execution agents can engage in more contextual and holistic reasoning, enhancing their ability to handle complex temporal dependencies and long-term planning horizons. This recursive reasoning capability further amplifies the benefits of intention propagation and bidirectional feedback, as agents can better interpret and leverage the shared information and coordination signals.
It is important to note that while ReMALIS demonstrates substantial improvements over single-agent frameworks, there are still limitations and potential areas for further research. For instance, the current implementation relies on a centralized training paradigm, which may hinder scalability in fully decentralized environments. Additionally, the framework does not explicitly handle dynamic agent arrival or departure during execution, which could impact coordination in real-world applications with fluid team compositions.
Future work could explore decentralized training approaches that maintain the benefits of multi-agent collaboration while addressing scalability concerns. Moreover, developing mechanisms to adaptively handle changes in the agent team during execution could enhance the robustness and flexibility of the framework in dynamic environments.
Appendix H Supplementary application description of the overall framework
To further illustrate the practical applicability and versatility of our proposed ReMALIS framework, we present a supplementary application scenario. Figure 2 depicts a high-level overview of how ReMALIS can be employed in a real-world setting to tackle complex, multi-step tasks that require orchestrating multiple agents with diverse capabilities. This exemplary use case demonstrates the framework’s ability to decompose intricate problems into manageable sub-tasks, dynamically allocate appropriate agents, and seamlessly coordinate their actions to achieve the overarching goal efficiently and effectively.
Planning Module (Figure 4):
1. Analyze the current traffic conditions, including vehicle counts, road incidents, and construction zones.
2. Identify intersections experiencing congestion and potential bottlenecks.
3. Formulate high-level goals to alleviate congestion and optimize traffic flow.
4. Break down the goals into a sequence of sub-goals and sub-tasks.
5. Determine the dependencies and coordination needs between sub-tasks.
6. Plan the assignment of sub-tasks to specialized execution agents based on their expertise.
Figure 4: Overview of the proposed ReMALIS Planning Module for predicting sub-goals based on current goals, intentions, grounded embeddings, and agent feedback.
Figure 5: Framework of the proposed ReMALIS Grounding Module that contextualizes symbol embeddings using the current state, intentions, and feedback signals.
Grounding Module (Figure 5):
1. Contextualize the abstract traffic concepts and symbols into grounded representations.
2. Map entities such as intersections, vehicles, and signal phases to their physical counterparts.
3. Resolve ambiguities and uncertainties in grounding based on the current traffic context.
4. Adjust grounding strategies based on feedback from execution agents and emerging coordination patterns.
5. Provide grounded embeddings to inform the execution agents' decision-making.
Figure 6: Overview of our ReMALIS Cooperative Execution Module consisting of specialized agents that collaboratively execute actions and propagate intentions.
Execution Module (Figures 6 and 7):
1. Specialized agents monitor their respective domains (vehicle counts, road conditions, signal timings, etc.).
2. Agents communicate their local intentions and goals to relevant teammates.
3. Agents align their actions based on shared intentions and coordinated plans.
4. Agents execute their assigned sub-tasks (adjusting signal phases, routing emergency vehicles, etc.).
5. Agents observe the impact of their actions and provide feedback on emerging coordination patterns.
6. Agents adapt their strategies dynamically based on the feedback and changing traffic conditions.
7. Agents continuously monitor and respond to fluctuations in vehicle arrival rates and traffic patterns.
8. Agents collaborate and coordinate their efforts to collectively alleviate congestion and optimize traffic flow.
Figure 7: Overview of the collaborative evaluation setup in the proposed ReMALIS framework.