# Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models
**Authors**: Xihe Qiu, Haoyu Wang, Xiaoyu Tan, Chao Qu, Yujie Xiong, Yuan Cheng, Yinghui Xu, Wei Chu, Yuan Qi
Abstract
Effective collaboration in multi-agent systems requires communicating goals and intentions between agents. Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication, frequently leading to suboptimal multi-agent reinforcement learning (MARL) policies and inadequate task coordination. To address these challenges, we present a framework for training large language models (LLMs) as collaborative agents to enable coordinated behaviors in cooperative MARL. Each agent maintains a private intention consisting of its current goal and associated sub-tasks. Agents broadcast their intentions periodically, allowing other agents to infer coordination tasks. A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates. The architecture of our framework is structured into planning, grounding, and execution modules. During execution, multiple agents interact in a downstream environment and communicate intentions to enable coordinated behaviors. The grounding module dynamically adapts comprehension strategies based on emerging coordination patterns, while feedback from execution agents influences the planning module, enabling the dynamic re-planning of sub-tasks. Results in collaborative environment simulation demonstrate that intention propagation reduces miscoordination errors by aligning sub-task dependencies between agents. Agents learn when to communicate intentions and which teammates require task details, resulting in emergent coordinated behaviors. This demonstrates the efficacy of intention sharing for cooperative multi-agent RL based on LLMs.
1 Introduction
With the recent advancements of large language models (LLMs), developing intelligent agents that can perform complex reasoning and long-horizon planning has attracted increasing research attention Sharan et al. (2023); Huang et al. (2022). A variety of agent frameworks have been proposed, such as ReAct Yao et al. (2022), LUMOS Yin et al. (2023), Chameleon Lu et al. (2023) and BOLT Chiu et al. (2024). These frameworks typically consist of modules for high-level planning, grounding plans into executable actions, and interacting with environments or tools to execute actions Rana et al. (2023).
Despite their initial success, existing agent frameworks exhibit several limitations. Firstly, most of them rely on a single agent for execution Song et al. (2023); Hartmann et al. (2022). However, as tasks become more complex, the action space can grow exponentially, posing significant challenges for a single agent that must handle all execution functionalities Chebotar et al. (2023); Wen et al. (2023). Secondly, existing frameworks lack inter-module communication mechanisms. Typically, the execution results are directly used as input in the planning module without further analysis or coordination Zeng et al. (2023); Wang et al. (2024b). When execution failures occur, the agent may fail to adjust its strategies accordingly Chaka (2023). Thirdly, the grounding module in existing frameworks operates statically, without interactions with downstream modules. It grounds plans independently without considering feedback or states of the execution module Xi et al. (2023). LLMs struggle to handle emergent coordination behaviors and lack common grounding on shared tasks. Moreover, existing multi-agent reinforcement learning (MARL) methods often converge on suboptimal policies that fail to reach the level of cooperation the tasks require Gao et al. (2023); Yu et al. (2023).
How can agents built on LLMs effectively communicate and collaborate with each other? We propose a novel approach, Recursive Multi-Agent Learning with Intention Sharing (ReMALIS; the code can be accessed at https://github.com/AnonymousBoy123/ReMALIS), to address the limitations of existing cooperative artificial intelligence (AI) multi-agent frameworks with LLMs. ReMALIS employs intention propagation between LLM agents to enable a shared understanding of goals and tasks. This common grounding allows agents to align intentions and reduce miscoordination. Additionally, we introduce bidirectional feedback loops between downstream execution agents and upstream planning and grounding modules. This enables execution coordination patterns to guide adjustments in grounding strategies and planning policies, resulting in more flexible emergent behaviors Topsakal and Akinci (2023). By integrating these mechanisms, ReMALIS significantly improves the contextual reasoning and adaptive learning capabilities of LLM agents during complex collaborative tasks. The execution module utilizes specialized agents that collaboratively execute actions, exchange information, and propagate intentions via intention networks. These propagated intentions reduce miscoordination errors and guide grounding module adjustments to enhance LLM comprehension based on coordination patterns Dong et al. (2023). Furthermore, execution agents can provide feedback to prompt collaborative re-planning in the planning module when necessary.
Compared to single-agent frameworks, the synergistic work of multiple specialized agents enhances ReMALIS's collective intelligence and leads to emergent team-level behaviors Wang et al. (2023). The collaborative design allows the framework to handle more complex tasks that require distributed knowledge and skills. We demonstrate that:
- Intention propagation between execution agents enables emergent coordination behaviors and reduces misaligned sub-tasks.
- Grounding module strategies adjusted by intention sharing improve LLM scene comprehension.
- Planning module re-planning guided by execution feedback increases goal-oriented coordination.
Compared to various single-agent baselines and existing state-of-the-art MARL Hu and Sadigh (2023); Zou et al. (2023) methods using LLMs, our ReMALIS framework demonstrates improved performance on complex collaborative tasks, utilizing the publicly available large-scale traffic flow prediction (TFP) dataset and web-based activities dataset. This demonstrates its effectiveness in deploying LLMs as collaborative agents capable of intention communication, strategic adjustments, and collaborative re-planning Du et al. (2023).
2 Preliminary
In this section, we introduce the proposed ReMALIS framework in detail. As illustrated in Figure 1, ReMALIS consists of four key components:
<details>
<summary>x1.png Details</summary>

System diagram of the enhanced LLM planning and execution framework: a Planning Module (task decomposition, knowledge elicitation, and sub-goal prediction via $p_{\theta}$), a Grounding Module (action-history-conditioned grounding with collaborative communication and type-level attention), and an Execution Learning component (confidence and uncertainty thresholds with outcome evaluation), linked by iterative reward feedback, intention transmission, planning guidance, and human supervision loops.
</details>
Figure 1: This framework introduces a multi-agent learning strategy designed to enhance the capabilities of LLMs through cooperative coordination. It enables agents to collaborate and share intentions for effective coordination, and utilizes recursive reasoning to model and adapt to each other’s strategies.
Planning Module $p_{\theta}$ predicts the next pending sub-goal $s_{t+1}$ given the current sub-goal $s_{t}$ and other inputs, $s_{t+1}=p_{\theta}(s_{t},I_{t},e_{t},f_{t})$, where $I_{t}$ is the current intention, $e_{t}$ is the grounded embedding, and $f_{t}$ is agent feedback. $p_{\theta}$ first encodes the information through encoding layers, $h_{t}=\text{Encoder}(s_{t},I_{t},e_{t},f_{t})$, and subsequently predicts the sub-goal through $s_{t+1}=\text{Softmax}(T_{\theta}(h_{t}))$, where $T_{\theta}$ utilizes a graph neural network (GNN) architecture.
The module is trained to maximize the likelihood of all sub-goals along the decision sequences given the current information on time step $t$ . This allows the dynamic re-planning of sub-task dependencies based on agent feedback.
$$
\theta^{*}=\arg\max_{\theta}\prod_{t=1}^{T}p_{\theta}(s_{t+1}|s_{t},I_{t},e_{t},f_{t}). \tag{1}
$$
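To make the planning step concrete, the following is a minimal PyTorch sketch of a planning module in the spirit of $p_{\theta}$ above; the layer sizes, the single message-passing step standing in for the GNN $T_{\theta}$, and the per-sub-goal input layout are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of a planning module in the spirit of p_theta (Eq. 1).
import torch
import torch.nn as nn

class PlanningModule(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, num_subgoals: int):
        super().__init__()
        # Encoder over the concatenated (s_t, I_t, e_t, f_t) features.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.propagate = nn.Linear(hidden_dim, hidden_dim)  # one graph propagation step
        self.head = nn.Linear(hidden_dim, num_subgoals)

    def forward(self, s_t, intention, embedding, feedback, adjacency):
        # h_t = Encoder(s_t, I_t, e_t, f_t); inputs are [num_nodes, d] per-sub-goal features.
        h = self.encoder(torch.cat([s_t, intention, embedding, feedback], dim=-1))
        # One message-passing step over the sub-goal dependency graph (stand-in for T_theta).
        h = torch.relu(adjacency @ self.propagate(h))
        # s_{t+1} = Softmax(T_theta(h_t)): distribution over the next pending sub-goal.
        return torch.softmax(self.head(h), dim=-1)
```

Training such a module with the negative log-likelihood of the observed next sub-goals corresponds to the maximum-likelihood objective in Eq. (1).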
Grounding Module $g_{\phi}$ contextualizes symbol embeddings $e_{t}=g_{\phi}(s_{t},I_{t},f_{1:t})$, where $s_{t}$, $I_{t}$, and $f_{1:t}$ represent the states, intention, and feedback up to time step $t$, respectively. These embeddings are processed by encoders $h_{t}=\text{Encoder}(s_{t},I_{t},f_{1:t})$ and then by cross-attention layers and convolutional feature extractors, $e_{t}=\text{Conv}(\text{Attn}(h_{t},V))+P_{t}$, over vocabulary $V$. Here, $P_{t}$ incorporates agent feedback so that coordination signals sharpen the contextual understanding used during grounding. The module maps language symbols to physical environment representations through:
$$
g(x)=f_{\theta}\left(\sum_{i=1}^{N}w_{i}g(x_{i})\right), \tag{2}
$$
where $g(x)$ is the grounded embeddings of policy set $x$ and $g(x_{i})$ represents its individual action embedding on agent $i$ , respectively, and $w_{i}$ are learnable weights. The grounding function $f_{\theta}$ utilizes a GNN architecture for structural composition. Additionally, we employ an uncertainty modeling module that represents ambiguities in grounding:
$$
q_{\phi}(z|x)=\text{Normal}\big{(}z;\mu_{\phi}(x),\sigma^{2}_{\phi}(x)\big{)}, \tag{3}
$$
where $z$ is a latent variable modeled as a normal distribution, enabling the capture of multimodal uncertainties in grounding.
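The following is a hedged PyTorch sketch of a grounding module in the spirit of $g_{\phi}$ together with the uncertainty model $q_{\phi}(z|x)$ in Eqs. (2)-(3); the single-head cross-attention, the 1D convolution, and the input shapes are illustrative assumptions, not the exact architecture.

```python
# Hedged sketch of the grounding module g_phi with an uncertainty head (Eqs. 2-3).
import torch
import torch.nn as nn

class GroundingModule(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, vocab_size: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.vocab = nn.Embedding(vocab_size, hidden_dim)  # symbol vocabulary V
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=1, batch_first=True)
        self.conv = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1)
        self.mu = nn.Linear(hidden_dim, latent_dim)         # mu_phi(x)
        self.log_sigma = nn.Linear(hidden_dim, latent_dim)  # log sigma_phi(x)

    def forward(self, states, intention, feedback, p_t):
        # h_t = Encoder(s_t, I_t, f_{1:t}); inputs assumed to be [B, T, d] sequences.
        h = torch.relu(self.encoder(torch.cat([states, intention, feedback], dim=-1)))
        # Cross-attention between the encoded context and the vocabulary V.
        v = self.vocab.weight.unsqueeze(0).repeat(h.size(0), 1, 1)
        a, _ = self.attn(h, v, v)
        # e_t = Conv(Attn(h_t, V)) + P_t, where P_t carries agent feedback.
        e = self.conv(a.transpose(1, 2)).transpose(1, 2) + p_t
        # q_phi(z|x) = Normal(z; mu_phi(x), sigma_phi(x)^2) captures grounding ambiguity.
        dist = torch.distributions.Normal(self.mu(e), self.log_sigma(e).exp())
        return e, dist
```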
Cooperative Execution Module comprises $N$ specialized agents $\{A_{1},...,A_{N}\}$ . This architecture avoids using a single agent to handle all tasks. Instead, each agent is dedicated to a distinct semantic domain, cultivating expertise specific to that domain. For instance, agents $A_{1},A_{2},$ and $A_{3}$ may be dedicated to query processing, information retrieval, and arithmetic operations, respectively. This specialization promotes an efficient distribution of tasks and reduces overlap in capabilities.
Decomposing skills into specialized agents risks creating isolated capabilities that lack coordination. To address this, it is essential that agents not only excel individually but also comprehend the capacities and limitations of their peers. We propose an integrated training approach where specialized agents are trained simultaneously to foster collaboration and collective intelligence. We represent the parameters of agent $A_{i}$ as $\theta_{i}$ . Each agent’s policy, denoted as $y_{i}\sim\pi_{\theta_{i}}(·|s)$ , samples an output $y_{i}$ from a given input state $s$ . The training objective for our system is defined by the following equation:
$$
L_{exe}=\sum_{i=1}^{N}\mathbb{E}_{(s,y^{\star})\sim\mathcal{D}}\,\ell\big(\pi_{\theta_{i}}(y_{i}|s),y^{\star}\big), \tag{4}
$$
where $\ell(·)$ represents the task-specific loss function, comparing the agent-generated output $y_{i}$ with the ground-truth label $y^{\star}$ . $\mathcal{D}$ denotes the distribution of training data. By optimizing this objective collectively across all agents, each agent not only improves its own output accuracy but also enhances the overall team’s ability to produce coherent and well-coordinated results.
During training, we adjust the decomposition of grounding tasks to enhance collaboration, represented by the soft module weights $\{w_{1},...,w_{N}\}$. These weights indicate how the distribution of grounding commands can be optimized to better utilize the capabilities of different agents. The objective of this training is defined by the loss function $L_{com}=\ell(d,w^{\star})$, where $\ell$ is the loss function, $d$ denotes the sub-goal task instruction data, and $w^{\star}$ is the optimal set of weights.
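As a concrete illustration of the joint objective in Eq. (4), the sketch below sums a task loss over all specialized agents on the same batch so that the team is optimized together rather than in isolation; the cross-entropy loss and the policy interface are assumptions made for the example.

```python
# Hedged sketch of the joint execution objective L_exe (Eq. 4).
import torch
import torch.nn.functional as F

def joint_execution_loss(agents, states, targets):
    """agents: list of policy networks pi_theta_i; states: [B, d]; targets: [B] class labels."""
    loss = torch.zeros(())
    for policy in agents:                                  # sum over i = 1..N
        logits = policy(states)                            # pi_theta_i(y_i | s)
        loss = loss + F.cross_entropy(logits, targets)     # ell(pi(y_i|s), y*)
    return loss
```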
<details>
<summary>x2.png Details</summary>

Process diagram of the LLM-driven workflow: a Planning stage (task coding, goal decomposition, priority policies, dynamic re-planning), a Grounding stage (collaboration with neighbors, functions, and targets, informed by behavioral history, expert guidance, and current status), and an Execution stage (instruction fine-tuning, agent execution, communication and coordination, collaborative evaluation), with feedback from execution back to planning for error handling and re-planning.
</details>
Figure 2: Overview of the proposed ReMALIS: This framework comprises a planning module, grounding module, cooperative execution module, and intention coordination channels.
3 Approach
The collaborative MARL of ReMALIS focuses on three key points: intention propagation for grounding, bidirectional coordination channels, and integration with recursive reasoning agents. Detailed parameter supplements and pseudocode details can be found in Appendix C and Appendix F.
3.1 Planning with Intention Propagation
We formulate a decentralized, partially observable Markov game for multi-agent collaboration. Each agent $i$ maintains a private intention $\mathcal{I}_{i}$ encoded as a tuple $\mathcal{I}_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ , where $\gamma_{i}$ is the current goal, $\Sigma_{i}=\{\sigma_{i1},\sigma_{i2},...\}$ is a set of related sub-goals, $\pi_{i}(\sigma)$ is a probability distribution over possible next sub-goals, and $\delta_{i}(\sigma)$ is the desired teammate assignment for sub-goal $\sigma$ .
Intentions are propagated through a communication channel $f_{\Lambda}$ parameterized by $\Lambda$. For a received message $m_{ij}$ from agent $j$, agent $i$ infers a belief over teammate $j$'s intention $b_{i}(\mathcal{I}_{j}|m_{ij})=f_{\Lambda}(m_{ij})$, where $f_{\Lambda}$ is implemented as a recurrent neural network. The channel $f_{\Lambda}$ is trained in an end-to-end manner to maximize the coordination reward function $R_{c}$. This propagates relevant sub-task dependencies to enhance common grounding on collaborative goals.
$$
\Lambda^{*}=\arg\max_{\Lambda}\mathbb{E}_{\mathcal{I},m\sim f_{\Lambda}}[R_{c}(\mathcal{I},m)]. \tag{5}
$$
At each time-step $t$ , the LLM witll processinputs comprising the agent’s state $s_{t}$ , the intention $\mathcal{I}_{t}$ , and the feedback $f_{1:t}$ .
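A minimal sketch of how the intention tuple $\mathcal{I}_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ and the propagation channel $f_{\Lambda}$ could be represented is given below; the dataclass fields, the GRU-based channel, and the categorical belief head are illustrative assumptions about one possible implementation.

```python
# Hedged sketch of the private intention tuple and the channel f_Lambda (Eq. 5).
from dataclasses import dataclass
from typing import Dict, List
import torch
import torch.nn as nn

@dataclass
class Intention:
    goal: str                                 # gamma_i: current goal
    subgoals: List[str]                       # Sigma_i: related sub-goals
    next_subgoal_probs: torch.Tensor          # pi_i(sigma): distribution over next sub-goals
    teammate_assignment: Dict[str, int]       # delta_i(sigma): desired teammate per sub-goal

class IntentionChannel(nn.Module):
    """f_Lambda: maps a received message m_ij to a belief over teammate j's intention."""
    def __init__(self, msg_dim: int, hidden_dim: int, num_intentions: int):
        super().__init__()
        self.rnn = nn.GRU(msg_dim, hidden_dim, batch_first=True)
        self.belief_head = nn.Linear(hidden_dim, num_intentions)

    def forward(self, messages):              # messages: [B, T, msg_dim]
        _, h = self.rnn(messages)
        # b_i(I_j | m_ij): categorical belief over a discretized intention space.
        return torch.softmax(self.belief_head(h[-1]), dim=-1)
```

The channel parameters would then be optimized with the coordination reward $R_{c}$ as in Eq. (5), e.g. via a policy-gradient estimator.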
3.2 Grounding with Bidirectional Coordination Channels
The execution agent policies, denoted by $\pi_{\xi}(a_{i}|s_{i},\mathcal{I}_{i})$ , are parameterized by $\xi$ and conditioned on the agent’s state $s_{i}$ and intention $\mathcal{I}_{i}$ . Emergent coordination patterns are encoded in a summary statistic $c_{t}$ and passed to upstream modules to guide planning and grounding adjustments. For example, frequent miscoordination on sub-goal $\sigma$ indicates the necessity to re-plan $\sigma$ dependencies in $\mathcal{I}$ .
This bidirectional feedback aligns low-level execution with high-level comprehension strategies. In addition to the downstream propagation of intents, execution layers provide bidirectional feedback signals $\psi(t)$ to upstream modules $\psi(t)=\Phi(h^{\text{exec}}_{t})$ :
$$
h^{\text{exec}}_{t}=[\phi_{1}(o_{1}),\ldots,\phi_{N}(o_{N})], \tag{6}
$$
where $\Phi(·)$ aggregates agent encodings to summarize emergent coordination, and $\phi_{i}(·)$ encodes the observation $o_{i}$ for agent $i$ .
Execution agents generate feedback $f_{t}$ to guide upstream LLM modules through $f_{t}=g_{\theta}(\tau_{1:t})$, where $g_{\theta}$ processes the action-observation history $\tau_{1:t}$. These signals include coordination errors $\mathcal{E}_{t}$, which indicate misalignment of sub-tasks; grounding uncertainty $\mathcal{U}_{t}$, measured as entropy over grounded symbol embeddings; and re-planning triggers $\mathcal{R}_{t}$, which flag the need for sub-task reordering. Together, these signals reflect inconsistencies between sub-task objectives, the ambiguity of symbols in different contexts, and the need to adjust previous sub-task sequencing.
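The sketch below illustrates one way the feedback signals $\mathcal{E}_{t}$, $\mathcal{U}_{t}$, and $\mathcal{R}_{t}$ could be computed from execution traces; the disagreement-based error measure, the entropy-based uncertainty, and the re-planning threshold are assumptions made for illustration.

```python
# Hedged sketch of the execution-level feedback signals E_t, U_t, R_t.
import torch

def execution_feedback(subgoal_assignments: torch.Tensor,
                       subgoal_targets: torch.Tensor,
                       grounding_probs: torch.Tensor,
                       replan_threshold: float = 0.5):
    # E_t: fraction of sub-tasks whose executed assignment disagrees with the plan.
    coordination_error = (subgoal_assignments != subgoal_targets).float().mean()
    # U_t: entropy over grounded symbol distributions (higher = more ambiguous grounding).
    p = grounding_probs.clamp_min(1e-8)
    grounding_uncertainty = -(p * p.log()).sum(dim=-1).mean()
    # R_t: flag re-planning when coordination errors exceed a threshold.
    replan_trigger = coordination_error > replan_threshold
    return coordination_error, grounding_uncertainty, replan_trigger
```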
Algorithm 1 ReMALIS: Recursive Multi-Agent Learning with Intention Sharing
1: Initialize LLM parameters $\theta,\phi,\omega$
2: Initialize agent policies $\pi_{\xi}$ , communication channel $f_{\Lambda}$
3: Initialize grounding confusion matrix $C$ , memory $M$
4: for each episode do
5: for each time step $t$ do
6: Observe states $s_{t}$ and feedback $f_{1:t}$ for all agents
7: Infer intentions $\mathcal{I}_{t}$ from $s_{t},f_{1:t}$ using $\text{LLM}_{\theta}$
8: Propagate intentions $\mathcal{I}_{t}$ through channel $f_{\Lambda}$
9: Compute grounded embeddings $e_{t}=g_{\phi}(s_{t},\mathcal{I}_{t},f_{1:t})$
10: Predict sub-tasks $\Sigma_{t+1}=p_{\theta}(\mathcal{I}_{t},e_{t},f_{1:t})$
11: Generate actions $a_{t}=a_{\omega}(e_{t},\Sigma_{t+1},f_{1:t})$
12: Execute actions $a_{t}$ and observe rewards $r_{t}$ , new states $s_{t+1}$
13: Encode coordination patterns $c_{t}=\Phi(h^{\text{exec}}_{t})$
14: Update grounding confusion $C_{t},M_{t}$ using $c_{t}$
15: Update policies $\pi_{\xi}$ using $R$ and auxiliary loss $\mathcal{L}_{\text{aux}}$
16: Update LLM $\theta,\phi,\omega$ using $\mathcal{L}_{\text{RL}},\mathcal{L}_{\text{confusion}}$
17: end for
18: end for
3.3 Execution: Integration with Reasoning Agents
3.3.1 Agent Policy Generation
We parameterize agent policies $\pi_{\theta}(a_{t}|s_{t},\mathcal{I}_{t},c_{1:t})$ using an LLM with weights $\theta$ . At each time step, the LLM takes as input the agent’s state $s_{t}$ , intention $\mathcal{I}_{t}$ , and coordination feedback $c_{1:t}$ . The output is a distribution over the next actions $a_{t}$ :
$$
\pi_{\theta}(a_{t}|s_{t},\mathcal{I}_{t},c_{1:t})=\text{LLM}_{\theta}(s_{t},\mathcal{I}_{t},c_{1:t}). \tag{7}
$$
To leverage agent feedback $f_{1:t}$ , we employ an auxiliary regularization model $\hat{\pi}_{\phi}(a_{t}|s_{t},f_{1:t})$ :
$$
\mathcal{L}_{\text{aux}}(\theta;s_{t},f_{1:t})=\text{MSE}\big(\pi_{\theta}(s_{t}),\hat{\pi}_{\phi}(s_{t},f_{1:t})\big), \tag{8}
$$
where $\hat{\pi}_{\phi}$ is a feedback-conditioned policy approximation. The training loss to optimize $\theta$ is:
$$
\mathcal{L}(\theta)=\mathcal{L}_{\text{RL}}(\theta)+\lambda\mathcal{L}_{\text{aux}}(\theta), \tag{9}
$$
where $\mathcal{L}_{\text{RL}}$ is the reinforcement learning objective and $\lambda$ a weighting factor.
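A minimal sketch of the combined loss in Eqs. (8)-(9) is shown below; treating both policies as returning action probabilities and detaching the feedback-conditioned target are assumptions made for the example.

```python
# Hedged sketch of the auxiliary regularization and total policy loss (Eqs. 8-9).
import torch
import torch.nn.functional as F

def total_policy_loss(policy_probs: torch.Tensor,
                      feedback_policy_probs: torch.Tensor,
                      rl_loss: torch.Tensor,
                      lam: float = 0.1) -> torch.Tensor:
    # L_aux = MSE(pi_theta(s_t), pi_hat_phi(s_t, f_1:t)); the feedback policy acts as a target.
    aux_loss = F.mse_loss(policy_probs, feedback_policy_probs.detach())
    # L = L_RL + lambda * L_aux
    return rl_loss + lam * aux_loss
```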
3.3.2 Grounding Strategy Adjustment
We model action dependencies using a graph neural policy module $h_{t}^{a}=\text{GNN}(s_{t},a)$ , where $h_{t}^{a}$ models interactions between action $a$ and the state $s_{t}$ . The policy is then given by $\pi_{\theta}(a_{t}|s_{t})=\prod_{i=1}^{|A|}h_{t}^{a_{i}}$ . This captures the relational structure in the action space, enabling coordinated action generation conditioned on agent communication.
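Below is a hedged sketch of such a factorized policy; the scoring network standing in for the GNN and the log-space product are illustrative choices, not the exact architecture.

```python
# Hedged sketch of the factorized relational policy pi_theta(a_t|s_t) = prod_i h_t^{a_i}.
import torch
import torch.nn as nn

class RelationalPolicy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int):
        super().__init__()
        # Scores interactions between each action embedding and the current state.
        self.score = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, state, action_embeddings):
        # state: [B, state_dim]; action_embeddings: [B, |A|, action_dim].
        s = state.unsqueeze(1).expand(-1, action_embeddings.size(1), -1)
        h = self.score(torch.cat([s, action_embeddings], dim=-1)).squeeze(-1)  # h_t^{a_i} in (0, 1)
        # Product over actions, computed in log-space for numerical stability.
        return h.clamp_min(1e-8).log().sum(dim=-1).exp()
```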
The coordination feedback $c_{t}$ is used to guide adjustments in the grounding module’s strategies. We define a grounding confusion matrix $C_{t}$ , where $C_{t}(i,j)$ represents grounding errors between concepts $i$ and $j$ . The confusion matrix constrains LLM grounding as:
$$
f_{\phi}(s_{t},\mathcal{I}_{t})=\text{LLM}_{\phi}(s_{t},\mathcal{I}_{t})\odot\lambda C_{t}, \tag{10}
$$
where $\odot$ is element-wise multiplication and $\lambda$ controls the influence of $C_{t}$ , reducing uncertainty on error-prone concept pairs.
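A minimal sketch of this element-wise modulation (Eq. 10) follows; treating the grounding logits and $C_{t}$ as same-shaped tensors is an assumption made for illustration.

```python
# Hedged sketch of confusion-constrained grounding (Eq. 10).
import torch

def constrained_grounding(grounding_logits: torch.Tensor,
                          confusion_matrix: torch.Tensor,
                          lam: float = 0.5) -> torch.Tensor:
    # f_phi(s_t, I_t) = LLM_phi(s_t, I_t) ⊙ (lambda * C_t):
    # error-prone concept pairs are re-weighted by their accumulated confusion.
    return grounding_logits * (lam * confusion_matrix)
```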
We propose a modular regularization approach, with the grounding module $g_{\phi}$ regularized by a coordination confusion estimator:
$$
\mathcal{L}_{\text{confusion}}=\frac{1}{N}\sum_{i,j}A_{\psi}(c_{i},c_{j})\cdot\text{Conf}(c_{i},c_{j}), \tag{11}
$$
where $\text{Conf}(c_{i},c_{j})$ measures the confusion between concepts $c_{i}$ and $c_{j}$, and $A_{\psi}(c_{i},c_{j})$ are attention weights assigning importance based on grounding sensitivity.
An episodic confusion memory $M_{t}$ accumulates long-term grounding uncertainty statistics:
$$
M_{t}(i,j)=M_{t-1}(i,j)+\mathbb{I}(\text{Confuse}(c_{i},c_{j})_{t}), \tag{12}
$$
where $\mathbb{I}(·)$ is an indicator function that tracks confusion events. By regularizing with a coordination-focused confusion estimator and episodic memory, the grounding module adapts to avoid miscoordination.
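The sketch below illustrates the confusion regularizer of Eq. (11) and the memory update of Eq. (12); representing $A_{\psi}$ as a learned pairwise weight matrix and the confusion indicator as a binary tensor are assumptions for the example.

```python
# Hedged sketch of the confusion regularizer (Eq. 11) and episodic memory update (Eq. 12).
import torch

def confusion_regularizer(attention: torch.Tensor, confusion: torch.Tensor) -> torch.Tensor:
    # L_confusion = (1/N) * sum_{i,j} A_psi(c_i, c_j) * Conf(c_i, c_j)
    n = confusion.size(0)
    return (attention * confusion).sum() / n

def update_confusion_memory(memory: torch.Tensor, confused: torch.Tensor) -> torch.Tensor:
    # M_t(i,j) = M_{t-1}(i,j) + I(Confuse(c_i, c_j)_t), with `confused` a 0/1 event matrix.
    return memory + confused.to(memory.dtype)
```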
3.4 Collective Learning and Adaptation
The coordination feedback signals $c_{t}$ and interpretability signals $\mathcal{E}_{t},\mathcal{U}_{t},\mathcal{R}_{t}$ play a crucial role in enabling the LLM agents to adapt and learn collectively. By incorporating these signals into the training process, the agents can adjust their strategies and policies to better align with the emerging coordination patterns and requirements of the collaborative tasks.
The collective learning process can be formalized as an optimization problem whose goal is to minimize the objective $\mathcal{L}(\eta,\gamma,\zeta,\xi)=\mathbb{E}_{s_{t},\mathcal{I}_{t},f_{1:t}}\left[\alpha\mathcal{U}_{t}+\beta\mathcal{E}_{t}-\mathcal{R}\right]+\Omega(\eta,\gamma,\zeta,\xi)$. Here, $\alpha$ and $\beta$ are weighting factors that balance the contributions of the grounding uncertainty $\mathcal{U}_{t}$ and the coordination errors $\mathcal{E}_{t}$, respectively. The team reward $\mathcal{R}$ is maximized to encourage collaborative behavior. The term $\Omega(\eta,\gamma,\zeta,\xi)$ represents regularization terms or constraints on the model parameters that ensure stable and robust learning.
The objective function $\mathcal{L}$ is defined over the current state $s_{t}$ , the interpretability signals $\mathcal{I}_{t}=\{\mathcal{E}_{t},\mathcal{U}_{t},\mathcal{R}_{t}\}$ , and the trajectory of feedback signals $f_{1:t}=\{c_{1},\mathcal{I}_{1},...,c_{t},\mathcal{I}_{t}\}$ up to the current time step $t$ . The expectation $\mathbb{E}_{s_{t},\mathcal{I}_{t},f_{1:t}}[·]$ is taken over the distribution of states, interpretability signals, and feedback signal trajectories encountered during training.
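As an illustration, the collective objective could be assembled as in the sketch below; using an L2 penalty for $\Omega$ and treating the signals as scalar tensors are assumptions made for the example.

```python
# Hedged sketch of the collective learning objective with an L2 regularizer for Omega.
import torch

def collective_objective(uncertainty: torch.Tensor,
                         coord_error: torch.Tensor,
                         team_reward: torch.Tensor,
                         params,
                         alpha: float = 1.0,
                         beta: float = 1.0,
                         weight_decay: float = 1e-4) -> torch.Tensor:
    # L = E[ alpha * U_t + beta * E_t - R ] + Omega(params)
    omega = weight_decay * sum(p.pow(2).sum() for p in params)
    return alpha * uncertainty + beta * coord_error - team_reward + omega
```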
| Method | Web (Easy) | Web (Medium) | Web (Hard) | Web (All) | TFP (Easy) | TFP (Medium) | TFP (Hard) | TFP (Hell) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5-Turbo | | | | | | | | |
| CoT | 65.77 | 51.62 | 32.45 | 17.36 | 81.27 | 68.92 | 59.81 | 41.27 |
| Zero-Shot Plan | 57.61 | 52.73 | 28.92 | 14.58 | 82.29 | 63.77 | 55.39 | 42.38 |
| Llama2-7B | | | | | | | | |
| CoT | 59.83 | 54.92 | 30.38 | 15.62 | 82.73 | 65.81 | 57.19 | 44.58 |
| ReAct | 56.95 | 41.86 | 27.59 | 13.48 | 81.15 | 61.65 | 53.97 | 43.25 |
| ART | 62.51 | 52.34 | 33.81 | 18.53 | 81.98 | 63.23 | 51.78 | 46.83 |
| ReWOO | 63.92 | 53.17 | 34.95 | 19.37 | 82.12 | 71.38 | 61.23 | 47.06 |
| AgentLM | 62.14 | 46.75 | 30.84 | 15.98 | 82.96 | 66.03 | 57.16 | 43.91 |
| FireAct | 64.03 | 50.68 | 32.78 | 17.49 | 83.78 | 68.19 | 58.94 | 45.06 |
| LUMOS | 66.27 | 53.81 | 35.37 | 19.53 | 84.03 | 71.75 | 62.57 | 51.49 |
| Llama3-8B | | | | | | | | |
| Code-Llama (PoT) | 64.85 | 49.49 | 32.16 | 17.03 | 83.34 | 68.47 | 59.15 | 52.64 |
| AgentLM | 66.77 | 51.45 | 31.59 | 16.58 | 85.26 | 71.81 | 58.68 | 53.39 |
| FiReAct | 68.92 | 53.27 | 32.95 | 17.64 | 84.11 | 72.15 | 58.63 | 51.65 |
| DGN | 69.15 | 54.78 | 33.63 | 18.17 | 83.42 | 71.08 | 62.34 | 53.57 |
| LToS | 68.48 | 55.03 | 33.06 | 17.71 | 85.77 | 74.61 | 59.37 | 54.81 |
| AUTOACT | 67.62 | 56.25 | 31.84 | 16.79 | 87.89 | 76.29 | 58.94 | 52.87 |
| ReMALIS(Ours) | 73.92 | 58.64 | 38.37 | 21.42 | 89.15 | 77.62 | 64.53 | 55.37 |
Table 1: Comparative analysis of the ReMALIS framework against single-agent baselines and contemporary methods across two datasets
4 Experiments
4.1 Datasets
To assess the performance of our models, we conducted evaluations using two large-scale real-world datasets: the traffic flow prediction (TFP) dataset and the web-based activities dataset.
TFP dataset comprises 100,000 traffic scenarios, each accompanied by corresponding flow outcomes. Each example is detailed with descriptions of road conditions, vehicle count, weather, and traffic control measures, and is classified as traffic flow: smooth, congested, or jammed. The raw data was sourced from traffic cameras, incident reports, and simulations, and underwent preprocessing to normalize entities and eliminate duplicates.
Web activities dataset contains over 500,000 examples of structured web interactions such as booking flights, scheduling appointments, and making reservations. Each activity follows a template with multiple steps like searching, selecting, filling forms, and confirming. User utterances and system responses were extracted to form the input-output pairs across 150 domains, originating from real anonymized interactions with chatbots, virtual assistants, and website frontends.
4.2 Implementation Details
To handle the computational demands of training our framework with LLMs, we employ 8 Nvidia A800-80G GPUs Chen et al. (2024) under the DeepSpeed Aminabadi et al. (2022) training framework, which can effectively accommodate the extensive parameter spaces and activations required by our framework’s LLM components and multi-agent architecture Rasley et al. (2020).
For the TFP dataset, we classified the examples into four difficulty levels: “Easy”, “Medium”, “Hard”, and “Hell”. The “Easy” level comprises small grid networks with low, stable vehicle arrival rates. The “Medium” level includes larger grids with variable arrival rates. “Hard” tasks feature large, irregular networks with highly dynamic arrival rates and complex intersection configurations. The “Hell” level introduces challenges such as partially observable states, changing road conditions, and fully decentralized environments.
For the web activities dataset, we divided the tasks into “Easy”, “Medium”, “Hard”, and “All” levels. “Easy” tasks required basic single-click or short phrase interactions. “Medium” involved complex multi-page sequences like form submissions. “Hard” tasks demanded significant reasoning through ambiguous, dense websites. The “All” level combined tasks across the full difficulty spectrum.
The dataset was divided into 80% for training, 10% for validation, and 10% for testing, with examples shuffled. These large-scale datasets offer a challenging and naturalistic benchmark to evaluate our multi-agent framework on complex, real-world prediction and interaction tasks.
4.3 Results and Analysis
Table 1 displays the principal experimental results of our ReMALIS framework in comparison with various single-agent baselines and contemporary methods using the web activities dataset. We evaluated the models across four levels of task difficulty: “Easy”, “Medium”, “Hard”, and “All”.
The results from our comparative analysis indicate that ReMALIS (7B), equipped with a 7B-parameter LLM backbone, significantly outperforms competing methods. On the comprehensive “All” difficulty level, which aggregates tasks across a range of complexities, ReMALIS achieved a score of 55.37%, surpassing the second-highest scoring method, LUMOS, at 51.49%. ReMALIS (7B) also outperformed AUTOACT, which uses a larger 13B-parameter model and scored 52.87%, by roughly 2.5 percentage points. These findings highlight the efficacy of ReMALIS’s parameter-efficient design and its multi-agent collaborative training approach, which allow it to significantly outperform larger single-agent LLMs.
Notably, ReMALIS (7B) also exceeded the performance of GPT-3.5 (Turbo), a substantially larger foundation model, across all difficulty levels. On “Hard” tasks, ReMALIS’s 21.42% surpassed GPT-3.5’s 17.36% by over 4 points. This indicates that ReMALIS’s coordination mechanisms transform relatively modest LLMs into highly capable collaborative agents.
Despite their larger sizes, single-agent approaches like GPT-3.5 CoT, ReAct, and AgentLM significantly underperformed; even the advanced single-agent method LUMOS (13B) could not rival ReMALIS (7B). This superiority is attributable to ReMALIS’s specialized multi-agent design and novel features such as intention propagation, bidirectional feedback, and recursive reasoning. On complex “Hard” tasks that require extensive reasoning, ReMALIS achieved 21.42%, surpassing LUMOS by nearly 2 percentage points, highlighting the benefits of its multi-agent architecture and collaborative learning mechanisms.
The exceptional performance of our proposed ReMALIS framework on the Traffic Flow Prediction (TFP) dataset can also be attributed to its innovative design and the effective integration of advanced techniques. On the "Easy" difficulty level, ReMALIS achieved an impressive accuracy of 89.15%, outperforming the second-best method, AUTOACT, by a substantial margin of 1.26%. In the "Medium" category, ReMALIS secured an accuracy of 77.62%, surpassing AUTOACT’s 76.29% by 1.33%. Even in the most challenging "Hard" and "Hell" levels, ReMALIS maintained its lead with accuracies of 64.53% and 55.37%, respectively, outperforming the next best methods, DGN (62.34%) and LToS (54.81%), by 2.19% and 0.56%.
4.4 Ablation Studies
1) The Impact on Improving Multi-Agent Coordination Accuracy. We conduct ablation studies to evaluate the impact of each component within the ReMALIS framework; the observations are reported in Table 2. Excluding intention propagation reduces accuracy by over 6% across both datasets, highlighting the difficulty of achieving common grounding among agents without shared local beliefs. This underscores the importance of intention sharing for emergent team behaviors.
The absence of bidirectional coordination channels leads to a 4.37% decline in performance across various metrics, illustrating the importance of execution-level signals in shaping planning and grounding strategies. Without feedback coordination, agents become less responsive to new scenarios that require re-planning.
Table 2: Ablation studies on Traffic and Web datasets
| Dataset | Configuration | Accuracy | Metric 1 | Metric 2 |
| --- | --- | --- | --- | --- |
| Traffic | Single Agent Baseline | 42.5% | 0.217 | 0.384 |
| Traffic | Intention Propagation | 47.3% | 0.251 | 0.425 |
| Traffic | Bidirectional Feedback | 49.8% | 0.278 | 0.461 |
| Traffic | Recursive Reasoning | 53.2% | 0.311 | 0.503 |
| Traffic | ReMALIS (Full) | 58.7% | 0.342 | 0.538 |
| Web | Single Agent Baseline | 38.9% | 0.255 | 0.416 |
| Web | Intention Propagation | 42.7% | 0.283 | 0.453 |
| Web | Bidirectional Feedback | 46.3% | 0.311 | 0.492 |
| Web | Recursive Reasoning | 50.6% | 0.345 | 0.531 |
| Web | ReMALIS (Full) | 55.4% | 0.379 | 0.567 |
Substituting recursive reasoning with convolutional and recurrent neural networks reduces contextual inference accuracy by 5.86%. Non-recursive agents display short-sighted behavior compared to the holistic reasoning enabled by recursive transformer modeling. This emphasizes that recursive architectures are vital for complex temporal dependencies.
<details>
<summary>extracted/5737747/m1.png Details</summary>

Two line charts comparing GPT-3.5, ReAct, ART, ReWOO, AgentLM, FireAct, LUMOS, and ReMALIS over time (ms): the left panel plots performance in roughly the 60-85 range and the right panel in roughly the 15-22 range.
</details>
Figure 3: Comparative performance evaluation across varying task difficulty levels for the web activities dataset, which indicates the accuracy scores achieved by ReMALIS and several state-of-the-art baselines.
Table 3: Ablation on agent coordination capabilities
| Method | Sub-task Alignment (Easy) | Sub-task Alignment (Medium) | Sub-task Alignment (Hard) | Coordination Time, ms (Easy) | Coordination Time, ms (Medium) | Coordination Time, ms (Hard) |
| --- | --- | --- | --- | --- | --- | --- |
| No Communication | 31% | 23% | 17% | 592 | 873 | 1198 |
| REACT | 42% | 34% | 29% | 497 | 732 | 984 |
| AgentLM | 48% | 39% | 32% | 438 | 691 | 876 |
| FiReAct | 58% | 47% | 37% | 382 | 569 | 745 |
| Basic Propagation | 68% | 53% | 41% | 314 | 512 | 691 |
| Selective Propagation | 79% | 62% | 51% | 279 | 438 | 602 |
| Full Intention Sharing | 91% | 71% | 62% | 248 | 386 | 521 |
2) The Impact on Improving Multi-Agent Coordination Capability. As presented in Table 3, in terms of aligned sub-task percentage, the proposed Basic Propagation, Selective Propagation, and Full Intention Sharing methods consistently outperform baseline models such as REACT and AgentLM across the “easy”, “medium”, and “hard” difficulty levels. For example, Full Intention Sharing achieves alignment of 91%, 71%, and 62% across these levels, respectively, substantially higher than the no-communication setting (31%, 23%, and 17%).
Similarly, coordination time metrics exhibit major efficiency gains from intention propagation. On “Hard” tasks, Full Intention Sharing reduces coordination time to 521 ms, 57% faster than the 1198 ms for No Communication. As task complexity increases from easy to hard, the coordination time savings compared to baselines grows from 138 ms to 677 ms. This reveals that intention sharing mitigates growing coordination delays for difficult scenarios.
The propagation mechanisms also show clear incremental improvements as information sharing becomes more targeted. As agents propagate more precise intentions to the relevant teammates, both sub-task alignment and coordination efficiency improve, with each step from Basic to Selective to Full sharing adding further gains.
5 Conclusion
In this paper, we introduce a novel framework, ReMALIS, designed to enhance collaborative capabilities within multi-agent systems using LLMs. Our approach incorporates three principal innovations: intention propagation for establishing a shared understanding among agents, bidirectional coordination channels to adapt reasoning processes in response to team dynamics, and recursive reasoning architectures that provide agents with advanced contextual grounding and planning capabilities necessary for complex coordination tasks. Experimental results indicate that ReMALIS significantly outperforms several baseline methods, underscoring the efficacy of cooperative multi-agent AI systems. By developing frameworks that enable LLMs to acquire cooperative skills analogous to human team members, we advance the potential for LLM agents to manage flexible coordination in complex collaborative environments effectively.
6 Limitations
While ReMALIS demonstrates promising results in collaborative multi-agent tasks, our framework relies on a centralized training paradigm, which may hinder scalability in fully decentralized environments. The current implementation does not explicitly handle dynamic agent arrival or departure during execution, which could impact coordination in real-world applications. In addition, the recursive reasoning component may struggle with long-term dependencies and planning horizons beyond a certain time frame.
References
- Aminabadi et al. (2022) Reza Yazdani Aminabadi et al. 2022. Deepspeed-inference: Enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis.
- Chaka (2023) Chaka Chaka. 2023. Generative ai chatbots-chatgpt versus youchat versus chatsonic: Use cases of selected areas of applied english language studies. International Journal of Learning, Teaching and Educational Research, 22(6):1–19.
- Chebotar et al. (2023) Yevgen Chebotar et al. 2023. Q-transformer: Scalable offline reinforcement learning via autoregressive q-functions. In Conference on Robot Learning. PMLR.
- Chen et al. (2023) Baian Chen et al. 2023. Fireact: Toward language agent fine-tuning. arXiv preprint arXiv:2310.05915.
- Chen et al. (2024) Yushuo Chen et al. 2024. Towards coarse-to-fine evaluation of inference efficiency for large language models. arXiv preprint arXiv:2404.11502.
- Chiu et al. (2024) Yu Ying Chiu et al. 2024. A computational framework for behavioral assessment of llm therapists. arXiv preprint arXiv:2401.00820.
- Dong et al. (2023) Yihong Dong et al. 2023. Codescore: Evaluating code generation by learning code execution. arXiv preprint arXiv:2301.09043.
- Du et al. (2023) Yali Du et al. 2023. A review of cooperation in multi-agent learning. arXiv preprint arXiv:2312.05162.
- Fan et al. (2020) Cheng Fan et al. 2020. Statistical investigations of transfer learning-based methodology for short-term building energy predictions. Applied Energy, 262:114499.
- Foerster et al. (2018) Jakob Foerster et al. 2018. Counterfactual multi-agent policy gradients. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
- Gao et al. (2023) Yunfan Gao et al. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997.
- Hartmann et al. (2022) Valentin N. Hartmann et al. 2022. Long-horizon multi-robot rearrangement planning for construction assembly. IEEE Transactions on Robotics, 39(1):239–252.
- He et al. (2021) Junxian He et al. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366.
- Hu and Sadigh (2023) Hengyuan Hu and Dorsa Sadigh. 2023. Language instructed reinforcement learning for human-ai coordination. arXiv preprint arXiv:2304.07297.
- Huang et al. (2022) Baichuan Huang, Abdeslam Boularias, and Jingjin Yu. 2022. Parallel monte carlo tree search with batched rigid-body simulations for speeding up long-horizon episodic robot planning. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
- Khamparia et al. (2021) Aditya Khamparia et al. 2021. An internet of health things-driven deep learning framework for detection and classification of skin cancer using transfer learning. Transactions on Emerging Telecommunications Technologies, 32(7):e3963.
- Lee and Perret (2022) Irene Lee and Beatriz Perret. 2022. Preparing high school teachers to integrate ai methods into stem classrooms. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36.
- Li et al. (2020) Chuan Li et al. 2020. A systematic review of deep transfer learning for machinery fault diagnosis. Neurocomputing, 407:121–135.
- Li et al. (2022) Weihua Li et al. 2022. A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges. Mechanical Systems and Signal Processing, 167:108487.
- Loey et al. (2021) Mohamed Loey et al. 2021. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the covid-19 pandemic. Measurement, 167:108288.
- Lotfollahi et al. (2022) Mohammad Lotfollahi et al. 2022. Mapping single-cell data to reference atlases by transfer learning. Nature biotechnology, 40(1):121–130.
- Lu et al. (2023) Pan Lu et al. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842.
- Lyu et al. (2021) Xueguang Lyu et al. 2021. Contrasting centralized and decentralized critics in multi-agent reinforcement learning. arXiv preprint arXiv:2102.04402.
- Mao et al. (2022) Weichao Mao et al. 2022. On improving model-free algorithms for decentralized multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Martini et al. (2021) Franziska Martini et al. 2021. Bot, or not? comparing three methods for detecting social bots in five political discourses. Big data & society, 8(2):20539517211033566.
- Miao et al. (2023) Ning Miao, Yee Whye Teh, and Tom Rainforth. 2023. Selfcheck: Using llms to zero-shot check their own step-by-step reasoning. arXiv preprint arXiv:2308.00436.
- Qiu et al. (2024) Xihe Qiu et al. 2024. Chain-of-lora: Enhancing the instruction fine-tuning performance of low-rank adaptation on diverse instruction set. IEEE Signal Processing Letters.
- Raman et al. (2022) Shreyas Sundara Raman et al. 2022. Planning with large language models via corrective re-prompting. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
- Rana et al. (2023) Krishan Rana et al. 2023. Sayplan: Grounding large language models using 3d scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135.
- Rashid et al. (2020) Tabish Rashid et al. 2020. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. In Advances in neural information processing systems 33, pages 10199–10210.
- Rasley et al. (2020) Jeff Rasley et al. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
- Saber et al. (2021) Abeer Saber et al. 2021. A novel deep-learning model for automatic detection and classification of breast cancer using the transfer-learning technique. IEEE Access, 9:71194–71209.
- Schroeder de Witt et al. (2019) Christian Schroeder de Witt et al. 2019. Multi-agent common knowledge reinforcement learning. In Advances in Neural Information Processing Systems 32.
- Schuchard and Crooks (2021) Ross J. Schuchard and Andrew T. Crooks. 2021. Insights into elections: An ensemble bot detection coverage framework applied to the 2018 us midterm elections. Plos one, 16(1):e0244309.
- Schumann et al. (2024) Raphael Schumann et al. 2024. Velma: Verbalization embodiment of llm agents for vision and language navigation in street view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38.
- Shanahan et al. (2023) Murray Shanahan, Kyle McDonell, and Laria Reynolds. 2023. Role play with large language models. Nature, 623(7987):493–498.
- Sharan et al. (2023) S. P. Sharan, Francesco Pittaluga, and Manmohan Chandraker. 2023. Llm-assist: Enhancing closed-loop planning with language-based reasoning. arXiv preprint arXiv:2401.00125.
- Shen et al. (2020) Sheng Shen et al. 2020. Deep convolutional neural networks with ensemble learning and transfer learning for capacity estimation of lithium-ion batteries. Applied Energy, 260:114296.
- Singh et al. (2023) Ishika Singh et al. 2023. Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE.
- Song et al. (2023) Chan Hee Song et al. 2023. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
- Topsakal and Akinci (2023) Oguzhan Topsakal and Tahir Cetin Akinci. 2023. Creating large language model applications utilizing langchain: A primer on developing llm apps fast. In International Conference on Applied Engineering and Natural Sciences, volume 1.
- Valmeekam et al. (2022) Karthik Valmeekam et al. 2022. Large language models still can’t plan (a benchmark for llms on planning and reasoning about change). arXiv preprint arXiv:2206.10498.
- Wang et al. (2024a) Haoyu Wang et al. 2024a. Carbon-based molecular properties efficiently predicted by deep learning-based quantum chemical simulation with large language models. Computers in Biology and Medicine, page 108531.
- Wang et al. (2024b) Haoyu Wang et al. 2024b. Subequivariant reinforcement learning framework for coordinated motion control. arXiv preprint arXiv:2403.15100.
- Wang et al. (2023) Lei Wang et al. 2023. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432.
- Wang et al. (2020) Tonghan Wang et al. 2020. Roma: Multi-agent reinforcement learning with emergent roles. arXiv preprint arXiv:2003.08039.
- Wen et al. (2023) Hao Wen et al. 2023. Empowering llm to use smartphone for intelligent task automation. arXiv preprint arXiv:2308.15272.
- Xi et al. (2023) Zhiheng Xi et al. 2023. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864.
- Yao et al. (2022) Shunyu Yao et al. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
- Yin et al. (2023) Da Yin et al. 2023. Lumos: Learning agents with unified data, modular design, and open-source llms. arXiv preprint arXiv:2311.05657.
- Yu et al. (2023) Shengcheng Yu et al. 2023. Llm for test script generation and migration: Challenges, capabilities, and opportunities. In 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS). IEEE.
- Zeng et al. (2023) Fanlong Zeng et al. 2023. Large language models for robotics: A survey. arXiv preprint arXiv:2311.07226.
- Zhang and Gao (2023) Xuan Zhang and Wei Gao. 2023. Towards llm-based fact verification on news claims with a hierarchical step-by-step prompting method. arXiv preprint arXiv:2310.00305.
- Zhao et al. (2024) Andrew Zhao et al. 2024. Expel: Llm agents are experiential learners. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38.
- Zhu et al. (2023) Zhuangdi Zhu et al. 2023. Transfer learning in deep reinforcement learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Zhuang et al. (2020) Fuzhen Zhuang et al. 2020. A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43–76.
- Zimmer et al. (2021a) Matthieu Zimmer et al. 2021a. Learning fair policies in decentralized cooperative multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Zimmer et al. (2021b) Matthieu Zimmer et al. 2021b. Learning fair policies in decentralized cooperative multi-agent reinforcement learning. In International Conference on Machine Learning. PMLR.
- Zou et al. (2023) Hang Zou et al. 2023. Wireless multi-agent generative ai: From connected intelligence to collective intelligence. arXiv preprint arXiv:2307.02757.
Appendix A Related Work
A.1 Single Agent Frameworks
Early agent frameworks such as Progprompt Singh et al. (2023) directly prompt large language models (LLMs) to plan, execute actions, and process feedback in a chained manner within one model Song et al. (2023). Despite its conceptual simplicity Valmeekam et al. (2022), an integrated framework imposes a substantial burden on a single LLM, leading to challenges in managing complex tasks Raman et al. (2022); Wang et al. (2024a).
To reduce the reasoning burden, recent works explore modular designs that separate high-level planning and low-level execution into different modules. For example, LUMOS Yin et al. (2023) consists of a planning module, a grounding module, and an execution module; the planning and grounding modules break complex tasks down into interpretable sub-goals and executable actions. FiReAct Chen et al. (2023) introduces a similar hierarchical structure, with a focus on providing step-by-step explanations Zhang and Gao (2023). Although partitioning into modules that specialize in different skills is reasonable, existing modular frameworks still rely on a single agent for final action execution Miao et al. (2023); Qiu et al. (2024). Our work pushes this idea further by replacing the single execution agent with a cooperative team of multiple agents.
A.2 Multi-Agent Reinforcement Learning
Collaborative multi-agent reinforcement learning has been studied to solve complex control or game-playing tasks. Representative algorithms include COMA Foerster et al. (2018), QMIX Rashid et al. (2020) and ROMA Wang et al. (2020). These methods enable decentralized execution of different agents but allow centralized training by sharing experiences or parameters Lyu et al. (2021). Drawing on this concept, our ReMALIS framework places greater emphasis on integrating modular LLMs to address complex language tasks. In ReMALIS, each execution agent specializes in specific semantic domains such as query, computation, or retrieval, and is coordinated through a communication module Mao et al. (2022).
The concept of multi-agent RL has recently influenced the design of conversational agents Zimmer et al. (2021a); Schumann et al. (2024). EnsembleBot Schuchard and Crooks (2021) utilizes multiple bots trained on distinct topics, coordinated by a routing model. However, this approach primarily employs a divide-and-conquer strategy with independent skills Martini et al. (2021), and communication within EnsembleBot predominantly involves one-way dispatching rather than bidirectional coordination. In contrast, our work focuses on fostering a more tightly integrated collaborative system for addressing complex problems Schroeder de Witt et al. (2019); Zimmer et al. (2021b).
A.3 Integrated & Collaborative Learning
Integrated learning techniques originate from transfer learning Zhuang et al. (2020); Zhu et al. (2023) and aim to improve a target model by incorporating additional signals from other modalities Lotfollahi et al. (2022); Shanahan et al. (2023). For multi-agent systems, Li et al. (2022); Zhao et al. (2024) find that jointly training multiple agents boosts performance over separately trained independent agents Lee and Perret (2022). Recently, integrated learning has been applied in single-agent frameworks such as those of Shen et al. (2020) and Loey et al. (2021), where auxiliary losses on interpretable outputs facilitate training of the main model through multi-tasking Khamparia et al. (2021); Saber et al. (2021).
Our work adopts integrated learning to train specialized execution agents that are semantically consistent. At the team level, a communication module learns to attentively aggregate and propagate messages across agents, which indirectly coordinates their strategies and behaviors Fan et al. (2020). This integrated and collaborative learning synergizes individual skills and leads to emergent collective intelligence, enhancing the overall reasoning and planning capabilities on complex tasks He et al. (2021); Li et al. (2020).
Appendix B Methodology and Contributions
Based on the motivations and inspirations above, we propose the Recursive Multi-Agent Learning with Intention Sharing (ReMALIS) framework, an innovative multi-agent framework empowered by integrated learning for communication and collaboration. The main contributions are:
1. We design a cooperative execution module with multiple agents trained by integrated learning. Different execution agents specialize in different semantic domains while understanding peer abilities, which reduces redundant capacity and enables an efficient division of labor.
2. We propose an attentive communication module that propagates informative cues across specialized agents. The module coordinates agent execution strategies without explicit supervision, effectively playing the role of a team leader.
3. The collaborative design allows ReMALIS to handle more complex tasks than single-agent counterparts. Each agent focuses on its own domain knowledge while collaborating closely through communicative coordination, leading to strong emergent team intelligence.
4. We enable dynamic feedback loops from the communication module to the grounding module and support re-planning in the planning module, increasing adaptability when execution difficulties arise.
We expect the idea of integrating specialized collaborative agents with dynamic coordination mechanisms to inspire more future research toward developing intelligent collaborative systems beyond conversational agents.
Appendix C Key variables and symbols
Table 4: Key variables and symbols in the proposed recursive multi-agent learning framework.
| Symbol | Description |
| --- | --- |
| $p_{\theta}$ | Planning module parameterized by $\theta$ |
| $s_{t}$ | Current sub-goal at time $t$ |
| $I_{t}$ | Current intention at time $t$ |
| $e_{t}$ | Grounded embedding at time $t$ |
| $f_{t}$ | Agent feedback at time $t$ |
| $g_{\phi}$ | Grounding module parameterized by $\phi$ |
| $\pi_{\xi_{i}}$ | Execution policy of agent $i$ parameterized by $\xi_{i}$ |
| $f_{\Lambda}$ | Intention propagation channel parameterized by $\Lambda$ |
| $m_{ij}$ | Message sent from agent $j$ to agent $i$ |
| $b_{i}(I_{j}|m_{ij})$ | Agent $i$ ’s belief over teammate $j$ ’s intention $I_{j}$ given message $m_{ij}$ |
| $R_{c}$ | Coordination reward |
| $\pi_{\xi}(a_{i}|s_{i},I_{i})$ | Execution agent policy conditioned on state $s_{i}$ and intention $I_{i}$ |
| $a_{i}$ | Action of agent $i$ |
| $s_{i}$ | State of agent $i$ |
| $I_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ | Intention of agent $i$ |
| $\gamma_{i}$ | Current goal of agent $i$ |
| $\Sigma_{i}=\{\sigma_{i1},\sigma_{i2},...\}$ | Set of sub-goals for agent $i$ |
| $\pi_{i}(\sigma)$ | Probability distribution over possible next sub-goals for agent $i$ |
| $\delta_{i}(\sigma)$ | Desired teammate assignment for sub-goal $\sigma$ of agent $i$ |
Table 4 summarizes the key variables and symbols used in the proposed recursive multi-agent learning framework called ReMALIS. It includes symbols representing various components like the planning module, grounding module, execution policies, intentions, goals, sub-goals, and the intention propagation channel.
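To make the notation concrete, the intention tuple $I_{i}=(\gamma_{i},\Sigma_{i},\pi_{i},\delta_{i})$ and the message $m_{ij}$ can be viewed as simple structured records. The following Python sketch is purely illustrative: the container types, field names, and example values are our assumptions, since Table 4 only fixes the symbols and their meanings.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Intention:
    """Illustrative container for I_i = (gamma_i, Sigma_i, pi_i, delta_i)."""
    goal: str                              # gamma_i: current goal of agent i
    sub_goals: List[str]                   # Sigma_i: set of sub-goals
    next_sub_goal_probs: Dict[str, float]  # pi_i(sigma): distribution over next sub-goals
    teammate_assignment: Dict[str, int]    # delta_i(sigma): desired teammate per sub-goal


@dataclass
class Message:
    """Message m_ij carrying a view of teammate j's intention to agent i."""
    sender: int     # teammate j
    receiver: int   # agent i
    content: Intention


# Hypothetical example: an agent managing one intersection in the traffic task.
intent = Intention(
    goal="relieve congestion at intersection 12",
    sub_goals=["extend green phase", "reroute east-bound traffic"],
    next_sub_goal_probs={"extend green phase": 0.7, "reroute east-bound traffic": 0.3},
    teammate_assignment={"reroute east-bound traffic": 3},
)
```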
Table 5: Comparison of Traffic Network Complexity Levels
| Difficulty Level | Grid Size | Intersections | Arrival Rates | Phases per Intersection |
| --- | --- | --- | --- | --- |
| Easy | 3x3 | 9 | Low and stable (0.5 vehicles/s) | Less than 10 |
| Medium | 5x5 | 25 | Fluctuating (0.5-2 vehicles/s) | 10-15 |
| Hard | 8x8 | 64 | Highly dynamic (0.1 to 3 vehicles/s) | More than 15 |
| Hell | Irregular | 100+ | Extremely dynamic with spikes | More than 25 |
Table 6: Training hyperparameters and configurations
| Hyperparameter/Configuration | ReMALIS | LUMOS | AgentLM | GPT-3.5 |
| --- | --- | --- | --- | --- |
| Language Model Size | 7B | 13B | 6B | 175B |
| Optimizer | AdamW | Adam | AdamW | Adam |
| Learning Rate | 1e-4 | 2e-5 | 1e-4 | 2e-5 |
| Batch Size | 32 | 64 | 32 | 64 |
| Dropout | 0 | 0.1 | 0 | 0.1 |
| Number of Layers | 12 | 8 | 6 | 48 |
| Model Dimension | 768 | 512 | 768 | 1024 |
| Number of Heads | 12 | 8 | 12 | 16 |
| Training Epochs | 15 | 20 | 10 | 20 |
| Warmup Epochs | 1 | 2 | 1 | 2 |
| Weight Decay | 0.01 | 0.001 | 0.01 | 0.001 |
| Network Architecture | GNN | Transformer | Transformer | Transformer |
| Planning Module | GNN, 4 layers, 512 hidden size | 2-layer GNN, 1024 hidden size | - | - |
| Grounding Module | 6-layer Transformer, $d_{\text{model}}=768$ | 4-layer Transformer, $d_{\text{model}}=512$ | - | - |
| Execution Agents | 7 specialized, integrated training | Single agent | 8 agents | 4 agents |
| Intention Propagation | 4-layer GRU, 256 hidden size | - | - | - |
| Coordination Feedback | GAT, 2 heads, $\alpha=0.2$ | - | - | - |
| Trainable Parameters | 5.37B | 6.65B | 4.61B | 17.75B |
Appendix D Tasks Setup
D.1 Traffic Control
We define four difficulty levels for the traffic control tasks, Easy, Medium, Hard, and Hell, summarized in Table 5.
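As a reading aid, Table 5 can also be expressed as a configuration mapping. The sketch below is a minimal illustration: the key names and the simulator interface they imply are assumptions, the numeric values come from the table, and open-ended entries are left as `None` with comments.

```python
# Difficulty configuration mirroring Table 5 (values from the table; key names assumed).
TRAFFIC_DIFFICULTY = {
    "easy":   {"grid": (3, 3), "intersections": 9,   "arrival_rate": (0.5, 0.5), "max_phases": 10},    # <10 phases
    "medium": {"grid": (5, 5), "intersections": 25,  "arrival_rate": (0.5, 2.0), "max_phases": 15},
    "hard":   {"grid": (8, 8), "intersections": 64,  "arrival_rate": (0.1, 3.0), "max_phases": None},  # >15 phases
    "hell":   {"grid": None,   "intersections": 100, "arrival_rate": None,       "max_phases": None},  # irregular grid, 100+ intersections, spiky arrivals, >25 phases
}
```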
D.2 Web Tasks
Similarly, we categorize the web tasks in our dataset into four levels of difficulty: Easy, Medium, Hard, and All.
Easy: The easy web tasks involve basic interactions like clicking on a single link or typing a short phrase. They require navigating simple interfaces with clear options to reach the goal.
Medium: The medium-difficulty tasks demand more complex sequences of actions across multiple pages, such as selecting filters or submitting forms. They test the agent’s ability to understand the site structure and flow.
Hard: The hard web tasks feature more open-ended exploration through dense sites with ambiguous structure. Significant reasoning is needed to chain together obscure links and controls to reach the goal.
All: The All level combines tasks across the full difficulty spectrum. Simple and complex interactions are mixed to assess generalized web-agent skills, and performance here correlates with readiness for real-world web use cases.
Appendix E Experimental Setups
In this study, we compare the performance of several state-of-the-art language models: ReMALIS, LUMOS, AgentLM, and GPT-3.5. These models vary in size, architecture, and training configuration, as summarized in Table 6.
ReMALIS is a 7 billion parameter model trained using the AdamW optimizer with a learning rate of 1e-4, a batch size of 32, and no dropout. It has 12 layers, a model dimension of 768, and 12 attention heads. The model was trained for 15 epochs with a warmup period of 1 epoch and a weight decay of 0.01. ReMALIS employs a Graph Neural Network (GNN) architecture, which is particularly suited for modeling complex relationships and structures.
LUMOS, a larger model with 13 billion parameters, was trained using the Adam optimizer with a learning rate of 2e-5, a batch size of 64, and a dropout rate of 0.1. It has 8 layers, a model dimension of 512, and 8 attention heads. The model was trained for 20 epochs with a warmup period of 2 epochs and a weight decay of 0.001. LUMOS follows a Transformer architecture, which has proven effective in capturing long-range dependencies in sequential data.
AgentLM, a 6 billion parameter model, was trained using the AdamW optimizer with a learning rate of 1e-4, a batch size of 32, and no dropout. It has 6 layers, a model dimension of 768, and 12 attention heads. The model was trained for 10 epochs with a warmup period of 1 epoch and a weight decay of 0.01. AgentLM also uses a Transformer architecture.
GPT-3.5, the largest model in this study with 175 billion parameters, was trained using the Adam optimizer with a learning rate of 2e-5, a batch size of 64, and a dropout rate of 0.1. It has 48 layers, a model dimension of 1024, and 16 attention heads. The model was trained for 20 epochs with a warmup period of 2 epochs and a weight decay of 0.001. GPT-3.5 follows the Transformer architecture, which has been widely adopted for large language models.
In addition to the base language models, the table provides details on the specialized modules and configurations employed by ReMALIS and LUMOS. ReMALIS incorporates a planning module with a 4-layer GNN and a 512 hidden size, a grounding module with a 6-layer Transformer and a model dimension of 768, 7 specialized and integrated execution agents, a 4-layer Gated Recurrent Unit (GRU) with a 256 hidden size for intention propagation, and a Graph Attention Network (GAT) with 2 heads and an alpha value of 0.2 for coordination feedback.
LUMOS, on the other hand, employs a 2-layer GNN with a 1024 hidden size for planning, a 4-layer Transformer with a model dimension of 512 for grounding, and a single integrated execution agent.
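For reference, the ReMALIS column of Table 6 can be collected into a single configuration object. The sketch below is a plain Python dictionary whose key names are our own; only the values are taken from the table.

```python
# ReMALIS training configuration (values from Table 6; key names assumed).
REMALIS_CONFIG = {
    "language_model_size": "7B",
    "optimizer": "AdamW",
    "learning_rate": 1e-4,
    "batch_size": 32,
    "dropout": 0.0,
    "num_layers": 12,
    "model_dim": 768,
    "num_heads": 12,
    "training_epochs": 15,
    "warmup_epochs": 1,
    "weight_decay": 0.01,
    "planning_module": {"type": "GNN", "layers": 4, "hidden_size": 512},
    "grounding_module": {"type": "Transformer", "layers": 6, "d_model": 768},
    "execution_agents": {"count": 7, "training": "integrated"},
    "intention_propagation": {"type": "GRU", "layers": 4, "hidden_size": 256},
    "coordination_feedback": {"type": "GAT", "heads": 2, "alpha": 0.2},
}
```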
Appendix F Pseudo-code
Algorithm 2 presents the hierarchical planning and grounding processes in the proposed recursive multi-agent learning framework. The planning module $p_{\theta}$ takes the current sub-goal $s_{t}$, intention $I_{t}$, grounded embedding $e_{t}$, and feedback $f_{t}$ as inputs, and predicts the next sub-goal $s_{t+1}$. It first encodes the inputs using an encoder, then passes the encoded representation through a graph neural network $T_{\theta}$ parameterized by $\theta$. The output of $T_{\theta}$ is passed through a softmax layer to obtain the probability distribution over the next sub-goal.
The grounding module $g_{\phi}$ takes the current state $s_{t}$ , intention $I_{t}$ , and feedback trajectory $f_{1:t}$ as inputs, and produces the grounded embedding $e_{t}$ . It encodes the inputs using an encoder, and then applies cross-attention over the vocabulary $V$ , followed by a convolutional feature extractor. The output is combined with agent feedback $P_{t}$ to enhance the grounding accuracy. The grounding module is parameterized by $\phi$ .
Algorithm 3 describes the intention propagation mechanism in the proposed recursive multi-agent learning framework. The goal is for each agent $i$ to infer a belief $b_{i}(I_{j}|m_{ij})$ over the intention $I_{j}$ of a teammate $j$, given a message $m_{ij}$ received from $j$.
Algorithm 2 Hierarchical Planning and Grounding
1: Input: Current sub-goal $s_{t}$ , intention $I_{t}$ , grounded embedding $e_{t}$ , feedback $f_{t}$
2: Output: Next sub-goal $s_{t+1}$
3: $h_{t}=\text{Encoder}(s_{t},I_{t},e_{t},f_{t})$ {Encode inputs}
4: $s_{t+1}=\text{Softmax}(T_{\theta}(h_{t}))$ {Predict next sub-goal}
5: $T_{\theta}$ is a graph neural network parameterized by $\theta$ {Planning module $p_{\theta}$ }
6: Input: Current state $s_{t}$ , intention $I_{t}$ , feedback $f_{1:t}$
7: Output: Grounded embedding $e_{t}$
8: $h_{t}=\text{Encoder}(s_{t},I_{t},f_{1:t})$ {Encode inputs}
9: $e_{t}=\text{Conv}(\text{Attn}(h_{t},V))+P_{t}$ {Grounded embedding}
10: $\text{Attn}(·,·)$ is a cross-attention layer over vocabulary $V$
11: $\text{Conv}(·)$ is a convolutional feature extractor
12: $P_{t}$ includes agent feedback to enhance grounding accuracy
13: $g_{\phi}$ is the grounding module parameterized by $\phi$
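A minimal PyTorch-style sketch of the two modules in Algorithm 2 is given below. The concrete encoders, the feed-forward stand-in for the graph network $T_{\theta}$, the vocabulary size, and all tensor shapes are illustrative assumptions; only the overall data flow (encode, transform, softmax for planning; encode, cross-attend over the vocabulary $V$, convolve, and add feedback $P_{t}$ for grounding) follows the algorithm.

```python
import torch
import torch.nn as nn


class PlanningModule(nn.Module):
    """Sketch of p_theta in Algorithm 2 (assumed shapes; T_theta replaced by an MLP)."""

    def __init__(self, d_model: int = 768, num_sub_goals: int = 64):
        super().__init__()
        self.encoder = nn.Linear(4 * d_model, d_model)  # Encoder(s_t, I_t, e_t, f_t)
        self.t_theta = nn.Sequential(                   # stand-in for the GNN T_theta
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, num_sub_goals)
        )

    def forward(self, s_t, i_t, e_t, f_t):
        h_t = self.encoder(torch.cat([s_t, i_t, e_t, f_t], dim=-1))
        return torch.softmax(self.t_theta(h_t), dim=-1)  # distribution over next sub-goal


class GroundingModule(nn.Module):
    """Sketch of g_phi in Algorithm 2 (assumed vocabulary size and shapes)."""

    def __init__(self, d_model: int = 768, vocab_size: int = 1024):
        super().__init__()
        self.encoder = nn.Linear(3 * d_model, d_model)   # Encoder(s_t, I_t, f_{1:t})
        self.vocab = nn.Parameter(torch.randn(vocab_size, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)

    def forward(self, s_t, i_t, f_hist, p_t):
        h_t = self.encoder(torch.cat([s_t, i_t, f_hist], dim=-1)).unsqueeze(1)  # (B, 1, d)
        v = self.vocab.unsqueeze(0).expand(h_t.size(0), -1, -1)                 # (B, |V|, d)
        attended, _ = self.attn(h_t, v, v)                                      # Attn(h_t, V)
        e_t = self.conv(attended.transpose(1, 2)).transpose(1, 2).squeeze(1)    # Conv(.)
        return e_t + p_t                                                        # add feedback P_t
```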
Algorithm 3 initializes an intention propagation channel $f_{\Lambda}$, parameterized by $\Lambda$, which is implemented as a recurrent neural network.
The intention inference process works as follows:
1. The received message $m_{ij}$ is encoded using an encoder to obtain a representation $h_{ij}$.
2. The encoded message $h_{ij}$ is passed through the propagation channel $f_{\Lambda}$ to infer the belief $b_{i}(I_{j}|m_{ij})$ over teammate $j$'s intention $I_{j}$.
The objective is to train the parameters $\Lambda$ of the propagation channel $f_{\Lambda}$ to maximize the coordination reward $R_{c}$ over sampled intentions $I$ and messages $m$ from the distribution defined by $f_{\Lambda}$ .
Algorithm 3 Intention Propagation Mechanism
Input: Current intention $I_{i}$ of agent $i$, message $m_{ij}$ from teammate $j$
Output: Belief $b_{i}(I_{j}|m_{ij})$ over teammate $j$'s intention $I_{j}$
1: Initialization:
2: Intention propagation channel $f_{\Lambda}$ parameterized by $\Lambda$
3: $f_{\Lambda}$ is a recurrent neural network
4: Intention Inference:
5: Encode message: $h_{ij}←\text{Encoder}(m_{ij})$
6: Infer intention belief: $b_{i}(I_{j}|m_{ij})← f_{\Lambda}(h_{ij})$
7: Objective:
8: Sample intentions $I$ and messages $m$ from $f_{\Lambda}$
9: Maximize coordination reward $R_{c}$ over intentions and messages:
10: $\Lambda^{*}←\arg\max_{\Lambda}\mathbb{E}_{I,m\sim f_{\Lambda}}[R_{c}(I,m)]$
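One possible realization of the propagation channel $f_{\Lambda}$ is sketched below, using a GRU sized as in Table 6 (4 layers, 256 hidden units) and a REINFORCE-style surrogate for the coordination-reward objective. The message encoder, the discrete intention space, and the surrogate loss are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class IntentionPropagation(nn.Module):
    """Sketch of f_Lambda: infer a belief over a teammate's intention from a message."""

    def __init__(self, msg_dim: int = 256, num_intentions: int = 16):
        super().__init__()
        self.encoder = nn.Linear(msg_dim, 256)                 # h_ij = Encoder(m_ij)
        self.gru = nn.GRU(256, 256, num_layers=4, batch_first=True)
        self.head = nn.Linear(256, num_intentions)

    def forward(self, m_ij):                                   # m_ij: (batch, steps, msg_dim)
        h_ij = torch.relu(self.encoder(m_ij))
        out, _ = self.gru(h_ij)
        logits = self.head(out[:, -1])                         # last step summarizes the message
        return torch.distributions.Categorical(logits=logits)  # belief b_i(I_j | m_ij)


def propagation_loss(belief, sampled_intention, coord_reward):
    """REINFORCE-style surrogate for max_Lambda E_{I,m ~ f_Lambda}[R_c(I, m)]."""
    return -(belief.log_prob(sampled_intention) * coord_reward).mean()
```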
Algorithm 4 Bidirectional Coordination
Input: Experience tuples $(s_{t},a_{t},r_{t},s_{t+1})$ for all agents
Output: Execution policies $\pi_{\xi_{i}}(a_{i}|s_{i},I_{i})$ and coordination feedback $c_{t}$
1: Execution Policy:
2: for each agent $i$ do
3: Get agent state $s_{i,t}$ and intention $I_{i,t}$
4: $a_{i,t}\sim\pi_{\xi_{i}}(a_{i}|s_{i,t},I_{i,t})$ {Execution policy}
5: end for
6: Coordination Feedback:
7: Collect execution encodings $h^{exec}_{t}=[\phi_{1}(o_{1}),...,\phi_{N}(o_{N})]$ {Encode observations}
8: $c_{t}←\Phi(h^{exec}_{t})$ {Summarize coordination patterns}
9: Objective:
10: Maximize team reward $R$ and auxiliary loss $L_{aux}$ :
11: $\xi^{*}←\arg\max_{\xi}\mathbb{E}_{(s,a)\sim\pi_{\xi}}[R+\lambda L_{aux}]$
Algorithm 4 describes the bidirectional coordination mechanism in the proposed recursive multi-agent learning framework. It involves executing actions based on the agents’ policies and generating coordination feedback from the execution experiences.
Our algorithm takes experience tuples $(s_{t},a_{t},r_{t},s_{t+1})$ for all agents as input, where $s_{t}$ is the state, $a_{t}$ is the action taken, $r_{t}$ is the reward received, and $s_{t+1}$ is the next state.
The execution policy part works as follows:
1. For each agent $i$, get the agent’s state $s_{i,t}$ and intention $I_{i,t}$.
2. Sample an action $a_{i,t}$ from the execution policy $\pi_{\xi_{i}}(a_{i}|s_{i,t},I_{i,t})$, parameterized by $\xi_{i}$.
The coordination feedback part works as follows:
1. Collect execution encodings $h^{exec}_{t}=[\phi_{1}(o_{1}),...,\phi_{N}(o_{N})]$ by encoding the observations $o_{i}$ of each agent $i$ using an encoder $\phi_{i}$.
2. Summarize the coordination patterns $c_{t}$ from the execution encodings $h^{exec}_{t}$ using a function $\Phi$.
The objective is to maximize the team reward $R$ and an auxiliary loss $L_{aux}$ by optimizing the execution policy parameters $\xi$ . The auxiliary loss $L_{aux}$ is used to incorporate additional regularization or constraints.
The bidirectional coordination mechanism allows execution agents to act based on their policies and intentions, while also generating coordination feedback $c_{t}$ that summarizes the emerging coordination patterns. This feedback can be used to guide the planning and grounding modules in the recursive multi-agent learning framework.
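The sketch below illustrates this loop: each agent acts from an intention-conditioned policy, the per-agent observation encodings are summarized into coordination feedback $c_{t}$, and a policy-gradient surrogate optimizes the team objective. The specific encoders, the summarizer standing in for $\Phi$, and the surrogate update are assumptions; only the overall flow follows Algorithm 4.

```python
import torch
import torch.nn as nn


class ExecutionAgent(nn.Module):
    """Sketch of an intention-conditioned execution policy pi_{xi_i}(a_i | s_i, I_i)."""

    def __init__(self, obs_dim: int, intent_dim: int, num_actions: int):
        super().__init__()
        self.phi = nn.Linear(obs_dim, 128)                      # observation encoder phi_i
        self.policy = nn.Linear(128 + intent_dim, num_actions)

    def act(self, s_i, i_i):
        h_i = torch.relu(self.phi(s_i))                         # encoding reused in h^exec_t
        dist = torch.distributions.Categorical(
            logits=self.policy(torch.cat([h_i, i_i], dim=-1)))
        a_i = dist.sample()
        return a_i, dist.log_prob(a_i), h_i


class CoordinationFeedback(nn.Module):
    """Stand-in for Phi: summarizes per-agent encodings into feedback c_t."""

    def __init__(self, num_agents: int, hidden: int = 128):
        super().__init__()
        self.summarize = nn.Linear(num_agents * hidden, hidden)

    def forward(self, encodings):                               # list of per-agent encodings h_i
        return torch.tanh(self.summarize(torch.cat(encodings, dim=-1)))


def team_objective(log_probs, team_reward, aux_term, lam=0.1):
    """Policy-gradient surrogate for Algorithm 4's max_xi E[R + lambda * L_aux]."""
    return -(torch.stack(log_probs).sum(dim=0) * team_reward).mean() - lam * aux_term
```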
Appendix G Discussion
The results demonstrate the efficacy of the proposed ReMALIS framework in enabling coordinated multi-agent collaboration for complex tasks. By propagating intentions between agents, establishing bidirectional feedback channels, and integrating recursive reasoning architectures, ReMALIS outperformed single-agent baselines and concurrent methods across difficulty levels on both the traffic flow prediction and web activities datasets.
The performance gains highlight the importance of fostering a shared understanding of goals and sub-tasks among agents through intention propagation. Communicating local beliefs allows agents to align their actions towards common objectives, leading to emergent coordinated behaviors that reduce misaligned sub-tasks and miscoordination errors. Furthermore, the bidirectional feedback channels play a crucial role in shaping the reasoning strategies of the planning and grounding modules based on the coordination patterns observed during execution. This adaptability enables the agents to adjust their comprehension and planning policies dynamically, resulting in more flexible and responsive behaviors.
The integration of recursive reasoning architectures also contributes to the superior performance of ReMALIS. By modeling the intentions and strategies of other agents, the execution agents can engage in more contextual and holistic reasoning, enhancing their ability to handle complex temporal dependencies and long-term planning horizons. This recursive reasoning capability further amplifies the benefits of intention propagation and bidirectional feedback, as agents can better interpret and leverage the shared information and coordination signals.
It is important to note that while ReMALIS demonstrates substantial improvements over single-agent frameworks, there are still limitations and potential areas for further research. For instance, the current implementation relies on a centralized training paradigm, which may hinder scalability in fully decentralized environments. Additionally, the framework does not explicitly handle dynamic agent arrival or departure during execution, which could impact coordination in real-world applications with fluid team compositions.
Future work could explore decentralized training approaches that maintain the benefits of multi-agent collaboration while addressing scalability concerns. Moreover, developing mechanisms to adaptively handle changes in the agent team during execution could enhance the robustness and flexibility of the framework in dynamic environments.
Appendix H Supplementary application description of the overall framework
To further illustrate the practical applicability and versatility of our proposed ReMALIS framework, we present a supplementary application scenario. Figure 2 depicts a high-level overview of how ReMALIS can be employed in a real-world setting to tackle complex, multi-step tasks that require orchestrating multiple agents with diverse capabilities. This exemplary use case demonstrates the framework’s ability to decompose intricate problems into manageable sub-tasks, dynamically allocate appropriate agents, and seamlessly coordinate their actions to achieve the overarching goal efficiently and effectively.
Planning Module (Figure 4):
1. Analyze the current traffic conditions, including vehicle counts, road incidents, and construction zones.
2. Identify intersections experiencing congestion and potential bottlenecks.
3. Formulate high-level goals to alleviate congestion and optimize traffic flow.
4. Break down the goals into a sequence of sub-goals and sub-tasks.
5. Determine the dependencies and coordination needs between sub-tasks.
6. Plan the assignment of sub-tasks to specialized execution agents based on their expertise.
Figure 4: Overview of the proposed ReMALIS Planning Module for predicting sub-goals based on current goals, intentions, grounded embeddings, and agent feedback.
Figure 5: Framework of the proposed ReMALIS Grounding Module that contextualizes symbol embeddings using the current state, intentions, and feedback signals.
Grounding Module (Figure 5):
1. Contextualize the abstract traffic concepts and symbols into grounded representations.
2. Map entities like intersections, vehicles, and signal phases to their physical counterparts.
3. Resolve ambiguities and uncertainties in grounding based on the current traffic context.
4. Adjust grounding strategies based on feedback from execution agents and emerging coordination patterns.
5. Provide grounded embeddings to inform the execution agents’ decision-making.
Figure 6: Overview of our ReMALIS Cooperative Execution Module consisting of specialized agents that collaboratively execute actions and propagate intentions.
Execution Module (Figures 6 and 7):
1. Specialized agents monitor their respective domains (vehicle counts, road conditions, signal timings, etc.).
2. Agents communicate their local intentions and goals to relevant teammates.
3. Agents align their actions based on shared intentions and the coordinated plans.
4. Agents execute their assigned sub-tasks (adjusting signal phases, routing emergency vehicles, etc.).
5. Agents observe the impact of their actions and provide feedback on emerging coordination patterns.
6. Agents adapt their strategies dynamically based on the feedback and changing traffic conditions.
7. Agents continuously monitor and respond to fluctuations in vehicle arrival rates and traffic patterns.
8. Agents collaborate and coordinate their efforts to collectively alleviate congestion and optimize traffic flow.
Figure 7: Overview of the collaborative evaluation setup in the proposed ReMALIS framework.