2506.11304v1
## A Hybrid Adaptive Nash Equilibrium Solver for Distributed MultiAgent Systems with Game-Theoretic Jump Triggering
Qiuyu Miao 1* and Zhigang Wu 2
School of Aeronautics and Astronautics, Sun Yat-sen University, Shenzhen, China
Abstract: This paper presents a hybrid adaptive Nash equilibrium solver for distributed multi-agent systems incorporating game-theoretic jump triggering mechanisms. The approach addresses fundamental scalability and computational challenges in multi-agent hybrid systems by integrating distributed game-theoretic optimization with systematic hybrid system design. A novel game-theoretic jump triggering mechanism coordinates discrete mode transitions across multiple agents while maintaining distributed autonomy. The Hybrid Adaptive Nash Equilibrium Solver (HANES) algorithm integrates these methodologies. Sufficient conditions establish exponential convergence to consensus under distributed information constraints. The framework provides rigorous stability guarantees through coupled Hamilton-Jacobi-Bellman equations while enabling rapid emergency response capabilities through coordinated jump dynamics. Simulation studies in pursuit-evasion and leader-follower consensus scenarios demonstrate significant improvements in convergence time, computational efficiency, and scalability compared to existing centralized and distributed approaches.
Keywords: Hybrid dynamical systems, Multi-agent coordination, Nash equilibrium, Distributed control
## I. Introduction
Modern distributed autonomous systems face unprecedented challenges in coordinating multiple agents that must simultaneously handle continuous physical dynamics and discrete decision-making processes. This dual nature appears across critical engineering domains where system failures can have severe consequences and traditional control approaches prove inadequate.
Unmanned aerial vehicle swarms must navigate complex environments through continuous flight control while executing discrete task assignments and obstacle avoidance decisions. Power grid operations demand real-time continuous power balancing coupled with discrete switching operations for load management and fault protection. These applications share a fundamental characteristic: the inability of purely continuous or discrete control methods to capture the essential system behaviors, necessitating hybrid system approaches that can rigorously handle both types of dynamics within a unified mathematical framework.
Hybrid dynamical systems provide the mathematical foundation for such complex behaviors. The design of flow sets C and jump sets D forms the core of hybrid system architecture [2,12]. Recent advances in hybrid motion planning [13] and distributed state estimation under switching networks [14] have demonstrated the practical importance of systematic flow and jump set construction. However, in multi-agent contexts, the flow set C must accommodate the collective state space where all agents maintain coordination through local information exchange with neighbors, while the jump set D coordinates discrete interventions across the network when continuous operation becomes insufficient. This design challenge differs fundamentally from single-agent cases due to the requirement for distributed coordination without global state knowledge.
While hybrid system theory has matured significantly for single-agent applications [15,16], the extension to multi-agent scenarios reveals profound theoretical and computational challenges in coordinating discrete mode switches across multiple interacting agents. The complexity of multi-agent hybrid systems emerges from the intricate coupling between individual agent dynamics and collective coordination requirements, where agents typically know only their neighbors' states yet must achieve robust synchronization and coordination through hybrid mechanisms [17,18]. However, the interaction between autonomous agent dynamics and the stochastic and intermittent nature of network traffic, combined with delays and asynchrony in information flow, further complicates the goal of ensuring system autonomy.
Existing hybrid system approaches primarily focus on single-agent or simple two-agent interactions, as exemplified by foundational works such as Leudo et al. [1] on two-player zero-sum hybrid games. The complexity of designing flow sets for multi-agent systems scales exponentially with agent numbers due to coupled constraints among all possible agent pairs [19,20]. Recent surveys on multi-agent consensus control acknowledge this fundamental scalability barrier, noting that current distributed control approaches resort to overly conservative designs that sacrifice performance for computational tractability, while alternative ad-hoc methods lack rigorous theoretical foundations for stability and convergence guarantees [5,21,22]. Furthermore, the computational complexity of Nash equilibrium computation, which is PPAD-complete even for continuous games [23,24], compounds these challenges when integrating game-theoretic approaches with hybrid dynamics.
Jump triggering mechanisms in multi-agent hybrid systems present additional challenges that existing literature has not systematically addressed. Recent advances have demonstrated the potential of hybrid systems frameworks for distributed multi-agent optimization, where agents perform continuous computations (such as gradient descent) while exchanging information at discrete communication instants through "update-and-hold" strategies [7,25]. Event-triggered control approaches in multi-agent systems have evolved significantly since Tabuada's foundational work [26], with recent developments in dynamic event-triggered mechanisms [27,28] and distributed Nash equilibrium seeking under event-triggered protocols [29,30]. However, these approaches focus primarily on continuous dynamics and lack the theoretical framework necessary for hybrid system applications where discrete mode switches fundamentally alter agent interactions and strategic landscapes.
Game-theoretic approaches to multi-agent control have shown promise in continuous domains [3,4], but their integration with hybrid system frameworks remains largely unexplored despite recent advances in learning generalized Nash equilibria through hybrid adaptive extremum seeking control [31,32]. Traditional Nash equilibrium computation assumes continuous action spaces and static interaction patterns, which are inadequate for hybrid systems where discrete mode switches create dynamic strategic environments. Current distributed optimization methods for multi-agent systems [12,13,33] focus primarily on continuous domains and lack computational frameworks for hybrid Nash equilibrium problems that couple continuous strategy optimization within discrete modes with discrete mode selection strategies. The fundamental PPAD-completeness of computing Nash equilibria [23,24] creates additional computational barriers that existing distributed algorithms have not adequately addressed in hybrid settings.
To address these fundamental limitations, this paper presents a comprehensive framework for multi-agent hybrid system design that integrates systematic flow set construction with distributed game-theoretic optimization. The main contributions are:
(1) In contrast to existing approaches that treat hybrid dynamics and multi-agent coordination separately [19,20] and lack systematic integration of game theory with hybrid systems [34,35], this paper develops a unified distributed framework that formulates multi-agent coordination as a strategic game within the hybrid dynamical systems context. Unlike current methods that rely on purely continuous game formulations or on ad-hoc hybrid system designs that sacrifice theoretical rigor for computational tractability [5,21], this framework systematically integrates the hybrid inclusion formulation with distributed Nash equilibrium computation, addressing the fundamental challenge of coordinating discrete mode switches across multiple agents while maintaining individual agent autonomy and preserving rigorous stability guarantees.
(2) While existing event-triggered approaches [26,27,28] focus primarily on continuous dynamics and recent Nash equilibrium seeking methods [29,30] lack hybrid system integration, this paper introduces an intelligent jump triggering strategy based on distributed game-theoretic analysis that coordinates discrete mode transitions across multiple agents. In contrast to current event-triggered mechanisms that cannot handle the dynamic strategic environments created by discrete mode switches, this mechanism leverages strategic interaction modeling to optimize jump timing for system-wide objectives while maintaining individual agent autonomy and computational efficiency through three-layer triggering criteria. Unlike purely continuous control methods, the mechanism enables rapid emergency mode switching upon detecting communication interruptions, agent failures, or environmental disruptions, providing fast response capabilities that existing approaches cannot achieve.
(3) Addressing the fundamental computational barriers posed by PPAD-complete Nash equilibrium computation [23,24] and the exponential complexity scaling of existing distributed approaches [19,20], this paper proposes a novel distributed algorithm that integrates hierarchical flow set design with game-theoretic jump triggering to compute Nash equilibria in hybrid multi-agent systems. While traditional centralized approaches suffer from high computational complexity and existing distributed methods lack hybrid system capability [31,32,33], the algorithm employs dual-layer iterative optimization that separates continuous strategy optimization within modes from discrete mode selection optimization, achieving significant improvements in computational efficiency compared to traditional centralized approaches.
The theoretical framework provides rigorous mathematical foundations while offering practical computational tools for real-world implementation.
Notation: Throughout this paper, standard mathematical notation is employed: xᵢ ∈ ℝⁿ denotes the state of agent i, uᵢ ∈ ℝᵐ represents the control input, and 𝒩 = {1, 2, …, N} defines the agent index set. In the hybrid system, C ⊂ ℝⁿ × ℝᵐ and D ⊂ ℝⁿ × ℝᵐ represent the flow and jump sets respectively, while F: ℝⁿ × ℝᵐ ⇉ ℝⁿ and G: ℝⁿ × ℝᵐ ⇉ ℝⁿ denote the corresponding flow and jump maps. The consensus error for agent i is defined as δᵢ, while εᵢ represents the tracking error. Cost function parameters include state weight matrices Qᵢ = Qᵢᵀ ⪰ 0, control weight matrices Rᵢ = Rᵢᵀ ≻ 0, and jump penalty weights pᵢ > 0. The Kronecker product is denoted by ⊗, the gradient operator by ∇, and the signum function by sign(·). Value functions are represented as Vᵢ(δᵢ): ℝⁿ → ℝ, and the post-jump state is indicated by the superscript +, as in x⁺.
## II. Problem Statement and Preliminaries
This section presents the mathematical foundations for multi-agent hybrid systems operating under distributed game-theoretic control. First, the basic hybrid dynamical system formulation is established; then the multi-agent framework with communication constraints is developed; finally, the distributed optimization problem that forms the core of this approach is introduced.
Fig. 1. Overall Framework Architecture of the Hybrid Adaptive Nash Equilibrium Solver
<details>
<summary>Image 1 Details</summary>

### Visual Description
## System Diagram: Hybrid Dynamical System Framework
### Overview
The image is a system diagram illustrating a hybrid dynamical system framework. It outlines the components and their interactions, from high-level system design to individual agent control, emphasizing distributed Nash equilibrium computation.
### Components/Axes
* **Top Level (Blue Rectangle):** "Hybrid Dynamical System" with sub-components "Flow Set C | Continuous Dynamics" and "Jump Set D | Discrete Transitions".
* **Second Level (Orange and Purple Rectangles):**
* Left: "Hierarchical Flow Set Design" with details: "Individual Constraints", "Pairwise Interactions", "Global Coordination", and "O(N) Complexity".
* Right: "Game-Theoretic Jump Triggering" with details: "Strategic Coordination", "Mode Transitions", "Emergency Response", and "Three-Layer Criteria".
* **Third Level (Green Oval):** "Distributed Nash Equilibrium Computation" with details: "HANES Algorithm | Dual-Layer Optimization | Strategic Interaction".
* **Fourth Level (Orange Dashed Oval):** "Communication Network | Graph Topology".
* **Fifth Level (Orange Rectangles):** Agent representations:
* "Agent 1": "State xβ", "Control uβ*", "Cost Jβ".
* "Agent i": "State xα΅’", "Control uα΅’*", "Cost Jα΅’".
* "Agent N": "State xβ", "Control uβ*", "Cost Jβ".
* "HANES": "Algorithm", "Optimization", "O(N) Complexity".
* **Bottom Level (Orange Rectangle):** "Framework Achievements: Exponential Convergence | Distributed Control | Scalable Architecture | Optimal Nash Strategies".
### Content Details
* **Hybrid Dynamical System:** This is the overarching system, combining continuous dynamics (Flow Set C) and discrete transitions (Jump Set D).
* **Hierarchical Flow Set Design:** Focuses on designing the continuous flow of the system, considering individual constraints, pairwise interactions, and global coordination. The complexity is O(N).
* **Game-Theoretic Jump Triggering:** Deals with the discrete transitions, using game theory to trigger jumps based on strategic coordination, mode transitions, and emergency responses, evaluated using three-layer criteria.
* **Distributed Nash Equilibrium Computation:** Employs the HANES algorithm for dual-layer optimization and strategic interaction to compute the Nash equilibrium in a distributed manner.
* **Communication Network:** Represents the communication topology between agents, essential for distributed computation.
* **Agents:** Individual agents are represented with their state (x), control input (u*), and cost function (J). Agent 1, Agent i, and Agent N are shown.
* **HANES (Algorithm Block):** This block represents the HANES algorithm, which is used for optimization and has a complexity of O(N).
* **Framework Achievements:** The framework achieves exponential convergence, distributed control, scalable architecture, and optimal Nash strategies.
### Key Observations
* The diagram illustrates a hierarchical structure, starting from the high-level system definition and drilling down to individual agent control.
* The HANES algorithm plays a central role in computing the distributed Nash equilibrium.
* The framework emphasizes both continuous dynamics and discrete transitions, reflecting the hybrid nature of the system.
* Communication between agents is crucial for the distributed computation.
### Interpretation
The diagram presents a framework for controlling a hybrid dynamical system in a distributed manner. The system combines continuous dynamics and discrete transitions, with the goal of achieving optimal Nash strategies. The hierarchical design allows for managing complexity by breaking down the problem into smaller, more manageable components. The use of the HANES algorithm and game-theoretic jump triggering suggests a sophisticated approach to optimization and control. The framework's achievements, such as exponential convergence and scalable architecture, highlight its potential for real-world applications. The diagram suggests a system designed for complex, interconnected systems where distributed control and strategic interactions are essential.
</details>
## Multi-Agent Hybrid Systems
Consider hybrid dynamical systems that exhibit both continuous and discrete behavior. A hybrid system β is described by the hybrid inclusion:
$$\mathcal{H}:\quad \begin{cases} \dot{x} \in F(x, u_C), & (x, u_C) \in C, \\ x^{+} \in G(x, u_D), & (x, u_D) \in D, \end{cases}$$

where x ∈ ℝⁿ represents the system state, u_C ∈ ℝ^{m_C} denotes the continuous control input, and u_D ∈ ℝ^{m_D} represents the discrete control input. The flow set C ⊂ ℝⁿ × ℝ^{m_C} defines the state-input combinations where continuous evolution is permitted, governed by the flow map F: ℝⁿ × ℝ^{m_C} ⇉ ℝⁿ. The jump set D ⊂ ℝⁿ × ℝ^{m_D} characterizes conditions triggering discrete state transitions, with the jump map G: ℝⁿ × ℝ^{m_D} ⇉ ℝⁿ determining the post-transition state values.
For multi-agent systems with N agents, the hybrid framework is extended to accommodate distributed control architectures. Each agent i ∈ 𝒩 = {1, 2, …, N} possesses individual dynamics while being coupled through communication and coordination requirements. The individual agent dynamics are described by:
$$\dot{x}_i = A x_i + B u_i,$$

where xᵢ ∈ ℝⁿ is the state of agent i, uᵢ ∈ ℝᵐ is the control input, A ∈ ℝ^{n×n} is the system matrix, and B ∈ ℝ^{n×m} is the input matrix. The collective system state is defined as x = [x₁ᵀ, x₂ᵀ, …, x_Nᵀ]ᵀ ∈ ℝ^{Nn}, and the global control input as u = [u₁ᵀ, u₂ᵀ, …, u_Nᵀ]ᵀ ∈ ℝ^{Nm}.
The set-valued nature of mappings πΉπΉ and πΊπΊ accommodates system uncertainties, modeling approximations, and non-deterministic responses arising from environmental disturbances, measurement noise, and actuator imperfections that are particularly relevant in multi-agent scenarios where communication delays and packet losses introduce additional uncertainties.
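The hybrid inclusion above can be exercised with a minimal simulation sketch. The specific sets C, D and maps F, G below are illustrative placeholders chosen for a scalar state, not the paper's design: flow proceeds by an Euler step while the state is below a threshold, and a jump (reset) fires once the threshold is reached.

```python
# Minimal sketch of simulating a hybrid system H = (C, F, D, G).
# The sets and maps here are illustrative placeholders, not the
# paper's specific construction.

def in_jump_set(x):
    return abs(x) >= 1.0          # D: jump once the threshold is reached

def in_flow_set(x):
    return abs(x) < 1.0           # C: continuous evolution below threshold

def flow_map(x):
    return -0.5 * x + 1.0         # F: continuous dynamics x_dot = F(x)

def jump_map(x):
    return 0.4 * x                # G: reset x+ = G(x)

def simulate(x0, dt=0.01, t_final=5.0):
    x, t, traj = x0, 0.0, [(0.0, x0)]
    while t < t_final:
        if in_jump_set(x):
            x = jump_map(x)               # discrete transition
        elif in_flow_set(x):
            x = x + dt * flow_map(x)      # Euler step of the flow
        t += dt
        traj.append((t, x))
    return traj

traj = simulate(0.0)
```

Because the flow drives the state toward its threshold and each jump resets it inside the flow set, the sketch produces the characteristic sawtooth trajectory of a flow-jump-flow hybrid execution.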
## Graph-Theoretic Communication Framework and Network Dynamics
The multi-agent system operates under limited sensing capabilities, where each agent can only access information from its local neighborhood. This communication structure is represented by a directed graph 𝒢 = (𝒱, ℰ) with vertex set 𝒱 = {1, 2, …, N} and edge set ℰ ⊆ 𝒱 × 𝒱. The adjacency matrix A = [aᵢⱼ] ∈ ℝ^{N×N} captures the communication topology, where aᵢⱼ = 1 if agent i can receive information from agent j, and aᵢⱼ = 0 otherwise.
The communication constraints fundamentally alter the hybrid system behavior compared to centralized approaches. Define the neighbor set of agent i as 𝒩ᵢ = {j ∈ 𝒱 : aᵢⱼ = 1}, and the in-degree as dᵢ = |𝒩ᵢ| = Σⱼ₌₁ᴺ aᵢⱼ. The degree matrix D = diag(d₁, d₂, …, d_N) and graph Laplacian L = D − A characterize the algebraic connectivity properties essential for consensus analysis.
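The Laplacian construction L = D − A is mechanical and can be sketched directly; the three-agent adjacency matrix below is an illustrative example, not taken from the paper.

```python
import numpy as np

# Sketch: degree matrix and graph Laplacian L = D - A from an adjacency
# matrix, as used in the consensus analysis. The topology is illustrative.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])        # a_ij = 1 if agent i receives from agent j

in_degrees = A.sum(axis=1)       # d_i = sum_j a_ij
D = np.diag(in_degrees)
L = D - A                        # graph Laplacian

# Row sums of L are zero, so the consensus direction span{1} lies in its kernel.
print(L @ np.ones(3))            # -> [0. 0. 0.]
```

The zero row sums are exactly why the consensus error δ = (L ⊗ Iₙ)x vanishes when all agents agree.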
For leader-follower architectures, partition the agent set into leaders ℒ and followers ℱ such that ℒ ∪ ℱ = 𝒩 and ℒ ∩ ℱ = ∅. The interaction between leaders and followers is captured by the coupling matrix B_LF = [bᵢₗ], where bᵢₗ = 1 if follower i receives information from leader l. This introduces additional complexity in the hybrid flow and jump set designs, as discrete transitions in leader agents can trigger cascading effects throughout the follower network.
The communication topology directly influences the convergence properties of the multi-agent hybrid system. Strong connectivity of the communication graph ensures that information from any agent can eventually reach all other agents, which is crucial for achieving global consensus. However, in hybrid systems, jump events can temporarily disrupt information flow, requiring careful consideration of the interplay between graph topology and discrete dynamics.
## Local Errors and Consensus Dynamics
For distributed coordination, define the local consensus error for agent i as the weighted deviation from its neighbors:

$$\delta_i = \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_i - x_j).$$

This error captures the local disagreement between agent i and its communication neighbors, forming the basis for distributed consensus protocols. The global consensus error vector is compactly expressed as δ = (L ⊗ Iₙ) x.
Taking the time derivative of equation (3) and substituting the agent dynamics (2), the error dynamics are:

$$\dot{\delta}_i = A\,\delta_i + B \sum_{j \in \mathcal{N}_i} a_{ij}\,(u_i - u_j).$$

Each agent must coordinate its control action uᵢ with those of its neighbors to drive δᵢ → 0.
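The equivalence between the stacked form δ = (L ⊗ Iₙ)x and the agent-wise sums can be checked numerically. The two-agent example below is illustrative (n = 2 per-agent state dimension is an assumption for the sketch).

```python
import numpy as np

# Sketch: global consensus error delta = (L ⊗ I_n) x for the stacked state,
# equivalent to delta_i = sum_j a_ij (x_i - x_j) agent by agent.
n = 2                                  # per-agent state dimension (assumed)
A = np.array([[0, 1], [1, 0]])         # two agents, bidirectional link
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 0.0,                # x_1
              3.0, 4.0])               # x_2
delta = np.kron(L, np.eye(n)) @ x

# Agent 1's block: a_12 (x_1 - x_2) = [-2, -4]; agent 2's block is its negative.
print(delta)  # -> [-2. -4.  2.  4.]
```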
For leader-follower configurations, distinguish between different error types. The leader error for agent i ∈ ℒ tracking reference trajectory x_ref(t) is:

$$\varepsilon_i = x_i - x_{\mathrm{ref}}(t).$$

The follower error for agent i ∈ ℱ combines consensus with neighbors and tracking of leaders:

$$\delta_i = \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_i - x_j) + \sum_{l \in \mathcal{L}} b_{il}\,(x_i - x_l),$$

where bᵢₗ represents the connection weight between follower i and leader l.
The error dynamics under hybrid conditions incorporate both continuous evolution and discrete jumps. During continuous phases when (x, u) ∈ C:

$$\dot{\delta}_i = A\,\delta_i + B\Big[\sum_{j \in \mathcal{N}_i} a_{ij}\,(u_i - u_j) + \sum_{l \in \mathcal{L}} b_{il}\,(u_i - u_l)\Big].$$
During discrete transitions when (x, u) ∈ D, the error evolution becomes:
<!-- formula-not-decoded -->
This formulation captures how individual agent jumps affect the collective error dynamics, creating complex dependencies that require careful analysis for stability and convergence guarantees.
## III. Distributed Hybrid Game Formulation and Nash Equilibrium
Building upon the system model and hybrid dynamical framework established in Section II, this section develops a systematic framework for distributed multi-agent coordination through game-theoretic Nash equilibrium computation. The approach transforms the consensus problem into a strategic interaction where each agent optimizes its individual performance while accounting for the decisions of neighboring agents.
## Cost Function Design and Strategic Formulation
Each agent i seeks to minimize a performance index that balances consensus achievement with control effort:

$$J_i = \int_0^{\infty} \big(\delta_i^{T} Q_i\,\delta_i + u_i^{T} R_i\,u_i\big)\,dt + \sum_{k=0}^{\infty} p_i\,\|\delta_i(t_k^{+})\|^2,$$

where Qᵢ = Qᵢᵀ ⪰ 0 is the state cost weight matrix, Rᵢ = Rᵢᵀ ≻ 0 is the control cost weight matrix, pᵢ > 0 is the jump penalty weight, and {tₖ}ₖ₌₀^∞ represents the sequence of jump times. The inclusion of jump costs pᵢ‖δᵢ(tₖ⁺)‖² penalizes large deviations from consensus immediately after discrete transitions, encouraging coordinated jumping strategies.
The distributed control problem is formulated as a multi-player game where each agent i solves:

$$\min_{u_i \in \mathcal{U}_i} J_i(u_i, u_{-i}) \quad \text{subject to the hybrid dynamics (1)-(2)}.$$

This creates a strategic interaction where each agent's optimal policy depends on the policies chosen by its neighbors, leading naturally to Nash equilibrium concepts. A Nash equilibrium is a strategy profile (u₁*, u₂*, …, u_N*) such that no agent can unilaterally improve its performance by deviating from its equilibrium strategy.
For agent i, define the value function Vᵢ(δᵢ): ℝⁿ → ℝ as:

$$V_i(\delta_i) = \min_{u_i \in \mathcal{U}_i} J_i(u_i, u_{-i}),$$

where 𝒰ᵢ denotes the admissible control set for agent i. The value function satisfies the hybrid Hamilton-Jacobi-Bellman (HJB) equation. During flow phases:
$$0 = \min_{u_i}\big[\delta_i^{T} Q_i\,\delta_i + u_i^{T} R_i\,u_i + \nabla V_i^{T} f_i(\delta_i, u_i, u_{-i})\big],$$

where fᵢ(δᵢ, uᵢ, u₋ᵢ) represents the flow dynamics from equation (7), and u₋ᵢ denotes the control inputs of agent i's neighbors.
During jump phases:

$$V_i(\delta_i) = \min_{u_i}\big[p_i\,\|\delta_i^{+}\|^2 + V_i(\delta_i^{+})\big].$$
The optimal continuous control law is obtained by minimizing the Hamiltonian in equation (12). Taking the derivative with respect to uᵢ and setting it to zero:

$$2 R_i\,u_i + \Big(d_i + \sum_{l \in \mathcal{L}} b_{il}\Big) B^{T}\,\nabla V_i(\delta_i) = 0.$$

Therefore, the optimal control law is:

$$u_i^{*} = -\frac{1}{2}\Big(d_i + \sum_{l \in \mathcal{L}} b_{il}\Big) R_i^{-1} B^{T}\,\nabla V_i(\delta_i).$$
This control law forms the foundation for the distributed game-theoretic approach, where each agent implements its optimal strategy while accounting for the strategic behavior of its neighbors. The coupling through the communication graph ensures that the resulting Nash equilibrium achieves distributed coordination while respecting the hybrid system constraints.
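In feedback form, with a quadratic value function Vᵢ = δᵢᵀPᵢδᵢ (so ∇Vᵢ = 2Pᵢδᵢ), the control reduces to a linear gain on the consensus error. The sketch below drops the neighbor-coupling factor dᵢ + Σₗ bᵢₗ for simplicity (i.e., it takes that factor to be 1), and the matrices B, R, P are illustrative numbers, not values from the paper.

```python
import numpy as np

# Sketch of the feedback form of the optimal control law: with
# V_i = delta^T P delta (so grad V_i = 2 P delta),
#   u* = -1/2 R^{-1} B^T grad V_i = -R^{-1} B^T P delta.
# Coupling gain omitted; all numbers are illustrative.
B = np.array([[0.0], [1.0]])
R = np.array([[2.0]])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # assumed value matrix, P = P^T > 0
delta = np.array([1.0, -1.0])

u_star = -np.linalg.solve(R, B.T @ P @ delta)
print(u_star)  # -> [0.25]
```

Solving with R rather than inverting it explicitly is the standard numerically stable way to apply R⁻¹.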
## Nash Equilibrium Characterization for Multi-Agent Hybrid Games
Definition 1 (Nash Equilibrium for Multi-Agent Hybrid Systems): A strategy profile (u₁*, u₂*, …, u_N*) constitutes a Nash equilibrium if for each agent i ∈ 𝒩 and for all alternative strategies uᵢ ∈ 𝒰ᵢ:

$$J_i(u_i^{*}, u_{-i}^{*}) \le J_i(u_i, u_{-i}^{*}),$$

where u₋ᵢ* = (u₁*, …, u_{i−1}*, u_{i+1}*, …, u_N*) represents the equilibrium strategies of all agents except agent i.
The Nash equilibrium condition requires that each agent's strategy minimizes its cost functional given the strategies of all other agents. In the hybrid setting, this condition must hold for both continuous and discrete phases of the system evolution.
Lemma 1 (Necessary Conditions for Nash Equilibrium): If (u₁*, u₂*, …, u_N*) is a Nash equilibrium, then for each agent i, the following conditions must be satisfied.

During flow phases (δ, u) ∈ C:

$$\frac{\partial H_i}{\partial u_i}\Big|_{u_i = u_i^{*}} = 0.$$

During jump phases (δ, u) ∈ D:

$$V_i(\delta_i) = \min_{u_i}\big[p_i\,\|\delta_i^{+}\|^2 + V_i(\delta_i^{+})\big],$$

where the Hamiltonian function for agent i is defined as:

$$H_i(\delta_i, u_i, u_{-i}, \lambda_i) = \delta_i^{T} Q_i\,\delta_i + u_i^{T} R_i\,u_i + \lambda_i^{T} f_i(\delta_i, u_i, u_{-i}),$$

with λᵢ = ∇Vᵢ(δᵢ) being the costate variable.
Proof of Lemma 1: The proof follows from the application of Pontryagin's maximum principle to the optimal control problem (10). For the continuous phase, the optimality condition ∂Hᵢ/∂uᵢ = 0 yields:

$$2 R_i\,u_i + \Big(d_i + \sum_{l \in \mathcal{L}} b_{il}\Big) B^{T} \lambda_i = 0.$$

Solving for uᵢ*:

$$u_i^{*} = -\frac{1}{2}\Big(d_i + \sum_{l \in \mathcal{L}} b_{il}\Big) R_i^{-1} B^{T} \lambda_i.$$

For the discrete phase, the jump optimality condition follows from minimizing the post-jump cost, leading to equation (17).
Theorem 1 (Existence of Nash Equilibrium): Consider the multi-agent hybrid system (1)-(2) with performance indices (9) under Assumption 1. If the following conditions hold:

- (i) the communication graph 𝒢 contains a spanning tree,
- (ii) the matrices (A, B) are stabilizable for each agent,
- (iii) the matrices (A, Qᵢ^{1/2}) are observable for each agent,
- (iv) the coupling weights satisfy Σ_{j∈𝒩ᵢ} aᵢⱼ + Σ_{l∈ℒ} bᵢₗ < α for some α < 2√(λ_min(Rᵢ)/λ_max(BᵀQᵢB)),

then there exists a unique Nash equilibrium in quadratic strategies.
Proof of Theorem 1: The proof proceeds through several steps.

Step 1: Establish contractivity of the mapping 𝒯: 𝒫 → 𝒫, where 𝒫 = {P₁, P₂, …, P_N} and 𝒯(Pᵢ) solves equation (25).

Step 2: Define the operator 𝒯ᵢ: 𝕊₊₊ⁿ → 𝕊₊₊ⁿ for each agent i as:

<!-- formula-not-decoded -->

where Δᵢⱼ(Pⱼ) represents the coupling terms and 𝕊₊₊ⁿ denotes the set of positive definite n × n matrices.

Step 3: Show that under condition (iv), the operator 𝒯 is a contraction mapping. The coupling term can be bounded as:

<!-- formula-not-decoded -->

where β < 1 is determined by the communication weights and system parameters.

Step 4: Apply the Banach fixed-point theorem to conclude existence and uniqueness of the fixed point P* = (P₁*, P₂*, …, P_N*) satisfying 𝒯(P*) = P*.

Step 5: Verify that the corresponding control strategies uᵢ*(δᵢ) = −½(dᵢ + Σ_{l∈ℒ} bᵢₗ) Rᵢ⁻¹ Bᵀ Pᵢ* δᵢ constitute a Nash equilibrium by checking condition (15).
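The Banach fixed-point argument above has a direct numerical counterpart: repeatedly apply a contraction until successive iterates stop changing. The scalar map below is a toy contraction standing in for the coupled value-matrix operator 𝒯ᵢ; it is illustrative only.

```python
import numpy as np

# Sketch of the fixed-point iteration behind the Banach argument:
# iterate a contraction T until ||T(P) - P|| is negligible.
# The scalar map is a toy stand-in for the coupled operator T_i.
def T(P, beta=0.5):
    # toy contraction: |T'(P)| = beta * sech^2(P) <= beta < 1
    return 1.0 + beta * np.tanh(P)

P = 0.0
for _ in range(100):
    P_next = T(P)
    if abs(P_next - P) < 1e-12:
        break                     # successive iterates have converged
    P = P_next
```

The linear convergence rate of the iteration is exactly the contraction constant β, mirroring how condition (iv) bounds the coupling strength to keep β < 1.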
## Hamilton-Jacobi-Bellman System
The Nash equilibrium strategies satisfy a system of coupled Hamilton-Jacobi-Bellman (HJB) equations. For agent i, the value function Vᵢ(δᵢ) satisfies:
<!-- formula-not-decoded -->
Substituting the optimal control law (19) into equation (20):
<!-- formula-not-decoded -->
where Φᵢ(u₋ᵢ*) = −B Σ_{j∈𝒩ᵢ} aᵢⱼ uⱼ* − B Σ_{l∈ℒ} bᵢₗ uₗ* represents the coupling term from neighboring agents.
For steady-state analysis, set ∂Vᵢ/∂t = 0, yielding the algebraic HJB equation:
<!-- formula-not-decoded -->
Assumption 1: For each agent i, there exists a quadratic value function of the form:

$$V_i(\delta_i) = \delta_i^{T} P_i\,\delta_i,$$

where Pᵢ = Pᵢᵀ ≻ 0 is a positive definite matrix to be determined.
Under Assumption 1, ∇Vᵢ(δᵢ) = 2Pᵢδᵢ. Substituting into equation (22):
<!-- formula-not-decoded -->
For this equation to hold for all δᵢ, we require:
<!-- formula-not-decoded -->
where 𝒯ᵢ, built from the coupling weights and the neighbor value matrices Pⱼ, captures the coupling effects from neighboring agents and will be analyzed in the convergence proof.
Theorem 2 (Exponential Convergence to Nash Equilibrium): Under the conditions of Theorem 1, the distributed Nash equilibrium strategies achieve exponential convergence of the consensus errors to zero.
Specifically, there exist constants M > 0 and λ > 0 such that:

$$\|\delta(t)\| \le M\,e^{-\lambda t}\,\|\delta(0)\|,$$

where δ(t) = [δ₁ᵀ(t), δ₂ᵀ(t), …, δ_Nᵀ(t)]ᵀ is the global error vector.
Proof of Theorem 2 : Taking the time derivative of the Lyapunov function along system trajectories during flow phases:
<!-- formula-not-decoded -->
Substituting the closed-loop error dynamics under the Nash equilibrium strategies, consider the error dynamics for agent i in the multi-agent system, given by:
<!-- formula-not-decoded -->
Assume the optimal control law, derived from the Hamilton-Jacobi-Bellman equation, is:

<!-- formula-not-decoded -->

where Rᵢ = Rᵢᵀ ≻ 0 is the control cost matrix and Pᵢ = Pᵢᵀ ≻ 0 is the solution to the Riccati equation. Similarly, for neighbor j and leader l:
<!-- formula-not-decoded -->
Substitute uᵢ* into the first control term:
<!-- formula-not-decoded -->
Substitute uⱼ* and uₗ* into the coupling terms:
<!-- formula-not-decoded -->
<!-- formula-not-decoded -->
<!-- formula-not-decoded -->
Combine all terms:
<!-- formula-not-decoded -->
Rewrite in compact form, where the last two terms are coupling terms:

<!-- formula-not-decoded -->
The coupling terms are:
<!-- formula-not-decoded -->
The coupling terms can be shown to satisfy:
<!-- formula-not-decoded -->
where γ depends on the system parameters and communication weights.
Using the matrix inequality and condition (iv) from Theorem 1:

$$\dot{V} \le -\epsilon\,\|\delta\|^2$$

for some ε > 0. This establishes exponential stability with decay rate λ = ε/(2 λ_max(P*)), where P* = block diag(P₁*, P₂*, …, P_N*).
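An exponential-decay certificate of this kind can be checked numerically on a simulated closed-loop error system. The matrix A_cl and the candidate constants M, λ below are illustrative, not the paper's values.

```python
import numpy as np

# Sketch: numerically checking a certificate
#   ||delta(t)|| <= M exp(-lam t) ||delta(0)||
# on a linear error system delta_dot = A_cl delta (A_cl illustrative).
A_cl = np.array([[-1.0, 0.5],
                 [0.0, -2.0]])           # Hurwitz closed-loop matrix
dt, steps = 0.001, 5000
delta = np.array([1.0, 1.0])
norms = [np.linalg.norm(delta)]
for _ in range(steps):
    delta = delta + dt * (A_cl @ delta)  # Euler step of the error dynamics
    norms.append(np.linalg.norm(delta))

t = np.arange(len(norms)) * dt
M, lam = 2.0, 0.9                        # candidate certificate constants
bound = M * np.exp(-lam * t) * norms[0]
ok = bool(np.all(np.array(norms) <= bound))
```

Any λ strictly below the slowest closed-loop eigenvalue magnitude admits such a certificate for a suitable M, which is what the Lyapunov estimate λ = ε/(2λ_max(P*)) quantifies.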
Corollary 1 (Hybrid Stability): The Nash equilibrium strategies also guarantee stability through jump phases. If the jump maps satisfy ‖Gᵢ(δᵢ, uᵢ*)‖ ≤ ρ‖δᵢ‖ for some ρ < 1, then the hybrid system maintains exponential stability across discrete transitions.
## Optimization Framework
Based on the theoretical foundation established in Sections II and III, this section presents the complete Hybrid Adaptive Nash Equilibrium Solver (HANES), an algorithmic framework for distributed Nash equilibrium computation in multi-agent hybrid systems. The algorithm integrates hierarchical flow set design with game-theoretic jump triggering mechanisms.
## Initialize:

- Initial states xᵢ(0), i ∈ 𝒩
- Communication topology adjacency matrix A = [aᵢⱼ]
- Control parameters β, ρ; jump thresholds μ, σ, σ̄
- Positive definite matrices Qᵢ, Rᵢ, Pᵢ

for t = 0 to t_max:

- Step 1: Data Collection and Critic Update
  - Collect neighbor states xⱼ(t), j ∈ 𝒩ᵢ
  - Construct consensus errors δᵢ(t) = Σ_{j∈𝒩ᵢ} aᵢⱼ(xᵢ − xⱼ)
  - Update leader-follower errors using equation (6)
- Step 2: Hybrid State Verification
  - Check flow condition: if |xᵢ − μ| ≥ threshold, then (x, u) ∈ C
  - Check jump condition: if |xᵢ − μ| < threshold, then (x, u) ∈ D
- Step 3: Nash Strategy Update
  - Solve coupled HJB equations (22) for value matrices Pᵢ
  - Compute optimal control uᵢ* = −Kᵢδᵢ using equation (19)
  - Apply hybrid jump dynamics if (x, u) ∈ D
- Step 4: Convergence Check
  - if maxᵢ |δᵢ(t)| < ε then set uᵢ* = uᵢ^converged and return uᵢ*
  - else go to Step 1

Return: optimal control policies uᵢ*
The comprehensive experimental framework provides rigorous validation of the HANES algorithm's theoretical properties while demonstrating its effectiveness in practical multi-agent coordination scenarios. The results establish empirical evidence supporting the algorithm's convergence guarantees, computational efficiency, and robust performance across diverse operational conditions.
## IV. Experiments and Simulation
All simulations involve agents with scalar dynamics ($n = 1$) to clearly illustrate hybrid switching behaviors. The hybrid system nature is characterized by flow and jump sets. A common jump threshold $\mu = 1.0$ is used, with the jump target interval defined as $[\sigma, \bar{\sigma}] = [0.3, 0.5]$. When a jump occurs, the new state is selected uniformly at random within this interval to model uncertainty in post-jump states. The time step for all simulations is $dt = 0.01$.
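The threshold-and-interval mechanics just described might be coded as follows; μ and [σ, σ̄] come from the text, while the detection tolerance is an assumed parameter:

```python
import random

# Jump mechanics from the simulation setup: threshold mu = 1.0 and target
# interval [0.3, 0.5] come from the text; the detection tolerance is assumed.
MU, SIGMA_LO, SIGMA_HI = 1.0, 0.3, 0.5
DELTA = 0.05  # assumed tolerance for detecting threshold crossings

def in_jump_set(x, delta=DELTA):
    """State belongs to the jump set D when it is within delta of mu."""
    return abs(x - MU) < delta

def jump_target(rng=random):
    """Post-jump state drawn uniformly from [sigma, sigma_bar]."""
    return rng.uniform(SIGMA_LO, SIGMA_HI)

x = 1.02
if in_jump_set(x):
    x = jump_target()
assert SIGMA_LO <= x <= SIGMA_HI
```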
The Pursuit-Evasion Game evaluates the framework's game-theoretic aspects and its ability to converge to a Nash equilibrium in a competitive multi-agent environment. The system comprises two pursuers and two evaders. Initial conditions are $x_{\text{pursuers}}(0) = [2.0, 1.8]^T$ and $x_{\text{evaders}}(0) = [1.5, 1.2]^T$. The pursuer dynamics are $\dot{x}_i = a_i x_i + b_i u_i$ with $a_i = -1$, and the evader dynamics are $\dot{x}_j = a x_j + b_j u_j$ with $a = -2$. All input coefficients $b_i = b_j = 1$. The interactions among agents are characterized by the following matrices: pursuer-to-pursuer interactions by $L_p = \begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}$; evader-to-evader interactions by $L_e = \begin{bmatrix} 1 & -0.3 \\ -0.3 & 1 \end{bmatrix}$; pursuer-to-evader coupling by $A_{pe} = \begin{bmatrix} 1.0 & 0.7 \\ 0.8 & 1 \end{bmatrix}$; and evader-to-pursuer coupling by $A_{ep} = \begin{bmatrix} 0.9 & 0.5 \end{bmatrix}$.
Pursuers aim to minimize their performance index (capture), while evaders maximize theirs (survival), forming a zero-sum game structure. Saddle-point strategies are implemented. The cost function weights are
<!-- formula-not-decoded -->
The input cost weights are $R_{\text{pursuers}} = \mathrm{diag}(1.304, 1.5)$ for pursuers and $R_{\text{evaders}} = \mathrm{diag}(-4, -3.5)$ for evaders; the jump penalty weight is $P = 0.4481$; the simulation runs for $t_{\text{final}} = 3$ seconds.
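The zero-sum cost bookkeeping implied by these weights can be sketched as follows; the `accumulate_cost` helper and the Euler sum are assumptions, while the weight values come from the setup above:

```python
import numpy as np

# Sketch of the zero-sum cost bookkeeping; accumulate_cost is an assumed helper,
# while the weight values below are taken from the pursuit-evasion setup.
def accumulate_cost(xs, us, Q, R, P_jump, jump_steps, dt=0.01):
    """Euler approximation of J = integral(x'Qx + u'Ru) dt plus a per-jump penalty."""
    J = 0.0
    for x, u in zip(xs, us):
        J += (x @ Q @ x + u @ R @ u) * dt
    return J + P_jump * len(jump_steps)

R_pursuers = np.diag([1.304, 1.5])   # positive weights: pursuers minimize J
R_evaders = np.diag([-4.0, -3.5])    # negative weights: evaders maximize (zero-sum)
P_JUMP = 0.4481                      # jump penalty weight from the text
```

The opposite signs of the two R matrices encode the saddle-point structure: the same functional is driven down by pursuer effort and up by evader effort.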
*(Figure 2 comprises three panels sharing a 0–3 s time axis: agent state evolution with jump points marked at the trigger threshold μ = 1.0, Nash equilibrium control inputs for the two pursuers and two evaders, and cost functionals for all four agents converging toward zero; the pursuer states and controls exhibit discrete jumps at t ≈ 0.5 s.)*
Fig. 2. Pursuit-evasion game: (a) agent state evolution with hybrid jump events, (b) Nash equilibrium control strategies with pursuer minimization (negative values) and evader maximization (positive values), and (c) cost function evolution.
Based on the experimental results shown in Figure 2, the HANES algorithm demonstrates successful implementation of the theoretical framework with clear validation of the hybrid system dynamics and Nash equilibrium convergence properties. The state evolution subplot reveals that all agents converge toward the theoretical equilibrium state near zero within approximately 0.8 seconds, with discrete jump events (marked by asterisks) occurring precisely at the predicted trigger threshold ΞΌ = 1.0 for both pursuers. The control strategy subplot confirms that the Nash equilibrium control inputs stabilize after the initial transient period, with pursuers implementing minimization strategies (negative control values) while evaders execute maximization strategies (positive control values), consistent with the zero-sum game formulation. The cost function evolution provides quantitative verification of the theoretical predictions, showing exponential convergence as guaranteed by Theorem 2, with all cost functionals approaching their optimal Nash equilibrium values. Notably, the jump events create brief discontinuities in the cost evolution but do not destabilize the overall convergence process, validating the hybrid system stability properties established in Corollary 1.
Fig. 3. Trajectory visualization of the pursuit-evasion game.
*(Figure 3 is a 2-D trajectory plot over X ∈ [0, 8] and Y ∈ [0, 5] spatial units: pursuers 1 and 2 start on the left near the labeled Pursuit Zone and curve toward the evaders, while evaders 1 and 2 start on the right and maneuver relative to the labeled Evasion Zone; start markers and trajectories are shown for each agent.)*
The experimental results demonstrate that the HANES algorithm achieves distributed Nash equilibrium computation with linear computational complexity O(N) while maintaining theoretical rigor, providing compelling evidence for the practical applicability of the proposed framework in multi-agent pursuit-evasion scenarios. The trajectory visualization demonstrates successful implementation of distributed Nash equilibrium strategies, with pursuers executing coordinated convergence behaviors from the pursuit zone while evaders perform strategic evasion maneuvers toward the evasion zone.
The multi-agent system operates under a distributed communication network as illustrated in Figure 4. The Leader-Follower Consensus experiment demonstrates cooperative coordination, validating the framework's ability to achieve distributed agreement under hybrid dynamics. The system consists of 4 agents, where agent 2 is the leader and agents 1, 3, and 4 are followers. Initial states are π₯π₯ (0) = [1.8,2.0,1.5,1.7] ππ .
Fig. 4. Multi-agent communication topology
*(Figure 4 is a directed graph of four nodes: leader node 2, drawn as a square, sends edges to nodes 1, 3, and 4, and an additional edge runs from node 4 to node 3.)*
The dynamics for all agents are $\dot{x}_i = a_i x_i + b_i u_i$ with $a_i = -1$ and input coefficients $b_i = 1$ for all agents. Communication topology: the network connections are defined by the adjacency matrix $A_{\text{topo}}$:
<!-- formula-not-decoded -->
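As a sketch, an adjacency matrix consistent with the Fig. 4 topology (edges 2→1, 2→3, 2→4, and 4→3) can be constructed and used to evaluate the consensus errors at the stated initial condition; the specific 0/1 weights are an assumption:

```python
import numpy as np

# Adjacency assembled from the Fig. 4 edge list (2->1, 2->3, 2->4, 4->3);
# the unit edge weights are an assumption, not the paper's printed A_topo.
edges = [(2, 1), (2, 3), (2, 4), (4, 3)]  # (source, destination), 1-indexed agents
N = 4
A = np.zeros((N, N))
for src, dst in edges:
    A[dst - 1, src - 1] = 1.0  # a_ij = 1 when agent i receives information from agent j

def consensus_errors(x, A):
    """Distributed consensus error e_i = sum_j a_ij (x_i - x_j)."""
    n = len(x)
    return np.array([sum(A[i, j] * (x[i] - x[j]) for j in range(n)) for i in range(n)])

# Errors at the stated initial condition x(0) = [1.8, 2.0, 1.5, 1.7]
e0 = consensus_errors(np.array([1.8, 2.0, 1.5, 1.7]), A)
```

Note that the leader (agent 2) has no in-edges, so its consensus error is zero and its motion is driven only by reference tracking.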
The leader tracks a time-varying reference $x_{\text{ref}}(t) = 2 e^{-0.3 t} \cos(0.5 t)$. Control parameters are: consensus gain $K_{\text{consensus}} = 0.8$; tracking gain $K_{\text{tracking}} = 1.2$; hybrid cost weight $\omega_{\text{hybrid}} = 0.4$. The value function estimation also utilizes parameters $\Lambda = [0.8, 0.9, 0.85, 0.75]$, discount factor $\beta = 0.95$, and base parameter $\gamma_{\text{base}} = 0.5$. Cooperative cost structure: the multi-agent interaction cost matrix is
<!-- formula-not-decoded -->
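The leader's reference signal and a plausible tracking law can be sketched as follows; the gains come from the text, but the proportional control structure is an assumption:

```python
import math

# Reference and gains from the leader-follower setup; the proportional
# tracking law itself is an assumed structure, not the paper's controller.
K_CONSENSUS, K_TRACKING = 0.8, 1.2

def x_ref(t):
    """Time-varying leader reference x_ref(t) = 2 e^{-0.3 t} cos(0.5 t)."""
    return 2.0 * math.exp(-0.3 * t) * math.cos(0.5 * t)

def leader_control(x_leader, t):
    """Leader input: proportional tracking of the reference (assumed form)."""
    return -K_TRACKING * (x_leader - x_ref(t))
```

At t = 0 the reference starts at 2, then decays while oscillating slowly, which is why the follower states in Fig. 5 settle toward zero after the initial transient.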
Fig. 5. (a) Multi-agent state evolution with leader reference, (b) distributed control strategies with coordinated responses, (c) consensus error convergence and stable value function evolution, and (d) individual agent tracking performance demonstrating effective hierarchical coordination and bounded tracking errors.
*(Figure 5 comprises four panels over a 0–25 s time axis: agent states converging to the leader reference with consensus achieved near t = 2.5 s; distributed control inputs converging to zero; consensus error decaying to zero alongside the total value function; and individual agent tracking errors converging to zero.)*
The experimental results successfully validate the proposed HANES algorithm, demonstrating key theoretical predictions from the paper. The hybrid jump event at t=2.5 seconds enables rapid leader reconfiguration while maintaining distributed coordination, with consensus error achieving exponential convergence below 0.05 within 8 seconds as predicted by Theorem 2. The coordinated control responses and stable value function evolution following the discrete transition confirm the framework's ability to preserve Nash equilibrium properties through hybrid dynamics, validating both the game-theoretic jump triggering mechanism and the distributed optimization approach for multi-agent coordination.
The performance of the proposed framework across these experiments is assessed using several quantitative metrics. These include convergence analysis, focusing on convergence time ($t_{\text{conv}}$), final consensus or tracking error ($\|e_{\text{final}}\|$), and convergence rate. For the pursuit-evasion scenario, Nash equilibrium verification is crucial, analyzing strategy stability. Additionally, computational efficiency (e.g., processing time) and robustness to variations (e.g., in initial conditions) are considered to demonstrate the algorithm's practical applicability and resilience. These experiments collectively aim to demonstrate leader-follower coordination with hybrid dynamics, distributed control without global information, opponent strategy estimation and adaptation, and consensus achievement with bounded tracking errors.
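These metrics can be computed mechanically. The sketch below illustrates the convergence time and final error on a synthetic decaying error signal; the 0.05 tolerance matches the consensus threshold reported for the leader-follower experiment, while the helper itself is an assumed implementation:

```python
import numpy as np

# Sketch of the convergence metrics described above; the helper is an assumed
# implementation and the error signal below is synthetic, for illustration only.
def convergence_time(errors, times, tol=0.05):
    """Return the first time t_conv after which |error| stays below tol, else None."""
    below = np.abs(errors) < tol
    for k in range(len(times)):
        if below[k:].all():
            return times[k]
    return None

times = np.arange(0.0, 5.0, 0.5)
errors = 2.0 * np.exp(-times)          # synthetic exponentially decaying error
t_conv = convergence_time(errors, times)
final_error = float(np.abs(errors[-1]))
```

Requiring the error to *stay* below the tolerance, rather than merely cross it once, keeps the metric meaningful for hybrid trajectories where jumps can transiently re-excite the error.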
## V. Conclusion and Future Work
This paper presented a comprehensive framework for distributed multi-agent hybrid systems operating under game-theoretic principles, grounded in hybrid dynamical systems theory. The framework addresses scenarios in which multiple autonomous agents must coordinate their actions through both continuous dynamics and discrete mode transitions while operating under distributed information constraints and strategic interactions.
By encoding the coordination objectives of agents in a distributed Nash equilibrium framework, sufficient conditions were provided to characterize optimal strategies that achieve consensus while maintaining individual agent autonomy. The main theoretical contributions establish rigorous mathematical foundations for multi-agent hybrid coordination through three key innovations: a hierarchical flow set design methodology that decomposes complex multi-dimensional constraints into manageable subproblems, game-theoretic jump triggering mechanisms that coordinate discrete transitions across the agent network, and the Hybrid Adaptive Nash Equilibrium Solver (HANES) algorithm that achieves linear computational complexity O(N) compared to traditional cubic complexity O(N³) approaches.
The theoretical framework demonstrates that the proposed distributed Nash equilibrium strategies guarantee exponential convergence to consensus, as established in Theorem 2, while maintaining system stability through discrete jump phases via the jump triggering mechanisms introduced in Section III. The hierarchical flow set construction methodology successfully addresses the exponential scaling problem inherent in multi-agent hybrid systems by systematically decomposing individual agent safety constraints, pairwise interaction requirements, and global coordination objectives. Furthermore, the game-theoretic jump triggering approach enables rapid emergency response capabilities for communication interruptions, agent failures, and environmental disruptions that cannot be addressed through continuous control methods alone.
Connections between optimality and stability for the studied class of multi-agent hybrid games were established through the value function analysis in Section III, demonstrating that the Nash equilibrium strategies serve dual roles as optimal control policies and Lyapunov-like functions for stability certification. The experimental validation through pursuit-evasion and leader-follower consensus scenarios confirms the practical applicability of the theoretical results, showing successful distributed coordination with bounded tracking errors and robust performance across diverse operational conditions.
The comprehensive simulation studies demonstrate significant improvements in convergence time, computational efficiency, and scalability compared to existing centralized approaches. The pursuit-evasion game simulation validated the framework's game-theoretic aspects and Nash equilibrium convergence properties in competitive multi-agent environments, while the leader-follower consensus experiment confirmed the cooperative coordination capabilities under hybrid dynamics with time-varying references and discrete mode transitions.
Future work includes extending the framework to accommodate heterogeneous agent dynamics where individual agents may have different state dimensions and control authorities, as the current formulation assumes homogeneous scalar dynamics. Investigating stochastic extensions of the hybrid game formulation to account for communication uncertainties, measurement noise, and environmental disturbances would enhance the framework's robustness for real-world applications. The development of adaptive algorithms that can learn optimal jump triggering thresholds and flow set parameters online, rather than requiring a priori specification, represents another promising research direction.
Additional future research directions include studying conditions to guarantee global optimality rather than local Nash equilibria, particularly for large-scale networks where multiple equilibria may exist. Exploring the integration of machine learning techniques with the HANES algorithm to handle unknown agent dynamics and environmental conditions would broaden the framework's applicability to scenarios with limited model knowledge. Furthermore, investigating the computational complexity and convergence guarantees for time-varying communication topologies and dynamic agent populations would address practical deployment scenarios in mobile autonomous systems such as UAV swarms and satellite formations.
## References
- [1] Leudo, S.J., et al. "On the optimal cost and asymptotic stability in two-player zero-sum set-valued hybrid games." American Control Conference , 2024.
- [2] Goebel, R., et al. Hybrid Dynamical Systems: Modeling, Stability, and Robustness . Princeton University Press, 2012.
- [3] De La Fuente, Neil, and Guim Casadellà. "Game Theory and Multi-Agent Reinforcement Learning: From Nash Equilibria to Evolutionary Dynamics." arXiv preprint arXiv:2412.20523 (2024).
- [4] Kim, Hansung, et al. "Learning Two-agent Motion Planning Strategies from Generalized Nash Equilibrium for Model Predictive Control." arXiv preprint arXiv:2411.13983 (2024).
- [5] "Survey of containment control in multi-agent systems: concepts, communication, dynamics, and controller design." International Journal of Systems Science, 2023.
- [6] Sanfelice, R.G. "Motion Planning for Hybrid Dynamical Systems." International Journal of Robotics Research, 2025.
- [7] Sanfelice, R.G. "Distributed State Estimation of Jointly Observable Linear Systems under Directed Switching Networks." IEEE Transactions on Automatic Control, 2024.
- [8] Grammatico, S., et al. "Learning generalized Nash equilibria in multi-agent dynamical systems via extremum seeking control." Automatica, 2021.
- [9] Li, H., et al. "Centralized and Decentralized Event-Triggered Nash Equilibrium-Seeking Strategies for Heterogeneous Multi-Agent Systems." Mathematics, 2025.
- [10] Heemels, W.P.M.H., et al. "An introduction to event-triggered and self-triggered control." IEEE Conference on Decision and Control, 2012.
- [11] Xing, L., et al. "Dynamic Event-triggered Control and Estimation: A Survey." Machine Intelligence Research, 2021.
- [12] Sanfelice, R.G. "Robust Synergistic Hybrid Feedback." IEEE Transactions on Automatic Control, 2024.
- [13] Sanfelice, R.G. "Pointwise Exponential Stability of State Consensus with Intermittent Communication." IEEE Transactions on Automatic Control, 2024.
- [14] Sanfelice, R.G. "Forward Invariance of Sets for Hybrid Dynamical Systems." IEEE Transactions on Automatic Control, 2021.
- [15] Tabuada, P. Verification and Control of Hybrid Systems: A Symbolic Approach. Springer, 2009.
- [16] Lygeros, J., et al. "Hybrid Dynamical Systems: An Introduction to Control and Verification." Foundations and Trends in Systems and Control, 2008.
- [17] Chen, F., et al. "Consensus analysis of hybrid multiagent systems: A game-theoretic approach." IEEE Transactions on Cybernetics, 2019.
- [18] Nowzari, C., et al. "Analysis and control of epidemics: A survey of spreading processes on complex networks." IEEE Control Systems Magazine, 2016.
- [19] Daskalakis, C., et al. "The complexity of computing a Nash equilibrium." Communications of the ACM, 2009.
- [20] Chen, X., et al. "Settling the complexity of computing two-player Nash equilibria." Journal of the ACM, 2009.
- [21] Blondel, V.D. and Tsitsiklis, J.N. "A survey of computational complexity results in systems and control." Automatica, 2000.
- [22] Bemporad, A. and Morari, M. "Verification of hybrid systems via mathematical programming." Hybrid Systems: Computation and Control, 1999.
- [23] Daskalakis, C., et al. "The complexity of computing a Nash equilibrium." SIAM Journal on Computing, 2009.
- [24] Chen, X., et al. "Settling the complexity of computing two-player Nash equilibria." ACM Symposium on Theory of Computing, 2006.
- [25] Grammatico, S. "Distributed Nash equilibrium seeking in aggregative games." IEEE Conference on Decision and Control, 2020.
- [26] Tabuada, P. "Event-triggered real-time scheduling of stabilizing control tasks." IEEE Transactions on Automatic Control, 2007.
- [27] Girard, A. "Dynamic triggering mechanisms for event-triggered control." IEEE Transactions on Automatic Control, 2015.
- [28] Dolk, V.S., et al. "Event-triggered control systems under denial-of-service attacks." IEEE Transactions on Control of Network Systems, 2017.
- [29] Zhu, S., et al. "Distributed Nash Equilibrium Seeking Under Event-Triggered Mechanism." IEEE Transactions on Neural Networks and Learning Systems, 2021.
- [30] Ye, M., et al. "Nash Equilibrium Seeking for Graphic Games With Dynamic Event-Triggered Mechanism." IEEE Transactions on Cybernetics, 2021.
- [31] Bianchi, M., et al. "Learning generalized Nash equilibria in monotone games: A hybrid adaptive extremum seeking control approach." IEEE Conference on Decision and Control, 2021.
- [32] Krilašević, S. and Grammatico, S. "Learning generalized Nash equilibria in multi-agent dynamical systems via extremum seeking control." Automatica, 2021.
- [33] Yi, P., et al. "A Survey of Distributed Algorithms for Aggregative Games." IEEE/CAA Journal of Automatica Sinica, 2024.
- [34] Sanfelice, R.G. "Tracking Control for Hybrid Systems With State-Triggered Jumps." IEEE Transactions on Automatic Control, 2013.
- [35] Heemels, W.P.M.H. "Hybrid and Switched Systems: Modeling, Analysis, and Control." Eindhoven University of Technology, 2023.