arXiv:2506.11304v1
## A Hybrid Adaptive Nash Equilibrium Solver for Distributed Multi-Agent Systems with Game-Theoretic Jump Triggering
Qiuyu Miao 1* and Zhigang Wu 2
School of Aeronautics and Astronautics, Sun Yat-sen University, Shenzhen, China
Abstract: This paper presents a hybrid adaptive Nash equilibrium solver for distributed multi-agent systems incorporating game-theoretic jump triggering mechanisms. The approach addresses fundamental scalability and computational challenges in multi-agent hybrid systems by integrating distributed game-theoretic optimization with systematic hybrid system design. A novel game-theoretic jump triggering mechanism coordinates discrete mode transitions across multiple agents while maintaining distributed autonomy. The Hybrid Adaptive Nash Equilibrium Solver (HANES) algorithm integrates these methodologies. Sufficient conditions establish exponential convergence to consensus under distributed information constraints. The framework provides rigorous stability guarantees through coupled Hamilton-Jacobi-Bellman equations while enabling rapid emergency response capabilities through coordinated jump dynamics. Simulation studies in pursuit-evasion and leader-follower consensus scenarios demonstrate significant improvements in convergence time, computational efficiency, and scalability compared to existing centralized and distributed approaches.
Keywords: Hybrid dynamical systems, Multi-agent coordination, Nash equilibrium, Distributed control
## I. Introduction
Modern distributed autonomous systems face unprecedented challenges in coordinating multiple agents that must simultaneously handle continuous physical dynamics and discrete decision-making processes. This dual nature appears across critical engineering domains where system failures can have severe consequences and traditional control approaches prove inadequate.
Unmanned aerial vehicle swarms must navigate complex environments through continuous flight control while executing discrete task assignments and obstacle avoidance decisions. Power grid operations demand real-time continuous power balancing coupled with discrete switching operations for load management and fault protection. These applications share a fundamental characteristic: the inability of purely continuous or discrete control methods to capture the essential system behaviors, necessitating hybrid system approaches that can rigorously handle both types of dynamics within a unified mathematical framework.
Hybrid dynamical systems provide the mathematical foundation for such complex behaviors. The design of flow sets $C$ and jump sets $D$ forms the core of hybrid system architecture [2,12]. Recent advances in hybrid motion planning [13] and distributed state estimation under switching networks [14] have demonstrated the practical importance of systematic flow and jump set construction. However, in multi-agent contexts, the flow set $C$ must accommodate the collective state space where all agents maintain coordination through local information exchange with neighbors, while the jump set $D$ coordinates discrete interventions across the network when continuous operation becomes insufficient. This design challenge differs fundamentally from single-agent cases due to the requirement for distributed coordination without global state knowledge.
While hybrid system theory has matured significantly for single-agent applications [15,16], the extension to multi-agent scenarios reveals profound theoretical and computational challenges in coordinating discrete mode switches across multiple interacting agents. The complexity of multi-agent hybrid systems emerges from the intricate coupling between individual agent dynamics and collective coordination requirements, where agents typically know only their neighbors' states yet must achieve robust synchronization and coordination through hybrid mechanisms [17,18]. However, the interaction between autonomous agent dynamics and the stochastic and intermittent nature of network traffic, combined with delays and asynchrony in information flow, further complicates the goal of ensuring system autonomy.
Existing hybrid system approaches primarily focus on single-agent or simple two-agent interactions, as exemplified by foundational works such as Leudo et al. [1] on two-player zero-sum hybrid games. The complexity of designing flow sets for multi-agent systems scales exponentially with the number of agents due to coupled constraints among all possible agent pairs [19,20]. Recent surveys on multi-agent consensus control acknowledge this fundamental scalability barrier, noting that current distributed control approaches resort to overly conservative designs that sacrifice performance for computational tractability, while alternative ad-hoc methods lack rigorous theoretical foundations for stability and convergence guarantees [5,21,22]. Furthermore, the computational complexity of Nash equilibrium computation, which is PPAD-complete even for continuous games [23,24], compounds these challenges when integrating game-theoretic approaches with hybrid dynamics.
Jump triggering mechanisms in multi-agent hybrid systems present additional challenges that existing literature has not systematically addressed. Recent advances have demonstrated the potential of hybrid systems frameworks for distributed multi-agent optimization, where agents perform continuous computations (such as gradient descent) while exchanging information at discrete communication instants through "update-and-hold" strategies [7,25]. Event-triggered control approaches in multi-agent systems have evolved significantly since Tabuada's foundational work [26], with recent developments in dynamic event-triggered mechanisms [27,28] and distributed Nash equilibrium seeking under event-triggered protocols [29,30]. However, these approaches focus primarily on continuous dynamics and lack the theoretical framework necessary for hybrid system applications where discrete mode switches fundamentally alter agent interactions and strategic landscapes.
Game-theoretic approaches to multi-agent control have shown promise in continuous domains [3,4], but their integration with hybrid system frameworks remains largely unexplored despite recent advances in learning generalized Nash equilibria through hybrid adaptive extremum seeking control [31,32]. Traditional Nash equilibrium computation assumes continuous action spaces and static interaction patterns, which are inadequate for hybrid systems where discrete mode switches create dynamic strategic environments. Current distributed optimization methods for multi-agent systems [12,13,33] focus primarily on continuous domains and lack computational frameworks for hybrid Nash equilibrium problems that couple continuous strategy optimization within discrete modes with discrete mode selection strategies. The fundamental PPAD-completeness of computing Nash equilibria [23,24] creates additional computational barriers that existing distributed algorithms have not adequately addressed in hybrid settings.
To address these fundamental limitations, this paper presents a comprehensive framework for multi-agent hybrid system design that integrates systematic flow set construction with distributed game-theoretic optimization. The main contributions are:
(1) In contrast to existing approaches that treat hybrid dynamics and multi-agent coordination separately [19,20] and lack systematic integration of game theory with hybrid systems [34,35], we develop a unified distributed framework that formulates multi-agent coordination as a strategic game within the hybrid dynamical systems context. Unlike current methods that rely on purely continuous game formulations or on ad-hoc hybrid system designs that sacrifice theoretical rigor for computational tractability [5,21], this framework systematically integrates the hybrid inclusion formulation with distributed Nash equilibrium computation, addressing the fundamental challenge of coordinating discrete mode switches across multiple agents while maintaining individual agent autonomy and preserving rigorous stability guarantees.
(2) While existing event-triggered approaches [26,27,28] focus primarily on continuous dynamics and recent Nash equilibrium seeking methods [29,30] lack hybrid system integration, we introduce an intelligent jump triggering strategy, based on distributed game-theoretic analysis, that coordinates discrete mode transitions across multiple agents. In contrast to current event-triggered mechanisms that cannot handle the dynamic strategic environments created by discrete mode switches, this mechanism leverages strategic interaction modeling to optimize jump timing for system-wide objectives while maintaining individual agent autonomy and computational efficiency through three-layer triggering criteria. Unlike purely continuous control methods, it enables rapid emergency mode switching upon detecting communication interruptions, agent failures, or environmental disruptions, providing fast response capabilities that existing approaches cannot achieve.
(3) Addressing the computational barriers posed by PPAD-complete Nash equilibrium computation [23,24] and the exponential complexity scaling of existing distributed approaches [19,20], we propose a novel distributed algorithm that integrates the hierarchical flow set design and the game-theoretic jump triggering mechanism to compute Nash equilibria in hybrid multi-agent systems. While traditional centralized approaches suffer from high computational complexity and existing distributed methods lack hybrid system capability [31,32,33], the algorithm employs dual-layer iterative optimization that separates continuous strategy optimization within modes from discrete mode selection, achieving significant improvements in computational efficiency over centralized approaches.
The theoretical framework provides rigorous mathematical foundations while offering practical computational tools for real-world implementation.
Notation: Throughout this paper, $x_i \in \mathbb{R}^n$ denotes the state of agent $i$, $u_i \in \mathbb{R}^m$ the control input, and $\mathcal{V} = \{1, 2, \dots, N\}$ the agent index set. In the hybrid system, $C \subseteq \mathbb{R}^n \times \mathbb{R}^m$ and $D \subseteq \mathbb{R}^n \times \mathbb{R}^m$ represent the flow and jump sets respectively, while $F : \mathbb{R}^n \times \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ and $G : \mathbb{R}^n \times \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ denote the corresponding flow and jump maps. The consensus error for agent $i$ is denoted $\delta_i$, while $e_i$ represents the tracking error. Cost function parameters include the state weight matrices $Q_i = Q_i^{\top} \succeq 0$, control weight matrices $R_i = R_i^{\top} \succ 0$, and jump penalty weights $\rho_i > 0$. The Kronecker product is denoted by $\otimes$, the gradient operator by $\nabla$, and the signum function by $\mathrm{sign}(\cdot)$. Value functions are represented as $V_i(\delta_i) : \mathbb{R}^n \to \mathbb{R}$, and the post-jump state is indicated by the superscript $+$, as in $x^{+}$.
## II. Problem Statement and Preliminaries
This section presents the mathematical foundations for multi-agent hybrid systems operating under distributed game-theoretic control. First, the basic hybrid dynamical system formulation is established; then the multi-agent framework with communication constraints is developed; finally, the distributed optimization problem that forms the core of this approach is introduced.
Fig. 1. Overall Framework Architecture of the Hybrid Adaptive Nash Equilibrium Solver
(Fig. 1 depicts a three-layer architecture: a hybrid dynamical system layer with flow set $C$ and jump set $D$, a hierarchical flow set design layer, and a game-theoretic jump triggering layer with three-layer criteria, all feeding a distributed Nash equilibrium computation layer, the HANES algorithm with dual-layer optimization, which coordinates agent nodes exchanging states, controls, and costs over the communication graph.)
## Multi-Agent Hybrid Systems
Consider hybrid dynamical systems that exhibit both continuous and discrete behavior. A hybrid system β is described by the hybrid inclusion:
$$
\mathcal{H}:\quad
\begin{cases}
\dot{x} \in F(x, u_C), & (x, u_C) \in C,\\
x^{+} \in G(x, u_D), & (x, u_D) \in D,
\end{cases}
$$
where $x \in \mathbb{R}^n$ represents the system state, $u_C \in \mathbb{R}^{m_C}$ denotes the continuous control input, and $u_D \in \mathbb{R}^{m_D}$ represents the discrete control input. The flow set $C \subseteq \mathbb{R}^n \times \mathbb{R}^{m_C}$ defines the state-input combinations where continuous evolution is permitted, governed by the flow map $F : \mathbb{R}^n \times \mathbb{R}^{m_C} \rightrightarrows \mathbb{R}^n$. The jump set $D \subseteq \mathbb{R}^n \times \mathbb{R}^{m_D}$ characterizes conditions triggering discrete state transitions, with the jump map $G : \mathbb{R}^n \times \mathbb{R}^{m_D} \rightrightarrows \mathbb{R}^n$ determining the post-transition state values.
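The hybrid inclusion above can be exercised numerically. The sketch below is a minimal illustration, assuming singleton-valued flow and jump maps and forward-Euler integration; the function names and the toy flow/jump sets are ours, not the paper's. It alternates continuous flow on $C$ with discrete resets on $D$:

```python
import numpy as np

def simulate_hybrid(x0, flow_map, jump_map, in_flow_set, in_jump_set,
                    dt=0.01, t_final=2.0):
    """Forward-Euler flow on C, discrete reset on D (singleton-valued maps)."""
    x, t = x0, 0.0
    traj = [x0]
    while t < t_final:
        if in_jump_set(x):           # (x, u) in D: apply the jump map
            x = jump_map(x)
        elif in_flow_set(x):         # (x, u) in C: integrate the flow map
            x = x + dt * flow_map(x)
        else:                        # outside both C and D: solution stops
            break
        t += dt
        traj.append(x)
    return np.array(traj)

# Toy instance: decay while x >= 0.5; reset to 2.0 once x drops below 0.5.
traj = simulate_hybrid(
    x0=2.0,
    flow_map=lambda x: -x,
    jump_map=lambda x: 2.0,
    in_flow_set=lambda x: x >= 0.5,
    in_jump_set=lambda x: x < 0.5,
)
```

The resulting trajectory decays exponentially, jumps back to 2.0 when it leaves the flow set, and continues flowing, which is exactly the flow/jump alternation the inclusion describes.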
For multi-agent systems with $N$ agents, the hybrid framework is extended to accommodate distributed control architectures. Each agent $i \in \mathcal{V} = \{1, 2, \dots, N\}$ possesses individual dynamics while being coupled through communication and coordination requirements. The individual agent dynamics are described by:
$$
\dot{x}_i = A x_i + B u_i, \qquad i \in \mathcal{V},
$$
where $x_i \in \mathbb{R}^n$ is the state of agent $i$, $u_i \in \mathbb{R}^m$ is the control input, $A \in \mathbb{R}^{n \times n}$ is the system matrix, and $B \in \mathbb{R}^{n \times m}$ is the input matrix. The collective system state is defined as $x = [x_1^{\top}, x_2^{\top}, \dots, x_N^{\top}]^{\top} \in \mathbb{R}^{Nn}$, and the global control input as $u = [u_1^{\top}, u_2^{\top}, \dots, u_N^{\top}]^{\top} \in \mathbb{R}^{Nm}$.
The set-valued nature of the mappings $F$ and $G$ accommodates system uncertainties, modeling approximations, and non-deterministic responses arising from environmental disturbances, measurement noise, and actuator imperfections, which are particularly relevant in multi-agent scenarios where communication delays and packet losses introduce additional uncertainties.
## Graph-Theoretic Communication Framework and Network Dynamics
The multi-agent system operates under limited sensing capabilities, where each agent can only access information from its local neighborhood. This communication structure is represented by a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with vertex set $\mathcal{V} = \{1, 2, \dots, N\}$ and edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. The adjacency matrix $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ captures the communication topology, where $a_{ij} = 1$ if agent $j$ can transmit information to agent $i$, and $a_{ij} = 0$ otherwise.
The communication constraints fundamentally alter the hybrid system behavior compared to centralized approaches. Define the neighbor set of agent $i$ as $\mathcal{N}_i = \{j \in \mathcal{V} : a_{ij} = 1\}$ and the in-degree as $d_i = |\mathcal{N}_i| = \sum_{j=1}^{N} a_{ij}$. The degree matrix $D = \mathrm{diag}(d_1, d_2, \dots, d_N)$ and the graph Laplacian $L = D - \mathcal{A}$ characterize the algebraic connectivity properties essential for consensus analysis.
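The degree matrix and Laplacian can be assembled directly from the adjacency matrix; a small sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def graph_laplacian(A):
    """Build L = D - A from a directed adjacency matrix A = [a_ij]."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)          # in-degree d_i = sum_j a_ij
    return np.diag(d) - A      # L = D - A

# Directed ring over 3 agents: agent i receives from agent i+1 (mod 3).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
L = graph_laplacian(A)
# Row sums of a Laplacian are zero, so the ones vector is in its null space.
assert np.allclose(L @ np.ones(3), 0.0)
```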
For leader-follower architectures, partition the agent set into leaders $\mathcal{L}$ and followers $\mathcal{F}$ such that $\mathcal{L} \cup \mathcal{F} = \mathcal{V}$ and $\mathcal{L} \cap \mathcal{F} = \emptyset$. The interaction between leaders and followers is captured by the coupling matrix $B_{LF} = [b_{il}]$, where $b_{il} = 1$ if follower $i$ receives information from leader $l$. This introduces additional complexity in the hybrid flow and jump set designs, as discrete transitions in leader agents can trigger cascading effects throughout the follower network.
The communication topology directly influences the convergence properties of the multi-agent hybrid system. Strong connectivity of the communication graph ensures that information from any agent can eventually reach all other agents, which is crucial for achieving global consensus. However, in hybrid systems, jump events can temporarily disrupt information flow, requiring careful consideration of the interplay between graph topology and discrete dynamics.
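Strong connectivity can be verified from the adjacency matrix alone; a standard check (the helper name is ours) tests whether $(I + \mathcal{A})^{N-1}$ has all positive entries, i.e., every node reaches every other in at most $N-1$ hops:

```python
import numpy as np

def strongly_connected(A):
    """True iff the directed graph with adjacency A is strongly connected."""
    N = A.shape[0]
    # (I + A)^(N-1)[i, j] > 0 iff there is a directed path from j to i
    reach = np.linalg.matrix_power(np.eye(N) + A, N - 1)
    return bool((reach > 0).all())

ring = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
assert strongly_connected(ring)

# Break the cycle: the last agent receives nothing, so no path reaches it back.
broken = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
assert not strongly_connected(broken)
```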
## Local Errors and Consensus Dynamics
For distributed coordination, define the local consensus error for agent $i$ as the weighted deviation from its neighbors:
$$
\delta_i = \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_i - x_j).
$$
This error captures the local disagreement between agent $i$ and its communication neighbors, forming the basis for distributed consensus protocols. The global consensus error vector is compactly expressed as $\delta = (L \otimes I_n)\,x$.
Taking the time derivative of equation (3) and substituting the agent dynamics (2), the error dynamics are:

$$
\dot{\delta}_i = A\,\delta_i + B \sum_{j \in \mathcal{N}_i} a_{ij}\,(u_i - u_j).
$$
Each agent must coordinate its control action $u_i$ with those of its neighbors to drive $\delta_i \to 0$.
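The equivalence between the per-agent error sums and the compact form $\delta = (L \otimes I_n)x$ can be checked numerically; a small sketch with toy data (names are illustrative, not the paper's code):

```python
import numpy as np

n = 2                                                # per-agent state dimension
A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])      # directed adjacency
L = np.diag(A.sum(axis=1)) - A                       # graph Laplacian L = D - A
x = np.array([[1.0, 0.0], [0.5, 2.0], [-1.0, 1.0]])  # rows are agent states

# Per-agent definition: delta_i = sum_j a_ij (x_i - x_j)
delta_local = np.array([sum(A[i, j] * (x[i] - x[j]) for j in range(3))
                        for i in range(3)])

# Compact form: delta = (L kron I_n) x, acting on the stacked state vector
delta_compact = (np.kron(L, np.eye(n)) @ x.reshape(-1)).reshape(3, n)
assert np.allclose(delta_local, delta_compact)
```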
For leader-follower configurations, distinguish between different error types. The leader error for agent $l \in \mathcal{L}$ tracking a reference trajectory $x_{\mathrm{ref}}(t)$ is:

$$
e_l = x_l - x_{\mathrm{ref}}(t).
$$
The follower error for agent $i \in \mathcal{F}$ combines consensus with neighbors and tracking of leaders:

$$
e_i = \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_i - x_j) + \sum_{l \in \mathcal{L}} b_{il}\,(x_i - x_l),
$$

where $b_{il}$ represents the connection weight between follower $i$ and leader $l$.
The error dynamics under hybrid conditions incorporate both continuous evolution and discrete jumps. During continuous phases, when $(x, u) \in C$:
<!-- formula-not-decoded -->
During discrete transitions, when $(x, u) \in D$, the error evolution becomes:
<!-- formula-not-decoded -->
This formulation captures how individual agent jumps affect the collective error dynamics, creating complex dependencies that require careful analysis for stability and convergence guarantees.
## III. Distributed Hybrid Game Formulation and Nash Equilibrium
Building upon the system model and hybrid dynamical framework established in Section II, this section develops a systematic framework for distributed multi-agent coordination through game-theoretic Nash equilibrium computation. The approach transforms the consensus problem into a strategic interaction where each agent optimizes its individual performance while accounting for the decisions of neighboring agents.
## Cost Function Design and Strategic Formulation
Each agent $i$ seeks to minimize a performance index that balances consensus achievement with control effort:
$$
J_i = \int_{0}^{\infty} \big( \delta_i^{\top} Q_i \delta_i + u_i^{\top} R_i u_i \big)\, dt \;+\; \sum_{k=0}^{\infty} \rho_i \,\big\| \delta_i(t_k^{+}) \big\|^{2},
$$
where $Q_i = Q_i^{\top} \succeq 0$ is the state cost weight matrix, $R_i = R_i^{\top} \succ 0$ is the control cost weight matrix, $\rho_i > 0$ is the jump penalty weight, and $\{t_k\}_{k=0}^{\infty}$ represents the sequence of jump times. The inclusion of the jump costs $\rho_i \|\delta_i(t_k^{+})\|^2$ penalizes large deviations from consensus immediately after discrete transitions, encouraging coordinated jumping strategies.
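A sampled-data version of this performance index can be evaluated as a running sum plus jump penalties; a sketch under the assumption of forward-Euler sampling (the function and variable names are illustrative, not the paper's):

```python
import numpy as np

def agent_cost(delta_traj, u_traj, Q, R, rho, post_jump_errors, dt=0.01):
    """Discretized J_i: integral of running cost plus jump penalties."""
    running = sum((d @ Q @ d + u @ R @ u) * dt
                  for d, u in zip(delta_traj, u_traj))
    # rho * ||delta_i(t_k^+)||^2 summed over recorded post-jump errors
    jump_penalty = rho * sum(float(dp @ dp) for dp in post_jump_errors)
    return running + jump_penalty

Q = np.eye(2); R = 0.5 * np.eye(2); rho = 0.4
deltas = [np.array([1.0, 0.0]), np.array([0.5, 0.0])]
us     = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
jumps  = [np.array([0.3, 0.4])]   # consensus error right after one jump
J = agent_cost(deltas, us, Q, R, rho, jumps)
```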
The distributed control problem is formulated as a multi-player game in which each agent $i$ solves:
$$
u_i^{*} = \arg\min_{u_i \in \mathcal{U}_i} J_i(u_i, u_{-i}).
$$
This creates a strategic interaction where each agent's optimal policy depends on the policies chosen by its neighbors, leading naturally to Nash equilibrium concepts. A Nash equilibrium is a strategy profile $(u_1^*, u_2^*, \dots, u_N^*)$ such that no agent can unilaterally improve its performance by deviating from its equilibrium strategy.
For agent $i$, we define the value function $V_i(\delta_i) : \mathbb{R}^n \to \mathbb{R}$ as:
$$
V_i(\delta_i(t)) = \min_{u_i \in \mathcal{U}_i} \left\{ \int_{t}^{\infty} \big( \delta_i^{\top} Q_i \delta_i + u_i^{\top} R_i u_i \big)\, d\tau + \sum_{t_k \ge t} \rho_i \,\big\| \delta_i(t_k^{+}) \big\|^{2} \right\},
$$
where $\mathcal{U}_i$ denotes the admissible control set for agent $i$. The value function satisfies the hybrid Hamilton-Jacobi-Bellman equation. During flow phases:
<!-- formula-not-decoded -->
where $f_i(\delta_i, u_i, u_{-i})$ represents the flow dynamics from equation (7), and $u_{-i}$ denotes the control inputs of agent $i$'s neighbors.
During jump phases:
<!-- formula-not-decoded -->
The optimal continuous control law is obtained by minimizing the Hamiltonian in equation (12). Taking the derivative with respect to $u_i$ and setting it to zero:
<!-- formula-not-decoded -->
Therefore, the optimal control law is:
$$
u_i^{*} = -\tfrac{1}{2}\Big( d_i + \sum_{l \in \mathcal{L}} b_{il} \Big)\, R_i^{-1} B^{\top} \nabla V_i(\delta_i).
$$
This control law forms the foundation for the distributed game-theoretic approach, where each agent implements its optimal strategy while accounting for the strategic behavior of its neighbors. The coupling through the communication graph ensures that the resulting Nash equilibrium achieves distributed coordination while respecting the hybrid system constraints.
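With the quadratic value function of Assumption 1, $\nabla V_i = 2 P_i \delta_i$, and the control law can be evaluated directly; a toy sketch (the coupling-weight factor from Theorem 1 is omitted here for simplicity, and all matrices are illustrative):

```python
import numpy as np

def optimal_control(delta, P, B, R):
    """u* = -(1/2) R^{-1} B^T grad V, with grad V = 2 P delta (quadratic V)."""
    grad_V = 2.0 * P @ delta
    return -0.5 * np.linalg.solve(R, B.T @ grad_V)

P = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.eye(2)
R = np.eye(2)
u = optimal_control(np.array([1.0, -1.0]), P, B, R)
# With B = R = I this reduces to u = -P @ delta = [-2.0, 1.0]
assert np.allclose(u, [-2.0, 1.0])
```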
## Nash Equilibrium Characterization for Multi-Agent Hybrid Games
Definition 1 (Nash Equilibrium for Multi-Agent Hybrid Systems): A strategy profile $(u_1^*, u_2^*, \dots, u_N^*)$ constitutes a Nash equilibrium if for each agent $i \in \mathcal{V}$ and for all alternative strategies $u_i \in \mathcal{U}_i$:

$$
J_i(u_i^{*}, u_{-i}^{*}) \le J_i(u_i, u_{-i}^{*}),
$$

where $u_{-i}^{*} = (u_1^*, \dots, u_{i-1}^*, u_{i+1}^*, \dots, u_N^*)$ represents the equilibrium strategies of all agents except agent $i$.
The Nash equilibrium condition requires that each agent's strategy minimizes its cost functional given the strategies of all other agents. In the hybrid setting, this condition must hold for both continuous and discrete phases of the system evolution.
Lemma 1 (Necessary Conditions for Nash Equilibrium): If $(u_1^*, u_2^*, \dots, u_N^*)$ is a Nash equilibrium, then for each agent $i$ the following conditions must be satisfied.

During flow phases $(\delta, u) \in C$:

<!-- formula-not-decoded -->

During jump phases $(\delta, u) \in D$:

<!-- formula-not-decoded -->
where the Hamiltonian function for agent $i$ is defined as:

$$
H_i(\delta_i, u_i, u_{-i}, \lambda_i) = \delta_i^{\top} Q_i \delta_i + u_i^{\top} R_i u_i + \lambda_i^{\top} f_i(\delta_i, u_i, u_{-i}),
$$

with $\lambda_i = \nabla V_i(\delta_i)$ being the costate variable.
Proof of Lemma 1: The proof follows from applying Pontryagin's maximum principle to the optimal control problem (10). For the continuous phase, the optimality condition $\partial H_i / \partial u_i = 0$ yields:
<!-- formula-not-decoded -->
<!-- formula-not-decoded -->
For the discrete phase, the jump optimality condition follows from minimizing the post-jump cost, leading to equation (17).
Theorem 1 (Existence of Nash Equilibrium): Consider the multi-agent hybrid system (1)-(2) with performance indices (9) under Assumption 1. If the following conditions hold:
- (i) the communication graph $\mathcal{G}$ contains a spanning tree,
- (ii) the pair $(A, B)$ is stabilizable for each agent,
- (iii) the pair $(A, Q_i^{1/2})$ is observable for each agent,
- (iv) the coupling weights satisfy $\sum_{j \in \mathcal{N}_i} a_{ij} + \sum_{l \in \mathcal{L}} b_{il} < \alpha$ for some $\alpha < 2\sqrt{\lambda_{\min}(R_i)/\lambda_{\max}(B^{\top} Q_i B)}$,

then there exists a unique Nash equilibrium in quadratic strategies.
Proof of Theorem 1: The proof proceeds through several steps.

Step 1: Establish contractivity of the mapping $\mathcal{T} : \mathcal{P} \to \mathcal{P}$, where $\mathcal{P} = \{P_1, P_2, \dots, P_N\}$ and $\mathcal{T}(P_i)$ solves equation (25).

Step 2: Define the operator $\mathcal{T}_i : \mathbb{S}^{n}_{++} \to \mathbb{S}^{n}_{++}$ for each agent $i$ as:

<!-- formula-not-decoded -->

where $\Delta_{ij}(P_j)$ represents the coupling terms and $\mathbb{S}^{n}_{++}$ denotes the set of positive definite $n \times n$ matrices.

Step 3: Show that under condition (iv), the operator $\mathcal{T}$ is a contraction mapping. The coupling term can be bounded as:

<!-- formula-not-decoded -->

where $\beta < 1$ is determined by the communication weights and system parameters.

Step 4: Apply the Banach fixed-point theorem to conclude existence and uniqueness of the fixed point $P^* = (P_1^*, P_2^*, \dots, P_N^*)$ satisfying $\mathcal{T}(P^*) = P^*$.

Step 5: Verify that the corresponding control strategies $u_i^*(\delta_i) = -\tfrac{1}{2}\big(d_i + \sum_{l \in \mathcal{L}} b_{il}\big) R_i^{-1} B^{\top} P_i^{*} \delta_i$ constitute a Nash equilibrium by checking condition (15).
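The Banach fixed-point argument suggests a simple value-iteration implementation. The sketch below uses a toy contraction operator, not the paper's $\mathcal{T}$, to illustrate how a coupling factor $\beta < 1$ forces the iterates to a unique fixed point:

```python
import numpy as np

def iterate_P(Q_list, A, beta=0.3, iters=200):
    """Toy contraction: P_i <- Q_i + beta * (neighbor average of P_j)."""
    N, n = len(Q_list), Q_list[0].shape[0]
    P = [np.eye(n) for _ in range(N)]
    for _ in range(iters):
        P = [Q_list[i]
             + beta * sum(A[i, j] * P[j] for j in range(N)) / max(A[i].sum(), 1)
             for i in range(N)]
    return P

Q_list = [np.eye(2), 2 * np.eye(2), 3 * np.eye(2)]
A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)  # directed ring
P = iterate_P(Q_list, A)

# Fixed-point check: applying the map once more barely changes P.
P_next = [Q_list[i]
          + 0.3 * sum(A[i, j] * P[j] for j in range(3)) / max(A[i].sum(), 1)
          for i in range(3)]
assert max(np.abs(P[i] - P_next[i]).max() for i in range(3)) < 1e-8
```

Because the update is Lipschitz with constant $\beta = 0.3 < 1$, the iteration converges geometrically regardless of the initialization, mirroring Steps 3 and 4.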
## Hamilton-Jacobi-Bellman System
The Nash equilibrium strategies satisfy a system of coupled Hamilton-Jacobi-Bellman (HJB) equations. For agent $i$, the value function $V_i(\delta_i)$ satisfies:
<!-- formula-not-decoded -->
Substituting the optimal control law (19) into equation (20):
<!-- formula-not-decoded -->
where $\Phi_i(u_{-i}^{*}) = -B \sum_{j \in \mathcal{N}_i} a_{ij}\, u_j^{*} - B \sum_{l \in \mathcal{L}} b_{il}\, u_l^{*}$ represents the coupling term from neighboring agents.
For steady-state analysis, set $\partial V_i / \partial t = 0$, yielding the algebraic HJB equation:
<!-- formula-not-decoded -->
Assumption 1: For each agent $i$, there exists a quadratic value function of the form:

$$
V_i(\delta_i) = \delta_i^{\top} P_i \delta_i,
$$

where $P_i = P_i^{\top} \succ 0$ is a positive definite matrix to be determined.
Under Assumption 1, $\nabla V_i(\delta_i) = 2 P_i \delta_i$. Substituting into equation (22):
<!-- formula-not-decoded -->
For this equation to hold for all $\delta_i$, we require:
<!-- formula-not-decoded -->
where $\mathcal{T}_i$ captures the coupling effects from neighboring agents and will be analyzed in the convergence proof.
Theorem 2 (Exponential Convergence to Nash Equilibrium): Under the conditions of Theorem 1 , the distributed Nash equilibrium strategies achieve exponential convergence of the consensus errors to zero.
Specifically, there exist constants $c > 0$ and $\kappa > 0$ such that:

$$
\| \delta(t) \| \le c\, e^{-\kappa t}\, \| \delta(0) \|,
$$

where $\delta(t) = [\delta_1^{\top}(t), \delta_2^{\top}(t), \dots, \delta_N^{\top}(t)]^{\top}$ is the global error vector.
Proof of Theorem 2: Taking the time derivative of the Lyapunov function along system trajectories during flow phases:

<!-- formula-not-decoded -->
Substituting the closed-loop error dynamics with Nash equilibrium strategies, consider the error dynamics for agent $i$ in the multi-agent system, given by:

<!-- formula-not-decoded -->
Assume the optimal control law, derived from the Hamilton-Jacobi-Bellman equation, is:
<!-- formula-not-decoded -->
where $R_i = R_i^{\top} \succ 0$ is the control cost matrix, and $P_i = P_i^{\top} \succ 0$ is the solution to the Riccati equation. Similarly, for neighbor $j$ and leader $l$:
<!-- formula-not-decoded -->
Substitute $u_i^{*}$ into the first control term:
<!-- formula-not-decoded -->
Substitute $u_j^{*}$ and $u_l^{*}$ into the coupling terms:
<!-- formula-not-decoded -->
<!-- formula-not-decoded -->
<!-- formula-not-decoded -->
Combine all terms:
<!-- formula-not-decoded -->
Rewrite in compact form, where the last two terms are coupling terms:

<!-- formula-not-decoded -->
The coupling terms are:
<!-- formula-not-decoded -->
The coupling terms can be shown to satisfy:
<!-- formula-not-decoded -->
where $\gamma$ depends on the system parameters and communication weights.
Using the matrix inequality and condition (iv) from Theorem 1:
<!-- formula-not-decoded -->
for some $\epsilon > 0$. This establishes exponential stability with decay rate $\kappa = \epsilon / \big(2\,\lambda_{\max}(P^*)\big)$, where $P^* = \mathrm{blockdiag}(P_1^{*}, P_2^{*}, \dots, P_N^{*})$.
Corollary 1 (Hybrid Stability): The Nash equilibrium strategies also guarantee stability through jump phases. If the jump maps satisfy $\|G_i(\delta_i, u_i^{*})\| \le \sigma \|\delta_i\|$ for some $\sigma < 1$, then the hybrid system maintains exponential stability across discrete transitions.
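The interplay asserted by Corollary 1 can be illustrated with a scalar toy model: exponential flow decay interleaved with jumps that contract the error by $\sigma < 1$ (the parameters below are ours, chosen for illustration):

```python
import numpy as np

sigma, kappa, dt = 0.8, 1.0, 0.01   # jump contraction, flow decay rate, step
delta = 1.0
history = [delta]
for step in range(300):
    delta *= np.exp(-kappa * dt)    # flow phase: |delta(t)| = e^{-kappa t}
    if step % 100 == 99:            # a jump event every 100 flow steps
        delta *= sigma              # |G(delta)| <= sigma * |delta|, sigma < 1
    history.append(delta)

# Combined decay after 3 s of flow and 3 jumps: e^{-3} * 0.8^3 ~ 0.0255,
# so the error trajectory remains exponentially decreasing overall.
assert history[-1] < 0.03
```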
## Optimization Framework
Based on the theoretical foundation established in Sections II and III, this section presents the complete Hybrid Adaptive Nash Equilibrium Solver (HANES), an algorithmic framework for distributed Nash equilibrium computation in multi-agent hybrid systems. The algorithm integrates hierarchical flow set design with game-theoretic jump triggering mechanisms.
## Initialize:
- Initial states $x_i(0)$, $i \in \mathcal{V}$
- Communication topology adjacency matrix $\mathcal{A} = [a_{ij}]$
- Control parameters $\beta$, $\sigma$; jump thresholds $\mu$, $\underline{\tau}$, $\bar{\tau}$
- Positive definite matrices $Q_i$, $R_i$, $P_i$

For $t = 0$ to $t_{\max}$:
- Step 1: Data Collection and Critic Update
  - Collect neighbor states $x_j(t)$, $j \in \mathcal{N}_i$
  - Construct consensus errors $\delta_i(t) = \sum_{j \in \mathcal{N}_i} a_{ij}(x_i - x_j)$
  - Update leader-follower errors using equation (6)
- Step 2: Hybrid State Verification
  - Check the flow condition: if $|x_i - \mu| \ge$ threshold, then $(x, u) \in C$
  - Check the jump condition: if $|x_i - \mu| <$ threshold, then $(x, u) \in D$
- Step 3: Nash Strategy Update
  - Solve the coupled HJB equations (22) for the value matrices $P_i$
  - Compute the optimal control $u_i^{*} = -K_i \delta_i$ using equation (19)
  - Apply the hybrid jump dynamics if $(x, u) \in D$
- Step 4: Convergence Check
  - If $\max_i |\delta_i(t)| < \epsilon$, return $u_i^{*}$; otherwise go to Step 1

Return: optimal control policies $u_i^{*}$
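A compressed, runnable sketch of the loop above for three scalar agents follows; the HJB solve of Step 3 is replaced here by a fixed consensus gain, and all parameter values and names are illustrative rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
A_adj = np.ones((3, 3)) - np.eye(3)      # undirected complete graph
x = np.array([2.0, 0.3, 0.4])            # initial scalar states
mu, eps_band = 1.0, 0.05                 # jump threshold and band (assumed)
tau_lo, tau_hi = 0.3, 0.5                # jump target interval [tau, tau_bar]
k, dt = 1.0, 0.01                        # consensus gain (assumed), time step
jumps = 0

for _ in range(2000):
    # Step 1: consensus errors delta_i = sum_j a_ij (x_i - x_j)
    delta = np.array([sum(A_adj[i, j] * (x[i] - x[j]) for j in range(3))
                      for i in range(3)])
    # Step 4: convergence check
    if np.abs(delta).max() < 1e-4:
        break
    # Steps 2-3: hybrid state verification, then jump reset or flow update
    for i in range(3):
        if abs(x[i] - mu) < eps_band:        # (x, u) in D: jump
            x[i] = rng.uniform(tau_lo, tau_hi)
            jumps += 1
        else:                                # (x, u) in C: flow, u_i = -k delta_i
            x[i] += dt * (-k * delta[i])

assert jumps >= 1                  # agent 1 crossed the jump band near mu
assert np.abs(delta).max() < 1e-4  # consensus reached despite the jump
```

Agent 1 descends from 2.0, hits the band around $\mu = 1.0$, resets into $[0.3, 0.5]$, and the network then flows to consensus, exercising all four steps of the loop.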
The comprehensive experimental framework provides rigorous validation of the HANES algorithm's theoretical properties while demonstrating its effectiveness in practical multi-agent coordination scenarios. The results establish empirical evidence supporting the algorithm's convergence guarantees, computational efficiency, and robust performance across diverse operational conditions.
## IV. Experiments and Simulation
All simulations involve agents with scalar dynamics ($n = 1$) to clearly illustrate hybrid switching behaviors. The hybrid system nature is characterized by the flow and jump sets. A common jump threshold $\mu = 1.0$ is used, with the jump target interval defined as $[\underline{\tau}, \bar{\tau}] = [0.3, 0.5]$. When a jump occurs, the new state is randomly selected within this interval to model uncertainty in post-jump states. The time step for all simulations is $dt = 0.01$.
The Pursuit-Evasion Game evaluates the framework's game-theoretic aspects and its ability to converge to a Nash equilibrium in a competitive multi-agent environment. The system comprises two pursuers and two evaders. Initial conditions are $x_{\mathrm{pursuers}}(0) = [2.0,\ 1.8]^{T}$ and $x_{\mathrm{evaders}}(0) = [1.5,\ 1.2]^{T}$. The pursuer dynamics are $\dot{x}_p = a x_p + b_p u_p$ with $a = -1$, and the evader dynamics are $\dot{x}_e = a x_e + b_e u_e$ with $a = -2$; all input coefficients $b_p = b_e = 1$. The interactions among agents are characterized by the following matrices: pursuer-to-pursuer interactions $L_p = \begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}$, evader-to-evader interactions $L_e = \begin{bmatrix} 1 & -0.3 \\ -0.3 & 1 \end{bmatrix}$, pursuer-to-evader interactions $A_{pe} = \begin{bmatrix} 1.0 & 0.7 \\ 0.8 & 1 \end{bmatrix}$, and evader-to-pursuer interactions $A_{ep} = \begin{bmatrix} 0.9 & 0.5 \end{bmatrix}$.

Pursuers aim to minimize their performance index (capture), while evaders maximize theirs (survival), forming a zero-sum game structure. Saddle-point strategies are implemented. The cost function weights are: input cost weights for pursuers $R_{\mathrm{pursuers}} = \mathrm{diag}(1.304,\ 1.5)$; input cost weights for evaders $R_{\mathrm{evaders}} = \mathrm{diag}(-4,\ -3.5)$; jump penalty weight $P = 0.4481$. The simulation runs for $t_{\mathrm{final}} = 3$ seconds.
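As a rough numerical companion to this setup, the stated scalar dynamics can be integrated directly. The proportional pursuit/evasion strategies and the gain `k` below are illustrative placeholders, not the saddle-point policies obtained from the coupled HJB equations.

```python
import numpy as np

# Scalar pursuit-evasion sketch with the stated open-loop dynamics
# (a = -1 for pursuers, a = -2 for evaders, b = 1). The proportional
# strategies are placeholders for the paper's saddle-point solution.
dt, T, k = 0.01, 3.0, 0.5
xp = np.array([2.0, 1.8])   # pursuer initial states
xe = np.array([1.5, 1.2])   # evader initial states
for _ in range(int(T / dt)):
    up = -k * (xp - xe.mean())        # pursuers close the gap (capture)
    ue = +k * (xe - xp.mean())        # evaders push the gap open (survival)
    xp = xp + dt * (-1.0 * xp + up)   # pursuer dynamics, a = -1
    xe = xe + dt * (-2.0 * xe + ue)   # evader dynamics,  a = -2
```

Over the 3-second horizon, both populations decay toward the origin, consistent with the near-zero equilibrium states reported for the full HANES controller.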
<details>
<summary>Image 2 Details</summary>

### Visual Description
## Multi-Agent Hybrid Pursuit-Evasion Game: State Evolution
### Overview
Three interconnected graphs depict the dynamics of a pursuit-evasion game with two pursuers and two evaders. The top chart shows state evolution over time, the middle chart illustrates control strategies, and the bottom chart tracks cost function convergence.
### Components/Axes
1. **Top Chart**:
- **X-axis**: Time (s) from 0 to 3
- **Y-axis**: State x (0 to 2)
- **Legend**:
- Pursuer 1 (green)
- Pursuer 2 (blue)
- Evader 1 (red)
- Evader 2 (orange)
- **Annotations**:
- Jump Points (black stars)
- Jump Trigger (ΞΌ=1.0, dashed red line at y=1.0)
2. **Middle Chart**:
- **X-axis**: Time (s) from 0 to 3
- **Y-axis**: Control Input u (-0.4 to 0.2)
- **Legend**:
- u_P1 (green)
- u_P2 (blue)
- u_E1 (red)
- u_E2 (orange)
3. **Bottom Chart**:
- **X-axis**: Time (s) from 0 to 3
- **Y-axis**: Cost Functional J (0 to 2)
- **Legend**:
- J_P1 (green)
- J_P2 (blue)
- J_E1 (red)
- J_E2 (orange)
### Detailed Analysis
**Top Chart**:
- Pursuer 1 (green) starts at state x=2.0, decreases sharply to ~0.5 at 0.5s, then stabilizes near 0.1.
- Pursuer 2 (blue) starts at ~1.5, drops to ~0.4 at 0.5s, then converges to 0.05.
- Evader 1 (red) begins at ~1.5, declines to ~0.3 at 0.5s, then stabilizes near 0.05.
- Evader 2 (orange) starts at ~0.8, drops to ~0.2 at 0.5s, then converges to 0.02.
- Jump Points occur at ~0.5s (state x=1.0) and ~1.0s (state x=0.5), coinciding with the ΞΌ=1.0 trigger.
**Middle Chart**:
- Pursuer controls (green/blue) start at -0.3/-0.4, jump to +0.1/+0.05 at 0.5s, then stabilize.
- Evader controls (red/orange) start at +0.1/+0.15, drop to 0 at 0.5s, then remain near 0.
**Bottom Chart**:
- All cost functions (green/blue/red/orange) start near 2.0, drop sharply to ~0.1 by 0.5s, then flatten near 0.01.
- J_P1 (green) and J_P2 (blue) show sharper initial declines than J_E1 (red) and J_E2 (orange).
### Key Observations
1. **State Evolution**: All agents' states converge to near-zero values after 0.5s, with pursuers maintaining higher states than evaders.
2. **Control Strategies**: Pursuers activate positive control inputs at 0.5s (jump points), while evaders deactivate controls entirely.
3. **Cost Function**: All agents achieve Nash equilibrium by 1.0s, with cost functions stabilizing near zero.
### Interpretation
The system demonstrates coordinated agent behavior where pursuers and evaders adjust strategies at ΞΌ=1.0 (0.5s mark). The sharp state declines and cost function convergence suggest optimal Nash equilibrium is achieved rapidly. The control input jumps indicate discrete strategy changes triggered by the ΞΌ threshold. The bottom chart confirms equilibrium verification through cost function stabilization, with pursuers maintaining slightly higher cost trajectories than evaders, possibly reflecting differing objective weights.
</details>
Fig. 2. Pursuit-evasion game: (a) agent state evolution with hybrid jump events, (b) control inputs showing pursuer minimization (negative values) and evader maximization (positive values), and (c) cost function evolution
Based on the experimental results shown in Figure 2, the HANES algorithm demonstrates successful implementation of the theoretical framework with clear validation of the hybrid system dynamics and Nash equilibrium convergence properties. The state evolution subplot reveals that all agents converge toward the theoretical equilibrium state near zero within approximately 0.8 seconds, with discrete jump events (marked by asterisks) occurring precisely at the predicted trigger threshold μ = 1.0 for both pursuers. The control strategy subplot confirms that the Nash equilibrium control inputs stabilize after the initial transient period, with pursuers implementing minimization strategies (negative control values) while evaders execute maximization strategies (positive control values), consistent with the zero-sum game formulation. The cost function evolution provides quantitative verification of the theoretical predictions, showing exponential convergence as guaranteed by Theorem 2, with all cost functionals approaching their optimal Nash equilibrium values. Notably, the jump events create brief discontinuities in the cost evolution but do not destabilize the overall convergence process, validating the hybrid system stability properties established in Corollary 1.
Fig. 3. Trajectory visualization of the pursuit-evasion game
<details>
<summary>Image 3 Details</summary>

### Visual Description
## Line Chart: Pursuit and Evasion Trajectories
### Overview
The chart depicts trajectories of two pursuers and two evaders in a 2D spatial environment. It includes labeled zones ("Pursuit Zone" and "Evasion Zone") and uses color-coded lines with markers to represent movement paths. The X-axis represents horizontal position (0β8 units), and the Y-axis represents vertical position (0β5 units).
### Components/Axes
- **X-axis**: "X Position (spatial units)" ranging from 0 to 8.
- **Y-axis**: "Y Position (spatial units)" ranging from 0 to 5.
- **Legend**: Located in the top-right corner, mapping colors to entities:
- Blue: Pursuer 1 Trajectory (solid line) and Start (circle marker).
- Green: Pursuer 2 Trajectory (solid line) and Start (circle marker).
- Red: Evader 1 Trajectory (dashed line) and Start (triangle marker).
- Orange: Evader 2 Trajectory (dashed line) and Start (triangle marker).
- **Annotations**:
- "Pursuit Zone" (blue text) in the top-left quadrant.
- "Evasion Zone" (red text) in the bottom-right quadrant, with a downward-pointing arrow.
### Detailed Analysis
1. **Pursuer 1 Trajectory** (blue solid line):
- Starts at (1, 1.5) with a circle marker.
- Ends at (4, 3.2) with a square marker.
- Trend: Steady upward curve with moderate slope.
2. **Pursuer 2 Trajectory** (green solid line):
- Starts at (1.5, 3.5) with a circle marker.
- Ends at (5, 3.9) with a square marker.
- Trend: Gentle upward curve, less steep than Pursuer 1.
3. **Evader 1 Trajectory** (red dashed line):
- Starts at (7, 2) with a triangle marker.
- Ends at (7.5, 0.5) with a triangle marker.
- Trend: Sharp downward slope, nearly vertical descent.
4. **Evader 2 Trajectory** (orange dashed line):
- Starts at (5, 4) with a triangle marker.
- Ends at (7.5, 4.5) with a triangle marker.
- Trend: Gradual upward slope, shallow compared to Evader 1.
### Key Observations
- **Pursuit Dynamics**: Both pursuers move upward and rightward, with Pursuer 1 gaining more vertical distance than Pursuer 2.
- **Evasion Patterns**: Evader 1 descends rapidly, while Evader 2 ascends slowly. Their paths diverge spatially.
- **Zone Proximity**:
- Pursuer 1βs trajectory approaches the "Pursuit Zone" (top-left) but does not enter it.
- Evader 1βs trajectory exits the "Evasion Zone" (bottom-right) entirely.
- **Marker Consistency**: All start points match legend symbols (circles for pursuers, triangles for evaders).
### Interpretation
The chart models a pursuit-evasion scenario where:
- Pursuers aim to intercept evaders by converging toward higher Y-values (vertical pursuit).
- Evaders employ contrasting strategies: Evader 1 retreats downward (possibly escaping), while Evader 2 moves upward (potentially toward pursuers).
- The "Pursuit Zone" and "Evasion Zone" likely represent regions of strategic importance, though their exact boundaries are undefined. The trajectories suggest potential interception points near (4, 3.2) for Pursuer 1 and (5, 3.9) for Pursuer 2, though no direct overlap with evader paths is observed.
</details>
The experimental results demonstrate that the HANES algorithm achieves distributed Nash equilibrium computation with linear computational complexity O(N) while maintaining theoretical rigor, providing compelling evidence for the practical applicability of the proposed framework in multi-agent pursuit-evasion scenarios. The trajectory visualization demonstrates successful implementation of distributed Nash equilibrium strategies, with pursuers executing coordinated convergence behaviors from the pursuit zone while evaders perform strategic evasion maneuvers toward the evasion zone.
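The O(N) claim can be made concrete for bounded-degree topologies: per step, each agent only differences its state against its neighbor list, so total work scales with the edge count rather than N² or N³. A toy operation count on a ring network (illustrative only, not the paper's benchmark):

```python
# Count the pairwise-difference operations in one distributed update.
# Each agent touches only its neighbors, so total work = sum of degrees.
def step_ops(neighbors):
    ops = 0
    for nbrs in neighbors:
        ops += len(nbrs)          # one state difference per neighbor edge
    return ops

def ring(N):
    """Ring topology: every agent has exactly two neighbors."""
    return [[(i - 1) % N, (i + 1) % N] for i in range(N)]

costs = {N: step_ops(ring(N)) for N in (10, 100, 1000)}
# per-step cost grows linearly with N for a bounded-degree graph
```

For the ring, the per-step cost is exactly 2N operations, so scaling N by 10 scales the cost by 10, in contrast to the quadratic growth of an all-to-all information exchange.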
The multi-agent system operates under a distributed communication network as illustrated in Figure 4. The Leader-Follower Consensus experiment demonstrates cooperative coordination, validating the framework's ability to achieve distributed agreement under hybrid dynamics. The system consists of 4 agents, where agent 2 is the leader and agents 1, 3, and 4 are followers. Initial states are $x(0) = [1.8,\ 2.0,\ 1.5,\ 1.7]^{T}$.
Fig. 4. Multi-agent communication topology
<details>
<summary>Image 4 Details</summary>

### Visual Description
## Diagram: Node-Based Network Structure
### Overview
The image depicts a directed and bidirectional node network with four labeled components (1, 2, 3, 4). Arrows indicate directional relationships, with distinct colors (orange for unidirectional, brown for bidirectional). Node 2 acts as a central hub, while Nodes 3 and 4 share a mutual connection.
### Components/Axes
- **Nodes**:
- **Node 1**: Circular, positioned at the top.
- **Node 2**: Square, centrally located.
- **Node 3**: Circular, bottom-left.
- **Node 4**: Circular, bottom-right.
- **Arrows**:
- **Orange**: Unidirectional (β).
- **Brown**: Bidirectional (β).
- **Connections**:
- Node 1 β Node 2 (orange).
- Node 2 β Node 3 (orange).
- Node 2 β Node 4 (orange).
- Node 3 β Node 4 (brown).
### Detailed Analysis
- **Node 1**: Only outgoing connection to Node 2.
- **Node 2**: Central node with three outgoing connections (to 1, 3, 4).
- **Nodes 3 and 4**: Mutual bidirectional connection (brown arrow) and individual connections to Node 2.
- **Color Coding**: Orange arrows dominate, suggesting primary flow; brown arrow highlights reciprocal interaction.
### Key Observations
1. **Central Hub**: Node 2 is the primary connector, integrating all other nodes.
2. **Bidirectional Interaction**: Nodes 3 and 4 share a unique two-way relationship, distinct from other unidirectional links.
3. **Hierarchy**: Node 1 feeds into Node 2, which distributes flow to Nodes 3 and 4.
### Interpretation
This diagram likely represents a simplified network model where:
- **Node 2** serves as a critical intermediary, aggregating input (Node 1) and distributing output (Nodes 3, 4).
- The **bidirectional link between Nodes 3 and 4** implies a feedback loop or mutual dependency, contrasting with the one-way flow elsewhere.
- **Color differentiation** (orange vs. brown) may signify transaction types (e.g., standard vs. reciprocal interactions).
No numerical data or trends are present; the focus is on structural relationships. The absence of additional labels or legends limits contextual interpretation beyond the depicted connections.
</details>
The dynamics for all agents are $\dot{x}_i = a x_i + b_i u_i$ with $a = -1$, and input coefficients $b_i = 1$ for all agents. Communication Topology: the network connections are defined by the adjacency matrix $A_{\mathrm{topo}} =$
<!-- formula-not-decoded -->
The leader tracks a time-varying reference $x_{\mathrm{ref}}(t) = 2e^{-0.3t}\cos(0.5t)$. Control parameters are: consensus gain $K_{\mathrm{consensus}} = 0.8$; tracking gain $K_{\mathrm{tracking}} = 1.2$; hybrid cost weight of $0.4$. The value function estimation also utilizes parameters $\Lambda = [0.8,\ 0.9,\ 0.85,\ 0.75]$, discount factor $\beta = 0.95$, and base parameter $\gamma_{\mathrm{base}} = 0.5$. Cooperative Cost Structure: the multi-agent interaction cost matrix is
<!-- formula-not-decoded -->
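A minimal sketch of this leader-follower loop, assuming an undirected 0/1 adjacency encoding of the Figure 4 topology and using only the simple consensus/tracking feedback implied by the stated gains (the value-function machinery and hybrid cost terms are omitted):

```python
import numpy as np

# Leader-follower consensus sketch: agent 2 (index 1) tracks x_ref(t),
# followers run consensus on neighbor states. The 0/1 adjacency matrix
# is an assumed encoding of the Fig. 4 topology, not the paper's A_topo.
x_ref = lambda t: 2 * np.exp(-0.3 * t) * np.cos(0.5 * t)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
K_cons, K_track, dt = 0.8, 1.2, 0.01     # stated gains and time step
x = np.array([1.8, 2.0, 1.5, 1.7])       # stated initial states
leader = 1                                # agent 2 is the leader
for step in range(2500):                  # 25 s horizon
    t = step * dt
    e = (A * (x[:, None] - x[None, :])).sum(axis=1)   # consensus errors
    u = -K_cons * e
    u[leader] += -K_track * (x[leader] - x_ref(t))    # leader tracking term
    x = x + dt * (-x + u)                 # stated agent dynamics, a = -1
```

By the end of the horizon the reference has decayed to near zero and the agents have reached agreement around it, mirroring the consensus behavior shown in Figure 5.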
Fig. 5. (a) Multi-agent state evolution with leader reference, (b) distributed control strategies with coordinated responses, (c) consensus error convergence and stable value function evolution, and (d) individual agent tracking performance demonstrating effective hierarchical coordination and bounded tracking errors
<details>
<summary>Image 5 Details</summary>

### Visual Description
## 2x2 Grid of Plots: Leader-Follower Multi-Agent System Analysis
### Overview
The image contains four subplots analyzing a leader-follower multi-agent system. Each plot visualizes different aspects: state evolution, control strategies, consensus performance, and tracking errors. All plots share a time axis (0β25 seconds) and use distinct color-coded data series.
---
### Components/Axes
#### Top-Left: Leader-Follower Multi-Agent State Evolution
- **X-axis**: Time (s) [0, 5, 10, 15, 20, 25]
- **Y-axis**: Agent States [-0.5, 0, 0.5, 1, 1.5, 2]
- **Legend**:
- Blue: Agent 1 (Follower)
- Orange: Agent 2 (Leader)
- Green: Agent 3 (Follower)
- Light Blue: Agent 4 (Follower)
- Dotted Black: Leader Reference
- Red Dashed: Consensus Achieved
- **Key Elements**: Red dashed vertical line at ~5s (consensus time).
#### Top-Right: Distributed Control Strategies
- **X-axis**: Time (s) [0, 5, 10, 15, 20, 25]
- **Y-axis**: Control Inputs [-1, -0.5, 0, 0.5, 1, 1.5, 2.5]
- **Legend**:
- Blue: uβ (Follower)
- Orange: uβ (Leader)
- Yellow: uβ (Follower)
- Purple: uβ (Follower)
#### Bottom-Left: Consensus Performance and Value Function Evolution
- **X-axis**: Time (s) [0, 5, 10, 15, 20, 25]
- **Y-axis**:
- Left: Consensus Error [0, 0.05, 0.1, 0.15, 0.2, 0.25]
- Right: Total Value Function [0, 0.2, 0.4, 0.6, 0.8, 1.2, 1.4, 1.6]
- **Legend**:
- Red: Consensus Error
- Orange: Total Value Function
#### Bottom-Right: Individual Agent Tracking Performance
- **X-axis**: Time (s) [0, 5, 10, 15, 20, 25]
- **Y-axis**: Tracking Errors [-1.2, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6]
- **Legend**:
- Blue: Follower 1 Error
- Orange: Leader Error (ref tracking)
- Yellow: Follower 3 Error
- Purple: Follower 4 Error
---
### Detailed Analysis
#### Top-Left: State Evolution
- **Trend**: All agent states (blue, orange, green, light blue) converge to the dotted black leader reference line after ~5s. Initial divergence occurs between 0β5s.
- **Key Data Points**:
- Agent 2 (Leader, orange) starts at ~1.8 and stabilizes at ~0.1.
- Consensus achieved at ~5s (red dashed line).
#### Top-Right: Control Strategies
- **Trend**: Leader control input (orange) spikes sharply to ~2.5 at t=0, then decays to ~0.1. Followers (blue, yellow, purple) show smaller, delayed spikes.
- **Key Data Points**:
- Leader (uβ) peaks at ~2.5 at t=0.
- Followers stabilize near 0 after ~10s.
#### Bottom-Left: Consensus and Value Function
- **Trend**: Consensus error (red) drops from ~0.25 to near 0 by t=5s. Total value function (orange) starts at ~1.6, drops to ~0.2 by t=5s, then stabilizes.
- **Key Data Points**:
- Consensus error: 0.25 β 0.05 (t=0β5s).
- Value function: 1.6 β 0.2 (t=0β5s).
#### Bottom-Right: Tracking Performance
- **Trend**: All agents (blue, orange, yellow, purple) show initial high errors (~1.2) that decay to near 0 by t=10s. Leader (orange) stabilizes fastest.
- **Key Data Points**:
- Leader error: ~1.2 β 0 (t=0β10s).
- Followers stabilize within Β±0.1 after t=10s.
---
### Key Observations
1. **Consensus Timing**: All agents align with the leader reference within ~5s (top-left plot).
2. **Control Input Dynamics**: Leaderβs control input dominates initially (top-right plot), suggesting centralized decision-making.
3. **Value Function Convergence**: Rapid decline in value function (bottom-left) indicates efficient consensus achievement.
4. **Tracking Error Reduction**: All agents reduce tracking errors by ~90% within 10s (bottom-right plot).
---
### Interpretation
The system demonstrates rapid consensus (5s) and stable tracking performance. The leaderβs control input dominates early dynamics, while followers converge autonomously. The value functionβs sharp decline suggests the system prioritizes consensus over individual objectives. No outliers observed; all agents exhibit similar convergence patterns. This aligns with centralized-decentralized hybrid control strategies, where the leader guides initial alignment, and followers maintain stability.
</details>
The experimental results successfully validate the proposed HANES algorithm, demonstrating key theoretical predictions from the paper. The hybrid jump event at t=2.5 seconds enables rapid leader reconfiguration while maintaining distributed coordination, with consensus error achieving exponential convergence below 0.05 within 8 seconds as predicted by Theorem 2. The coordinated control responses and stable value function evolution following the discrete transition confirm the framework's ability to preserve Nash equilibrium properties through hybrid dynamics, validating both the game-theoretic jump triggering mechanism and the distributed optimization approach for multi-agent coordination.
The performance of the proposed framework across these experiments is assessed using several quantitative metrics. These include convergence analysis, focusing on convergence time ($t_{\mathrm{conv}}$), final consensus or tracking error ($\|e_{\mathrm{final}}\|$), and convergence rate. For the pursuit-evasion scenario, Nash equilibrium verification is crucial, analyzing strategy stability. Additionally, computational efficiency (e.g., processing time) and robustness to variations (e.g., in initial conditions) are considered to demonstrate the algorithm's practical applicability and resilience. These experiments collectively aim to demonstrate leader-follower coordination with hybrid dynamics, distributed control without global information, opponent strategy estimation and adaptation, and consensus achievement with bounded tracking errors.
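These metrics can be computed mechanically from a logged error trajectory. The helper below and its synthetic exponential-decay input are illustrative; `tol` is an assumed convergence tolerance.

```python
import numpy as np

def convergence_metrics(t, err, tol=0.05):
    """Return (t_conv, ||e_final||) for an error trajectory.

    t: (T,) time grid; err: (T, N) per-agent errors. t_conv is the first
    time after which the worst-case agent error stays below tol.
    """
    worst = np.max(np.abs(err), axis=1)      # worst-case error per step
    t_conv = None
    for k in range(len(t)):
        if (worst[k:] < tol).all():          # never exceeds tol again
            t_conv = t[k]
            break
    return t_conv, float(np.linalg.norm(err[-1]))

t = np.arange(0, 3, 0.01)
# synthetic two-agent trajectory decaying at rate 2 (illustrative)
err = np.exp(-2 * t)[:, None] * np.array([[1.0, -0.8]])
t_conv, final = convergence_metrics(t, err)
```

For the synthetic decay $e^{-2t}$, the worst-case error first stays below 0.05 near $t \approx 1.5$ s, and the final error norm is on the order of $10^{-3}$.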
## V. Conclusion and Future Work
In this paper, a comprehensive framework was presented for distributed multi-agent hybrid systems operating under game-theoretic principles, grounded in hybrid dynamical systems theory. The framework addresses scenarios in which multiple autonomous agents must coordinate their actions through both continuous dynamics and discrete mode transitions while operating under distributed information constraints and strategic interactions.
By encoding the coordination objectives of agents in a distributed Nash equilibrium framework, sufficient conditions were provided to characterize optimal strategies that achieve consensus while maintaining individual agent autonomy. The main theoretical contributions establish rigorous mathematical foundations for multi-agent hybrid coordination through three key innovations: hierarchical flow set design methodology that decomposes complex multi-dimensional constraints into manageable subproblems, game-theoretic jump triggering mechanisms that coordinate discrete transitions across the agent network, and the Hybrid Adaptive Nash Equilibrium Solver (HANES) algorithm that achieves linear computational complexity O(N) compared to traditional cubic complexity O(NΒ³) approaches.
The theoretical framework demonstrates that the proposed distributed Nash equilibrium strategies guarantee exponential convergence to consensus, as established in Theorem 2, while maintaining system stability through discrete jump phases via the jump triggering mechanisms introduced in Section III. The hierarchical flow set construction methodology successfully addresses the exponential scaling problem inherent in multi-agent hybrid systems by systematically decomposing individual agent safety constraints, pairwise interaction requirements, and global coordination objectives. Furthermore, the game-theoretic jump triggering approach enables rapid emergency response capabilities for communication interruptions, agent failures, and environmental disruptions that cannot be addressed through continuous control methods alone.
Connections between optimality and stability for the studied class of multi-agent hybrid games were established through the value function analysis in Section III, demonstrating that the Nash equilibrium strategies serve dual roles as optimal control policies and Lyapunov-like functions for stability certification. The experimental validation through pursuit-evasion and leader-follower consensus scenarios confirms the practical applicability of the theoretical results, showing successful distributed coordination with bounded tracking errors and robust performance across diverse operational conditions.
The comprehensive simulation studies demonstrate significant improvements in convergence time, computational efficiency, and scalability compared to existing centralized approaches. The pursuit-evasion game simulation validated the framework's game-theoretic aspects and Nash equilibrium convergence properties in competitive multi-agent environments, while the leader-follower consensus experiment confirmed the cooperative coordination capabilities under hybrid dynamics with time-varying references and discrete mode transitions.
Future work includes extending the framework to accommodate heterogeneous agent dynamics where individual agents may have different state dimensions and control authorities, as the current formulation assumes homogeneous scalar dynamics. Investigating stochastic extensions of the hybrid game formulation to account for communication uncertainties, measurement noise, and environmental disturbances would enhance the framework's robustness for real-world applications. The development of adaptive algorithms that can learn optimal jump triggering thresholds and flow set parameters online, rather than requiring a priori specification, represents another promising research direction.
Additional future research directions include studying conditions to guarantee global optimality rather than local Nash equilibria, particularly for large-scale networks where multiple equilibria may exist. Exploring the integration of machine learning techniques with the HANES algorithm to handle unknown agent dynamics and environmental conditions would broaden the framework's applicability to scenarios with limited model knowledge. Furthermore, investigating the computational complexity and convergence guarantees for time-varying communication topologies and dynamic agent populations would address practical deployment scenarios in mobile autonomous systems such as UAV swarms and satellite formations.