# Adversarial Signed Graph Learning with Differential Privacy
**Authors**: Haobin Ke, Sen Zhang, Qingqing Ye, Xun Ran, Haibo Hu
> The Hong Kong Polytechnic University Hung Hom Hong Kong
> The Hong Kong Polytechnic University Research Centre for Privacy and Security Technologies in Future Smart Systems, PolyU Hung Hom Hong Kong
## Abstract
Signed graphs with positive and negative edges can model complex relationships in social networks. Leveraging balance theory, which deduces edge signs between multi-hop node pairs, signed graph learning can generate node embeddings that preserve both structural and sign information. However, training on sensitive signed graphs raises significant privacy concerns, as model parameters may leak private link information. Existing protection methods with differential privacy (DP) typically rely on edge or gradient perturbation for unsigned graph protection. Yet, they are not well-suited for signed graphs, mainly because edge perturbation tends to cause cascading errors in edge sign inference under balance theory, while gradient perturbation increases sensitivity due to node interdependence and the gradient polarity changes caused by sign flips, resulting in larger noise injection. In this paper, motivated by the robustness of adversarial learning to noisy interactions, we present ASGL, a privacy-preserving adversarial signed graph learning method that preserves high utility while achieving node-level DP. We first decompose signed graphs into positive and negative subgraphs based on edge signs, and then design a gradient-perturbed adversarial module to approximate the true signed connectivity distribution. In particular, the gradient perturbation helps mitigate cascading errors, while the subgraph separation facilitates sensitivity reduction. Further, we devise a constrained breadth-first search tree strategy that fuses with balance theory to identify the edge signs between generated node pairs. This strategy also enables gradient decoupling, thereby effectively lowering gradient sensitivity. Extensive experiments on real-world datasets show that ASGL achieves favorable privacy-utility trade-offs across multiple downstream tasks. Our code and data are available at https://github.com/KHBDL/ASGL-KDD26.
**Keywords**: Differential privacy, Adversarial signed graph learning, Constrained breadth-first search trees, Balance theory.
**Conference**: Proceedings of the 32nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.1 (KDD ’26), August 09–13, 2026, Jeju Island, Republic of Korea. **DOI**: 10.1145/3770854.3780282. **ISBN**: 979-8-4007-2258-5/2026/08. **CCS**: Security and privacy → Data anonymization and sanitization.
## 1. Introduction
The signed graph is a common and widely adopted structure that represents both positive and negative relationships using signed edges (19; 20; 21). For example, in the online social network shown in Fig. 1, user interactions reflect positive relationships (e.g., like, trust, friendship), but negative relationships (e.g., dislike, distrust, complaint) also exist. Signed graphs provide more expressive power than unsigned graphs to capture such complex user interactions.
Recently, some studies (22; 23; 24) have explored signed graph learning methods, aiming to obtain low-dimensional vector representations of nodes that preserve key signed graph properties: neighbor proximity and structural balance. These embeddings are subsequently applied to downstream tasks such as edge sign prediction, node clustering, and node classification. Among existing signed graph learning methods, balance theory (27) has proven effective in identifying the edge signs between a source node and its multi-hop neighbors. It is leveraged in graph neural network (GNN)-based models to guide message passing across signed edges, ensuring that information aggregation is aligned with node proximity (36; 38; 39). Moreover, to enhance the robustness and generalization capability of deep learning models, adversarial graph embedding models (03; 14) learn the underlying connectivity distribution of signed graphs by generating high-quality node embeddings that preserve signed node proximity.
Despite their ability to effectively capture signed relationships between nodes, graph learning models remain vulnerable to link stealing attacks (25; 42; 43), which aim to infer the existence of links between arbitrary node pairs in the training graph. For instance, in online social graphs, such attacks may reveal whether two users share a friendly or adversarial relationship, compromising user privacy and damaging personal or professional reputations.
Figure 1. A signed social graph with blue edges for positive links and red edges for negative links.
Differential privacy (DP) (06) is a rigorous privacy framework that guarantees statistically indistinguishable outputs regardless of any individual data presence. Such guarantee is achieved through sufficient perturbation while maintaining provable privacy bounds and computational feasibility. Existing privacy-preserving graph learning methods with DP can be categorized into two types based on the perturbation mechanism: one applies edge perturbation (53) to protect the link information by modifying the graph structure, and the other adopts gradient perturbation (54; 52) to obscure the relationships between nodes during model training. However, these methods are not well-suited for signed graph learning due to the following two challenges:
- Cascading error: As illustrated in Fig. 2, balance theory facilitates the inference of the edge sign between two unconnected nodes by computing the product of edge signs along a path. However, existing methods that use edge perturbation to protect link information may alter the sign of any edge along the path, thereby leading to incorrect inference of edge signs under balance theory. Such a local error can further propagate along the path, resulting in cascading errors in edge sign inference.
- High sensitivity: While gradient perturbation methods without directly perturbing edges may mitigate cascading errors, they are still ill-suited for signed graph learning because the node interdependence in signed graphs leads to high gradient sensitivity. The presence or absence of a node affects gradient updates of itself and its neighbors. Furthermore, edge change may induce sign flips that reverse gradient polarity within the loss function (see Eq. (10) for details), resulting in higher sensitivity compared to unsigned graphs. This increased sensitivity requires larger noise for privacy protection, thereby reducing the data utility.
To address these challenges, we turn to an adversarial learning-based approach for private signed graph learning. The core motivation is that this adversarial method generates node embeddings by approximating the true connectivity distribution, making it naturally robust to noisy interactions during optimization. As a result, we propose ASGL, a differentially private adversarial signed graph learning method that achieves high utility while maintaining node-level differential privacy. Within ASGL, the signed graph is first decomposed into positive and negative subgraphs based on edge signs. These subgraphs are then processed through an adversarial learning module with shared model parameters, enabling both positive and negative node pairs to be mapped into a unified embedding space while effectively preserving signed proximity. Based on this, we develop the adversarial learning module with differentially private stochastic gradient descent (DPSGD), which generates private node embeddings that closely approximate the true signed connectivity distribution. In particular, the gradient perturbation helps mitigate cascading errors, while the subgraph separation avoids gradient polarity reversals induced by edge sign flips within the loss function, thereby reducing the sensitivity to changes in edge signs. Considering that node interdependence further increases gradient sensitivity, we design a constrained breadth-first search (BFS) tree strategy within adversarial learning. This strategy integrates balance theory to identify the edge signs between generated node pairs, while also constraining the receptive fields of nodes to enable gradient decoupling, thereby effectively lowering gradient sensitivity and reducing noise injection. Our main contributions are listed as follows:
- We present a privacy-preserving adversarial learning method for signed graphs, called ASGL. To the best of our knowledge, it is the first work that ensures node-level differential privacy for signed graph learning while preserving high data utility.
- To mitigate cascading errors, we develop the adversarial learning module with DPSGD, which generates private node embeddings that closely approximate the true signed connectivity distribution. This approach avoids direct perturbation of the edge structure, which helps mitigate cascading errors and prevents gradient polarity reversals in the loss function.
- To further reduce the sensitivity caused by complex node relationships, we design a constrained breadth-first search tree strategy that integrates balance theory to identify edge signs between generated node pairs. This strategy also constrains the receptive fields of nodes, enabling gradient decoupling and effectively lowering gradient sensitivity.
- Extensive experiments demonstrate that our method achieves favorable privacy-accuracy trade-offs and significantly outperforms state-of-the-art methods in edge sign prediction and node clustering tasks. Additionally, we conduct link stealing attacks, demonstrating that ASGL exhibits stronger resistance to such attacks across all datasets.
The remainder of this paper is organized as follows. Section 2 describes the preliminaries of our solution. The problem statement is introduced in Section 3. Our proposed solution and its privacy analysis are presented in Section 4. The experimental results are reported in Section 5. We discuss related work in Section 6, followed by the conclusion in Section 7.
## 2. Preliminaries
In this section, we provide an overview of signed graphs, differential privacy, and DPSGD. Additionally, the vanilla adversarial graph learning is introduced in App. A, and the frequently used notations are summarized in Table 5 (See App. B).
### 2.1. Signed Graph with Balance Theory
A signed graph is denoted as $G=(V,E^+,E^-)$ , where $V$ is the set of nodes, and $E^+/E^-$ represent the positive and negative edge sets, respectively. An edge $e_{ij}=(v_i,v_j)∈ E^+/E^-$ represents a positive/negative link between the node pair $v_i,v_j∈ V$ . Notably, $E^+∩ E^-=∅$ ensures that no node pair can maintain both positive and negative relationships simultaneously. The objective of signed graph embedding is to learn a mapping function $f:V→ℝ^k$ that projects each node $v∈ V$ into a low-dimensional ( $k$ -dimensional) vector while preserving both the structural and sign properties of the original signed graph. In other words, node pairs connected by positive edges should be embedded closely, while those connected by negative edges should be placed farther apart in the embedding space.
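For concreteness, the disjoint decomposition of the edge sets can be sketched as follows (a minimal Python sketch of our own, not the released ASGL code; the node and edge values are made up):

```python
# Minimal sketch of a signed graph G = (V, E+, E-) and its split into a
# positive subgraph G+ and a negative subgraph G-. Both subgraphs keep the
# full node set but retain only edges of one sign.
def split_signed_graph(nodes, pos_edges, neg_edges):
    # The definition requires E+ and E- to be disjoint: no node pair may
    # hold both a positive and a negative relationship.
    assert not (set(pos_edges) & set(neg_edges)), "E+ and E- must be disjoint"
    g_pos = {"V": set(nodes), "E": set(pos_edges)}
    g_neg = {"V": set(nodes), "E": set(neg_edges)}
    return g_pos, g_neg

nodes = [1, 2, 3, 4]
pos = [(1, 2), (2, 4)]   # positive links
neg = [(1, 3), (3, 4)]   # negative links
g_pos, g_neg = split_signed_graph(nodes, pos, neg)
print(len(g_pos["E"]), len(g_neg["E"]))  # 2 2
```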
Figure 2. The signs of multi-hop connections based on balance theory.
Balance theory (27) is a well-established standard for describing the signed relationships of unconnected node pairs. It is commonly summarized by four intuitive rules: “A friend of my friend is my friend,” “A friend of my enemy is my enemy,” “An enemy of my friend is my enemy,” and “An enemy of my enemy is my friend.” Based on these rules, balance theory can deduce the signs of multi-hop connections. As shown in Fig. 2, given a path $P_{rt}:v_r→ v_t$ from root node $v_r$ to target node $v_t$ , the sign of the indirect relationship between $v_r$ and $v_t$ can be inferred by iteratively applying balance theory. Specifically, the sign of the multi-hop connection corresponds to the product of the signs of the edges along the path.
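The product rule can be sketched in a few lines (an illustrative helper of our own, not part of the paper's code), encoding each edge sign as +1 or -1:

```python
# Balance-theory sign inference: the sign of a multi-hop connection is the
# product of the edge signs along the path from v_r to v_t.
from math import prod

def path_sign(edge_signs):
    """edge_signs: list of +1/-1 values along a path v_r -> ... -> v_t."""
    return prod(edge_signs)

# "An enemy of my enemy is my friend": two negative edges yield a
# positive indirect relationship.
print(path_sign([-1, -1]))      # 1
# A single negative edge anywhere on the path flips the inferred sign.
print(path_sign([+1, -1, +1]))  # -1
```

This also illustrates the cascading-error problem: flipping any one edge sign on the path flips the inferred sign of every multi-hop connection passing through that edge.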
### 2.2. Differential Privacy
Differential Privacy (DP) (04) provides a rigorous mathematical framework for quantifying the privacy guarantees of algorithms operating on sensitive data. Informally, it bounds how much the output distribution of a mechanism can change in response to small changes in its input. When applying DP to signed graph data, the definition of adjacent databases typically considers two signed graphs, $G$ and $G^\prime$ , which are regarded as adjacent graphs if they differ by at most one edge or one node with its associated edges.
**Definition 1 (Edge (Node)-level DP (05))**
*Given $ε>0$ and $δ>0$ , a graph analysis mechanism $M$ satisfies edge- or node-level $(ε,δ)$ -DP, if for any two adjacent graph datasets $G$ and $G^\prime$ that only differ by an edge or a node with its associated edges, and for any possible algorithm output $S⊆ Range(M)$ , it holds that
$$
\Pr[M(G)∈ S]≤ e^{ε}\Pr[M(G^\prime)∈ S]+δ. \tag{1}
$$
Here, $ε$ is the privacy budget (i.e., privacy cost), where smaller values indicate stronger privacy protection but greater utility reduction. The parameter $δ$ denotes the probability that the privacy guarantee may fail to hold, and is typically set to a negligible value, so that the guarantee holds with high probability.*
**Remark 1**
*Note that satisfying node-level DP is much more challenging than satisfying edge-level DP, as removing a single node may, in the worst case, remove $|V|-1$ edges, where $|V|$ denotes the total number of nodes. Consequently, node-level DP requires injecting substantially more noise.*
Two fundamental properties of DP are useful for the privacy analysis of complex algorithms: (1) Post-Processing Property (06): If a mechanism $M(G)$ satisfies $(ε,δ)$ -DP, then for any function $f$ that indirectly queries the private dataset $G$ , the composition $f(M(G))$ also satisfies $(ε,δ)$ -DP; (2) Composition Property (06): If $M(G)$ and $f(G)$ satisfy $(ε_1,δ_1)$ -DP and $(ε_2,δ_2)$ -DP, respectively, then the combined mechanism $F(G)=(M(G),f(G))$ which outputs both results, satisfies $(ε_1+ε_2,δ_1+δ_2)$ -DP.
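As a toy illustration of the composition property (our own sketch, not from the paper), the total budget of sequentially composed mechanisms on the same graph is simply the sum of the individual budgets:

```python
# Basic sequential composition: running mechanisms with budgets
# (eps_1, delta_1), ..., (eps_n, delta_n) on the same private graph
# yields a combined guarantee of (sum eps_i, sum delta_i)-DP.
def compose(*budgets):
    eps = sum(b[0] for b in budgets)
    delta = sum(b[1] for b in budgets)
    return eps, delta

# Two mechanisms, each (0.5, 1e-6)-DP, jointly satisfy (1.0, 2e-6)-DP.
print(compose((0.5, 1e-6), (0.5, 1e-6)))  # (1.0, 2e-06)
```

Tighter accounting (e.g., the Moments Accountant used later for DPSGD) improves on this simple additive bound over many iterations.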
### 2.3. DPSGD
A common approach to differentially private training combines noisy stochastic gradient descent with the Moments Accountant (MA) (02). This approach, known as DPSGD, has been widely adopted for releasing private low-dimensional representations, as MA effectively mitigates excessive privacy loss during iterative optimization. Formally, for each sample $x_i$ in a batch of size $B$ , we compute its gradient $∇L_i(θ)$ , denoted as $∇(x_i)$ for simplicity. Gradient sensitivity refers to the maximum change in the output of the gradient function resulting from a change in a single sample. To control the sensitivity of $∇(x_i)$ , the $\ell_2$ norm of each gradient is clipped by a threshold $C$ . These clipped gradients are then aggregated and perturbed with Gaussian noise $N(0,σ^2C^2I)$ to satisfy the DP guarantee. Finally, the average noisy gradient $\tilde{∇}_B$ is used to update the model parameters $θ$ . This process is given by:
$$
\tilde{∇}_B←\frac{1}{B}\Big(∑_{i=1}^{B}Clip_C(∇(x_i))+N\left(0,σ^2C^2I\right)\Big). \tag{2}
$$
Here, $Clip_C(∇(x_i))=∇(x_i)/\max(1,\frac{||∇(x_i)||_2}{C})$ .
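The clip-aggregate-perturb step in Eq. (2) can be sketched as follows (an illustrative NumPy re-implementation of ours, not the paper's code; the gradient values are made up):

```python
# DPSGD noisy gradient step: clip each per-sample gradient to L2 norm C,
# sum the clipped gradients, add Gaussian noise N(0, sigma^2 C^2 I), and
# average over the batch, as in Eq. (2).
import numpy as np

def dpsgd_noisy_gradient(per_sample_grads, C, sigma, rng):
    # Clip_C(g) = g / max(1, ||g||_2 / C)
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in per_sample_grads]
    noise = rng.normal(0.0, sigma * C, size=per_sample_grads[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]),   # norm 5, gets clipped to norm C
         np.array([0.6, 0.8])]   # norm 1, kept as-is
noisy = dpsgd_noisy_gradient(grads, C=1.0, sigma=1.0, rng=rng)
print(noisy.shape)  # (2,)
```

With `sigma=0` the result is exactly the average of the clipped gradients; increasing `sigma` trades utility for a stronger privacy guarantee.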
## 3. Problem Definition and Existing Solutions
### 3.1. Problem Definition
Instead of publishing a sanitized version of original node embeddings, we aim to release a privacy-preserving ASGL model trained on raw signed graph data with node-level DP guarantees, enabling data analysts to generate task-specific node embeddings.
Threat Model. We consider a black-box attack (42), where the attacker can query the trained model and observe its outputs with no access to its internal architecture or parameters. The attacker attempts to infer the presence of specific nodes or edges in the training graph solely from model outputs. This setting reflects a more practical attack surface compared to the white-box scenario (11).
Privacy Model. Signed graph data encodes both positive and negative relationships between nodes, which differs from tabular or image data. Therefore, it is necessary to adapt the standard definition of node-level DP (See Definition 1) to ensure black-box adversaries cannot determine whether a specific node and its associated signed edges are present in the training data. To this end, we define the differentially private adversarial signed graph learning model as follows.
**Definition 2 (Adversarial signed graph learning model under node-level DP)**
*The vanilla process of graph adversarial learning is illustrated in App. A. Let $θ_D$ denote the discriminator parameters, whose $r$ -th row corresponds to the $k$ -dimensional vector $d_{v_r}$ of node $v_r$ , that is, $d_{v_r}∈θ_D$ . The discriminator module $L_D$ satisfies node-level $(ε,δ)$ -DP if, for any two adjacent signed graphs $G$ and $G^\prime$ that differ in only one node with its associated signed edges, and for all possible $θ_s⊆ Range(L_D)$ , we have
$$
\Pr[L_D(G)∈θ_s]≤ e^{ε}\Pr[L_D(G^\prime)∈θ_s]+δ, \tag{3}
$$
where $θ_s$ denotes the set comprising all possible values of $θ_D$ .*
In particular, the generator $G$ is trained based on feedback from the differentially private discriminator $D$ . According to the post-processing property of DP (08; 12), the generator module $L_G$ also satisfies node-level $(ε,δ)$ -DP. By this robustness to post-processing, the privacy guarantee carries over to the generated signed node embeddings and their downstream usage.
Figure 3. Overview of the ASGL framework: (i) The process decomposes a signed graph into positive and negative subgraphs, (ii) then maps node pairs into a unified embedding space while preserving signed proximity. To ensure privacy, (iii) the adversarial learning module with DPSGD generates private node embeddings that approximate the true connectivity without cascading errors. (iv) A constrained BFS-tree strategy constrains node receptive fields, reduces gradient noise, and improves model utility.
### 3.2. Existing Solutions
To the best of our knowledge, existing differentially private graph learning methods follow two main tracks: gradient perturbation and edge perturbation. In the first category, Yang et al. (54) introduce a privacy-preserving generative model that incorporates generative adversarial networks (GAN) or variational autoencoders (VAE) with DPSGD to protect edge privacy, while Xiang et al. (52) design a node sampling mechanism that adds Laplace noise to per-subgraph gradients, achieving node-level DP. For the edge perturbation-based methods, Lin et al. (53) use randomized response to perturb the adjacency matrix for edge-level privacy, and EDGERAND (42) perturbs the graph structure while preserving sparsity by clipping the adjacency matrix according to a privacy-calibrated graph density.
Limitation. The aforementioned solutions are not directly applicable to signed graphs. This is primarily because edge perturbation can lead to cascading errors when inferring edge signs under balance theory. Moreover, gradient perturbation often suffers from high sensitivity caused by complex node dependencies and gradient polarity reversal from edge sign flips, leading to excessive noise and degraded model utility.
## 4. Our Proposal: ASGL
To tackle the above limitations, we present ASGL, a DP-based adversarial signed graph learning model that integrates a constrained BFS-tree strategy to achieve favorable utility-privacy tradeoffs.
### 4.1. Overview
The ASGL framework, illustrated in Fig. 3, comprises three steps:
- Private Adversarial Signed Graph Learning. The signed graph $G$ is first split into positive and negative subgraphs, $G^+$ and $G^-$ , based on edge signs. Subsequently, two discriminators, $D^+$ and $D^-$ , sharing parameters $θ_D$ , are trained to distinguish real from fake positive and negative edges. Guided by $D^+$ and $D^-$ , two generators $G^+$ and $G^-$ with shared parameters $θ_G$ generate node embeddings that approximate the true connectivity distribution. To ensure node-level DP, we apply gradient perturbation during discriminator training instead of directly perturbing edges. This strategy mitigates cascading errors and prevents gradient polarity reversals caused by edge sign flips, thereby reducing gradient sensitivity. By the post-processing property, the generators also preserve node-level DP.
- Optimization via Constrained BFS-tree. To further reduce gradient sensitivity and the required noise scale, ASGL employs a constrained BFS-tree strategy. By empirically limiting the number and length of paths, each node’s receptive field is restricted, which reduces node dependency and enables gradient decoupling. This significantly lowers gradient sensitivity and enhances model utility under differential privacy constraints.
- Privacy Accounting and Complexity Analysis. The complete training process for ASGL is outlined in Algorithm 2 (see App. F.3). Based on this, we present a comprehensive privacy accounting and computational complexity analysis for ASGL.
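To make the constrained BFS-tree idea concrete, the following is a minimal sketch of our own (not Algorithm 2 from App. F.3): starting from a root node, it enumerates at most a fixed number of simple paths of bounded length, which restricts the root's receptive field; here `max_len` and `max_paths` play the roles of the path-length and path-count limits.

```python
# Constrained BFS-tree sketch: from root v_r, collect at most `max_paths`
# simple paths with at most `max_len` edges. Limiting both quantities
# bounds how many nodes can influence v_r's gradient (receptive field).
from collections import deque

def constrained_bfs_paths(adj, root, max_len, max_paths):
    paths, queue = [], deque([[root]])
    while queue and len(paths) < max_paths:
        path = queue.popleft()
        if len(path) > 1:               # a path with at least one edge
            paths.append(path)
            if len(paths) == max_paths:
                break
        if len(path) - 1 < max_len:     # path length = number of edges
            for nxt in adj.get(path[-1], []):
                if nxt not in path:     # keep paths simple (no revisits)
                    queue.append(path + [nxt])
    return paths

adj = {1: [2, 3], 2: [4], 3: [4]}
print(constrained_bfs_paths(adj, 1, max_len=2, max_paths=2))  # [[1, 2], [1, 3]]
```

Each returned path can then be assigned a sign via the balance-theory product rule, while the caps on path count and length keep the per-node gradient sensitivity bounded.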
### 4.2. Private Adversarial Signed Graph Learning
Motivated by (03; 14), a signed graph $G$ is first divided into a positive subgraph $G^+$ and a negative subgraph $G^-$ according to edge signs. Let $N(v_r)$ be the set of neighbor nodes directly connected to node $v_r$. We denote the true positive and negative connectivity distributions of $v_r$ over its neighborhood $N(v_r)$ as the conditional probabilities $p^+_{\text{true}}(\cdot|v_r)$ and $p^-_{\text{true}}(\cdot|v_r)$, which capture the preference of $v_r$ to connect with other nodes in $V$. Adversarial learning on the signed graph $G$ is conducted by two adversarial modules:
Generators $G^+$ and $G^-$ : Through optimizing the shared parameters $θ_G$ , generators $G^+$ and $G^-$ aim to approximate the underlying true connectivity distribution and generate the most likely but unconnected nodes $v_t∉N(v_r)$ that are relevant to a given node $v_r$ . To this end, we estimate the relevance probabilities of these fake node pairs (the term "fake" indicates that although a node $v_t$ selected by the generator is relevant to $v_r$ , there is no actual edge between them). Specifically, for the implementation of $G^+$ , given the fake positive node pairs $(v_r,v_t)^+$ , we use the graph softmax function (03) to calculate the fake positive connectivity probability:
$$
p^+_{\text{fake}}(v_t|v_r)=G^+\left(v_t|v_r;\theta_G\right)=\sigma(g_{v_t}^\top g_{v_r})=\frac{1}{1+\exp(-g_{v_t}^\top g_{v_r})}, \tag{4}
$$
where $g_{v_t},g_{v_r}\in\mathbb{R}^k$ are the $k$-dimensional embedding vectors of nodes $v_t$ and $v_r$, respectively, and $\theta_G$ is the union of all $g_v$'s. The output $G^+(v_t|v_r;\theta_G)$ increases as the distance between $v_r$ and $v_t$ decreases in the embedding space of the generator $G^+$. Similarly, for the generator $G^-$, given the fake negative node pairs $(v_r,v_t)^-$, we estimate their fake negative connectivity probability:
$$
p^-_{\text{fake}}(v_t|v_r)=G^-(v_t|v_r;\theta_G)=1-\sigma(g_{v_t}^\top g_{v_r})=\frac{\exp(-g_{v_t}^\top g_{v_r})}{1+\exp(-g_{v_t}^\top g_{v_r})}. \tag{5}
$$
Here, Eq. (5) ensures that node pairs with higher negative connectivity probabilities are mapped farther apart in the embedding space of $G^-$. Since generators $G^+$ and $G^-$ share the parameters $θ_G$, they jointly learn, in a unified embedding space, the proximity of positive node pairs and the separation of negative ones.
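As a toy illustration of Eqs. (4)-(5): the two fake connectivity probabilities are complementary sigmoid scores of the embedding inner product. A minimal NumPy sketch follows; the embedding dimension and the vectors `g_r`, `g_t` are made up for illustration and are not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    # numerically standard logistic function sigma(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def p_fake_pos(g_t, g_r):
    # Eq. (4): fake positive connectivity probability, rising as the
    # embeddings of v_t and v_r move closer (larger inner product)
    return sigmoid(np.dot(g_t, g_r))

def p_fake_neg(g_t, g_r):
    # Eq. (5): fake negative connectivity probability, the complement of
    # Eq. (4), so dissimilar embeddings score higher
    return 1.0 - sigmoid(np.dot(g_t, g_r))

# illustrative k=3 embeddings (hypothetical values)
g_r = np.array([0.5, -0.2, 0.1])
g_t = np.array([0.4, 0.3, -0.1])
```

Because the two scores are exact complements, a pair judged likely positive is, by construction, judged unlikely negative under the shared parameters $θ_G$.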
Notably, the aforementioned fake node pairs $(v_r,v_t)^+$ and $(v_r,v_t)^-$ are sampled by a breadth-first search (BFS)-tree strategy (27). Compared to depth-first search (DFS) (56), BFS ensures more uniform exploration of neighboring nodes and can be integrated with random walk techniques (29) to improve computational efficiency. Specifically, we perform BFS on the positive subgraph $G^+$ to construct a BFS-tree $T^+_{v_r}$ rooted at node $v_r$. Then, we calculate the positive relevance probability of node $v_r$ with its neighbors $v_k\in N(v_r)$:
$$
p^+_{T^+_{v_r}}(v_k|v_r)=\frac{\exp\left(g_{v_k}^\top g_{v_r}\right)}{\sum_{v_k\in N(v_r)}\exp\left(g_{v_k}^\top g_{v_r}\right)}, \tag{6}
$$
which is actually a softmax function over $N(v_r)$. To further sample node pairs unconnected in $T^+_{v_r}$ as fake positive edges, we perform a random walk on $T^+_{v_r}$: starting from the root node $v_r$, a path $P_{rt}:v_r\to v_t$ is built by iteratively selecting the next node based on the transition probabilities defined in Eq. (6). The resulting unconnected node pair $(v_r,v_t)^+$ is treated as a fake positive edge, and App. E provides an example of this process. Given the node pair $(v_r,v_t)^+$, the generator $G^+$ estimates $p^+_{\text{fake}}(v_t|v_r)$ according to Eq. (4).
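The walk down $T^+_{v_r}$ can be sketched as follows. This is an assumption-laden simplification: the tree is stored as a child-adjacency dict, Eq. (6)'s softmax is taken over each node's children rather than the full neighborhood $N(v_r)$, and the names `g`, `tree`, and the toy embeddings are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_probs(g, v_r, neighbors):
    # Eq. (6): softmax of embedding inner products over the given neighbors
    scores = np.array([np.dot(g[v_k], g[v_r]) for v_k in neighbors])
    e = np.exp(scores - scores.max())  # stabilized softmax
    return e / e.sum()

def random_walk(g, tree, v_r, length):
    # walk down the BFS-tree rooted at v_r, sampling each child by Eq. (6)
    path, cur = [v_r], v_r
    for _ in range(length):
        children = tree.get(cur, [])
        if not children:
            break
        probs = transition_probs(g, cur, children)
        cur = children[rng.choice(len(children), p=probs)]
        path.append(cur)
    return path

# illustrative embeddings and BFS-tree (hypothetical values)
g = {'r': np.array([1.0, 0.0]), 'a': np.array([0.5, 0.5]),
     'b': np.array([-0.5, 0.5]), 'c': np.array([0.2, 0.8])}
tree = {'r': ['a', 'b'], 'a': ['c']}
```

The endpoint of the walk, paired with the root, yields the fake positive pair $(v_r, v_t)^+$ fed into Eq. (4).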
Similarly, we also establish a BFS-tree $T^-_{v_r}$ rooted at node $v_r$ in the negative subgraph $G^-$. To obtain the negative node pair $(v_r,v_t)^-$, we perform a random walk on $T^-_{v_r}$ according to the following transition probability (i.e., negative relevance probability):
$$
p^-_{T^-_{v_r}}(v_k|v_r)=\frac{1-\exp\left(g_{v_k}^\top g_{v_r}\right)}{\sum_{v_k\in N(v_r)}\left(1-\exp\left(g_{v_k}^\top g_{v_r}\right)\right)}. \tag{7}
$$
In particular, the edge sign of the negative node pair $(v_r,v_t)^-$ depends on the length of the path $P_{rt}:v_r\to v_t$. According to the balance theory introduced in Section 2.1, the edge sign of a multi-hop node pair corresponds to the product of the edge signs along the path. Accordingly, the rules for generating fake negative edges within $P_{rt}$ are defined as follows: (1) if the path length of $P_{rt}$ is odd, the pair $(v_r,v_t)^-$ formed by the root node $v_r$ and the last node $v_t$ is selected as a fake negative pair; (2) if the path length of $P_{rt}$ is even, the pair $(v_r,v_t)^-$ formed by the root node $v_r$ and the second-to-last node $v_t$ is selected as a fake negative pair. The resulting node pair $(v_r,v_t)^-$ is then used to compute $p^-_{\text{fake}}(v_t|v_r)$ according to Eq. (5).
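The parity rule above follows because every edge in $G^-$ is negative, so a path of $k$ edges carries sign $(-1)^k$: an odd-length path ends at a negative pair, while an even-length path must back off one hop. A compact sketch, where `fake_negative_pair` is a hypothetical helper taking the walked path:

```python
def fake_negative_pair(path):
    # path: [v_r, ..., v_t], a random walk on the negative BFS-tree.
    # Balance theory: the pair's sign is the product of edge signs along
    # the path; on the negative subgraph each edge is negative, so the
    # product is (-1)^length.
    length = len(path) - 1  # number of edges in the path
    if length % 2 == 1:
        # odd length: (root, last node) is a negative pair
        return (path[0], path[-1])
    # even length: back off to the second-to-last node, whose sub-path
    # has odd length and hence a negative sign
    return (path[0], path[-2])
```

For example, a three-hop walk pairs the root with its endpoint, while a two-hop walk pairs the root with the intermediate node.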
Discriminators $D^+$ and $D^-$ : This module tries to distinguish between real node pairs and fake node pairs synthesized by the generators $G^+$ and $G^-$. Accordingly, the discriminators $D^+$ and $D^-$ estimate the likelihood that a positive or negative edge exists between $v_r$ and $v\in V$, respectively, denoted as:
$$
D^+(v_r,v|\theta_D)=\sigma(d_v^\top d_{v_r})=\frac{1}{1+\exp(-d_v^\top d_{v_r})}, \tag{8}
$$
$$
D^-(v,v_r|\theta_D)=1-\sigma(d_v^\top d_{v_r})=\frac{\exp(-d_v^\top d_{v_r})}{1+\exp(-d_v^\top d_{v_r})}, \tag{9}
$$
where $d_v,d_{v_r}\in\mathbb{R}^k$ are the vectors corresponding to the $v$-th and $v_r$-th rows of the shared parameters $\theta_D$, respectively, and $\sigma(\cdot)$ denotes the sigmoid function of the inner product of these two vectors.
In summary, given real positive and real negative edges sampled from $p^+_{\text{true}}(\cdot|v_r)$ and $p^-_{\text{true}}(\cdot|v_r)$, along with fake positive and fake negative edges produced by generators $G^+/G^-$, the adversarial learning pairs $(D^+,G^+)$ and $(D^-,G^-)$, operating on the positive subgraph $G^+$ and the negative subgraph $G^-$, respectively, engage in a four-player mini-max game with the joint loss function:
$$
\begin{aligned}
\min_{\theta_G}\max_{\theta_D} L\left(G^+,G^-,D^+,D^-\right)
={}&\sum_{v_r\in V^+}\Big(\mathbb{E}_{v\sim p^+_{\text{true}}(\cdot\mid v_r)}\big[\log D^+(v,v_r\mid\theta_D)\big]
+\mathbb{E}_{v\sim G^+(\cdot\mid v_r;\theta_G)}\big[\log\big(1-D^+(v,v_r\mid\theta_D)\big)\big]\Big)\\
&+\sum_{v_r\in V^-}\Big(\mathbb{E}_{v\sim p^-_{\text{true}}(\cdot\mid v_r)}\big[\log D^-(v,v_r\mid\theta_D)\big]
+\mathbb{E}_{v\sim G^-(\cdot\mid v_r;\theta_G)}\big[\log\big(1-D^-(v,v_r\mid\theta_D)\big)\big]\Big).
\end{aligned} \tag{10}
$$
Based on Eq. (10), the parameters $θ_D$ and $θ_G$ are updated alternately by maximizing and minimizing the joint loss function. Competition between $G$ and $D$ results in mutual improvement until the fake node pairs generated by $G$ are indistinguishable from the real ones, thus approximating the true connectivity distribution. Lastly, the learned node embeddings $g_v∈θ_G$ are used in downstream tasks.
How to Achieve DP? Given real and fake positive/negative edges of a node $v_i$, the corresponding node embedding $d_{v_i}\in\theta_D$ is updated by ascending the gradients of the joint loss function in Eq. (10):
$$
\frac{\partial L_D}{\partial d_{v_i}}=
\begin{cases}
\partial\log D^+(v_i,v_j|\theta_D)/\partial d_{v_i}=[1-\sigma(d_{v_j}^\top d_{v_i})]\,d_{v_j}, & \text{if }(v_i,v_j)\text{ is a real positive edge from }G^+;\\
\partial\log\big(1-D^+(v_i,v_j|\theta_D)\big)/\partial d_{v_i}=-\sigma(d_{v_j}^\top d_{v_i})\,d_{v_j}, & \text{if }(v_i,v_j)\text{ is a fake positive edge from }G^+;\\
\partial\log D^-(v_i,v_j|\theta_D)/\partial d_{v_i}=-\sigma(d_{v_j}^\top d_{v_i})\,d_{v_j}, & \text{if }(v_i,v_j)\text{ is a real negative edge from }G^-;\\
\partial\log\big(1-D^-(v_i,v_j|\theta_D)\big)/\partial d_{v_i}=[1-\sigma(d_{v_j}^\top d_{v_i})]\,d_{v_j}, & \text{if }(v_i,v_j)\text{ is a fake negative edge from }G^-.
\end{cases} \tag{11}
$$
According to Definition 1, to achieve node-level differential privacy in adversarial signed graph learning, it is necessary to add Gaussian noise to the sum of clipped gradients over a batch of nodes. The resulting noisy gradient $\tilde{\nabla}L_D$ is formulated as:
$$
\tilde{\nabla}L_D=\frac{1}{B}\Big(\sum_{v_i\in V_B}\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_{v_i}}\Big)+\mathcal{N}\left(0,B^2C^2\sigma^2 I\right)\Big), \tag{12}
$$
where $V_B$ denotes the batch set of nodes, with batch size $B=|V_B|$, and $C$ is the clipping threshold that controls gradient sensitivity. Why the gradient sensitivity can reach $BC$ is explained in Section 4.3.
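Eq. (12) is the standard clip-sum-noise step of DP-SGD, here with noise calibrated to the worst-case sensitivity $BC$. A minimal sketch; `noisy_batch_gradient` is an illustrative helper, not the paper's implementation:

```python
import numpy as np

def clip(g, C):
    # project a per-node gradient onto the l2-ball of radius C
    norm = np.linalg.norm(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def noisy_batch_gradient(per_node_grads, C, sigma, rng):
    # Eq. (12): sum clipped per-node gradients, add Gaussian noise with
    # standard deviation B*C*sigma (worst-case sensitivity BC), then
    # average over the batch of size B.
    B = len(per_node_grads)
    total = sum(clip(g, C) for g in per_node_grads)
    noise = rng.normal(0.0, B * C * sigma, size=total.shape)
    return (total + noise) / B
```

With `sigma=0` the helper degenerates to the ordinary average of clipped gradients, which makes the noise term's role easy to isolate when testing.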
**Remark 2**
*To achieve node-level DP, we perturb discriminator gradients instead of signed edges, avoiding cascading errors and gradient polarity reversals from edge sign flips (see Eq. (10)), which reduces gradient sensitivity. Furthermore, generators also preserve DP under discriminator guidance via the post-processing property of DP.*
### 4.3. Optimization via Constrained BFS-Tree
According to Eq. (11), in graph adversarial learning, the interdependence among samples implies that modifying a single node $v_i$ may affect the gradients of multiple other nodes $v_j$ within the same batch. This interdependence also exists among the fake node pairs generated along the BFS-tree paths. Consequently, in the worst case illustrated in Fig. 4 (a), all node samples within a batch may become interrelated through the BFS-tree, driving the gradient sensitivity of the discriminators as high as $BC$. Such high sensitivity necessitates injecting substantial noise to satisfy node-level DP, hindering effective optimization and reducing model utility.
Figure 4. The receptive field of node $v_r$ within a batch in two cases: (a) an unconstrained BFS-tree, where the receptive field size of $v_r$ is $B=|V_B|=34$; (b) a constrained BFS-tree with path length $L=2$ and path number $N=3$ per node, where the receptive field size of $v_r$ is $\sum_{l=0}^{L}N^l=13$.
To address the aforementioned challenge, we introduce the constrained BFS-tree strategy: as illustrated in Algorithm 1 (see App. F.2), when performing a random walk on the BFS-tree $T^+_{v_r}$ or $T^-_{v_r}$ rooted at $v_r\in V_{tr}$ to generate multiple unique paths, we limit both the number of sampled paths and their lengths by $N$ and $L$, respectively. Following this, the training set of subgraphs $S_{tr}$ composed of constrained paths is obtained. The rationale behind these settings is discussed below.
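One way to realize these limits (the actual procedure is Algorithm 1 in App. F.2, not shown here) is a BFS that caps each node at $N$ children and truncates at depth $L$, bounding the root's receptive field by $\sum_{l=0}^{L}N^l$ nodes as in Fig. 4 (b). This sketch is an assumption about how the constraint could be implemented, with a hypothetical adjacency-dict graph representation:

```python
from collections import deque

def constrained_bfs_tree(adj, root, N, L):
    # Build a BFS-tree rooted at `root`, keeping at most N children per
    # node and truncating at depth L, so the receptive field of `root`
    # holds at most sum_{l=0}^{L} N^l nodes.
    tree, depth, seen = {}, {root: 0}, {root}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if depth[u] == L:
            continue  # depth limit reached: u becomes a leaf
        kept = []
        for v in adj.get(u, []):
            if v in seen or len(kept) == N:
                continue  # skip visited nodes and enforce the N-child cap
            seen.add(v)
            depth[v] = depth[u] + 1
            kept.append(v)
            queue.append(v)
        tree[u] = kept
    return tree
```

Nodes beyond the caps (the red crosses in Fig. 4 (b)) simply never enter the tree, which is what decouples gradients across batch samples.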
**Theorem 1**
*By constraining the number and length of paths generated via random walks on the BFS-trees to $N$ and $L$, respectively, the gradient sensitivity $\Delta_g$ of the discriminator can be reduced from $BC$ to $\frac{N^{L+1}-1}{N-1}C$. Empirical results in Section 5 demonstrate that ASGL achieves satisfactory performance even with a relatively small receptive field: with $N=3$ and $L=4$, so that $\frac{N^{L+1}-1}{N-1}=121<B=256$, ASGL still attains good model utility. Thus, the noisy gradient $\tilde{\nabla}L_D$ of the discriminator within a mini-batch $B_t$ is:*
$$
\tilde{\nabla}L_D=\frac{1}{|B_t|}\Big(\sum_{v\in B_t}\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_v}\Big)+\mathcal{N}\left(0,\Delta_g^2\sigma^2 I\right)\Big), \tag{13}
$$
where the gradient sensitivity $\Delta_g=\frac{N^{L+1}-1}{N-1}C$.*
Proof of Theorem 1. Let the sum of clipped gradients over a batch of subgraphs be $g_t(G)=\sum_{v\in B_t}\mathrm{Clip}_C(\frac{\partial L_D}{\partial d_v})$, where $B_t$ represents any choice of batch subgraphs from $S_{tr}$. Consider a node-level adjacent graph $G'$ formed by removing a node $v^*$ and its associated edges from $G$; we obtain their training sets of subgraphs $S_{tr}$ and $S_{tr}'$ via the SAMPLE-SUBGRAPHS method in Algorithm 1:
$$
S_{tr}=\mathrm{SAMPLE\text{-}SUBGRAPHS}(G,V_{tr},N,L),\qquad S_{tr}'=\mathrm{SAMPLE\text{-}SUBGRAPHS}(G',V_{tr},N,L). \tag{14}
$$
The only subgraphs that differ between $S_{tr}$ and $S_{tr}'$ are those that involve the node $v^*$. Let $S(v^*)$ denote the set of such subgraphs, i.e., $S(v^*)=S_{tr}\setminus S_{tr}'$. According to Lemma 1 in App. G, the number of such subgraphs is at most $R_{N,L}$. Thus, in any mini-batch training, the only gradient terms $\frac{\partial L_D}{\partial d_v}$ affected by the removal of node $v^*$ are those associated with the subgraphs in $S(v^*)\cap B_t$:
$$
g_t(G)-g_t(G')=\sum_{v\in B_t}\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_v}\Big)-\sum_{v'\in B_t'}\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_{v'}}\Big)
=\sum_{v,v'\in(S(v^*)\cap B_t)}\Big[\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_v}\Big)-\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_{v'}}\Big)\Big], \tag{15}
$$
where $B_t'=B_t\setminus(S(v^*)\cap B_t)$. Since each gradient term is clipped to have an $\ell_2$-norm of at most $C$, it holds that:
$$
\Big\|\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_v}\Big)-\mathrm{Clip}_C\Big(\frac{\partial L_D}{\partial d_{v'}}\Big)\Big\|_2\le C. \tag{16}
$$
In the worst case, all subgraphs in $S(v^*)$ appear in $B_t$ , so we bound the $\ell_2$ -norm of the following quantity based on Lemma 2 in App. G:
$$
\|g_t(G)-g_t(G')\|_2\le C\cdot R_{N,L}=C\cdot\frac{N^{L+1}-1}{N-1}. \tag{17}
$$
The same reasoning applies when $G^\prime$ is obtained by adding a new node $v^*$ to $G$ . Since $G$ and $G^\prime$ are arbitrary node-level adjacent graphs, the proof is complete.
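The bound $R_{N,L}=\frac{N^{L+1}-1}{N-1}=\sum_{l=0}^{L}N^l$ is a geometric sum over tree levels, and a quick numerical check recovers the two figures quoted in the text (121 in Theorem 1, 13 in Fig. 4 (b)):

```python
def receptive_field_bound(N, L):
    # R_{N,L} = (N^(L+1) - 1) / (N - 1) = sum_{l=0}^{L} N^l for N > 1,
    # i.e., the maximum number of nodes in an N-ary tree of depth L
    return (N ** (L + 1) - 1) // (N - 1)
```

Since $121 \ll B = 256$, the constrained strategy shrinks the worst-case sensitivity by more than half in the setting of Theorem 1.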
### 4.4. Privacy and Complexity Analysis
The complete training process for ASGL is outlined in Algorithm 2 (see App. F.3). In this section, we present a comprehensive privacy analysis and computational complexity analysis for ASGL.
Privacy Accounting. We adopt the functional perspective of Rényi Differential Privacy (RDP; see App. C) to analyze the privacy budget of ASGL, as summarized below:
**Theorem 2**
*Given the training set size $N_{tr}$, number of epochs $n^{epoch}$, number of discriminator iterations $n^{iter}$, batch size $B_d$, maximum path length $L$, and maximum path number $N$, over $T=n^{epoch}n^{iter}$ iterations, Algorithm 2 satisfies node-level $(\alpha,2T\gamma)$-RDP, where $\gamma=\frac{1}{\alpha-1}\ln\left(\sum_{i=0}^{R_{N,L}}\beta_i\exp\left(\frac{\alpha(\alpha-1)i^2}{2\sigma^2R_{N,L}^2}\right)\right)$, $R_{N,L}=\frac{N^{L+1}-1}{N-1}$, and $\beta_i=\binom{R_{N,L}}{i}\binom{N_{tr}-R_{N,L}}{B_d-i}/\binom{N_{tr}}{B_d}$. Please refer to App. I for the proof.*
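The per-iteration RDP term $\gamma$ of Theorem 2 can be evaluated numerically: the $\beta_i$ are hypergeometric weights over the number $i$ of affected subgraphs landing in a batch. The parameter values in the test below are illustrative, not taken from the paper:

```python
import math

def rdp_gamma(alpha, sigma, N_tr, B_d, R):
    # gamma = 1/(alpha-1) * ln( sum_i beta_i * exp(alpha(alpha-1)i^2 /
    # (2 sigma^2 R^2)) ), with beta_i the hypergeometric probability that
    # exactly i of the R affected subgraphs appear in a batch of size B_d
    denom = math.comb(N_tr, B_d)
    total = 0.0
    for i in range(R + 1):
        if B_d - i < 0 or B_d - i > N_tr - R:
            continue  # combinatorially impossible batch compositions
        beta = math.comb(R, i) * math.comb(N_tr - R, B_d - i) / denom
        total += beta * math.exp(alpha * (alpha - 1) * i * i
                                 / (2 * sigma ** 2 * R ** 2))
    return math.log(total) / (alpha - 1)
```

As a sanity check, when $\sigma$ is very large every exponential term approaches 1, the weights sum to 1, and $\gamma$ vanishes, matching the intuition that heavier noise buys a smaller privacy cost.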
Complexity Analysis. To analyze the time complexity of training ASGL (App. F.3), we break down the major computations. The outer loop runs for $n^{epoch}$ epochs, and in each epoch the discriminators $D^+$ and $D^-$ are trained for $n^{iter}$ iterations. Each iteration samples a batch of $B_d$ real and fake edges to update $\theta_D$, with DP cost updates incurring complexity $O(B_d k\xi)$, where $\xi$ is the sampling probability and $k$ is the embedding dimension (08; 17). Thus, each epoch of $D^+$ or $D^-$ costs $O(n^{iter}B_d k(1+\xi))$. For the generators $G^+$ and $G^-$, each iteration samples $B_g$ fake edges to update $\theta_G$, resulting in per-epoch complexity $O(n^{iter}B_g k)$. In total, ASGL's overall time complexity over $n^{epoch}$ epochs is $O\left(2n^{epoch}n^{iter}(B_d+B_g)(1+\xi)k\right)$. This complexity is linear in the number of iterations and the batch size, demonstrating the scalability of ASGL for large-scale graphs.
## 5. Experiments
In this section, we design experiments to answer the following questions: (1) How do key parameters affect the performance of ASGL (see Section 5.2)? (2) How much does the privacy budget affect the performance of ASGL and other private signed graph learning models in edge sign prediction (see Section 5.3)? (3) How much does the privacy budget affect the performance of ASGL and other baselines in node clustering (see Section 5.4)? (4) How resilient is ASGL in defending against link stealing attacks (see Section 5.5)?
Table 1. Overview of the datasets
| Dataset | # Nodes | # Edges | # Positive edges | # Negative edges |
| --- | --- | --- | --- | --- |
| Bitcoin-Alpha | 3,783 | 14,081 | 12,769 (90.7%) | 1,312 (9.3%) |
| Bitcoin-OTC | 5,881 | 21,434 | 18,281 (85.3%) | 3,153 (14.7%) |
| WikiRfA | 11,258 | 185,627 | 144,451 (77.8%) | 41,176 (22.2%) |
| Slashdot | 13,182 | 36,338 | 30,914 (85.1%) | 5,424 (14.9%) |
| Epinions | 131,828 | 841,372 | 717,690 (85.3%) | 123,682 (14.7%) |
### 5.1. Experimental Settings
Datasets. To comprehensively evaluate our ASGL method, we conduct extensive experiments on five real-world datasets: Bitcoin-Alpha, Bitcoin-OTC, and WikiRfA (collected from https://snap.stanford.edu/data), and Slashdot and Epinions (collected from https://www.aminer.cn). These datasets are treated as undirected signed graphs, with detailed statistics summarized in Table 1 and App. J.1.
Competitive Methods. To the best of our knowledge, this work is the first to address the problem of differentially private signed graph learning while aiming to preserve model utility. Due to the absence of prior studies in this area, we construct baselines by integrating four state-of-the-art signed graph learning methods—SGCN (36), SiGAT (38), LSNE (37), and SDGNN (39) —with the DPSGD mechanism. Since these models primarily leverage structural information, we further include the private graph learning method GAP (40), using Truncated SVD-generated spectral features (36) as input to ensure a fair comparison involving node features.
Evaluation Metrics. For edge sign prediction tasks, we follow the evaluation procedures in (14; 38; 39). Specifically, we first generate embedding vectors for all nodes in the training set using each comparative method. Then, we train a logistic regression classifier using the concatenated embeddings of node pairs as input features. Finally, we use the trained classifier to predict edge signs in the test set for each method. Considering the class imbalance between positive and negative edges (see Table 1), we adopt the area under curve (AUC) as the evaluation metric to ensure a fair comparison.
For node clustering, to fairly evaluate the clustering effect of node embeddings, we compute the average cosine similarity for both positive and negative node pairs: $CD^+=\sum_{(v_i,v_j)\in E^+}\mathrm{Cos}(Z_i,Z_j)/|E^+|$ and $CD^-=\sum_{(v_n,v_m)\in E^-}\mathrm{Cos}(Z_n,Z_m)/|E^-|$, where $Z_i$ is the node embedding generated by each comparative method and $\mathrm{Cos}(\cdot)$ denotes the cosine similarity between node embeddings. We then propose the symmetric separation index (SSI) to measure the degree of clustering between the embeddings of positive and negative node pairs in the test set: $SSI=1/(|CD^+-1|+|CD^-+1|)$. A higher SSI indicates better structural proximity, with positive node pairs more tightly clustered and negative pairs more clearly separated in the unified embedding space.
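The SSI metric can be computed directly from embeddings: it diverges as $CD^+\to 1$ (positive pairs aligned) and $CD^-\to -1$ (negative pairs opposed). A minimal sketch, where the embedding dict and edge lists are made-up toy data:

```python
import numpy as np

def cos_sim(u, v):
    # cosine similarity between two embedding vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def ssi(Z, pos_edges, neg_edges):
    # CD+ / CD-: average cosine similarity over positive / negative pairs;
    # SSI = 1 / (|CD+ - 1| + |CD- + 1|) grows as positive pairs align
    # and negative pairs point in opposite directions
    cd_pos = np.mean([cos_sim(Z[i], Z[j]) for i, j in pos_edges])
    cd_neg = np.mean([cos_sim(Z[i], Z[j]) for i, j in neg_edges])
    return 1.0 / (abs(cd_pos - 1.0) + abs(cd_neg + 1.0))

# toy embeddings (hypothetical values)
Z = {0: np.array([1.0, 0.0]), 1: np.array([1.0, 0.1]),
     2: np.array([-1.0, 0.1])}
```

With the toy data, the positive pair (0, 1) is nearly parallel and the negative pair (0, 2) nearly antiparallel, so the SSI is large.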
Parameter Settings. For both edge sign prediction and node clustering tasks, we set the dimensionality of all node embeddings, $d_v$ and $g_v$ , to 128, following standard practice in prior work (41; 14). ASGL adopts DPSGD-based optimization, where the total number of training epochs is determined by the moments accountant (MA) (04), which offers tighter privacy tracking across multiple iterations. We set the iteration number $n^iter$ to 10 for Bitcoin-Alpha and Bitcoin-OTC, 15 for WikiRfA and Slashdot, and 20 for Epinions. Since all comparative methods are trained using DPSGD, their number of training epochs depends on the privacy budget. As discussed in Section 5.2, the maximum path number $N$ and path length $L$ are varied to analyze their impact on ASGL’s utility. For privacy parameters, we follow (02; 51; 08) by fixing $δ=10^-5$ and $C=1$ , and vary the privacy budget $ε∈\{1,2,\dots,6\}$ to evaluate utility under different privacy levels. To ensure fair comparison, we modify the official GitHub implementations of all baselines and adopt the best hyperparameter settings reported in their original papers. To minimize random errors, each experiment is repeated five times.
### 5.2. Impact of Key Parameters
In this section, we perform experiments on two datasets by varying the maximum number $N$ and the maximum length $L$ of paths in the BFS-trees, providing a rationale for parameter selection.
#### 5.2.1. The effect of the parameter $N$
As discussed in Section 4.3, the greater the number of neighbors a rooted node has, the more paths can be obtained through random walks. Therefore, the maximum number of paths $N$ also depends on the node degrees. As shown in Fig. 8 (see App. J.2), for the Bitcoin-Alpha and Slashdot datasets, most nodes in signed graphs have degrees below 3. In addition, we investigate the impact of $N$ by varying its value within $\{2,3,4,5,6\}$ . As shown by the average AUC results in Table 2, the proposed ASGL method achieves optimal edge prediction performance at $N=3$ for Bitcoin-Alpha and $N=4$ for Slashdot. Considering both gradient sensitivity and computational efficiency, we adopt $N=3$ for subsequent experiments.
#### 5.2.2. The effect of the parameter $L$
In this experiment, we evaluate the impact of the path length $L$ on the utility of ASGL by varying its value. As shown in Table 3, ASGL achieves the best performance on both datasets when $L=4$ . This result is closely aligned with the structural characteristics of the signed graphs: As summarized in Fig. 9 (see App. J.2), most node pairs in these datasets exhibit maximum path lengths of 3 or 4. Therefore, in subsequent experiments, we set $L=4$ , as it adequately covers the receptive field of most nodes.
Table 2. Summary of average AUC with different maximum path counts $N$ under $ε=3$ and $L=3$ . (BOLD: Best)
| Dataset | $N=2$ | $N=3$ | $N=4$ | $N=5$ | $N=6$ |
| --- | --- | --- | --- | --- | --- |
| Bitcoin-Alpha | 0.8025 | **0.8562** | 0.8557 | 0.8498 | 0.8553 |
| Slashdot | 0.7723 | 0.8823 | **0.8888** | 0.8871 | 0.8881 |
Table 3. Summary of average AUC with different path lengths $L$ under $ε=3$ and $N=3$ . (BOLD: Best)
| Dataset | $L=2$ | $L=3$ | $L=4$ | $L=5$ | $L=6$ |
| --- | --- | --- | --- | --- | --- |
| Bitcoin-Alpha | 0.7409 | 0.8443 | **0.8587** | 0.8545 | 0.8516 |
| Slashdot | 0.7629 | 0.8290 | **0.8833** | 0.8809 | 0.8807 |
Figure 5. AUC vs. Privacy cost ( $ε$ ) of private signed graph learning methods in edge sign prediction.
Figure 6. Symmetric separation index (SSI) vs. privacy cost ($ε$) of private signed graph learning methods in node clustering.
### 5.3. Impact of Privacy Budget on Edge Sign Prediction
To evaluate the effectiveness of different private graph learning methods on edge sign prediction, we compare their AUC scores under privacy budgets $ε$ ranging from 1 to 6, as shown in Fig. 5 and Table 6 (see App. J.3). The proposed ASGL consistently outperforms all baselines across all privacy levels and datasets, owing to its ability to generate node embeddings that preserve connectivity distributions while satisfying DP guarantees. Although SDGNN achieves the second-best performance, it exhibits a noticeable gap from ASGL under limited privacy budgets ($ε<4$). SiGAT, SGCN, and LSNE employ the moments accountant (MA) to mitigate excessive privacy budget consumption, yet still suffer from poor convergence and degraded utility under limited privacy budgets. GAP adopts aggregation perturbation to ensure node-level DP, but its performance is limited by noisy neighborhood information, which hinders its ability to capture structural information for edge prediction tasks.
### 5.4. Impact of Privacy Budget on Node Clustering
To further examine the capability of ASGL in preserving signed node proximity, we conduct a fair comparison across multiple private graph learning methods using the SSI metric. As shown in Fig. 6 and Table 7 (see App. J.4), ASGL consistently outperforms all baselines across different datasets and privacy budgets, demonstrating that ASGL is capable of generating node embeddings that effectively preserve signed node proximity. Notably, GAP achieves the second-best clustering performance on most datasets (excluding Slashdot), benefiting from its ability to leverage node features for clustering nodes. Nevertheless, to guarantee node-level DP, GAP needs to repeatedly query sensitive graph information in every training iteration, resulting in significantly higher privacy costs.
### 5.5. Resilience Against Link Stealing Attack
To assess the effectiveness of ASGL in preserving the privacy of edge information, we perform link stealing attacks (LSA) across all datasets and compare the resilience of all methods to such attacks in edge sign prediction tasks. The LSA setup is detailed in App. J.5. Attack performance is measured by the AUC score, averaged over five independent runs. Table 4 summarizes the effectiveness of LSA on various trained target models and datasets. It can be observed that as the privacy budget $ε$ increases, the average AUC of LSA consistently improves, indicating reduced privacy protection of the target models and an increased success rate of the attack. Overall, the average AUC of the attack is close to 0.50 in most cases, indicating unsuccessful edge inference and the robustness of DP against such an attack. When $ε=3$, ASGL demonstrates stronger resistance to LSA across most datasets, with AUC values consistently below 0.57. This suggests that ASGL offers defense performance comparable to other differentially private graph learning methods.
Table 4. The average AUC of LSA on different comparisons and datasets. (BOLD: Best resilience against LSA)
| $ε$ | Dataset | SGCN | SDGNN | SiGAT | LSNE | GAP | ASGL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Bitcoin-Alpha | 0.5072 | 0.7091 | 0.5079 | 0.5145 | 0.5404 | **0.5053** |
| 1 | Bitcoin-OTC | **0.5081** | 0.7118 | 0.5119 | 0.5409 | 0.5660 | 0.5466 |
| 1 | Slashdot | 0.5538 | 0.8232 | 0.5551 | 0.5609 | 0.5460 | **0.5325** |
| 1 | WikiRfA | **0.5148** | 0.5424 | 0.5427 | 0.5293 | 0.5470 | 0.5302 |
| 1 | Epinions | 0.7877 | 0.6329 | 0.5114 | 0.5129 | 0.5188 | **0.5092** |
| 3 | Bitcoin-Alpha | 0.5547 | 0.7514 | 0.5533 | 0.5542 | 0.5598 | **0.5430** |
| 3 | Bitcoin-OTC | 0.5655 | 0.7273 | 0.5684 | 0.5734 | 0.5765 | **0.5612** |
| 3 | Slashdot | 0.5742 | 0.8394 | 0.6267 | 0.5730 | 0.6464 | **0.5634** |
| 3 | WikiRfA | **0.5276** | 0.5466 | 0.5542 | 0.5696 | 0.5772 | 0.5624 |
| 3 | Epinions | 0.7981 | 0.6456 | 0.5588 | 0.5629 | 0.5665 | **0.5542** |
## 6. Related Work
Signed graph learning. In recent years, deep learning approaches have been increasingly adopted for signed graph learning. For example, SiNE (47) extracts signed structural information based on balance theory and designs an objective function to learn signed node proximity. Furthermore, the GNN model (36) and its variants (38; 39) are used to learn signed relationships between nodes in multi-hop neighborhoods. However, these GNN-based methods depend on the message-passing mechanism, which is sensitive to noisy interactions between nodes (49). To address this issue, Lee et al. (14) extend the adversarial framework to signed graphs by generating both positive and negative node embeddings. Still, these signed graph learning models are vulnerable to user-linkage attacks.
Private graph learning. Recent works have increasingly focused on developing DP methods to address privacy leakage in GNNs. For instance, Daigavane et al. (33) propose a DP-GNN method based on gradient perturbation. However, this method fails to balance utility and privacy due to excessive noise. Furthermore, GAP (40) and DPRA (50) are proposed to ensure the privacy of sensitive node embeddings by perturbing node aggregations. Despite its success in node classification, GAP repeatedly queries private node information during training, which consumes more privacy budget when implementing DPSGD. DPRA is not well-suited for signed graph embedding learning, as its edge perturbation strategy introduces cascading errors under balance theory.
## 7. Conclusion
In this paper, we propose ASGL that achieves strong model utility while providing node-level DP guarantees. To address the cascading error and gradient polarity reversals from edge sign flips, ASGL separately processes positive and negative subgraphs within a shared embedding space using a DPSGD-based adversarial mechanism to learn high-quality node embeddings. To further reduce gradient sensitivity, we introduce a constrained BFS-tree strategy that limits node receptive fields and enables gradient decoupling. This effectively reduces the required noise scale and enhances model performance. Extensive experiments demonstrate that ASGL achieves a favorable privacy-utility trade-off. Our future work is to extend the ASGL framework by considering edge directions and weights.
Acknowledgements. This work was supported by the National Natural Science Foundation of China (Grant No: 62372122 and 92270123), and the Research Grants Council (Grant No: 15208923, 25207224, and 15207725), Hong Kong SAR, China.
## Appendix A Adversarial Learning on Graph
The adversarial learning model for graph embedding (03) is illustrated as follows. Let $N(v_r)$ be the node set directly connected to $v_r$ . We denote the underlying true connectivity distribution of node $v_r$ as the conditional probability $p(v|v_r)$ , which captures the preference of $v_r$ to connect with other nodes $v∈ V$ . In other words, the neighbor set $N(v_r)$ can be interpreted as a set of observed nodes drawn from $p(v|v_r)$ . The adversarial learning for the graph $G$ is conducted by the following two modules:
Generator $G$: Through optimizing the generator parameters $θ_G$, this module aims to approximate the underlying true connectivity distribution and generate (or select) the most likely nodes $v∈ V$ that are relevant to $v_r$. Specifically, the fake (i.e., estimated) connectivity distribution of node $v_r$ is calculated as follows (the term "fake" indicates that although a node $v$ selected by the generator is relevant to $v_r$, there is no actual edge between them):
$$
p^\prime(v|v_r)=G\left(v|v_r;θ_G\right)=\frac{\exp\left(g_v^⊤g_{v_r}\right)}{∑_{v^\prime≠ v_r}\exp\left(g_{v^\prime}^⊤g_{v_r}\right)}, \tag{18}
$$
where $g_v,g_{v_r}∈ℝ^k$ are the $k$-dimensional embedding vectors of nodes $v$ and $v_r$, respectively, and $θ_G$ is the union of all $g_v$'s. To update $θ_G$ in each iteration, a set of node pairs $(v,v_r)$, not necessarily directly connected, is sampled according to $p^\prime(v|v_r)$. The key purpose of the generator $G$ is to deceive the discriminator $D$, and thus its loss function $L_G$ is determined as follows:
$$
L_G=\min_{θ_G}∑_{r=1}^{|V|}E_{v∼ G\left(·\mid v_r;θ_G\right)}\left[\log\left(1-D\left(v_r,v\midθ_D\right)\right)\right], \tag{19}
$$
where the discriminant function $D(·)$ estimates the probability that a given node pair $(v,v_r)$ is considered real, i.e., directly connected.
Discriminator $D$ : This module tries to distinguish between real node pairs and fake node pairs synthesized by the generator $G$ . Accordingly, the discriminator estimates the probability that an edge exists between $v_r$ and $v$ , denoted as:
$$
D(v_r,v|θ_D)=σ\left(d_v^⊤d_{v_r}\right)=\frac{1}{1+\exp\left(-d_v^⊤d_{v_r}\right)}, \tag{20}
$$
where $d_v,d_{v_r}∈ℝ^k$ are the $k$-dimensional vectors corresponding to the $v$-th and $v_r$-th rows of the discriminator parameters $θ_D$, respectively, and $σ(·)$ denotes the sigmoid function of the inner product of these two vectors. Given the sets of real and fake node pairs, the loss function of $D$ can be derived as:
$$
L_D=\max_{θ_D}∑_{r=1}^{|V|}\left(E_{v∼ p\left(·\mid v_r\right)}\left[\log D\left(v,v_r\midθ_D\right)\right]+E_{v∼ G\left(·\mid v_r;θ_G\right)}\left[\log\left(1-D\left(v_r,v\midθ_D\right)\right)\right]\right). \tag{21}
$$
In summary, the generator $G$ and discriminator $D$ operate as two adversarial components: the generator $G$ aims to fit the true connectivity distribution $p(v|v_r)$ , generating candidate nodes $v$ that resemble the real neighbors of $v_r$ to deceive the discriminator $D$ . In contrast, the discriminator $D$ seeks to distinguish whether a given node is a true neighbor of $v_r$ or one generated by $G$ . Formally, $D$ and $G$ are engaged in a two-player minimax game with the following loss function:
$$
\min_{θ_G}\max_{θ_D}L(G,D)=∑_{r=1}^{|V|}\left(E_{v∼ p\left(·\mid v_r\right)}\left[\log D\left(v,v_r\midθ_D\right)\right]+E_{v∼ G\left(·\mid v_r;θ_G\right)}\left[\log\left(1-D\left(v_r,v\midθ_D\right)\right)\right]\right). \tag{22}
$$
Based on Eq. (22), the parameters $θ_D$ and $θ_G$ are updated by alternately maximizing and minimizing the loss function $L(G,D)$. Competition between $G$ and $D$ results in mutual improvement until the distribution generated by $G$ becomes indistinguishable from the true connectivity distribution.
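The two scoring functions above (Eqs. (18) and (20)) are simple to sketch numerically. The following is a minimal illustration rather than the paper's implementation; NumPy is assumed, and the random embeddings stand in for trained parameters $θ_G$ and $θ_D$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, num_nodes = 8, 5                        # embedding dimension and |V|
theta_G = rng.normal(size=(num_nodes, k))  # generator embeddings g_v
theta_D = rng.normal(size=(num_nodes, k))  # discriminator embeddings d_v

def generator_distribution(v_r, theta_G):
    """Estimated connectivity distribution p'(v | v_r) of Eq. (18):
    a softmax of inner products g_v^T g_{v_r} over all v != v_r."""
    scores = theta_G @ theta_G[v_r]
    mask = np.arange(len(scores)) != v_r
    probs = np.zeros_like(scores)
    probs[mask] = np.exp(scores[mask] - scores[mask].max())  # stable softmax
    return probs / probs.sum()

def discriminator_prob(v_r, v, theta_D):
    """Probability that (v_r, v) is a real edge, Eq. (20): sigmoid(d_v^T d_{v_r})."""
    return 1.0 / (1.0 + np.exp(-theta_D[v] @ theta_D[v_r]))

p_fake = generator_distribution(0, theta_G)  # sums to 1 and excludes v_r itself
```

Sampling fake node pairs from `p_fake` and scoring them with `discriminator_prob` reproduces one round of the minimax game in Eq. (22).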
## Appendix B Notation Introduction
The frequently used notations are summarized in Table 5.
Table 5. Notation Summary
| Notation | Description |
| --- | --- |
| $G,G^+,G^-$ | Signed graph, positive subgraph, negative subgraph |
| $V,E^+,E^-$ | Node set, positive and negative edge sets |
| $N(v_r)$ | Neighbor node set of node $v_r$ |
| $θ_D$ | Shared parameters of discriminators $D^+$ and $D^-$ |
| $θ_G$ | Shared parameters of generators $G^+$ and $G^-$ |
| $d_{v_r}$ | Node embedding of node $v_r$ in the discriminators |
| $g_{v_r}$ | Node embedding of node $v_r$ in the generators |
| $N,L$ | Maximum number and length of generated paths |
| $ε,δ$ | Privacy parameters |
| $N(0,σ^2)$ | Gaussian distribution with variance $σ^2$ |
| $P_{rt}$ | A path from the root node $v_r$ to the target node $v_t$ |
| $T^+_{v_r},T^-_{v_r}$ | Positive and negative BFS-trees rooted at $v_r$ |
| $p_{true}^+(·\mid v_r)$ | Positive connectivity distribution of $(v_r,v)∈ E^+$ |
| $p_{true}^-(·\mid v_r)$ | Negative connectivity distribution of $(v_r,v)∈ E^-$ |
| $p^+_{T^+_{v_r}}(v\mid v_r)$ | Positive relevance probability between $v_r$ and $v$ |
| $p^-_{T^-_{v_r}}(v\mid v_r)$ | Negative relevance probability between $v_r$ and $v$ |
## Appendix C Rényi Differential Privacy
Since standard DP can be overly strict for deep learning, we follow prior work (30; 31) and adopt an alternative definition—Rényi Differential Privacy (RDP) (07). RDP offers tighter and more efficient composition bounds, enabling more accurate estimation of cumulative privacy cost over multiple queries on graphs.
**Definition (Rényi Differential Privacy (07))**
*The Rényi divergence quantifies the similarity between output distributions of a mechanism and is defined as:
$$
D_α(P\|Q)=\frac{1}{α-1}\log\left(∑_xP(x)^αQ(x)^{1-α}\right), \tag{23}
$$
where $P(x)$ and $Q(x)$ are probability distributions over the output space, and $α>1$ denotes the order of the divergence; its choice allows for different levels of sensitivity to the output distribution. Accordingly, an algorithm $M$ satisfies $(α,ε)$-RDP if, for any two adjacent graphs $G$ and $G^\prime$, the following condition holds: $D_α\left(M(G)\|M\left(G^\prime\right)\right)≤ε$.*
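For discrete output distributions, Eq. (23) can be evaluated directly; a minimal sketch (terms with $P(x)=0$ are treated as zero):

```python
import math

def renyi_divergence(P, Q, alpha):
    """D_alpha(P || Q) of Eq. (23) for discrete distributions P and Q (alpha > 1)."""
    if alpha <= 1:
        raise ValueError("the order alpha must exceed 1")
    s = sum(p ** alpha * q ** (1 - alpha) for p, q in zip(P, Q) if p > 0)
    return math.log(s) / (alpha - 1)
```

Identical distributions give a divergence of zero, and larger orders $α$ weight the worst-case likelihood ratio more heavily.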
Since RDP is an extension of DP, it can be converted into $(ε,δ)$-DP based on Proposition 3 in (07), as outlined below.
**Lemma (Conversion from RDP to DP (07))**
*If a mechanism $M$ satisfies $(α,ε)$ -RDP, it also satisfies $(ε+\log(1/δ)/(α-1),δ)$ -DP for any $δ∈(0,1)$ .*
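In practice the conversion is applied over a grid of orders $α$ and the tightest resulting DP bound is kept; a minimal sketch (the helper names are illustrative, not from the paper):

```python
import math

def rdp_to_dp(alpha, eps_rdp, delta):
    """(alpha, eps)-RDP implies (eps + log(1/delta) / (alpha - 1), delta)-DP."""
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1)

def tightest_dp(rdp_curve, delta):
    """Given (alpha, eps(alpha)) pairs, pick the order minimizing the DP epsilon."""
    return min(rdp_to_dp(a, e, delta) for a, e in rdp_curve)
```

For a mechanism whose RDP cost grows linearly in $α$ (such as the Gaussian mechanism below), the minimum balances the linear term against the $\log(1/δ)/(α-1)$ term.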
## Appendix D Gaussian Mechanism
Let $f$ be a function that maps a graph $G$ to $k$-dimensional node vectors $Z∈ℝ^{|V|× k}$. To ensure the RDP guarantees of $f$, it is common to inject Gaussian noise into its output (07). The noise scale depends on the sensitivity of $f$, defined as $Δ_f=\max_{G,G^\prime}\left\|f(G)-f\left(G^\prime\right)\right\|_2$. Specifically, the privatized mechanism is defined as $M(G)=f(G)+N(0,σ^2I)$, where $N(0,σ^2I)$ is the Gaussian distribution with zero mean and variance $σ^2$. This yields an $(α,ε)$-RDP mechanism $M$ for all $α>1$ with $ε=αΔ_f^2/2σ^2$.
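A minimal sketch of the mechanism and its RDP cost, following the formulas above (NumPy assumed; not the paper's implementation):

```python
import numpy as np

def gaussian_mechanism(f_output, sigma, rng):
    """M(G) = f(G) + N(0, sigma^2 I): add i.i.d. Gaussian noise to each entry."""
    return f_output + rng.normal(0.0, sigma, size=f_output.shape)

def gaussian_rdp_epsilon(alpha, sensitivity, sigma):
    """RDP cost of the mechanism: eps = alpha * Delta_f^2 / (2 sigma^2), alpha > 1."""
    return alpha * sensitivity ** 2 / (2.0 * sigma ** 2)
```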
Input: Graph $G=\{G^+,G^-\}$; the training set of nodes $V_{tr}$; the maximum path length $L$; the maximum path number $N$.
Output: The training set of subgraphs $S_{tr}$.
1 for $v_r∈ V_{tr}$ do
2 Construct BFS-trees $T^+_{v_r}$ (or $T^-_{v_r}$) rooted at the node $v_r$ on $G^+$ (or $G^-$);
3 for $n=0;n<N$ do
4 Based on the positive and negative relevance probabilities in Eqs. (6) and (7), conduct a random walk on $T^+_{v_r}$ (or $T^-_{v_r}$) to form a path $P_{rt}^{(n)+}$ (or $P_{rt}^{(n)-}$) of length $L$;
5 Add all nodes $v$ (excluding those in $N(v_r)$) along the path $P_{rt}^{(n)+}$ (or $P_{rt}^{(n)-}$) as fake edges $(v_r,v)$ to the corresponding subgraph set $S_{tr}^+$ (or $S_{tr}^-$);
6 Drop $P_{rt}^{(n)+}$ (or $P_{rt}^{(n)-}$) from $T^+_{v_r}$ (or $T^-_{v_r}$).
7 end for
8 end for
Return $S_{tr}=\{S_{tr}^+,S_{tr}^-\}$;
Algorithm 1 SAMPLE-SUBGRAPHS by Constrained BFS-trees
## Appendix E BFS-tree Strategy
Fig. 7 provides an illustrative example of the BFS-tree strategy. Let $v_{r_0}$ be the root node. We first compute the transition probabilities between $v_{r_0}$ and its neighbors $N(v_{r_0})$. The next node $v_{r_1}$ is then sampled as the first step of the walk, in proportion to these transition probabilities. Similarly, the next node $v_{r_2}$ is selected based on the transition probabilities between $v_{r_1}$ and its neighbors $N(v_{r_1})$. The random walk continues until it reaches the terminal node $v_{r_n}$, and the unconnected node pairs $(v_{r_0},v_{r_k})^+$ for $k=2,3,…,n$ are regarded as fake positive edges.
Figure 7. Random-walk-based edge generation for generator $G^+$ or $G^-$ . Red digits denote the transition probabilities (Eqs. (6) and (7)), and red arrows indicate the walk directions.
## Appendix F Details of Algorithm
### F.1. The Parameter Update of Generators
Given fake positive/negative edges $(v_r,v_t)$ from $G^+/G^-$, the gradient of the joint loss function (Eq. (10)) with respect to $θ_G$ is derived via the policy gradient (03):
$$
∇ L_G=\begin{cases}∑_{r=1}^{|V^+|}∇_{θ_G}\log G^+\left(v_t|v_r;θ_G\right)\log\left(1-D^+\left(v_t,v_r\right)\right),&\text{if }(v_r,v_t)\text{ is a fake positive edge};\\ ∑_{r=1}^{|V^-|}∇_{θ_G}\log G^-\left(v_t|v_r;θ_G\right)\log\left(1-D^-\left(v_t,v_r\right)\right),&\text{if }(v_r,v_t)\text{ is a fake negative edge}.\end{cases} \tag{24}
$$
### F.2. SAMPLE-SUBGRAPHS by Constrained BFS-trees
As shown in Algorithm 1, during the random walk on the BFS-tree $T^+_{v_r}$ or $T^-_{v_r}$ rooted at $v_r∈ V_{tr}$, we generate multiple unique paths while constraining their number and length by the parameters $N$ and $L$, respectively. This process yields a training subgraph set $S_{tr}$ composed of constrained paths.
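A minimal Python sketch of this sampling routine, under two stated simplifications: the relevance probabilities of Eqs. (6) and (7) are not reproduced in this appendix, so uniform transitions stand in for them, and the path-removal step (Line 6 of Algorithm 1) is omitted:

```python
import random
from collections import deque

def bfs_tree(adj, root):
    """Children lists of the BFS-tree rooted at `root` over the adjacency dict `adj`."""
    parent, queue, children = {root: None}, deque([root]), {v: [] for v in adj}
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                children[u].append(w)
                queue.append(w)
    return children

def sample_subgraphs(adj, v_train, N, L, rng=random):
    """At most N root-to-leaf walks of at most L steps per training node; nodes on
    a walk outside N(v_r) are collected as fake edges (v_r, v)."""
    fake_edges = []
    for v_r in v_train:
        children = bfs_tree(adj, v_r)
        for _ in range(N):
            path, u = [v_r], v_r
            for _ in range(L):              # constrain the path length
                if not children[u]:
                    break
                u = rng.choice(children[u])  # uniform stand-in for Eqs. (6)-(7)
                path.append(u)
            fake_edges += [(v_r, v) for v in path[1:] if v not in adj[v_r]]
    return fake_edges
```

On a signed graph, the routine is run separately on $G^+$ and $G^-$ to build $S_{tr}^+$ and $S_{tr}^-$.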
Input: Graph $G$; training set of nodes $V_{tr}$; maximum path length $L$; maximum path number $N$; batch sizes $B_d$ and $B_g$ of sampled edges in the discriminators and generators; number of epochs $n^{epoch}$; number of iterations for generators and discriminators per epoch $n^{iter}$; privacy parameters $δ$, $ε$, $σ$.
Output: Privacy-preserving node embedding $g_v∈θ_G$ for downstream tasks.
1 According to edge signs, divide $G$ into $G^+$ and $G^-$;
2 Generate the training subgraph set $S_{tr}=\{S_{tr}^+,S_{tr}^-\}$ based on SAMPLE-SUBGRAPHS($G,V_{tr},N,L$) in Algorithm 1;
3 for $v_r∈ V_{tr}$ do
4 Sample all real positive edges $(v_r,v_t)^+$ from $G^+$;
5 Sample all fake positive edges $(v_r,v_t^\prime)^+$ from $S_{tr}^+$;
6 Sample all real negative edges $(v_r,v_t)^-$ from $G^-$;
7 Sample all fake negative edges $(v_r,v_t^\prime)^-$ from $S_{tr}^-$;
8 $E^+_D.add((v_r,v_t)^+,(v_r,v_t^\prime)^+)$, $E^+_G.add((v_r,v_t^\prime)^+)$;
9 $E^-_D.add((v_r,v_t)^-,(v_r,v_t^\prime)^-)$, $E^-_G.add((v_r,v_t^\prime)^-)$;
10 end for
11 for $epoch=0;epoch<n^{epoch}$ do
12 Train the discriminator $D^+$:
13 for $iter=0;iter<n^{iter}$ do
14 Sample $B_d$ real and fake positive edges from $E^+_D$;
15 Update $θ_D$ via Eqs. (8) and (11), and apply gradient perturbation via Eq. (13);
16 Calculate the privacy spent $\hat{δ}$ given the target $ε$;
17 Stop optimization if $\hat{δ}≥δ$.
18 end for
19 Train the generator $G^+$:
20 for $iter=0;iter<n^{iter}$ do
21 Subsample $B_g$ fake positive edges from $E^+_G$;
22 Update $θ_G$ via Eqs. (4) and (24).
23 end for
24 Train the discriminator $D^-$:
25 for $iter=0;iter<n^{iter}$ do
26 Subsample $B_d$ real and fake negative edges from $E^-_D$;
27 Update $θ_D$ via Eqs. (9) and (11), and apply gradient perturbation via Eq. (13);
28 Calculate the privacy spent $\hat{δ}$ given the target $ε$;
29 Stop optimization if $\hat{δ}≥δ$.
30 end for
31 Train the generator $G^-$:
32 for $iter=0;iter<n^{iter}$ do
33 Subsample $B_g$ fake negative edges from $E^-_G$;
34 Update $θ_G$ via Eqs. (5) and (24).
35 end for
36 end for
37 Return the privacy-preserving node embedding $g_v∈θ_G$;
Algorithm 2 ASGL Algorithm
### F.3. The Training of ASGL
The training process of ASGL is outlined in Algorithm 2 and consists of the following main steps:
(1) Signed graph decomposition and subgraph sampling: Given an input signed graph $G$, we first divide it into a positive subgraph $G^+$ and a negative subgraph $G^-$ based on edge signs. Then, for each node $v_r∈ V_{tr}$, constrained BFS-trees are constructed from $G^+$ and $G^-$, respectively, to generate a set of training subgraphs $S_{tr}=\{S_{tr}^+,S_{tr}^-\}$ by limiting the maximum number of paths $N$ and the maximum path length $L$. These subgraphs are used to sample fake edges for adversarial training.
(2) Edge sampling for adversarial learning: For each node $v_r$, we sample real edges from $G^+$ and $G^-$, and fake edges from $S_{tr}^+$ and $S_{tr}^-$. These edges are organized into four sets:
- $E_D^+$ : real and fake positive edges for training $D^+$ .
- $E_G^+$ : fake positive edges for training $G^+$ .
- $E_D^-$ : real and fake negative edges for training $D^-$ .
- $E_G^-$ : fake negative edges for training $G^-$ .
(3) Adversarial training with DPSGD: The training is performed over $n^{epoch}$ epochs. In each epoch:
- Discriminator training: For each discriminator $D^+$ and $D^-$, we perform $n^{iter}$ iterations. In each iteration, a batch of $B_d$ real and fake edges is sampled. The discriminator parameters $θ_D$ are updated using gradient descent with noise addition according to the DPSGD mechanism (Eq. (13)), ensuring node-level DP. The privacy spent $\hat{δ}$ is tracked, and training stops early if $\hat{δ}≥δ$.
- Generator training: Each generator $G^+$ and $G^-$ is trained for $n^{iter}$ iterations. In each iteration, a batch of $B_g$ fake edges is sampled, and the generator parameters $θ_G$ are updated via the policy gradient in Eq. (24).
(4) Embedding output for downstream tasks: After all epochs, the generator parameters $θ_G$ encode the privacy-preserving node embeddings $g_v∈θ_G$ , which are used for downstream tasks such as edge sign prediction and node clustering.
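The clip-and-noise update in step (3) can be sketched as follows. Eq. (13) is not reproduced in this appendix, so the standard DPSGD form is assumed here; `R_NL` denotes the receptive-field bound $R_{N,L}$ from Theorem 1 (NumPy assumed, illustrative only):

```python
import numpy as np

def perturbed_gradient(per_node_grads, C, sigma, R_NL, rng):
    """Clip each node's gradient to L2 norm C, sum them, and add Gaussian noise
    scaled by the gradient sensitivity Delta_g = R_{N,L} * C."""
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12)) for g in per_node_grads]
    delta_g = R_NL * C  # sensitivity from Theorem 1
    return np.sum(clipped, axis=0) + rng.normal(0.0, sigma * delta_g, size=clipped[0].shape)
```

A smaller receptive field $R_{N,L}$ lowers the sensitivity and hence the injected noise, which is the stated purpose of the constrained BFS-tree strategy.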
## Appendix G Details of Lemma
The following lemmas are used for proving Theorem 1:
**Lemma 1 (Receptive field of a node)**
*As shown in Fig. 4 (b), we define the receptive field of a node as the region (i.e., the set of nodes) over which it can exert influence. Accordingly, for a subgraph constructed from paths sampled on constrained BFS-trees (Fig. 4 (b)), the maximum receptive field size of $v_r$ is given by $R_{N,L}=∑_{l=0}^{L}N^l=\frac{N^{L+1}-1}{N-1}≤ B$.*
**Lemma 2**
*Let $S_{tr}$ denote the training set of subgraphs constructed from constrained BFS-tree paths, and let $S(v)⊂ S_{tr}$ denote the subset of subgraphs that contain the node $v$. Since $R_{N,L}$ upper-bounds the number of occurrences of any node in $S_{tr}$, it follows that $|S(v)|≤ R_{N,L}$. The proof of Lemma 2 is given in App. H.*
## Appendix H Proof of Lemma 2
Proof. We proceed by induction (13) on the path length $L$ of the BFS-tree.
Base case: When $L=0$, each sampled subgraph contains exactly one training node $v∈ V_{tr}$, namely its root. Thus, every node appears in one subgraph, trivially satisfying the bound $|S(v)|=R_{N,0}=1$.
Inductive hypothesis: Assume that for some fixed $L≥ 0$, any $v∈ V_{tr}$ appears in at most $R_{N,L}$ subgraphs constructed from constrained BFS-tree paths. Let $S^L(v)$ denote the subgraph set with path length $L$. The hypothesis is thus $|S^L(v)|≤ R_{N,L}$ for any $v$.
Inductive step: We now show that the hypothesis also holds for path length $L+1$. Let $T_{u^\prime}$ denote an $(L+1)$-length BFS-tree rooted at $u^\prime$. If $T_{u^\prime}∈ S^{L+1}(v)$ with $u^\prime≠ v$, there must exist a node $u∈ T_{u^\prime}$ such that $T_u∈ S^L(v)$. By the hypothesis, there are at most $R_{N,L}$ such trees $T_u$, and according to the setting of Algorithm 1, each can be extended by at most $N$ choices of $u^\prime$. Counting the tree rooted at $v$ itself, we obtain the upper bound matching the inductive hypothesis for $L+1$:
$$
\left|S^{L+1}(v)\right|≤ N· R_{N,L}+1=\frac{N^{L+2}-1}{N-1}=R_{N,L+1}. \tag{25}
$$
By induction, Lemma 2 holds for all $L≥ 0$.
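The closed form of $R_{N,L}$ satisfies the recurrence $R_{N,L+1}=N· R_{N,L}+1$, which drives the inductive step; a minimal numeric check:

```python
def receptive_field(N, L):
    """R_{N,L} = sum_{l=0}^{L} N^l = (N^{L+1} - 1) / (N - 1), for N >= 2."""
    return (N ** (L + 1) - 1) // (N - 1)
```

For instance, `receptive_field(3, 2)` counts the root, its at most 3 children, and its at most 9 grandchildren.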
## Appendix I Proof of Theorem 2
The following lemmas are used for proving Theorem 2:
**Lemma 3 (Adaptation of Lemma 5 from (34))**
*Let $N(μ,σ^2)$ denote the Gaussian distribution with mean $μ$ and variance $σ^2$. It holds that:
$$
D_α\left(N\left(μ,σ^2\right)\|N\left(0,σ^2\right)\right)=\frac{αμ^2}{2σ^2}. \tag{26}
$$*
**Lemma 4 (Adaptation of Lemma 25 from (33))**
*Assume $μ_0,…,μ_n$ and $η_0,…,η_n$ are probability distributions over some domain $Z$ whose Rényi divergences satisfy $D_α(μ_0\|η_0)≤ε_0,…,D_α(μ_n\|η_n)≤ε_n$ for some given $ε_0,…,ε_n$. Let $ρ$ be a probability distribution over $\{0,…,n\}$. Denote by $μ_ρ$ (resp. $η_ρ$) the probability distribution on $Z$ obtained by sampling $i$ from $ρ$ and then sampling from $μ_i$ (resp. $η_i$). Then:
$$
D_α\left(μ_ρ\|η_ρ\right)≤\frac{1}{α-1}\ln E_{i∼ρ}\left[e^{ε_i(α-1)}\right]=\frac{1}{α-1}\ln∑_{i=0}^{n}ρ_ie^{ε_i(α-1)}. \tag{27}
$$*
Proof of Theorem 2. Consider any minibatch $B_t$ randomly sampled from the training subgraph set $S_{tr}$ of Algorithm 2 at iteration $t$. For the subset $S(v^*)⊂ S_{tr}$ containing node $v^*$, its size is bounded by $R_{N,L}$ (Lemma 2). Define the random variable $β=|S(v^*)∩ B_t|$; with $|S_{tr}|=N_{tr}$ and $|B_t|=B_d$, it follows the hypergeometric distribution $\mathrm{Hypergeometric}(N_{tr},R_{N,L},B_d)$ (32):
$$
β_i=P[β=i]=\frac{\binom{R_{N,L}}{i}\binom{N_{tr}-R_{N,L}}{B_d-i}}{\binom{N_{tr}}{B_d}}. \tag{28}
$$
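The subsampling probabilities $β_i$ in Eq. (28) can be computed exactly with Python's `math.comb`; a minimal sketch (the helper name is illustrative):

```python
from math import comb

def beta_pmf(i, N_tr, R_NL, B_d):
    """P[beta = i] of Eq. (28): probability that a size-B_d minibatch drawn from
    the N_tr training subgraphs contains exactly i of the R_NL subgraphs
    holding node v*."""
    return comb(R_NL, i) * comb(N_tr - R_NL, B_d - i) / comb(N_tr, B_d)
```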
Next, consider the training of the discriminators (Lines 12–18 and 24–30 in Algorithm 2). Let $G$ and $G^\prime$ be two adjacent graphs differing only in the presence of node $v^*$ and its associated signed edges. Based on the gradient perturbation applied in Lines 15 and 27 of Algorithm 2, we have:
$$
\begin{aligned}
\tilde{g}_t&=g_t+N\left(0,σ^2Δ_g^2I\right)=∑_{v∈ B_t}\mathrm{Clip}_C\left(\frac{∂ L_D}{∂ d_v}\right)+N\left(0,σ^2Δ_g^2I\right),\\
\tilde{g}^\prime_t&=g^\prime_t+N\left(0,σ^2Δ_g^2I\right)=∑_{v^\prime∈ B^\prime_t}\mathrm{Clip}_C\left(\frac{∂ L_D}{∂ d_{v^\prime}}\right)+N\left(0,σ^2Δ_g^2I\right),
\end{aligned} \tag{29}
$$
where $Δ_g=R_{N,L}C=\frac{N^{L+1}-1}{N-1}C$ (Theorem 1), and $\tilde{g}_t$ and $\tilde{g}^\prime_t$ denote the noisy gradients on $G$ and $G^\prime$, respectively. When $β=i$, their Rényi divergence can be upper bounded as:
$$
\begin{aligned}
D_α\left(\tilde{g}_{t,i}\|\tilde{g}^\prime_{t,i}\right)&=D_α\left(g_{t,i}+N\left(0,σ^2Δ_g^2I\right)\|g^\prime_{t,i}+N\left(0,σ^2Δ_g^2I\right)\right)\\
&=D_α\left(N\left(g_{t,i},σ^2Δ_g^2I\right)\|N\left(g^\prime_{t,i},σ^2Δ_g^2I\right)\right)\\
&\stackrel{(a)}{=}D_α\left(N\left(g_{t,i}-g^\prime_{t,i},σ^2Δ_g^2I\right)\|N\left(0,σ^2Δ_g^2I\right)\right)\\
&\stackrel{(b)}{≤}\sup_{\|Δ_i\|_2≤ iC}D_α\left(N\left(Δ_i,σ^2Δ_g^2I\right)\|N\left(0,σ^2Δ_g^2I\right)\right)\\
&\stackrel{(c)}{=}\sup_{\|Δ_i\|_2≤ iC}\frac{α\|Δ_i\|_2^2}{2Δ_g^2σ^2}=\frac{α i^2}{2R_{N,L}^2σ^2},
\end{aligned} \tag{30}
$$
where $Δ_i=g_{t,i}-g^\prime_{t,i}$. Step (a) leverages the property that Rényi divergence is invariant under invertible transformations (34), while (b) and (c) follow from Theorem 1 and Lemma 3, respectively. Based on Lemma 4, we derive that:
$$
D_α\left(\tilde{g}_t\|\tilde{g}^\prime_t\right)≤\frac{1}{α-1}\ln E_{i∼β}\left[\exp\left(\frac{α i^2(α-1)}{2R_{N,L}^2σ^2}\right)\right]=\frac{1}{α-1}\ln\left(∑_{i=0}^{R_{N,L}}β_i\exp\left(\frac{α i^2(α-1)}{2R_{N,L}^2σ^2}\right)\right)=γ. \tag{31}
$$
Here, $β_i$ is given in Eq. (28). Based on the composition property of DP, after $T=n^{epoch}· n^{iter}$ iterations, the discriminators satisfy node-level $(α,2Tγ)$-RDP. Moreover, owing to the post-processing property of DP, the generators $G^+$ and $G^-$ inherit the same privacy guarantee as the discriminators. Therefore, Algorithm 2 obeys node-level $(α,2Tγ)$-RDP, and the proof of Theorem 2 is completed.
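Combining Eqs. (28) and (31) gives the per-iteration RDP cost $γ$; a minimal self-contained sketch (the helper name is illustrative; the overall guarantee then composes to $(α,2Tγ)$-RDP):

```python
import math
from math import comb

def per_iteration_gamma(alpha, sigma, N_tr, R_NL, B_d):
    """gamma of Eq. (31): a log-expectation over the hypergeometric beta_i of Eq. (28)."""
    total = 0.0
    for i in range(min(R_NL, B_d) + 1):
        beta_i = comb(R_NL, i) * comb(N_tr - R_NL, B_d - i) / comb(N_tr, B_d)
        total += beta_i * math.exp(alpha * i * i * (alpha - 1) / (2 * R_NL ** 2 * sigma ** 2))
    return math.log(total) / (alpha - 1)
```

Larger noise multipliers $σ$ shrink the exponent for every $i≥1$ and hence drive $γ$, and the cumulative privacy cost, down.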
## Appendix J Additional Details of Experiments
### J.1. Dataset Introduction
The detailed introduction of all datasets is as follows.
- Bitcoin-Alpha and Bitcoin-OTC are trust networks among Bitcoin users, aimed at preventing transactions with fraudulent or high-risk users. In these networks, user relationships are represented by positive (trust) and negative (distrust) edges.
- Slashdot is a social network derived from user interactions on a technology news site, where relationships are annotated as positive (friend) or negative (enemy) edges.
- WikiRfA is a voting network for electing managers in Wikipedia, where edges denote positive (supporting vote) or negative (opposing vote) relationships between users.
- Epinions is a product review site where users can establish both trust and distrust relationships with others.
### J.2. The Distribution of Node Degrees and Path Lengths
The findings for the distribution of node degrees and path lengths in the Bitcoin-Alpha and Slashdot datasets are shown in Figs. 8 and 9.
[Two bar charts: proportion of nodes (%) vs. node degree (1–7) for (a) Bitcoin-Alpha and (b) Slashdot; both distributions decay sharply, with degree-1 nodes the most common.]
Figure 8. Distribution of node degrees.
[Two bar charts: proportion of node pairs (%) vs. path length (1–7) for (a) Bitcoin-Alpha and (b) Slashdot; the distributions peak at path length 3 for Bitcoin-Alpha and path length 4 for Slashdot.]
Figure 9. Distribution of path lengths.
### J.3. The Detailed Results of Edge Sign Prediction
The average AUC results for the edge sign prediction task under different values of $ε$ across all datasets are detailed in Table 6.
Table 6. Summary of average AUC with different $ε$ and datasets for edge sign prediction tasks. Columns are ordered by increasing $ε$. (BOLD: Best)
| Dataset | Method | $ε_1$ | $ε_2$ | $ε_3$ | $ε_4$ | $ε_5$ |
| --- | --- | --- | --- | --- | --- | --- |
| Bitcoin-OTC | SDGNN | 0.7655 | 0.7872 | 0.7913 | 0.8105 | 0.8571 |
| | SiGAT | 0.7011 | 0.7282 | 0.7869 | 0.8379 | 0.8706 |
| | SGCN | 0.5565 | 0.5740 | 0.6634 | 0.7516 | 0.7801 |
| | GAP | 0.5763 | 0.5782 | 0.6486 | 0.6741 | 0.7411 |
| | LSNE | 0.5030 | 0.5405 | 0.7041 | 0.8239 | 0.8776 |
| | ASGL | **0.8004** | **0.8462** | **0.8488** | **0.8505** | **0.8801** |
| Bitcoin-Alpha | SDGNN | 0.6761 | 0.6883 | 0.7098 | 0.7308 | 0.8476 |
| | SiGAT | 0.7033 | 0.7215 | 0.7303 | 0.7488 | 0.8207 |
| | SGCN | 0.5157 | 0.5450 | 0.6433 | 0.6930 | 0.7702 |
| | GAP | 0.5664 | 0.6025 | 0.6367 | 0.7091 | 0.7320 |
| | LSNE | 0.5112 | 0.5361 | 0.5959 | 0.6524 | 0.8069 |
| | ASGL | **0.7505** | **0.8075** | **0.8589** | **0.8591** | **0.8592** |
| WikiRfA | SDGNN | 0.6558 | 0.7066 | 0.7142 | 0.7267 | 0.7930 |
| | SiGAT | 0.6313 | 0.6525 | 0.7023 | 0.7777 | 0.8099 |
| | SGCN | 0.5107 | 0.6456 | 0.6515 | 0.7008 | 0.7110 |
| | GAP | 0.5356 | 0.5506 | 0.5612 | 0.5717 | 0.5937 |
| | LSNE | 0.5086 | 0.5253 | 0.6119 | 0.6553 | 0.7832 |
| | ASGL | **0.6680** | **0.7706** | **0.7963** | **0.7986** | **0.8100** |
| Slashdot | SDGNN | 0.7547 | 0.8325 | 0.8697 | 0.8788 | 0.8862 |
| | SiGAT | 0.7061 | 0.7886 | 0.8392 | 0.8424 | 0.8527 |
| | SGCN | 0.5662 | 0.6151 | 0.6662 | 0.7181 | 0.8093 |
| | GAP | 0.6121 | 0.6389 | 0.6879 | 0.7126 | 0.7471 |
| | LSNE | 0.5717 | 0.6144 | 0.7541 | 0.7753 | 0.7816 |
| | ASGL | **0.7861** | **0.8539** | **0.8887** | **0.8890** | **0.8910** |
| Epinions | SDGNN | 0.6788 | 0.7180 | 0.7201 | 0.7455 | 0.8428 |
| | SiGAT | 0.6772 | 0.7046 | 0.7063 | 0.7702 | 0.8253 |
| | SGCN | 0.6152 | 0.6487 | 0.6974 | 0.7502 | 0.8318 |
| | GAP | 0.5899 | 0.6034 | 0.6288 | 0.6310 | 0.6618 |
| | LSNE | 0.5033 | 0.6055 | 0.7590 | 0.8434 | 0.8585 |
| | ASGL | **0.6869** | **0.8134** | **0.8513** | **0.8658** | **0.8666** |
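The AUC metric used above admits a simple rank-based computation. The sketch below (toy labels and scores, not the paper's data) uses the Mann–Whitney formulation: AUC is the probability that a positive-sign edge is scored above a negative-sign edge, counting ties as half.

```python
def auc(labels, scores):
    """Mann-Whitney AUC: probability that a positive example's score
    ranks above a negative example's score (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: labels are true edge signs (1 = positive edge), and
# scores are a model's predicted probability of a positive sign.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.4, 0.75, 0.8, 0.2]
print(auc(labels, scores))  # 8/9 ≈ 0.889
```

This estimator is equivalent to the area under the ROC curve, which is why an uninformative model scores close to 0.5 in the tables above.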
### J.4. The Detailed Results of Node Clustering
The average SSI results for the node clustering task under different values of $ε$ across all datasets are detailed in Table 7.
Table 7. Summary of average SSI with different $ε$ and datasets for node clustering tasks. Method columns follow the order of Table 6. (BOLD: Best)
| $ε$ | Dataset | SDGNN | SiGAT | SGCN | GAP | LSNE | ASGL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Bitcoin-Alpha | 0.4819 | 0.4378 | 0.4877 | 0.4977 | 0.4988 | **0.5091** |
| | Bitcoin-OTC | 0.4505 | 0.4677 | 0.5025 | 0.4970 | 0.5008 | **0.5160** |
| | Slashdot | 0.4715 | 0.5011 | 0.5025 | 0.5052 | 0.5005 | **0.5107** |
| | WikiRfA | 0.4788 | 0.4988 | 0.4968 | 0.4890 | 0.5003 | **0.5126** |
| | Epinions | 0.5001 | 0.4965 | 0.5022 | 0.5013 | 0.6095 | **0.6106** |
| 2 | Bitcoin-Alpha | 0.4910 | 0.4733 | 0.4969 | 0.4985 | 0.5032 | **0.5402** |
| | Bitcoin-OTC | 0.4733 | 0.4968 | 0.5075 | 0.4986 | 0.5729 | **0.6810** |
| | Slashdot | 0.4888 | 0.4864 | 0.4871 | 0.5134 | 0.5132 | **0.5494** |
| | WikiRfA | 0.4934 | 0.5054 | 0.5117 | 0.4996 | 0.5032 | **0.5577** |
| | Epinions | 0.5068 | 0.5116 | 0.5086 | 0.5463 | 0.6263 | **0.6732** |
| 4 | Bitcoin-Alpha | 0.5019 | 0.4948 | 0.5112 | 0.5049 | 0.6204 | **0.6707** |
| | Bitcoin-OTC | 0.5005 | 0.5325 | 0.5612 | 0.5465 | 0.6953 | **0.7713** |
| | Slashdot | 0.5003 | 0.5685 | 0.5545 | 0.5671 | 0.5444 | **0.5994** |
| | WikiRfA | 0.5005 | 0.5142 | 0.5538 | 0.5476 | 0.5644 | **0.5977** |
| | Epinions | 0.5148 | 0.5389 | 0.5386 | 0.6255 | 0.6747 | **0.6787** |
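SSI is not expanded in this appendix; assuming it denotes the silhouette index commonly used to score clustering quality (an assumption here), a minimal pure-Python version over toy one-dimensional embeddings looks like this. The data and distance choice are illustrative, not from the paper.

```python
def silhouette(points, labels):
    """Mean silhouette coefficient s = (b - a) / max(a, b), where a is
    the mean intra-cluster distance and b is the mean distance to the
    nearest other cluster. Uses |x - y| on 1-D toy "embeddings" and
    assumes every cluster has at least two points."""
    n = len(points)
    total = 0.0
    for i in range(n):
        same = [abs(points[i] - points[j]) for j in range(n)
                if j != i and labels[j] == labels[i]]
        a = sum(same) / len(same)
        b = min(
            sum(abs(points[i] - points[j]) for j in range(n) if labels[j] == c)
            / labels.count(c)
            for c in set(labels) if c != labels[i]
        )
        total += (b - a) / max(a, b)
    return total / n

# Two well-separated toy clusters give a silhouette close to 1.
print(silhouette([0.1, 0.2, 0.9, 1.0], [0, 0, 1, 1]))  # ≈ 0.8745
```

Values near 0.5, as in the low-$ε$ rows of Table 7, indicate weak cluster separation; higher $ε$ (less noise) yields tighter, better-separated clusters.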
[Grouped bar chart: average AUC of $ASGL^-$, $ASGL^+$, and ASGL on Bitcoin-Alpha, Bitcoin-OTC, Slashdot, and WikiRfA; ASGL attains the highest AUC on every dataset.]
Figure 10. Comparison between ASGL, $ASGL^+$, and $ASGL^-$.
### J.5. The Setup of Link Stealing Attack
Motivated by (42), we assume that the adversary has black-box access to the node embeddings produced by the target signed graph learning model, but not to its internal parameters or gradients. The adversary also possesses an auxiliary graph dataset whose node pairs partially overlap in distribution with the target graph. Some of these node pairs belong to the training graph (members), while the others come from the test graph (non-members). For each node pair, a feature vector is constructed by concatenating the two node embeddings. These feature vectors, together with their member or non-member labels, are used to train a logistic regression classifier that infers whether an edge exists between any two nodes of the target graph. To simulate this link stealing attack, each dataset is partitioned into target training, auxiliary training, target test, and auxiliary test sets with a 5:2:2:1 ratio.
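The data preparation for this attack can be sketched as follows. The 5:2:2:1 split and the concatenated pair features come from the setup above; all function and variable names are illustrative, and the final logistic-regression stage (trained on the auxiliary features and member labels) is omitted for brevity.

```python
import random

def split_pairs(pairs, seed=0):
    """Shuffle and partition node pairs into target-train, aux-train,
    target-test, and aux-test sets with a 5:2:2:1 ratio."""
    rng = random.Random(seed)
    pairs = pairs[:]
    rng.shuffle(pairs)
    n = len(pairs)
    c1, c2, c3 = n * 5 // 10, n * 7 // 10, n * 9 // 10
    return pairs[:c1], pairs[c1:c2], pairs[c2:c3], pairs[c3:]

def pair_feature(emb, u, v):
    """Attack feature for a node pair: the two embeddings concatenated."""
    return emb[u] + emb[v]

# Toy usage: 6 nodes with 2-dim embeddings and 10 candidate pairs.
emb = {i: [0.1 * i, 0.2 * i] for i in range(6)}
pairs = [(i, (i + 1) % 6) for i in range(6)] + [(i, (i + 2) % 6) for i in range(4)]
target_train, aux_train, target_test, aux_test = split_pairs(pairs)
```

In the actual attack, the classifier is fit on `pair_feature` vectors of the auxiliary pairs and then queried on target pairs to decide membership.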
### J.6. Effectiveness of Adversarial Learning with Edge Signs
To verify the effectiveness of adversarial learning with signed edges, we compare ASGL with two variants, $ASGL^+$ and $ASGL^-$, which operate only on the positive graph $G^+$ and the negative graph $G^-$, respectively. Fig. 10 presents the average AUC scores of ASGL, $ASGL^+$, and $ASGL^-$ across all datasets. ASGL significantly outperforms both variants in all cases. These results demonstrate that our privacy-preserving adversarial learning framework with edge signs represents signed graphs more effectively than variants that neglect edge sign information.