## Neural Analogical Matching
Maxwell Crouse 1 * , Constantine Nakos 1 , Ibrahim Abdelaziz 2 , Kenneth Forbus 1
1 Qualitative Reasoning Group, Northwestern University
2 IBM Research, IBM T.J. Watson Research Center
{mvcrouse, cnakos}@u.northwestern.edu, ibrahim.abdelaziz1@ibm.com, forbus@northwestern.edu
## Abstract
Analogy is core to human cognition. It allows us to solve problems based on prior experience, it governs the way we conceptualize new information, and it even influences our visual perception. The importance of analogy to humans has made it an active area of research in the broader field of artificial intelligence, resulting in data-efficient models that learn and reason in human-like ways. While cognitive perspectives of analogy and deep learning have generally been studied independently of one another, the integration of the two lines of research is a promising step towards more robust and efficient learning techniques. As part of a growing body of research on such an integration, we introduce the Analogical Matching Network: a neural architecture that learns to produce analogies between structured, symbolic representations that are largely consistent with the principles of Structure-Mapping Theory.
## 1 Introduction
Analogical reasoning is a form of inductive reasoning that cognitive scientists consider to be one of the cornerstones of human intelligence (Gentner 2003; Hofstadter 2001, 1995). Analogy shows up at nearly every level of human cognition, from low-level visual processing (Sagi, Gentner, and Lovett 2012) to abstract conceptual change (Gentner et al. 1997). Problem solving using analogy is common, with past solutions forming the basis for dealing with new problems (Holyoak, Junn, and Billman 1984; Novick 1988). Analogy also facilitates learning and understanding by allowing people to generalize specific situations into increasingly abstract schemas (Gick and Holyoak 1983).
Many different theories have been proposed for how humans perform analogy (Mitchell 1993; Chalmers, French, and Hofstadter 1992; Gentner 1983; Holyoak and Thagard 1996). One of the most influential theories is Structure-Mapping Theory (SMT) (Gentner 1983), which posits that analogy involves the alignment of structured representations of objects or situations subject to certain constraints. Key characteristics of SMT are its use of symbolic representations and its emphasis on relational structure, which allow the same principles to apply to a wide variety of domains.
* Correspondence to mvcrouse@u.northwestern.edu; code available at https://github.com/mvcrouse/NeuralAnalogy.
Copyright Β© 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Until now, the symbolic, structured nature of SMT has made it a poor fit for deep learning. The representations produced by deep learning techniques are incompatible with off-the-shelf SMT implementations like the Structure-Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner 1989; Forbus et al. 2017), while the symbolic graphs that SMT assumes as input are challenging to encode with traditional neural methods. In this work, we describe how recent advances in graph representation learning can be leveraged to create deep learning systems that learn to produce structural analogies consistent with SMT.
Contributions: We introduce the Analogical Matching Network (AMN), a neural architecture that learns to produce analogies between symbolic representations. AMN is trained on purely synthetic data and, when evaluated on a diverse set of analogy problems drawn from the structure-mapping literature, produces outputs that are largely consistent with SMT. With AMN, we aim to push the boundaries of deep learning by extending it to an important area of human cognition; in particular, we show how to design a deep learning system that conforms to a cognitive theory of analogical reasoning. It is our hope that future generations of neural architectures can reap the same benefits from analogy that symbolic reasoning systems and humans currently do.
## 2 Related Work
Many different computational models of analogy have been proposed (Mitchell 1993; Holyoak and Thagard 1989; O'Donoghue and Keane 1999; Forbus et al. 2017), each instantiating a different cognitive theory of analogy. The differences between them are compounded by the computational costs of analogical reasoning, a provably NP-hard problem (Veale and Keane 1997). While these computational models are often used to test cognitive theories of human behavior, they are also useful tools for applied tasks. For instance, the Structure-Mapping Engine (SME) has been used in question answering (Ribeiro et al. 2019), computer vision (Chen et al. 2019), and machine reasoning (Klenk et al. 2005).
Many of the early approaches to analogy were connectionist (Gentner and Markman 1993). The STAR architecture of (Halford et al.) used tensor product representations of structured data to perform simple analogies of the form R(x, y) → S(f(x), f(y)).
Figure 1: Relational and graph representations for models of the atom (left) and Solar System (right). Light green edges indicate the set of correspondences between the two graphs.
Figure 1 key: [1] nucleus; [2] electron; [3] MASS([1]); [4] MASS([2]); [5] ATTRACTS([1],[2]); [6] REVOLVES-AROUND([2],[1]); [7] GREATER([3],[4]); [8] sun; [9] planet; [10] MASS([8]); [11] MASS([9]); [12] TEMPERATURE([8]); [13] TEMPERATURE([9]); [14] REVOLVES-AROUND([9],[8]); [15] GREATER([10],[11]); [16] GREATER([12],[13]); [17] ATTRACTS([8],[9]); [18] CAUSES(AND([15],[17]),[14]); [19] YELLOW([8]).
Drama (Eliasmith and Thagard 2001) was an implementation of the multi-constraint theory of analogy (Holyoak and Thagard 1996) that used holographic representations similar to tensor products to embed structure. LISA (Hummel and Holyoak 1997, 2005) was a hybrid symbolic-connectionist approach to analogy. It staged the mapping process temporally, generating mappings from elements that were activated at the same time.
Cognitive perspectives of analogy have gone relatively unexplored in deep learning research, with only a few recent works that address them (Hill et al. 2019; Zhang et al. 2019; Lu et al. 2019). Most prior deep learning works have considered analogies involving perceptual data (Mikolov, Yih, and Zweig 2013; Reed et al. 2015; Bojanowski et al. 2017; Zhou et al. 2019; Benaim et al. 2020). Such problems differ from those seen in the structure-mapping literature in that they typically do not require explicit graph matching and they involve only a single, unobserved relation.
Our approach is conceptually related to recent work on neural graph matching (Emami and Ranka 2018; Georgiev and Liò 2020; Wang, Yan, and Yang 2019). Such works generally focus on finding unconstrained maximum weight matchings and often interleave their networks with hardcoded algorithms (e.g., (Emami and Ranka 2018) applies the Hungarian algorithm to coerce its outputs into a permutation matrix). These considerations make them less applicable here, as 1) SMT is subject to unique constraints that make standard bipartite matching techniques insufficient and 2) we wish to explore the extent to which SMT is purely learnable.
## 3 Structure-Mapping Theory
In Structure-Mapping Theory (SMT) (Gentner 1983), analogy centers on the structural alignment of relational representations (see Figure 1). A relational representation is a set of logical expressions constructed from entities (e.g., sun ), attributes (e.g., YELLOW ), functions (e.g., TEMPERATURE ), and relations (e.g., GREATER ). Structural alignment is the process of producing a mapping between two relational representations (referred to as the base and target ). A mapping is a triple ⟨M, C, S⟩, where M is a set of correspondences between the base and target, C is a set of candidate inferences (i.e., inferences about the target that can be made from the structure of the base), and S is a structural evaluation score that measures the quality of M. Correspondences are pairs of elements between the base and target (i.e., expressions or entities) that are identified as matching with one another. While entities can be matched together irrespective of their labels, there are more rigorous criteria for matching expressions. SMT asserts that matches should satisfy the following constraints (a short sketch after the list illustrates how the first two can be checked):
1. One-to-One : Each element of the base and target can be a part of at most one correspondence.
2. Parallel Connectivity : Two expressions can be in a correspondence with each other only if their arguments are also in correspondences with each other.
3. Tiered Identicality : Relations of expressions in a correspondence must match identically, but functions need not if their correspondence supports parallel connectivity.
4. Systematicity : Preference should be given to mappings with more deeply nested expressions.
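As an illustration of the first two constraints, the sketch below checks one-to-one and parallel connectivity over toy expressions encoded as nested tuples; the encoding and helper names are ours for exposition and are not the representations used by SME or AMN.

```python
# Toy SMT constraint checks; expressions are nested tuples such as
# ("GREATER", ("MASS", "sun"), ("MASS", "planet")) and entities are plain strings.
# This encoding is an illustrative assumption, not the paper's data structure.

def is_entity(x):
    return isinstance(x, str)

def args(expr):
    return () if is_entity(expr) else expr[1:]

def one_to_one(correspondences):
    """Each base/target element may appear in at most one correspondence."""
    bases = [b for b, _ in correspondences]
    targets = [t for _, t in correspondences]
    return len(set(bases)) == len(bases) and len(set(targets)) == len(targets)

def parallel_connectivity(correspondences):
    """Matched expressions must have their (ordered) arguments matched position by position."""
    pairs = set(correspondences)
    for b, t in correspondences:
        if len(args(b)) != len(args(t)):
            return False
        if any((ab, at) not in pairs for ab, at in zip(args(b), args(t))):
            return False
    return True

# Example: the GREATER(MASS, MASS) structures from Figure 1 aligned consistently.
M = [("sun", "nucleus"), ("planet", "electron"),
     (("MASS", "sun"), ("MASS", "nucleus")), (("MASS", "planet"), ("MASS", "electron")),
     (("GREATER", ("MASS", "sun"), ("MASS", "planet")),
      ("GREATER", ("MASS", "nucleus"), ("MASS", "electron")))]
print(one_to_one(M), parallel_connectivity(M))  # True True
```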
To understand these properties, we use a classic example (see Figure 1) from (Gentner 1983; Falkenhainer, Forbus, and Gentner 1989), which draws an analogy between the Solar System and the Rutherford model of the atom. A set of correspondences M between the base (Solar System) and target (Rutherford atom) is a set of pairs of elements, one from each representation, e.g., {⟨[1], [8]⟩, ⟨[2], [9]⟩}. The one-to-one constraint
Figure 2: An overview of the model pipeline
restricts each element to be a member of at most one correspondence. Thus, if ⟨[7], [15]⟩ was a member of M, then ⟨[7], [16]⟩ could not be added to M. Parallel connectivity enforces correspondence between arguments if the parents are in correspondence. In this example, if ⟨[7], [15]⟩ was a member of M, then both ⟨[3], [10]⟩ and ⟨[4], [11]⟩ would need to be members of M. Parallel connectivity also respects argument order when dealing with ordered relations. Tiered identicality is not relevant in this example; however, if [10] used the label WEIGHT instead of MASS, tiered identicality could be used to match [3] and [10], since such a correspondence would allow for a match between their parents. The last property, systematicity, results in larger correspondence sets being preferred over smaller ones. Note that the singleton set {⟨[1], [8]⟩} satisfies SMT's constraints, but it is clearly not useful by itself. Systematicity captures the natural preference for larger, more interesting matches.
Candidate inferences are statements from the base that are projected into the target to fill in missing structure (Bowdle and Gentner 1997; Gentner and Markman 1998). Given a set of correspondences M , candidate inferences are created from statements in the base that are supported by expressions in M but are not part of M themselves. In Figure 1, one candidate inference would be CAUSES(AND([7],[5]),[6]) , derived from [18] by substituting its arguments with the expressions they correspond to in the target. In this work, we adopt SME's default criteria for computing candidate inferences. Valid candidate inferences are all statements that have some dependency that is included in the correspondences or an ancestor that is a candidate inference (e.g., an expression whose parent has arguments in the correspondences).
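As a concrete illustration of this projection step, the sketch below rewrites a base expression into the target using a correspondence map, reusing the toy nested-tuple encoding from the earlier sketch; the node identifiers and the placeholder scheme for unmatched entities are hypothetical.

```python
# Project a base expression into the target by substituting matched sub-structure;
# the nested-tuple encoding and skolem-style placeholder are illustrative assumptions.

def project(expr, corr):
    if expr in corr:                                  # matched element: use its target counterpart
        return corr[expr]
    if isinstance(expr, str):                         # unmatched entity: introduce a placeholder
        return ("skolem", expr)
    return (expr[0],) + tuple(project(a, corr) for a in expr[1:])

# In the spirit of Figure 1: projecting [18] = CAUSES(AND([15],[17]),[14]) through the
# correspondences [15]->[7], [17]->[5], [14]->[6] yields CAUSES(AND([7],[5]),[6]).
corr = {"[15]": "[7]", "[17]": "[5]", "[14]": "[6]"}  # node ids stand in for whole expressions
print(project(("CAUSES", ("AND", "[15]", "[17]"), "[14]"), corr))
# ('CAUSES', ('AND', '[7]', '[5]'), '[6]')
```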
The concepts above carry over naturally into graph-theoretic notions. The base and target are considered semi-ordered directed acyclic graphs (DAGs) G_B = ⟨V_B, E_B⟩ and G_T = ⟨V_T, E_T⟩, where V_B and V_T are sets of nodes and E_B and E_T are sets of edges. Each node corresponds to some expression and has a label given by its relation, function, attribute, or entity name. Structural alignment is the process of finding a maximum-weight bipartite matching M ⊆ V_B × V_T, where M satisfies the pairwise-disjunctive constraints imposed by parallel connectivity. Finding candidate inferences is then determining the subset of nodes from V_B \ {b_i : ⟨b_i, t_j⟩ ∈ M} with support in M.
## 4 Model
## 4.1 Model Components
Given a base G_B = ⟨V_B, E_B⟩ and target G_T = ⟨V_T, E_T⟩, AMN produces a set of correspondences M ⊆ V_B × V_T and a set of candidate inferences I ⊆ V_B \ {b_i : ⟨b_i, t_j⟩ ∈ M}. A key design choice of this work was to avoid using rules or architectures that force particular outputs whenever possible. AMN is not forced to output correspondences that satisfy the constraints of SMT; instead, conformance with SMT is reinforced through performance on training data. Our architecture uses Transformers (Vaswani et al. 2017) and pointer networks (Vinyals, Fortunato, and Jaitly 2015) and takes inspiration from the work of (Kool, Van Hoof, and Welling 2018). A high-level overview is given in Figure 2, which shows how the three main components (graph embedding, correspondence selection, and candidate inference selection) interact with one another.
Representing Structure: When embedding the nodes of G_B and G_T, there are representational concerns to keep in mind. First, as matching should be done on the basis of structure, the labels of entities should not be taken into account during the alignment process. Second, because SMT's constraints require AMN to be able to recognize when a node is part of multiple correspondences, AMN should maintain distinguishable representations for distinct nodes, even if those nodes have the same labels. Last, the architecture should not be vocabulary dependent, i.e., AMN should generalize to symbols it has never seen before. To achieve each of these, AMN first parses the original input into two separate graphs, a label graph and a signature graph (see Figure 3).
The label graph is used to estimate structural similarities. To generate the label graph, AMN substitutes each entity node's label with a generic entity token. This is intentional, as it reflects that entity labels have no inherent utility for producing matchings according to SMT. Then, each function and predicate node is assigned a randomly chosen generic label (from a fixed set of such labels) based on its arity and orderedness. Assignments are made consistently across the entire graph, e.g., every instance of MASS in both
Figure 3: Original graph (left), its label graph (middle), and its signature graph (right)
the base and target would be assigned the same generic replacement label. This substitution means the original label is not used in the matching process, which allows AMN to generalize to new symbols.
The label graph is not sufficient to produce representations that can be used for matching, as it represents a node by only label-based features which are shared amongst different nodes, an issue known as the type-token distinction (Kahneman, Treisman, and Gibbs 1992; Wetzel 2006). To contend with this, a signature graph is constructed that represents nodes in a way that respects object identity. To construct the signature graph, AMN replaces each distinct entity with a unique identifier (drawn from a fixed set of possible identifiers). It then assigns each function and predicate a new label based solely on its arity and orderedness, ignoring the original symbol. For instance, ATTRACTS and REVOLVES-AROUND would be assigned the same label as they are both ordered binary predicates.
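A minimal sketch of the two relabeling schemes is given below, assuming each node carries its symbol, arity, orderedness, and an entity flag; the node record format, generic-label pool, and identifier format are illustrative assumptions.

```python
import random

# nodes: list of (symbol, arity, ordered, is_entity) records; this record format is an assumption.

def label_graph_labels(nodes, pool_size=50):
    """Label graph: entities become a generic ENTITY token; each predicate/function symbol is
    consistently mapped to a randomly chosen generic label for its arity/orderedness class."""
    assignment, pools = {}, {}
    for sym, arity, ordered, is_entity in nodes:
        if is_entity:
            assignment[sym] = "ENTITY"
        elif sym not in assignment:
            pool = pools.setdefault((arity, ordered),
                                    [f"p{arity}{'o' if ordered else 'u'}_{i}" for i in range(pool_size)])
            unused = [g for g in pool if g not in assignment.values()]
            assignment[sym] = random.choice(unused)   # same symbol -> same generic label everywhere
    return assignment

def signature_graph_labels(nodes):
    """Signature graph: each distinct entity gets a unique identifier, while predicates/functions
    keep only their arity and orderedness (e.g., ATTRACTS and REVOLVES-AROUND collapse together)."""
    assignment, next_id = {}, 0
    for sym, arity, ordered, is_entity in nodes:
        if sym in assignment:
            continue
        if is_entity:
            assignment[sym], next_id = f"id_{next_id}", next_id + 1
        else:
            assignment[sym] = f"arity{arity}_{'ordered' if ordered else 'unordered'}"
    return assignment

nodes = [("sun", 0, False, True), ("planet", 0, False, True),
         ("MASS", 1, True, False), ("GREATER", 2, True, False)]
print(label_graph_labels(nodes))
print(signature_graph_labels(nodes))
```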
As all input graphs are DAGs, AMN uses two separate DAG LSTMs (Crouse et al. 2019) to embed the nodes of the label and signature graphs (equations detailed in Appendix 7.4). Each node embedding is computed as a function of its complete set of dependencies in the original graph. The set of label structure embeddings is written as L_V = {l_v : v ∈ V} and the set of signature embeddings is written as S_V = {s_v : v ∈ V}. Before passing these embeddings to the next step, each element of S_V is scaled to unit length, i.e., each s_v becomes s_v/‖s_v‖, which gives our network an efficiently checkable criterion for whether or not two nodes are likely to be identical, i.e., when the dot product of two signature embeddings is 1.
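For instance, a unit-normalization step along these lines (shown here with random NumPy placeholders rather than DAG LSTM outputs) reduces the identity check to a dot-product comparison.

```python
import numpy as np

def normalize(s):
    return s / np.linalg.norm(s)

# Placeholder signature embeddings; in AMN these come from the signature DAG LSTM.
s_b = normalize(np.random.randn(64))
s_t = normalize(np.random.randn(64))
likely_identical = np.isclose(s_b @ s_t, 1.0)  # a dot product of 1 signals matching sub-structure
```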
Correspondence Selector: The graph embedding procedure yields two sets of node embeddings (label structure and signature embeddings) for the base and target. We utilize the set of embedding pairs for each node of V_B and V_T, writing l_v to denote the label structure embedding of node v from L_V and s_v the signature embedding of node v from S_V. We first define the set of unprocessed correspondences C^(0)
<!-- formula-not-decoded -->
<!-- formula-not-decoded -->
where [·; ·] denotes vector concatenation and the tiered identicality threshold governs how much the subgraphs rooted at two nodes may differ and still be considered for correspondence (in this work, the threshold is set to 1e-5). The first element of each correspondence in C^(0), i.e., h_c = [l_b; l_t; s_b; s_t], is then passed through an N-layered Transformer encoder (equations detailed in Appendix 7.4) to produce the set of encoded correspondences E
<!-- formula-not-decoded -->
The Transformer decoder selects a subset of correspondences that constitutes the best analogical match (see Figure 4). The attention-based transformations are performed only on the initial element of each tuple, i.e., h_d in ⟨h_d, s_b, s_t⟩. We let D_t be the processed set of all selected correspondences at timestep t (after the N attention layers) and O_t be the set of all remaining correspondences (with D_0 = {START-TOK} and O_0 = E ∪ {END-TOK}). The decoder generates compatibility scores α_od between each pair of elements, i.e., ⟨o, d⟩ ∈ O_t × D_t. These are combined with the signature embedding similarities to produce a final compatibility score for each pair
<!-- formula-not-decoded -->
where FFN is a two-layer feed-forward network with ELU activations (Clevert, Unterthiner, and Hochreiter 2015). Recall that the signature components, i.e., s_b and s_t, were scaled to unit length. Thus, we would expect closeness in the original graph to be reflected by dot-product similarity and identicality to be indicated by a maximum-value dot product, i.e., s_b^o · s_b^d = 1 or s_t^o · s_t^d = 1. Once each pair has been scored, AMN selects an element of O_t to be added to D_{t+1}. For each o ∈ O_t, we compute its value to be
<!-- formula-not-decoded -->
where FFN is a two-layer feed-forward network with ELU activations. A softmax is applied to these scores and the highest-valued element is added to D_{t+1}. The use of maximum, minimum, and average is intended to let the network capture both individual and aggregate evidence. Individual evidence is given by a pairwise interaction between two correspondences (e.g., two correspondences that together violate the one-to-one constraint). Conversely, aggregate evidence is given by the interaction of a correspondence with everything selected thus far (e.g., a correspondence needed for several parallel connectivity constraints). When END-TOK is selected, the returned set of correspondences M is the set of node pairs from V_B and V_T associated with elements in D.
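The selection loop can be pictured as the following schematic, in which the learned Transformer attention and feed-forward scoring are replaced by a stand-in `score_fn`; the tensor shapes, the stand-in scorer, and the way the max/min/mean features are aggregated are simplifications of the equations in Appendix 7.4, not AMN's actual implementation.

```python
import torch

def greedy_select(encoded, score_fn, end_index):
    """encoded: (n, d) tensor of encoded correspondences, with the end token at end_index."""
    remaining, selected = list(range(encoded.size(0))), []
    while True:
        values = []
        for o in remaining:
            # pairwise compatibilities of option o with everything selected so far (D_t)
            alphas = (torch.stack([score_fn(encoded[o], encoded[d]) for d in selected])
                      if selected else torch.zeros(1))
            # individual and aggregate evidence: max, min, and mean of the compatibilities
            feats = torch.stack([alphas.max(), alphas.min(), alphas.mean()])
            values.append(feats.sum())        # stand-in for the FFN that maps features to a scalar
        probs = torch.softmax(torch.stack(values), dim=0)
        choice = remaining[int(probs.argmax())]
        if choice == end_index:               # END-TOK selected: stop and return D
            return selected
        selected.append(choice)
        remaining.remove(choice)

picked = greedy_select(torch.randn(6, 8), lambda a, b: (a * b).sum(), end_index=5)
```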
Candidate Inference Selector: The output of the correspondence selector is a set of correspondences M. The candidate inferences associated with M are drawn from the nodes of the base graph V_B that were not used in M. Let V_in and V_out be the subsets of V_B that were and were not used in M, respectively. We first extract all signature embeddings for both sets, i.e., S_in = {s_b : b ∈ V_in} and S_out = {s_b : b ∈ V_out}. This module has no Transformer components, with AMN operating directly on S_in and S_out.
Figure 4: The correspondence selection process, where START-TOK and END-TOK are the start and stop tokens and E, D_t, and O_t are the sets of encoded, selected, and remaining correspondences
AMN will select elements from S_out to return. As before, we let D_t be the set of all selected elements from S_out and O_t be the set of all remaining elements from S_out at timestep t. AMN computes compatibility scores between each remaining option and both the previously selected nodes and the nodes used in M, i.e., α_od for each ⟨o, d⟩ ∈ O_t × (D_t ∪ S_in). The compatibility scores are given by a simple single-headed attention computation (see Appendix 7.4). Unlike in the correspondence encoder-decoder, there are no other values to combine these scores with, so they are used directly to compute a value v_o for each element of O_t. AMN computes this value as
<!-- formula-not-decoded -->
A softmax is applied and the highest-valued element is added to D_{t+1}. Once the end token is selected, decoding stops and the set of nodes associated with elements in D is returned.
Loss Function: As both the correspondence and candidate inference components use a softmax, the loss function is categorical cross entropy. Teacher forcing is used to guide the decoder to select the correct choices during training. With L_corr the loss for correspondence selection and L_ci the loss for candidate inference selection, the final loss is given as L = L_corr + λ·L_ci (with λ a hyperparameter), which is minimized with Adam (Kingma and Ba 2014).
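Concretely, the objective can be assembled along these lines; the variable names are illustrative, and the logits stand for the teacher-forced decoding steps described above.

```python
import torch
import torch.nn.functional as F

def amn_loss(corr_logits, corr_targets, ci_logits, ci_targets, lam=1.0):
    """L = L_corr + lambda * L_ci, with categorical cross entropy for both decoders."""
    return F.cross_entropy(corr_logits, corr_targets) + lam * F.cross_entropy(ci_logits, ci_targets)

# Toy usage with random logits over 10 choices at 4 correspondence and 3 inference steps.
loss = amn_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)),
                torch.randn(3, 10), torch.randint(0, 10, (3,)), lam=0.5)
```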
## 4.2 Model Scoring
Structural Match Scoring: In order to avoid counting erroneous correspondence predictions towards the score of the output correspondences M, we first identify all correspondences that are either degenerate or violate the constraints of SMT. Degenerate correspondences are correspondences between constants that have no higher-order structural support in M (i.e., if either has no parent that participates in a correspondence in M). To determine whether a correspondence ⟨b, t⟩ violates SMT, we check whether the subgraphs of the base and target rooted at b and t satisfy the one-to-one matching, parallel connectivity, and tiered identicality constraints (see Section 3). The check can be computed in time linear in the size of the corresponding subgraphs. Let the valid subset of M be M_val. A correspondence m is considered a root correspondence if there does not exist another correspondence m′ ∈ M_val such that a node in m′ is an ancestor of a node in m. We define M_root ⊆ M_val to be the set of all such root correspondences. For a correspondence m = ⟨b, t⟩ in M_val, its score s(m) is given as the size of the subgraph rooted at b in the base. The structural match score for M is then the sum of the scores of all correspondences in M_root, i.e., s(M) = Σ_{m ∈ M_root} s(m). This repeatedly counts nodes that appear in the dependencies of multiple correspondences, which leads to higher scores for more interconnected matchings (in keeping with the systematicity preference of SMT).
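The scoring rule can be summarized by the sketch below, which assumes a `descendants` helper giving the subgraph (node included) beneath any base or target node and takes M_val as already filtered; both assumptions are ours for exposition.

```python
def structural_score(m_val, descendants):
    """m_val: list of (base_node, target_node) pairs; descendants(n): set of nodes under n, n included."""
    def has_ancestor_in(node, candidates):
        return any(node in descendants(other) and other != node for other in candidates)

    roots = [(b, t) for b, t in m_val
             if not has_ancestor_in(b, [b2 for b2, _ in m_val])
             and not has_ancestor_in(t, [t2 for _, t2 in m_val])]
    # Each root scores the size of the base subgraph beneath it, so sub-structure shared by
    # several roots is counted repeatedly, rewarding systematic (deeply nested) mappings.
    return sum(len(descendants(b)) for b, _ in roots)

# Tiny demo: base graph b_root -> {b1, b2} aligned to target graph t_root -> {t1, t2}.
desc = {"b_root": {"b_root", "b1", "b2"}, "b1": {"b1"}, "b2": {"b2"},
        "t_root": {"t_root", "t1", "t2"}, "t1": {"t1"}, "t2": {"t2"}}
m_val = [("b_root", "t_root"), ("b1", "t1"), ("b2", "t2")]
print(structural_score(m_val, desc.get))  # 3: only the root correspondence contributes
```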
Structural Evaluation Maximization: Dynamically assigning labels to each example allows AMN to handle never-before-seen symbols, but its inherent randomness can lead to significant variability in the outputs. AMN combats this by running each test problem r times and returning the mapping M = argmax_{M_i} Σ_j J(M_i, M_j), where J(M_i, M_j) is the Jaccard index (intersection over union) between the correspondence sets produced by the i-th and j-th runs. Intuitively, this is the run that shared the most correspondences with the other runs and had the fewest unshared extra correspondences.
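This selection rule amounts to the following small sketch, with each run's output modeled as a set of correspondence pairs; the pair encoding is illustrative.

```python
def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def most_representative(runs):
    """Return the run M_i maximizing the summed Jaccard overlap with all other runs."""
    return max(runs, key=lambda m_i: sum(jaccard(m_i, m_j) for m_j in runs if m_j is not m_i))

runs = [{("b1", "t1"), ("b2", "t2")}, {("b1", "t1"), ("b2", "t2")}, {("b1", "t3")}]
print(most_representative(runs))  # the mapping shared by the first two runs
```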
## 5 Experiments
## 5.1 Data Generation and Training
AMN was trained on 100,000 synthetic analogy examples, with the hyperparameters used for AMN provided in Appendix 7.1 (in the supplementary material). A single example consisted of base and target graphs, a set of correspondences, and a set of nodes from the base to be candidate inferences. Construction of synthetic examples begins with generating DAGs. Each DAG consists of a set of k ∈ [2, 7] layers (with the particular k for a graph chosen at random). Each node is assigned an arity a, with the maximum arity being a = 3. Nodes at layer i can be connected to a nodes from lower layers (i.e., layer j with j < i) selected at random. Nodes with arity a = 0 are considered entities and nodes with non-zero
Figure 5: AMN output correspondences for an example from the Geometric Analogies domain
arities (i.e., a > 0 ) are randomly assigned as predicates or functions and randomly designated as ordered or unordered.
To generate a training example, we first generate a set of random DAGs C, which will later become the correspondences. Next, we construct the base B by generating graphs above C. As each DAG is constructed in layers, this simply means that C is considered the lowest layers of B. Likewise, for the target T we build another set of graphs above C. The nodes of C are thus shared by both B and T. Each node of C is duplicated, producing one node for B and one node for T, and the resulting pair of nodes becomes a correspondence. Any element in B that was an ancestor of a node from C or a descendant of such an ancestor was considered a candidate inference. In Appendix 7.2 we provide a figure showing each component of a training example. During training, each generated example was turned into a batch of 8 inputs by repeatedly running the encoding procedure (which dynamically assigns node labels) over the original base and target.
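The layered DAG construction can be sketched as follows; the layer widths, the probability of a node being ordered, and the node record format are illustrative choices not specified in the text.

```python
import random

def random_layered_dag(max_arity=3):
    """Generate nodes as (node_id, arity, ordered, children) with edges only pointing to lower layers."""
    num_layers = random.randint(2, 7)
    nodes, layers, nid = [], [], 0
    for i in range(num_layers):
        layer = []
        for _ in range(random.randint(1, 3)):              # illustrative layer width
            below = [n for prev in layers for n in prev]
            arity = 0 if i == 0 else random.randint(0, min(max_arity, len(below)))
            children = random.sample(below, arity)
            ordered = random.random() < 0.5                 # illustrative orderedness probability
            nodes.append((nid, arity, ordered, children))   # arity 0 => entity
            layer.append(nid)
            nid += 1
        layers.append(layer)
    return nodes

# A full training example would stack such DAGs: shared correspondence DAGs C at the bottom,
# with separate base-only and target-only structure generated above two copies of C.
print(random_layered_dag()[:3])
```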
## 5.2 Experimental Domains
Though training was done with synthetic data, we evaluated the effectiveness of AMN on both synthetic data and data used in previous analogy experiments. The corpus of previous analogy examples was taken from the public release of SME 1 . Importantly, AMN was not trained on the corpus of existing analogy examples (AMN never learned from a real-world analogy example). In fact, there was no overlap between the symbols (i.e., entities, functions, and predicates) used in that corpus and the symbols used for the synthetic data. We briefly describe each of the domains AMN was evaluated on below (detailed descriptions can be found in (Forbus et al. 2017)).
1. Synthetic : this domain consisted of 1000 examples generated with the same parameters as the training data (useful as a sanity check for AMN's performance).
2. Visual Oddity : this problem setting was initially proposed to explore cultural differences in geometric reasoning (Dehaene et al. 2006). The work of (Lovett and Forbus 2011) modeled the findings of the original experiment
1 http://www.qrg.northwestern.edu/software/sme4/index.html
computationally with qualitative visual representations and analogy. We extracted 3405 analogical comparisons from the computational experiment.
3. Moral Decision Making : this domain was taken from (Dehghani et al. 2008a), which introduced a computational model of moral decision making that used SME to reason through moral dilemmas. From the works of (Dehghani et al. 2008a,b), we extracted 420 analogical comparisons.
4. Geometric Analogies : this domain is from one of the first computational analogy experiments (Evans 1964). Each problem was an incomplete analogy of the form A : B :: C : ?, where each of A, B, and C was a manually encoded geometric figure and the goal was to select, from an encoded set of possible answers, the figure that best completed the analogy. While in the original work all figures had to be manually encoded, in (Lovett et al. 2009; Lovett and Forbus 2012) it was shown that the analogy problems could be solved with structure-mapping over automatic encodings (produced by the CogSketch system (Forbus et al. 2011)). From that work we extracted 866 analogies.
## 5.3 Results and Discussion
Table 1a shows the results for AMN across different values of r , where r denotes the re-run hyperparameter detailed in Section 4.2. When evaluating on the synthetic data, the comparison set of correspondences was given by the data generator; whereas when evaluating on the three other analogy domains, the comparison set of correspondences was given by the output of SME. It is important to note that we are using SME as our stand-in for SMT (as it is the most widely accepted computational model of SMT). Thus, we do not want significantly different results from SME in the correspondence selection experiments (e.g., substantially higher or lower structural evaluation scores). Matching SME's performance (i.e., not producing higher or lower values) gives evidence that we are modeling SMT.
In the Struct. Perf. column, the numbers reflect the average across examples of the structural evaluation score of AMN divided by that of the comparison correspondence sets. For the other columns of Table 1a, the numbers represent average
(a) AMN correspondence prediction results for performance ratio (left), solution type rate (middle, ↑ better), and error rate (right, ↓ better)
| Domain | r | Struct. Perf. | Larger | Equiv. | Err. Free | 1-to-1 Err. | PC Err. | Degen. Err. |
|-----------|-----|-----------------|----------|----------|-------------|---------------|-----------|---------------|
| Synthetic | 1 | 0.713 | 0 | 0.313 | 0.346 | 0.007 | 0.102 | 0.02 |
| Synthetic | 16 | 0.952 | 0.001 | 0.683 | 0.695 | 0.005 | 0.02 | 0.011 |
| Oddity | 1 | 0.774 | 0.061 | 0.404 | 0.484 | 0.153 | 0.225 | 0 |
| Oddity | 16 | 0.955 | 0.074 | 0.485 | 0.564 | 0.131 | 0.139 | 0 |
| MoralDM | 1 | 0.61 | 0.014 | 0.021 | 0.093 | 0.002 | 0.17 | 0.03 |
| MoralDM | 16 | 0.958 | 0.081 | 0.164 | 0.329 | 0 | 0.041 | 0.016 |
| Geometric | 1 | 0.871 | 0.064 | 0.533 | 0.649 | 0.039 | 0.116 | 0 |
| Geometric | 16 | 1.04 | 0.069 | 0.714 | 0.788 | 0.029 | 0.043 | 0 |
(b) AMN candidate inference prediction results
| Domain | r | Avg. CI F1 | Avg. CI Prec. | Avg. CI Rec. | Avg. CI Acc. | Avg. CI Spec. |
|-----------|-----|--------------|-----------------|----------------|----------------|-----------------|
| Synthetic | 16 | 0.9 | 0.867 | 0.967 | 0.861 | 0.735 |
| Oddity | 16 | 0.992 | 0.995 | 0.994 | 0.991 | 0.911 |
| MoralDM | 16 | 0.899 | 0.834 | 0.985 | 0.832 | 0.439 |
| Geometric | 16 | 0.958 | 0.955 | 0.99 | 0.951 | 0.917 |
Table 1: AMN experimental results
fractions of examples or correspondences (e.g., 0.684 should be interpreted as 68.4%). Candidate inference prediction performance was measured relative to the set of correspondences AMN generated, i.e., all candidate inferences were computed from the predicted correspondences and treated as the true positives. In many problems from the non-synthetic domains, every non-correspondence node was a candidate inference (which can lead to inflated precision and recall values). Thus, we also report the specificity (i.e., true negative rate) of AMN only for problems with non-candidate-inference nodes.
In addition to our main results, we also provide qualitative examples of AMN's outputs on real analogy problems and ablation studies for various aspects of AMN's design. Both the matching shown in Figure 5 as well as the solar system analogy shown in Figure 1 were produced by AMN. Further examples of AMN's outputs can be found in Appendix 7.5. Ablation experiments regarding the impact of both the signature graph and unit normalization of signature embeddings (each detailed in Section 4.1) are given in Appendix 7.3.
Analysis: The left side of Table 1a shows the average ratio of AMN's performance (labeled Struct. Perf.), as measured by structural evaluation score, against the comparison method's performance (i.e., data generator correspondences or SME). As can be seen, AMN produced matches with structural evaluation scores at 95-104% of SME's level on the non-synthetic domains, which indicates that it was finding similar structural matches. This is ideal, as it shows that AMN matches SME's systematicity preference and thus likely conforms fairly well to SMT in terms of systematicity.
The middle of Table 1a gives us the best sense of how well AMN modeled SMT. We observe AMN's performance in terms of the proportion of larger, equivalent, and error-free matches it produces (labeled Larger, Equiv., and Err. Free, respectively). Error-free matches do not contain degenerate correspondences or SMT constraint violations, whereas equivalent and larger matches are both error-free and have the same or a larger structural evaluation score as the gold set of correspondences. The Equiv. column provides the best indication that AMN could model SMT. It shows that roughly 50% or more of AMN's outputs were SMT-satisfying, error-free analogical matches with the exact same structural score as SME (the lead computational model of SMT) in two of the non-synthetic analogy domains.
The right side of Table 1a shows the frequency of the different types of errors, including violations of the one-to-one / parallel connectivity constraints, and degenerate correspondences (labeled 1-to-1 Err., PC Err., and Degen. Err.). It shows that AMN had fairly low error rates across domains (except for Visual Oddity). Importantly, degenerate correspondences were very infrequent, which is significant because it verifies that AMN leveraged higher-order relational structure.
Table 1b shows that AMN was fairly effective at predicting candidate inferences. The high accuracy scores (labeled Avg. CI Acc.) for both the Visual Oddity and Geometric Analogies domains indicate that AMN was able to capture the notion of structural support when determining candidate inferences. The non-zero specificity results (labeled Avg. CI Spec.) show that, while it more often classified nodes as candidate inferences, it was capable of distinguishing non-candidate-inference nodes as well.
## 6 Conclusions
In this paper, we introduced the Analogical Matching Network, a neural approach that learned to produce analogies consistent with Structure-Mapping Theory. AMN was trained on completely synthetic data and was capable of performing well on a varied set of analogies drawn from previous work involving analogical reasoning. AMN demonstrated renaming invariance, structural sensitivity, and the ability to find solutions in a combinatorial search space, all of which are key properties of symbolic reasoners and are known to be important to human reasoning.
## References
Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Benaim, S.; Mokady, R.; Bermano, A.; Cohen-Or, D.; and Wolf, L. 2020. Structural-analogy from a Single Image Pair. arXiv preprint arXiv:2004.02222 .
Bojanowski, P.; Grave, E.; Joulin, A.; and Mikolov, T. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5: 135-146.
Bowdle, B. F.; and Gentner, D. 1997. Informativity and asymmetry in comparisons. Cognitive Psychology 34(3): 244-286.
Chalmers, D. J.; French, R. M.; and Hofstadter, D. R. 1992. High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental & Theoretical Artificial Intelligence 4(3): 185-211.
Chen, K.; Rabkina, I.; McLure, M. D.; and Forbus, K. D. 2019. Human-Like Sketch Object Recognition via Analogical Learning. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 33, 1336-1343.
Clevert, D.-A.; Unterthiner, T.; and Hochreiter, S. 2015. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289 .
Crouse, M.; Abdelaziz, I.; Cornelio, C.; Thost, V.; Wu, L.; Forbus, K.; and Fokoue, A. 2019. Improving Graph Neural Network Representations of Logical Formulae with Subgraph Pooling. arXiv preprint arXiv:1911.06904 .
Dehaene, S.; Izard, V.; Pica, P.; and Spelke, E. 2006. Core knowledge of geometry in an Amazonian indigene group. Science 311(5759): 381-384.
Dehghani, M.; Tomai, E.; Forbus, K.; Iliev, R.; and Klenk, M. 2008a. MoralDM: A Computational Model of Moral Decision-Making. In Proceedings of the Annual Meeting of the Cognitive Science Society.
Dehghani, M.; Tomai, E.; Forbus, K. D.; and Klenk, M. 2008b. An Integrated Reasoning Approach to Moral Decision-Making. In AAAI , 1280-1286.
Eliasmith, C.; and Thagard, P. 2001. Integrating structure and meaning: A distributed model of analogical mapping. Cognitive Science 25(2): 245-286.
Emami, P.; and Ranka, S. 2018. Learning permutations with sinkhorn policy gradient. arXiv preprint arXiv:1805.07010 .
Evans, T. G. 1964. A program for the solution of a class of geometric-analogy intelligence-test questions. Technical report, Air Force Cambridge Research Laboratories, L.G. Hanscom Field, MA.
Falkenhainer, B.; Forbus, K. D.; and Gentner, D. 1989. The structure-mapping engine: Algorithm and examples. Artificial Intelligence 41(1): 1-63.
Forbus, K.; Usher, J.; Lovett, A.; Lockwood, K.; and Wetzel, J. 2011. CogSketch: Sketch understanding for cognitive science research and for education. Topics in Cognitive Science 3(4): 648-666.
Forbus, K. D.; Ferguson, R. W.; Lovett, A.; and Gentner, D. 2017. Extending SME to handle large-scale cognitive modeling. Cognitive Science 41(5): 1152-1201.
Gentner, D. 1983. Structure-mapping: A theoretical framework for analogy. Cognitive Science 7(2): 155-170.
Gentner, D. 2003. Why we're so smart. Language in mind: Advances in the study of language and thought, 195-235.
Gentner, D.; Brem, S.; Ferguson, R. W.; Markman, A. B.; Levidow, B. B.; Wolff, P.; and Forbus, K. D. 1997. Analogical reasoning and conceptual change: A case study of Johannes Kepler. The Journal of the Learning Sciences 6(1): 3-40.
Gentner, D.; and Markman, A. B. 1993. Analogy-Watershed or Waterloo? Structural alignment and the development of connectionist models of analogy. In Advances in Neural Information Processing Systems , 855-862.
Gentner, D.; and Markman, A. B. 1998. Analogy-based reasoning. In The handbook of brain theory and neural networks , 91-93. MIT Press.
Georgiev, D.; and LiΓ², P. 2020. Neural Bipartite Matching. arXiv preprint arXiv:2005.11304.
Gick, M. L.; and Holyoak, K. J. 1983. Schema induction and analogical transfer. Cognitive Psychology 15(1): 1-38.
Halford, G. S.; Wilson, W. H.; Guo, J.; Gayler, R. W.; Wiles, J.; and Stewart, J. ???? Connectionist implications for processing capacity limitations in analogies .
Hill, F.; Santoro, A.; Barrett, D. G.; Morcos, A. S.; and Lillicrap, T. 2019. Learning to make analogies by contrasting abstract relational structure. International Conference on Learning Representations .
Hofstadter, D. 1995. Fluid concepts and creative analogies: Computer models of the fundamental mechanisms of thought. Basic books.
Hofstadter, D. R. 2001. Analogy as the core of cognition. The Analogical Mind: Perspectives from Cognitive Science 499-538.
Holyoak, K. J.; Holyoak, K. J.; and Thagard, P. 1996. Mental leaps: Analogy in creative thought .
Holyoak, K. J.; Junn, E. N.; and Billman, D. O. 1984. Development of analogical problem-solving skill. Child Development 2042-2055.
Holyoak, K. J.; and Thagard, P. 1989. Analogical mapping by constraint satisfaction. Cognitive Science 13(3): 295-355.
Hummel, J. E.; and Holyoak, K. J. 1997. Distributed representations of structure: A theory of analogical access and mapping. Psychological Review 104(3): 427.
Hummel, J. E.; and Holyoak, K. J. 2005. Relational reasoning in a neurally plausible cognitive architecture: An overview of the LISA project. Current Directions in Psychological Science 14(3): 153-157.
Kahneman, D.; Treisman, A.; and Gibbs, B. J. 1992. The reviewing of object files: Object-specific integration of information. Cognitive Psychology 24(2): 175-219.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
Klenk, M.; Forbus, K. D.; Tomai, E.; Kim, H.; and Kyckelhahn, B. 2005. Solving everyday physical reasoning problems by analogy using sketches. In AAAI Conference on Artificial Intelligence .
Kool, W.; Van Hoof, H.; and Welling, M. 2018. Attention, learn to solve routing problems! arXiv preprint arXiv:1803.08475 .
Lovett, A.; and Forbus, K. 2011. Cultural commonalities and differences in spatial problem-solving: A computational analysis. Cognition 121(2): 281-287.
Lovett, A.; and Forbus, K. 2012. Modeling multiple strategies for solving geometric analogy problems. In Proceedings of the Annual Meeting of the Cognitive Science Society , volume 34.
Lovett, A.; Tomai, E.; Forbus, K.; and Usher, J. 2009. Solving geometric analogy problems through two-stage analogical mapping. Cognitive Science 33(7): 1192-1231.
Lu, H.; Liu, Q.; Ichien, N.; Yuille, A. L.; and Holyoak, K. J. 2019. Seeing the Meaning: Vision Meets Semantics in Solving Pictorial Analogy Problems. In CogSci , 2201-2207.
Mikolov, T.; Yih, W.-t.; and Zweig, G. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 746-751.
Mitchell, M. 1993. Analogy-making as perception: A computer model .
Novick, L. R. 1988. Analogical transfer, problem similarity, and expertise. Journal of Experimental Psychology: Learning, memory, and cognition 14(3): 510.
O'Donoghue, T. V. D.; and Keane, M. 1999. Computability as a limiting cognitive constraint: Complexity concerns in metaphor comprehension about which cognitive linguists should be aware. In Cultural, Psychological and Typological Issues in Cognitive Linguistics: Selected papers of the biannual ICLA meeting in Albuquerque, July 1995 , volume 152, 129. John Benjamins Publishing.
Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems , 8024-8035.
Reed, S. E.; Zhang, Y.; Zhang, Y.; and Lee, H. 2015. Deep visual analogy-making. In Advances in Neural Information Processing Systems , 1252-1260.
Ribeiro, D.; Hinrichs, T.; Crouse, M.; Forbus, K.; Chang, M.; and Witbrock, M. 2019. Predicting State Changes in Procedural Text using Analogical Question Answering .
Sagi, E.; Gentner, D.; and Lovett, A. 2012. What difference reveals about similarity. Cognitive Science 36(6): 1019-1050.
Tai, K. S.; Socher, R.; and Manning, C. D. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1556-1566.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, 5998-6008.
Veale, T.; and Keane, M. T. 1997. The competence of suboptimal theories of structure mapping on hard analogies. In IJCAI , 232-237.
Vinyals, O.; Fortunato, M.; and Jaitly, N. 2015. Pointer networks. In Advances in Neural Information Processing Systems , 2692-2700.
Wang, R.; Yan, J.; and Yang, X. 2019. Learning combinatorial embedding networks for deep graph matching. In Proceedings of the IEEE International Conference on Computer Vision , 3056-3065.
Wetzel, L. 2006. Types and tokens .
Zhang, C.; Gao, F.; Jia, B.; Zhu, Y.; and Zhu, S.-C. 2019. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , 5317-5327.
Zhou, L.; Cui, P.; Yang, S.; Zhu, W.; and Tian, Q. 2019. Learning to learn image classifiers with visual analogy. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , 11497-11506.
## 7 Appendix
## 7.1 Model Details
In the DAG LSTM, the node embeddings were 32-dimensional vectors and the edge embeddings were 16-dimensional vectors. For all Transformer components, our model used multi-headed attention with 2 attention layers, each having 4 heads. In each multi-headed attention layer, the query and key vectors were projected to 128-dimensional vectors. The feed-forward networks used in the Transformer components had one hidden layer with a dimensionality twice that of the input vector size. The feed-forward networks used to compute the values in the correspondence selector used two 64-dimensional hidden layers. The Ξ» parameter applied to the candidate inference loss L_ci was set to Ξ» = 0.1 in our experiments. The models were constructed with the PyTorch (Paszke et al. 2019) library.
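For concreteness, the hyperparameters above can be collected into a single configuration; the sketch below is illustrative only, and the key names are ours rather than AMN's actual configuration interface.

```python
# Hypothetical configuration summarizing the hyperparameters stated above.
# Key names are illustrative; they do not correspond to AMN's actual code.
AMN_CONFIG = {
    "node_embedding_dim": 32,          # DAG LSTM node embeddings
    "edge_embedding_dim": 16,          # DAG LSTM edge embeddings
    "attention_layers": 2,             # Transformer layers per component
    "attention_heads": 4,              # heads per multi-headed attention layer
    "query_key_dim": 128,              # projected query/key dimensionality
    "ffn_hidden_multiplier": 2,        # FFN hidden size = 2x the input size
    "selector_hidden_dims": (64, 64),  # correspondence-selector value network
    "candidate_inference_loss_weight": 0.1,  # lambda applied to L_ci
}
```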
## 7.2 Training Data Generation
In Figure 6, the dark green nodes indicate the initial random graphs C after being copied into the base and target. The red and blue nodes show the graphs built around B and T . The light green edges indicate the gold set of correspondences generated from C . On average, each example consisted of 26.9 expressions and 14.3 entities in the base (41.2 distinct items in total), 27.0 expressions and 14.3 entities in the target (41.3 distinct items in total), and 26.8 correspondences.
Figure 6: Synthetic example with a base (red), target (blue), and shared subgraphs (green)
## 7.3 Additional Experiments
Unit Normalization for Signature Embeddings: In Section 4.1, we described how signature embeddings were scaled to unit length to provide a simple criterion for whether two nodes were likely the same node (i.e., they have a dot product of 1). Intuitively, this feature would be most important for allowing AMN to follow SMT's one-to-one constraint, as it gives AMN the ability to determine which nodes have already been selected for correspondence. To measure the importance of this feature, we performed a simple experiment where we did not scale the signature embeddings to unit length (keeping all other components of AMN the same). We retrained AMN following the same training methodology as before, and tested AMN on the synthetic domain.
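As a minimal illustration of the unit-length criterion described above (the function below is ours, not AMN's), scaling two signature embeddings to unit norm makes a dot product of 1 a direct test for whether they denote the same node:

```python
import torch
import torch.nn.functional as F

def likely_same_node(sig_a, sig_b, eps=1e-5):
    # Scale both signature embeddings to unit length; embeddings that are
    # identical (up to a positive scale) then have a dot product of exactly 1.
    a = F.normalize(sig_a, dim=-1)
    b = F.normalize(sig_b, dim=-1)
    return torch.dot(a, b) > 1.0 - eps
```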
Interestingly, we found that performance in all categories (not just conformance to SMT's one-to-one constraint) became significantly worse. The structural performance of AMN dropped from 0.948 to 0.750, indicating that systematicity was impacted. The fraction of problems that were equivalent to the gold standard correspondence set (i.e., no SMT errors and the same structural evaluation score as the gold standard) dropped from 0.671 to 0.278. In terms of errors, the percent of correspondences that violated one-to-one increased from 0.6% to 1.6% and those violating parallel connectivity increased from 2.1% to 12.0%. Degenerate errors remained about the same, increasing from 0.9% to 1.2%, likely reflecting that the dot product of two signature embeddings still incorporates their shared descendants.
Value of the Signature Graph: Given that the label graph captures almost all of the graph structure, it is natural to ask whether the signature graph is necessary for producing SMT-conforming matchings. To determine the value of the signature graph, we performed an experiment where we completely excised the signature embeddings from AMN, leaving only the label graph for correspondence and candidate inference selection. We retrained this ablated version of AMN with the standard training methodology and tested it on the synthetic set of analogy problems.
Without the signature graph, AMN's performance plummeted in all categories. The one-to-one error rate increased from 0.6% to 92.4% and the parallel connectivity error rate increased from 1.2% to 99.4%. Consequently, the number of error-free matches dropped to 0. This matches our intuition that, without a distinction between the labels of objects and the objects themselves, AMN is incapable of modeling SMT.
## 7.4 Background
DAG LSTMs: DAG LSTMs extend Tree LSTMs (Tai, Socher, and Manning 2015) to DAG-structured data. As with Tree LSTMs, DAG LSTMs compute each node embedding by aggregating information from all of the node's immediate predecessors (the equations for the DAG LSTM are identical to those of the Tree LSTM). The difference between the two is that DAG LSTMs stage the computation of a node's embedding according to a topological sort of the input graph. Computations are batched by grouping together updates of independent nodes (where two nodes are independent if neither is an ancestor of the other). As in (Crouse et al. 2019), for a node v, its initial node embedding s_v is assigned based on its label and arity. The DAG LSTM then computes the final embedding h_v to be
<!-- formula-not-decoded -->
where βŠ™ denotes element-wise multiplication, Ο is the sigmoid function, P is the predecessor function that returns the arguments for a node, and U^(i)_{e_vw}, U^(o)_{e_vw}, U^(c)_{e_vw}, and U^(f)_{e_vw} are learned matrices per edge type. i and o represent the input and output gates, c and ĉ are memory cells, and f is a forget gate.
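Since the exact equations are not reproduced in this extraction, the following sketch only illustrates the general shape of such a per-node update, following the child-sum Tree LSTM of Tai, Socher, and Manning (2015) with one learned matrix per edge type; it simplifies the forget gating to a single gate shared across predecessors and is not AMN's implementation.

```python
import torch
import torch.nn as nn

class DagLSTMCell(nn.Module):
    # Sketch of a DAG-LSTM-style node update; nodes are processed in topological order.
    def __init__(self, node_dim, hidden_dim, num_edge_types):
        super().__init__()
        self.W = nn.Linear(node_dim, 4 * hidden_dim)  # pre-activations for i, o, c-hat, f
        self.U = nn.ModuleList(nn.Linear(hidden_dim, 4 * hidden_dim, bias=False)
                               for _ in range(num_edge_types))  # one matrix per edge type
        self.hidden_dim = hidden_dim

    def forward(self, s_v, pred_h, pred_c, pred_edge_types):
        # s_v: initial embedding of node v; pred_*: states and edge types of P(v)
        gates = self.W(s_v)
        for h_w, e in zip(pred_h, pred_edge_types):
            gates = gates + self.U[e](h_w)
        i, o, c_hat, f = torch.chunk(gates, 4)
        i, o, f = torch.sigmoid(i), torch.sigmoid(o), torch.sigmoid(f)
        # memory cell combines the candidate cell with the predecessors' cells
        c_v = i * torch.tanh(c_hat) + sum((f * c_w for c_w in pred_c),
                                          torch.zeros(self.hidden_dim))
        h_v = o * torch.tanh(c_v)  # final embedding for node v
        return h_v, c_v
```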
Multi-Headed Attention: The multi-headed attention (MHA) mechanism of (Vaswani et al. 2017) is used in our work to compare correspondences against one another. In this work, MHA is given two inputs: a query vector q and a list of key vectors ⟨k_1, ..., k_n⟩ against which the query vector is compared. In N-headed attention, N separate attention transformations are computed. For transformation i we have
<!-- formula-not-decoded -->
where each of W^(q)_i, W^(k)_i, and W^(v)_i is a learned matrix and b_q̂ is the dimensionality of q̂_i. The final output vector q' for input q is then given as a combination of its N transformations
<!-- formula-not-decoded -->
where W^(o)_i is a distinct learned matrix for each i. In implementation, the comparisons of query and key vectors are batched together and performed as efficient matrix multiplications.
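The sketch below illustrates this formulation for a single query: per-head projections, scaled dot-product attention weights, and per-head output matrices whose results are summed. The tensor shapes and function name are assumptions made for the example, not AMN's code.

```python
import torch

def multi_head_attention(q, K, Wq, Wk, Wv, Wo):
    # q: (d,) query; K: (n, d) keys; Wq/Wk/Wv: (N, d_h, d); Wo: (N, d, d_h)
    N, d_h, _ = Wq.shape
    out = torch.zeros_like(q)
    for i in range(N):
        q_i = Wq[i] @ q                     # transformed query for head i
        k_i = K @ Wk[i].T                   # transformed keys, shape (n, d_h)
        v_i = K @ Wv[i].T                   # transformed values, shape (n, d_h)
        attn = torch.softmax(k_i @ q_i / d_h ** 0.5, dim=0)  # attention over the n keys
        out = out + Wo[i] @ (attn @ v_i)    # per-head output matrices, summed
    return out
```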
Transformer Encoder-Decoder: The Transformer-based encoder-decoder is given two inputs, a comparison set C and an output set O . At a high level, C will be encoded into a new set E , which will inform a selection process that picks elements of O to return. In the context of pointer networks, the set O begins as the encoded input set, i.e., O = E .
Encoder: First, the elements of C, i.e., h_c ∈ C, are passed through N layers of an attention-based transformation. For an element h_c in the i-th layer (i.e., h^(i-1)_c), this is performed as follows
<!-- formula-not-decoded -->
<!-- formula-not-decoded -->
where LN denotes the use of layer normalization (Ba, Kiros, and Hinton 2016), MHA^(i)_C (Appendix 7.4) denotes the use of self multi-headed attention for layer i (i.e., attention between h^(i)_c and the other elements of C^(i-1)), and FFN^(i) is a two-layer feed-forward neural network with ELU (Clevert, Unterthiner, and Hochreiter 2015) activations. After N layers of processing, the set of encoded inputs E is given by E = C^(N).
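A minimal sketch of one such encoder layer is shown below; the residual placement is an assumption (the formulas themselves are not reproduced above), and the feed-forward hidden size follows the twice-the-input-dimension convention from Appendix 7.1.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ELU(), nn.Linear(2 * dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, C):
        # C: (batch, |C|, dim) comparison set; self-attention over its elements
        h, _ = self.mha(C, C, C)
        h = self.ln1(C + h)               # residual + layer norm (assumed placement)
        return self.ln2(h + self.ffn(h))  # feed-forward sublayer with ELU activations
```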
Decoder: With encoded comparison elements E and a set of potential outputs O, the objective of the decoder is to use E to inform the selection of some subset of output options D βŠ† O to return. Decoding happens sequentially; at each timestep t ∈ {1, ..., n}, the decoder selects an element from O βˆͺ {END-TOK} (where END-TOK is a learned triple) to add to D. If END-TOK is chosen, the decoding procedure stops and D is returned.
Let D_t be the set of elements that have been selected by timestep t and O_t be the remaining unselected elements at timestep t. First, D_t is processed with an N-layered attention-based transformation. For an element h^(i-1)_d, this is given by
<!-- formula-not-decoded -->
where MHA^(i)_D denotes the use of self multi-headed attention, MHA^(i)_E denotes the use of multi-headed attention against elements of E, and FFN^(i) is a two-layer feed-forward neural network with ELU activations. We consider the already-selected outputs to be the transformed selected outputs, i.e., D_t = D^(N)_t. For a pair ⟨h_o, h_d⟩ ∈ O_t Γ— D_t, we compute their compatibility Ξ±_od as
<!-- formula-not-decoded -->
where W_q and W_k are learned matrices, b_o is the dimensionality of h_o, and FFN is a two-layer feed-forward network with ELU activations. This defines a matrix H ∈ R^{|O_t| Γ— |D_t|} of compatibility scores. One can then apply some operation (e.g., max pooling) to produce a vector of values v_t ∈ R^{|O_t|}, which can be fed into a softmax to produce a distribution over options from O_t. The highest-probability element Ξ΄* from the distribution is then added to the set of selected outputs, i.e., D = D_t βˆͺ {Ξ΄*}.
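A simplified version of one decoding step is sketched below: it scores each remaining option against the already-selected outputs, max-pools, and picks the most probable option. The FFN term in the compatibility computation is omitted, and the function is an illustration rather than AMN's decoder.

```python
import torch

def decode_step(O_t, D_t, W_q, W_k):
    # O_t: (|O_t|, d) remaining options; D_t: (|D_t|, d) already-selected outputs
    d = O_t.size(-1)
    H = (O_t @ W_q.T) @ (D_t @ W_k.T).T / d ** 0.5  # compatibility scores, (|O_t|, |D_t|)
    v_t = H.max(dim=1).values                       # pool each option over selected outputs
    probs = torch.softmax(v_t, dim=0)               # distribution over remaining options
    return probs.argmax().item()                    # index of the option added to D
```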
## 7.5 AMN Example Outputs
For the outputs from the non-synthetic domains (all but the first figure), only small subgraphs of the original graphs are shown (the original graphs were too large to be displayed).
Figure 7: AMN output for an example from the Synthetic domain
Figure 8: AMN output for an example from the Visual Oddity domain
Figure 9: AMN output for an example from the Moral Decision Making domain