2512.23828v1
# Hoffman-London graphs: When paths minimize $H$-colorings among trees
**Authors**: David Galvin, Phillip Marmorino, Emily McMillon, JD Nir, Amanda Redlich
David Galvin∗ University of Notre Dame dgalvin1@nd.edu
JD Nir Oakland University jdnir@oakland.edu
Phillip Marmorino† Purdue University pmarmori@purdue.edu
Emily McMillon‡ Rice University em72@rice.edu
Amanda Redlich University of Massachusetts Lowell amanda_redlich@uml.edu
January 1, 2026
## Abstract
Given a graph G and a target graph H , an H -coloring of G is an adjacency-preserving vertex map from G to H . The number of H -colorings of G , hom( G,H ), has been studied for many classes of G and H . In particular, extremal questions of maximizing and minimizing hom( G,H ) have been considered when H is a clique or G is a tree.
In this paper, we develop a new technique using automorphisms of H to show that hom( T, H ) is minimized by paths as T varies over trees on a fixed number of vertices. We introduce the term Hoffman-London to refer to graphs that are minimal in this sense. In particular, we define an automorphic similarity matrix which is used to compute hom( T, H ) and give matrix conditions under which H is Hoffman-London.
We then apply this technique to identify several families of graphs that are Hoffman-London, including loop threshold graphs and some with applications in statistical physics (e.g. the Widom-Rowlinson model). By combining our approach with a few other observations, we fully characterize the minimizing trees for all graphs H on three or fewer vertices.
## 1 Introduction
Graph coloring is a well-known problem in which vertices of a graph are assigned a color with the restriction that adjacent vertices receive different colors. In this paper, we consider the related notion of H -colorings of a graph G in which vertices of G are labeled by vertices of H in such a way that adjacent vertices in G must be 'colored' by adjacent vertices of H . Formally, given a simple, loopless graph G = ( V ( G ) , E ( G )) and a target graph H = ( V ( H ) , E ( H )) without multi-edges, but possibly containing loops, an H -coloring of G is a map f : V ( G ) → V ( H ) that preserves adjacency: whenever u ∼ G v we must have f ( u ) ∼ H f ( v ). Note that when H = K q , the complete graph on q vertices, an H -coloring of G is an assignment of the vertices of G to one of q values with the only
∗ Galvin is in part supported by Simons Foundation Gift no. 854277 - Travel Support for Mathematicians.
† Marmorino is in part supported by NSF AGS-10002554.
‡ McMillon is in part supported by NSF DMS-2303380.
restriction that adjacent vertices receive different labels; in other words, the proper q -colorings of G . In this sense, H -colorings are a generalization of proper vertex colorings, though, as we will see, H -colorings generalize several other common graph theoretic notions as well.
While we mainly focus on the framework of H -colorings, it should be noted that an H -coloring of G is just a graph homomorphism from G to H . Fittingly, we denote the set of all H -colorings of G by Hom( G,H ) and use hom( G,H ) to count the number of such colorings.
As mentioned, H -colorings encompass proper q -colorings by allowing H to be a complete graph. By setting H = H ind , which is one looped vertex connected by an edge to an unlooped vertex, we see H -colorings of G also encode the independent (or stable ) sets of G : any collection of vertices mapped to the unlooped vertex contains no internal edges. Lovász [32] investigated connections between H -colorings and graph limits, quasi-randomness, and property testing. In statistical physics, the language of H -coloring has been adopted to describe hard-constraint spin models; see e.g. [1, 2].
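To make the H_ind correspondence concrete, the two counts can be compared by brute force; the following sketch is our own illustration (helper names hypothetical), with hom enumerating all vertex maps:

```python
from itertools import combinations, product

# Brute-force hom(G, H): H is an adjacency matrix with H[i][i] = 1
# encoding a loop; G is an edge list on vertices 0..n-1.
def hom(n, edges, H):
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

# H_ind: looped vertex 0 joined to unlooped vertex 1.
H_IND = [[1, 1],
         [1, 0]]

def num_independent_sets(n, edges):
    return sum(1 for r in range(n + 1) for S in combinations(range(n), r)
               if not any(u in S and v in S for u, v in edges))

# Path on 4 vertices: the vertices mapped to the unlooped vertex form an
# independent set, so the two counts agree.
P4 = [(0, 1), (1, 2), (2, 3)]
print(hom(4, P4, H_IND), num_independent_sets(4, P4))  # 8 8
```

Both counts equal 8, the number of independent sets of the path on four vertices.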
This article contributes to a broader investigation into an extremal enumeration question: for a given family of graphs G and a target graph H , can we characterize those G β G which maximize or minimize hom( G,H )? The origins of this question are attributed to Birkhoff's work on the 4-color conjecture [3, 4] with more recent attention coming from questions of Wilf [42] and Linial [30] concerning which graph on n vertices and m edges admits the most proper q -colorings, which is to say maximizes hom( G,K q ). For further results and conjectures, we direct the reader to the surveys [10, 45].
Here we consider the case where G is a family of trees; in particular we consider the family of graphs G = T n , the set of all trees on n vertices. As in many such questions, the path graph P n and star graph S n = K 1 ,n -1 are natural candidates for extremal graphs. When H = H ind , Prodinger and Tichy [34] proved that for each n , the number of independent sets among n -vertex trees is maximized by stars and minimized by paths, confirming that for each T n ∈ T n ,
$$\hom(P_n, H_{ind}) \leq \hom(T_n, H_{ind}) \leq \hom(S_n, H_{ind}).$$
The Hoffman-London inequality (see [25, 31, 39]) is equivalent to the statement that hom( P n , H ) ≤ hom( S n , H ) for any choice of H and n . Sidorenko [40] extended the Hoffman-London inequality to the following very general result:
Theorem 1.1 (Sidorenko [40]) . Fix H and n ≥ 1 . For any T n ∈ T n ,
$$\hom(T_n, H) \leq \hom(S_n, H).$$
In other words, for any target graph H and order n , not only P n , but every tree T n admits at most as many H -colorings as S n . Sidorenko's result entirely resolves the maximization question for trees (and, it can be shown, connected graphs).
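For small n, Theorem 1.1 can be sanity-checked by enumerating every labeled tree via Prüfer sequences and comparing against the star; the sketch below is our own illustration (not the paper's method), using the unlooped path on three vertices as the target H:

```python
import heapq
from itertools import product

def hom(n, edges, H):
    # brute-force count of adjacency-preserving maps into H
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

def tree_from_pruefer(seq, n):
    # decode a Pruefer sequence into the edge list of a labeled tree on n vertices
    deg = [1] * n
    for x in seq:
        deg[x] += 1
    heap = [i for i in range(n) if deg[i] == 1]
    heapq.heapify(heap)
    edges = []
    for x in seq:
        leaf = heapq.heappop(heap)
        edges.append((leaf, x))
        deg[x] -= 1
        if deg[x] == 1:
            heapq.heappush(heap, x)
    edges.append((heapq.heappop(heap), heapq.heappop(heap)))
    return edges

H = [[0, 1, 0],  # target H: the unlooped path on 3 vertices
     [1, 0, 1],
     [0, 1, 0]]
n = 5
star = [(0, i) for i in range(1, n)]
counts = [hom(n, tree_from_pruefer(seq, n), H)
          for seq in product(range(n), repeat=n - 2)]
print(max(counts), hom(n, star, H))  # 18 18: the star attains the maximum
```

Over all 125 labeled trees on 5 vertices, the star attains the maximum (18 colorings); the path attains the minimum (12) for this particular H.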
Given the result of Prodinger and Tichy, one might ask whether paths minimize the number of H -colorings; could one show a result analogous to Sidorenko's that hom( P n , H ) ≤ hom( T n , H ) for all target graphs? Csikvári and Lin [6], following earlier work of Leontovich [29], dash such hopes by describing a target graph H and a tree E 7 on seven vertices for which hom( E 7 , H ) < hom( P 7 , H ). They raise the following natural problem [6, Problem 6.2]:
Problem 1.2. Characterize those H for which hom( P n , H ) ≤ hom( T n , H ) holds for all n and all T n ∈ T n .
Inspired by [25, 31], we introduce the following definitions:
Definition 1.3. We say a target graph H is Hoffman-London if for all n ≥ 1,
$$\hom(P_n, H) = \min_{T_n \in \mathcal{T}_n} \hom(T_n, H).$$
If it holds that hom( P n , H ) < hom( T n , H ) for all T n ∈ T n \ { P n } and all n ≥ 1, we say that H is strongly Hoffman-London .
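Whether a given small H is Hoffman-London can be tested exhaustively for bounded n; a brute-force sketch (helper names ours), run here on H_ind, for which Prodinger and Tichy's theorem guarantees a positive answer:

```python
import heapq
from itertools import product

def hom(n, edges, H):
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

def tree_from_pruefer(seq, n):
    deg = [1] * n
    for x in seq:
        deg[x] += 1
    heap = [i for i in range(n) if deg[i] == 1]
    heapq.heapify(heap)
    edges = []
    for x in seq:
        leaf = heapq.heappop(heap)
        edges.append((leaf, x))
        deg[x] -= 1
        if deg[x] == 1:
            heapq.heappush(heap, x)
    edges.append((heapq.heappop(heap), heapq.heappop(heap)))
    return edges

def is_hoffman_london_up_to(H, N):
    # check hom(P_n, H) <= hom(T_n, H) over all labeled trees, for each n <= N
    for n in range(2, N + 1):
        p = hom(n, [(i, i + 1) for i in range(n - 1)], H)
        for seq in product(range(n), repeat=n - 2):
            if hom(n, tree_from_pruefer(seq, n), H) < p:
                return False
    return True

H_IND = [[1, 1], [1, 0]]
print(is_hoffman_london_up_to(H_IND, 6))  # True (Prodinger-Tichy)
```

Of course this only certifies the property up to the cutoff N; the techniques of this paper give proofs for all n.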
We can thus restate Problem 1.2 as: Which target graphs H are Hoffman-London? To make progress on this problem, we develop criteria for identifying Hoffman-London and strongly Hoffman-London target graphs based on concepts introduced in a companion paper [19], which we restate here. Given a target graph H , possibly with loops, the orbit partition of H is the partition of V ( H ) obtained by declaring a pair of vertices u, v to be equivalent if there is an automorphism that maps u to v . We say such vertices are automorphically similar . Denote the equivalence classes of the orbit partition, which we call automorphic similarity classes , by H 1 , . . . , H k , where k is the number of automorphism orbits of V ( H ). We will show in Lemma 3.1 that for each 1 ≤ i ≤ k and 1 ≤ j ≤ k , there is a constant m i,j counting the number of neighbors any v ∈ H i has in H j , independent of the specific choice of v . The automorphic similarity matrix of H (relative to the specific ordering of the automorphic similarity classes) is the k × k matrix M = M ( H ) whose ( i, j )-entry is m i,j .
We also make use of the following matrix property, which is related to the idea of stochastic domination.
Definition 1.4. A k × k matrix M with ( i, j )-entry m i,j has the increasing columns property if the row sums over any terminal segment of columns are non-decreasing down the rows: for each 1 ≤ c ≤ k and each 1 ≤ i ≤ k -1,
$$\sum _ { j = c } ^ { k } m _ { i , j } \leq \sum _ { j = c } ^ { k } m _ { i + 1 , j }$$
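In code, Definition 1.4 is a direct check on suffix sums of consecutive rows; a small sketch (function name ours):

```python
def has_increasing_columns(M):
    # suffix row sums must be non-decreasing from each row to the next
    k = len(M)
    return all(sum(M[i][c:]) <= sum(M[i + 1][c:])
               for c in range(k) for i in range(k - 1))

# The automorphic similarity matrix of H_ind with the unlooped vertex's
# class listed first is [[0, 1], [1, 1]] (the loop makes the looped
# vertex its own neighbor), and it has the property.
print(has_increasing_columns([[0, 1], [1, 1]]))  # True
print(has_increasing_columns([[1, 1], [1, 0]]))  # False: row sums 2 > 1
```

Note that the property depends on the chosen ordering of the classes: the same two rows in the opposite order fail the check.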
Our first contribution, captured by Theorem 1.5, is a test that can be applied to the automorphic similarity matrix of a graph H which, when it applies, proves that H is Hoffman-London.
Theorem 1.5. Let H be a target graph, possibly with loops. Suppose there is an ordering of the automorphic similarity classes H 1 , . . . , H k of H such that the automorphic similarity matrix M has the increasing columns property. Then H is Hoffman-London.
In certain cases, the reasoning behind Theorem 1.5 can be extended to show H is strongly Hoffman-London. For a graph G with designated vertex v , let Hom v → i ( G,H ) denote the set of H -colorings of G in which v is mapped to some (arbitrary but specific) representative of H i . We use hom v → i ( G,H ) for the number of such H -colorings. Although Hom v → i ( G,H ) clearly depends on the choice of representative, in Lemma 3.2 we will show that hom v → i ( G,H ) does not.
Corollary 1.6. Let H be a target graph that satisfies the hypotheses of Theorem 1.5. Suppose that for each t ≥ 2 , there exist two classes, H a ( t ) and H b ( t ) , such that there is at least one H -coloring of P t that sends one endvertex of P t to a vertex in H a ( t ) and the other endvertex to a vertex in H b ( t ) . Suppose further that for all s ≥ 2 , it holds that hom v → b ( t ) ( P s , H ) > hom v → a ( t ) ( P s , H ) , where v is one of the end vertices of P s . Then H is strongly Hoffman-London.
In Section 4 we apply Theorem 1.5 to prove (in some cases reproving known results) that several families of target graphs are Hoffman-London. These examples include vertex-transitive graphs to which looped dominating vertices are added (partially recovering a result of Engbers and the first author [15]), paths (partially recovering a result of Csikvári and Lin [7]) and fully looped paths, loop threshold graphs, blow-ups of fully looped stars, and families of graphs (including the Widom-Rowlinson model) originating from statistical physics. When possible, we also apply Corollary 1.6 to show that these graphs are strongly Hoffman-London.
## 2 Preliminary observations
In this section, we gather some observations about enumerating H -colorings of trees that will prove useful at various points throughout the paper. We also present a proof that complete bipartite graphs are Hoffman-London.
We begin with two observations pertaining to target graphs with more than one component. Both results follow immediately from the definition of H -coloring.
Observation 2.1. If H has components H 1 , . . . , H k , then for any connected graph G we have
$$h o m ( G , H ) = \sum _ { i = 1 } ^ { k } h o m ( G , H _ { i } )$$
In particular, if it holds that hom( P n , H i ) ≤ hom( T n , H i ) for all i , n , and T n ∈ T n , then it holds that hom( P n , H ) ≤ hom( T n , H ) for all n and T n ∈ T n .
That is, the set of graphs that are Hoffman-London is closed under taking disjoint unions. The second observation follows from the first, but is useful on its own when considering graphs with isolated vertices.
Observation 2.2. If H has isolated vertices and H ′ is obtained from H by removing the isolated vertices, then for each G without isolated vertices we have hom( G,H ) = hom( G,H ′ ). In particular, this means that for the purpose of minimizing H -colorings of trees on two or more vertices, we can ignore isolated vertices.
We now make an observation about regular target graphs. Note that here and throughout this paper, the degree of a vertex v is the number of edges (including loops) that contain v , so loops contribute 1 to the degree, not 2, as is common elsewhere in the literature.
Observation 2.3. If H is regular, then all trees on n vertices admit the same number of H -colorings. Specifically, hom( T, H ) = | V ( H ) | d^{n-1} , where d is the degree of each vertex in H .
It follows that regular graphs are, trivially, Hoffman-London. To see the validity of Observation 2.3, put an ordering v 1 , . . . , v n on the vertices of T satisfying that for each i ≥ 2, v i is adjacent to exactly one v j , j < i . For example, start with an arbitrary vertex as v 1 , then list all the vertices at distance 1 from v 1 , in some arbitrary order, then move on to all the vertices at distance 2 from v 1 , and so on. When H -coloring the vertices of T in this order, there are | V ( H ) | options for the color at v 1 , then d options for the color of each successive vertex.
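Observation 2.3 is easy to confirm by brute force; with H = C_4 (2-regular, no loops) every tree on 5 vertices should admit |V(H)| d^{n-1} = 4 · 2^4 = 64 colorings. A sketch (the particular trees are our own choices):

```python
from itertools import product

def hom(n, edges, H):
    # brute-force count of adjacency-preserving maps into H
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

C4 = [[0, 1, 0, 1],   # the 4-cycle: every vertex has degree 2
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]

n = 5
trees = {
    "path":   [(0, 1), (1, 2), (2, 3), (3, 4)],
    "star":   [(0, 1), (0, 2), (0, 3), (0, 4)],
    "spider": [(0, 1), (1, 2), (1, 3), (3, 4)],
}
for name, edges in trees.items():
    print(name, hom(n, edges, C4))  # each: 4 * 2**4 = 64
```

All three trees yield 64, matching the greedy-ordering argument above.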
Next, we refer to the notion of the tensor product of two graphs. For graphs H 1 and H 2 , the graph H 1 × H 2 has vertex set V ( H 1 ) × V ( H 2 ), and pairs ( x 1 , y 1 ), ( x 2 , y 2 ) are adjacent in H 1 × H 2 if and only if x 1 x 2 ∈ E ( H 1 ) and y 1 y 2 ∈ E ( H 2 ); see Figure 1.
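The tensor product, and the multiplicativity identity hom( G , H 1 × H 2 ) = hom( G , H 1 ) hom( G , H 2 ) quoted from [32], can be verified by brute force on small instances; a sketch with our own encoding of product vertices:

```python
from itertools import product

def hom(n, edges, H):
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

def tensor(H1, H2):
    # vertex (a, b) of H1 x H2 is encoded as a * len(H2) + b
    k1, k2 = len(H1), len(H2)
    T = [[0] * (k1 * k2) for _ in range(k1 * k2)]
    for a, b, c, d in product(range(k1), range(k2), range(k1), range(k2)):
        if H1[a][c] and H2[b][d]:
            T[a * k2 + b][c * k2 + d] = 1
    return T

K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
H_IND = [[1, 1], [1, 0]]          # looped vertex 0 joined to unlooped vertex 1
P4 = [(0, 1), (1, 2), (2, 3)]     # path on 4 vertices

lhs = hom(4, P4, tensor(K3, H_IND))
rhs = hom(4, P4, K3) * hom(4, P4, H_IND)
print(lhs, rhs)  # both 24 * 8 = 192
```

Here hom(P4, K3) = 3 · 2^3 = 24 and hom(P4, H_ind) = 8, and the brute-force count against the 6-vertex product graph agrees with their product.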
There is a strong connection between tensor products and H -colorings, namely that for any graphs G , H 1 , and H 2 we have
$$\hom(G, H_1 \times H_2) = \hom(G, H_1) \hom(G, H_2)$$
(see e.g. [32, Equation (5.30)]). From this fact we draw the following two conclusions:
Observation 2.4. If H 1 , H 2 satisfy hom( P n , H i ) ≤ hom( T n , H i ) for each i ∈ { 1 , 2 } , n ≥ 1, and T n ∈ T n , then
$$\hom(P_n, H_1 \times H_2) \leq \hom(T_n, H_1 \times H_2)$$
for all n and T n ∈ T n .
That is, the set of graphs that are Hoffman-London is closed under taking tensor products.
Observation 2.5. Let R be a regular graph of degree d with at least one edge. Then for any H and any n ≥ 1, the trees T n in T n that minimize hom( T n , H ) are the same as the trees T n in T n that minimize hom( T n , H × R ).
Observation 2.5 follows from Observation 2.4 and from the fact that hom( T n , R ) = | V ( R ) | d^{n-1} (see Observation 2.3) and is thus positive and independent of the specific choice of T n ∈ T n .
Figure 1: The tensor product K 2 × K 2 .
Finally, we present a result whose justification is more involved. A bipartition V ( G ) = X ∪ Y of a graph is called balanced if the sizes are as equal as possible, i.e. | X | = | Y | if | V ( G ) | is even and | | X | - | Y | | = 1 if | V ( G ) | is odd.
Theorem 2.6. Each complete bipartite graph H = K a,b is Hoffman-London. When a ≠ b , T n ∈ T n minimizes hom( T n , H ) if and only if T n has a balanced bipartition.
Proof. If a = b , then H is a regular graph and Observation 2.3 tells us that every tree admits the same number of H -colorings. For the remainder of the proof we assume that a ≠ b .
Any tree T n on n vertices has a unique bipartition, say V ( T n ) = X ∪ Y , with | X | = k and | Y | = n - k . Let A be the part of H containing a vertices and B be the part containing b vertices. Let φ ∈ Hom( T n , K a,b ) and x ∈ X . Then, as T n is connected, each vertex in X is adjacent to some vertex in Y and each vertex in Y is adjacent to some vertex in X , so if φ ( x ) ∈ A , we have φ ( X ) ⊆ A and φ ( Y ) ⊆ B , and if φ ( x ) ∈ B , φ ( X ) ⊆ B and φ ( Y ) ⊆ A . Furthermore, as H is complete bipartite, the image of x (and of each other vertex in X ) can be any vertex in the appropriate part, and similarly for the vertices of Y , so
$$\hom(T_n, H) = f(k) := a^k b^{n-k} + a^{n-k} b^k.$$
We have
$$f'(k) = (\log a - \log b)\left( a^k b^{n-k} - a^{n-k} b^k \right).$$
As a ≠ b , the only zero of f ′ occurs when a^{n-2k} = b^{n-2k} , which, for distinct positive integers a and b , requires n - 2 k = 0, or k = n/ 2. Noting
$$f''(k) = (\log a - \log b)^2 \left( a^k b^{n-k} + a^{n-k} b^k \right) > 0,$$
we see k = n/ 2 is a minimum of f , and furthermore f is decreasing as k increases from 0 to n/ 2. Therefore
$$\min_{T_n \in \mathcal{T}_n} \hom(T_n, H) \geq 2\, a^{n/2} b^{n/2}$$
when n is even. When n is odd, since k must be an integer,
$$\min_{T_n \in \mathcal{T}_n} \hom(T_n, H) \geq a^{(n-1)/2} b^{(n+1)/2} + a^{(n+1)/2} b^{(n-1)/2}.$$
In both cases, P n achieves the bound, as does any tree whose bipartition is balanced. If T n is a tree whose bipartition is not balanced, f ( k ) is strictly greater than the minimum value.
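The closed form hom( T n , K a,b ) = a^k b^{n-k} + a^{n-k} b^k can be checked by brute force; a sketch with a = 1, b = 2 (helper names ours), where k is the size of one side of the tree's bipartition:

```python
from itertools import product

def hom(n, edges, H):
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

def part_size(n, edges):
    # size of vertex 0's class in the tree's unique bipartition (2-coloring)
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour, frontier = {0: 0}, [0]
    while frontier:
        u = frontier.pop()
        for w in adj[u]:
            if w not in colour:
                colour[w] = 1 - colour[u]
                frontier.append(w)
    return sum(1 for v in colour if colour[v] == 0)

a, b, n = 1, 2, 5
# K_{1,2}: part A = {0}, part B = {1, 2}
K_ab = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
results = {}
for name, edges in {"path": [(0, 1), (1, 2), (2, 3), (3, 4)],
                    "star": [(0, 1), (0, 2), (0, 3), (0, 4)]}.items():
    k = part_size(n, edges)
    results[name] = (k, hom(n, edges, K_ab),
                     a**k * b**(n - k) + a**(n - k) * b**k)
print(results)  # path: (3, 12, 12); star: (1, 18, 18)
```

The path's bipartition (sizes 3 and 2) is balanced and gives the minimum value 12; the star's (sizes 1 and 4) is not, and gives 18.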
Csikvári and Lin [7] observed the case a = 1 of Theorem 2.6. The question of whether complete multipartite graphs with more than two parts are Hoffman-London remains open.
## 3 Proofs of Theorem 1.5 and Corollary 1.6
In this section, we prove Theorem 1.5, our primary condition for identifying Hoffman-London graphs H , and Corollary 1.6, our condition for identifying strongly Hoffman-London graphs. Recall from the introduction that if H is a target graph, possibly with loops, and u and v are vertices in H , we say u and v are automorphically similar if there is an automorphism of H that sends u to v . Automorphic similarity defines an equivalence relation on V ( H ), and we call the equivalence classes of that relation the automorphic similarity classes of H . The resulting partition of the vertices is sometimes called the orbit partition . As discussed in [23, Section 9.3], the orbit partition is an equitable partition , which means that for any two (not necessarily distinct) automorphic similarity classes A and B , there is a constant c ( A,B ) such that | N ( v ) ∩ B | = c ( A,B ) for each v ∈ A . In other words, vertices in the same automorphic similarity class have the same number of neighbors in each other class. For completeness, and because the proof is brief and illustrative, we include the following lemma (which was first presented in [19]).
Lemma 3.1. Let H be a graph, possibly with loops, and let H 1 , . . . , H k be the automorphic similarity classes of H . For any 1 ≤ i ≤ k , any u, v ∈ H i , and any 1 ≤ j ≤ k ,
$$|N(u) \cap H_j| = |N(v) \cap H_j|.$$
Proof. Let σ be an automorphism of H that sends u to v . Then each w ∈ N ( u ) ∩ H j is mapped to some w ′ = σ ( w ). As w ∼ u , we see w ′ ∼ v , and as w ∈ H j and σ ( w ) = w ′ we see w and w ′ are automorphically similar, so w ′ ∈ H j as well. We have shown | N ( u ) ∩ H j | ≤ | N ( v ) ∩ H j | . Repeating the analysis with σ^{-1} , we see | N ( u ) ∩ H j | ≥ | N ( v ) ∩ H j | and so equality holds.
Using Lemma 3.1, given an ordering H 1 , . . . , H k of the automorphic similarity classes of H , we define m i,j = c ( H i , H j ) to be the number of neighbors that an arbitrary vertex in H i has in H j . We then define an automorphic similarity matrix of H to be a k × k matrix M = M ( H ) whose ( i, j )-entry is m i,j , where we note that different orderings of the automorphic similarity classes of H may result in different automorphic similarity matrices.
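For small targets, the orbit partition and M can be computed naively by enumerating all automorphisms; an illustrative sketch (our own function, not from [19]):

```python
from itertools import permutations

def similarity_matrix(H):
    # orbit partition and matrix M for a small H, by enumerating all
    # automorphisms (exponential in |V(H)|; loops sit on the diagonal)
    n = len(H)
    autos = [p for p in permutations(range(n))
             if all(H[u][v] == H[p[u]][p[v]]
                    for u in range(n) for v in range(n))]
    classes, assigned = [], set()
    for u in range(n):
        if u not in assigned:
            orbit = sorted({p[u] for p in autos})
            assigned.update(orbit)
            classes.append(orbit)
    # row i uses the least representative of class i; by Lemma 3.1 the
    # neighbor counts do not depend on this choice
    return classes, [[sum(H[cls[0]][w] for w in classes[j])
                      for j in range(len(classes))] for cls in classes]

H_IND = [[1, 1], [1, 0]]  # looped vertex 0 joined to unlooped vertex 1
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(similarity_matrix(H_IND))  # ([[0], [1]], [[1, 1], [1, 0]])
print(similarity_matrix(K3))     # ([[0, 1, 2]], [[2]])
```

For H_ind the two singleton classes cannot be swapped (one vertex is looped and the other is not), while the vertex-transitive K 3 has a single class and a 1 × 1 matrix.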
As we will see in Lemma 3.3, the matrix M , together with the sizes of the automorphic similarity classes of H , contains all of the information necessary to count H -colorings of trees. This could also be accomplished using the adjacency matrix of H , but for highly symmetric target graphs the automorphic similarity matrix is much smaller than the adjacency matrix and is thus easier to analyze. First, though, we need the following result, also presented in [19].
Lemma 3.2. Let H be a graph, possibly with loops, and let G be an arbitrary graph with distinguished vertex v . Suppose w and w ′ are automorphically similar vertices in H . Let G be the set of H -colorings of G in which v is sent to w , and let G ′ be the set of H -colorings of G in which v is sent to w ′ . Then |G| = |G ′ | .
Proof. Let σ be an automorphism of H that sends w to w ′ , which exists as w and w ′ are automorphically similar. Then the function mapping f to σ ∘ f is a bijection from G to G ′ .
We now present another lemma which we will require in the proof of Theorem 1.5 but which we feel is also of independent interest: a generalization to the setting of automorphic similarity classes and matrices of the tree-walk algorithm of Csikvári and Lin [6]. Recall that Hom v → i ( T, H ) denotes the set of H -colorings of T in which v is mapped to some (arbitrary but specific) representative of H i and that we set hom v → i ( T, H ) = | Hom v → i ( T, H ) | .
Lemma 3.3. Let H be a target graph with automorphic similarity classes H 1 , . . . , H k and associated automorphic similarity matrix M . Let T be a tree with root v . Let a ( H ) be the row vector whose i th entry is | H i | and define h ( T, v ) to be the column vector whose i th entry is hom v → i ( T, H ) .
Then hom( T, H ) = a ( H ) h ( T, v ) , and h ( T, v ) can be explicitly computed from T , a ( H ) , and M , via recursion on T .
Proof. To see that hom( T, H ) = a ( H ) h ( T, v ), partition Hom( T, H ) according to the color given to v . There are | H i | classes in this partition in which v is sent to some vertex in H i , and by Lemma 3.2, each of these classes has size hom v → i ( T, H ). Hence the total count of H -colorings is
$$\sum_{i=1}^{k} |H_i| \hom_{v \to i}(T, H).$$
To compute h ( T, v ), we proceed recursively. When T consists of a single vertex, necessarily v , we have that hom v → i ( T, H ) = 1 for any i , and so h ( T, v ) is the constant vector with all entries equal to 1.
When T has more than one vertex, we consider separately the cases deg( v ) = 1 and deg( v ) ≥ 2. When deg( v ) = 1, let w be the neighbor of v . Then
$$\hom_{v \to i}(T, H) = \sum_{j=1}^{k} m_{i,j} \hom_{w \to j}(T - v, H).$$
Indeed, once we have colored v with the representative vertex from H i , there are, for each j , m i,j choices for a color from H j for w (as the representative vertex in H i has m i,j neighbors in H j for each j ). For each of these m i,j choices, recalling that hom w → j ( T -v, H ) is independent of the choice of representative from H j due to Lemma 3.2, there are hom w → j ( T -v, H ) ways to extend the coloring to the rest of T . More succinctly, we have
$$h ( T , v ) = M h ( T - v , w )$$
where M is the automorphic similarity matrix of H .
When deg( v ) = d ≥ 2, let w be any neighbor of v . Form tree T ′ by taking the component of T -v that contains w and adding a new vertex v ′ ∼ w . Form tree T ′′ by taking each component of T -v that does not contain w (of which there is at least one, because v has degree at least two) and adding a new vertex v ′′ adjacent to each vertex that was adjacent to v in T . Note that by gluing v ′ to v ′′ we recover T . (See Figure 2.)
Figure 2: Deconstructing T into T β² and T β²β² .
$$h(T, v) = h(T', v') \odot h(T'', v''),$$
where ⊙ represents the Hadamard product in which entries are multiplied term-wise. Indeed, to H -color T while sending v to H i , we take any H -coloring of T ′ where v ′ is sent to H i (hom v ′ → i ( T ′ , H ) many options) and independently any H -coloring of T ′′ where v ′′ is sent to H i (hom v ′′ → i ( T ′′ , H ) many options).
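Combining the single-vertex base case, the degree-one step h(T, v) = M h(T - v, w), and the Hadamard step gives, for a rooted tree, one product over children; a sketch checked against brute force (using H_ind with the looped vertex's class first, an ordering we choose):

```python
from itertools import product

def hom(n, edges, H):
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

def h_vec(adj, v, parent, M):
    # i-th entry: number of H-colorings of the subtree at v that send v
    # to a fixed representative of class H_i (choice-free by Lemma 3.2)
    k = len(M)
    h = [1] * k
    for c in adj[v]:
        if c != parent:
            hc = h_vec(adj, c, v, M)
            h = [h[i] * sum(M[i][j] * hc[j] for j in range(k))
                 for i in range(k)]
    return h

# H_ind with classes {looped vertex}, {unlooped vertex}: sizes (1, 1),
# similarity matrix [[1, 1], [1, 0]].
sizes, M = [1, 1], [[1, 1], [1, 0]]
edges = [(0, 1), (1, 2), (2, 3)]          # path on 4 vertices
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = h_vec(adj, 0, None, M)
print(sum(s * x for s, x in zip(sizes, h)),
      hom(4, edges, [[1, 1], [1, 0]]))    # both 8
```

Rooted at an endpoint of the path, the vectors are [1, 1], [2, 1], [3, 2], [5, 3] (Fibonacci pairs), and a(H) h(T, v) = 5 + 3 = 8 matches the brute-force count.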
We require an additional ingredient before proving Theorem 1.5. The KC ordering, introduced by Csikvári [5] (under the name generalized tree shift ) as a generalization of an operation introduced by Kelmans [28], is a partial ordering of trees on n vertices. Let T be a tree, and let v ℓ ≠ v r be two non-leaf vertices with the property that the unique v ℓ to v r path in T is a bare path, meaning that each of the internal vertices, if they exist, have degree two. Let P denote that path on t ≥ 2 vertices.
Removing the edges and internal vertices (if any) of P from T leaves two components; call them L (the one with v ℓ ) and R (the one with v r ). We think of these as the 'left' and 'right' components. Denote by T KC the tree built as follows: glue L and R together into a single tree by identifying the vertices v ℓ and v r ; let v be the vertex created by the identification. Then complete the construction of T KC by appending a path with t -1 new vertices to v ; denote by w the other end of the appended path. We say that T KC is obtained from T by a KC move . Figure 3 illustrates this process. Note that T has such a pair of non-leaf vertices v ℓ , v r exactly when T is not a star.
Figure 3: A demonstration of a KC move.
Csikvári [5, Remark 2.4] shows that the relation S ≤ T defined by T being obtainable from S by a sequence of KC moves defines, for each n , a partial order on T n (the KC ordering ), with P n the unique minimal element and S n the unique maximal element.
The following lemma is necessary for the proof of Theorem 1.5. The proof that we give follows [27, Theorem 1.1, ( a ) ⇒ ( c )]. We choose to present the full details because [27] only treats matrices whose row sums are 1.
Lemma 3.4. Let M be a k × k non-negative matrix that has the increasing columns property, and let h be a non-negative column vector of dimension k whose entries are non-decreasing. Then the entries of M h are non-negative and non-decreasing.
Proof. Let S be the k × k matrix that has 1s on and below the main diagonal and 0s everywhere else. Note that S^{-1} has 1s down the main diagonal, -1s down the first subdiagonal, and 0s everywhere else, which can be verified by computing SS^{-1} .
Observe that for a vector v = [ v 1 v 2 · · · v k ]^T , the entries of v being non-negative and non-decreasing is equivalent to the entries of S^{-1} v being non-negative, because the first entry of S^{-1} v is v 1 and for i > 1, the i th entry is v i - v i -1 .
We wish to show that the entries of M h are non-negative and non-decreasing. This is implied by S -1 M h having non-negative entries, which is in turn implied by ( S -1 MS )( S -1 h ) having nonnegative entries. By assumption, the entries of h are non-negative and non-decreasing, so the entries of S -1 h are non-negative. Therefore it suffices to show that S -1 MS has non-negative entries.
Let A = S -1 MS . We have
$$A_{i,j} = \sum_{p=1}^{k} \sum_{q=1}^{k} (S^{-1})_{i,p}\, m_{p,q}\, S_{q,j} = \sum_{q=j}^{k} m_{i,q} - \sum_{q=j}^{k} m_{i-1,q},$$

where we adopt the convention that m_{0,q} = 0 for all q. Then

$$\sum_{q=j}^{k} m_{i,q} - \sum_{q=j}^{k} m_{i-1,q} \geq 0$$

follows from the increasing columns property.
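The two formulations used in this proof (row-wise terminal sums, and non-negativity of S^{-1}MS) are easy to sanity-check numerically. The following Python sketch uses a small matrix chosen purely for illustration (it does not come from the paper) and verifies both formulations together with the conclusion of Lemma 3.4.

```python
import numpy as np

def has_increasing_columns(M):
    # increasing columns property, in terminal-sum form: for every column j,
    # the sum of row i from column j to the end is non-decreasing in i
    T = M[:, ::-1].cumsum(axis=1)[:, ::-1]
    return bool(np.all(np.diff(T, axis=0) >= 0))

# a small illustrative matrix (not from the paper) with the property
M = np.array([[0, 1, 1],
              [1, 1, 1],
              [1, 2, 1]])
assert has_increasing_columns(M)

# the proof's equivalent formulation: S^{-1} M S is entrywise non-negative,
# where S has 1s on and below the main diagonal
S = np.tril(np.ones((3, 3)))
assert np.all(np.linalg.inv(S) @ M @ S >= -1e-9)

# conclusion of Lemma 3.4: M h is non-negative and non-decreasing
# whenever h is
h = np.array([1, 2, 5])
Mh = M @ h
assert np.all(Mh >= 0) and np.all(np.diff(Mh) >= 0)
```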
We are now prepared to prove Theorem 1.5, which we restate here for convenience.
Theorem 1.5. Let H be a target graph, possibly with loops. Suppose there is an ordering of the automorphic similarity classes H 1 , . . . , H k of H such that the automorphic similarity matrix M has the increasing columns property. Then H is Hoffman-London.
Proof. We show that when H is a graph whose automorphic similarity classes can be ordered in such a way that the automorphic similarity matrix of H has the increasing columns property, applying a non-trivial KC move to a tree does not decrease the number of H -colorings of the tree. As the path is the minimum element of the KC ordering, it must therefore be among the minimizers of hom( T n , H ).
We begin by establishing some notation. Let T be a non-star tree, let v_ℓ ≠ v_r be two non-leaf vertices of T with the property that all of the internal vertices on the unique v_ℓ to v_r path have degree two, let there be t vertices on that path, and let T_KC be the tree obtained from T by applying a KC move at vertices v_ℓ and v_r. Suppose H has automorphic similarity classes H_1, ..., H_k, labeled in such a way that the automorphic similarity matrix M has the increasing columns property. For 1 ≤ i ≤ k, denote by L_i the set of H-colorings of L in which v_ℓ is mapped to some specific (but arbitrarily chosen) vertex w_i of H_i, and let ℓ_i = |L_i|. Define R_i and r_i analogously (with L replaced by R). Note that by Lemma 3.2, ℓ_i and r_i depend only on i and not on the specific choice of w_i. For w, v ∈ V(H), denote by P_t(w, v) the set of H-colorings of any path x_1, ..., x_t (where, recall, t is the number of vertices on the unique path in T between v_ℓ and v_r) in which x_1 is mapped to w and x_t is mapped to v, and let p_t(w, v) = |P_t(w, v)|. Finally, for 1 ≤ i ≤ k and 1 ≤ j ≤ k, let P^t_{i,j} = ∪_{w ∈ H_i, v ∈ H_j} P_t(w, v) and let p^t_{i,j} = |P^t_{i,j}| = Σ_{w ∈ H_i, v ∈ H_j} p_t(w, v).
The idea at the core of this proof is that hom(T, H) and hom(T_KC, H) can both be expressed in terms of the ℓ_i, r_j, and p^t_{i,j} by partitioning the H-colorings according to, in the case of T, the automorphic similarity classes to which v_ℓ and v_r are mapped, and in the case of T_KC, the automorphic similarity classes to which v and w are mapped. In each case we get expressions that are sums of k² terms, and we show hom(T_KC, H) ≥ hom(T, H) by proving that each term in the summation expression for hom(T_KC, H) − hom(T, H) is non-negative.
When computing hom(T, H), v_ℓ and v_r can take any pair of values, and those pairs of values then determine the values taken by the endvertices of P, which gives

$$\mathrm{hom}(T, H) = \sum_{i=1}^{k} \sum_{j=1}^{k} \ell_i\, r_j\, p^t_{i,j}. \tag{5}$$
Note here that the count of H-colorings of T that send v_ℓ to some w ∈ H_i and v_r to some v ∈ H_j should, a priori, involve terms that depend on w and v, but by Lemma 3.2, these terms (ℓ_i and r_j, respectively) are independent of w and v.
In contrast, when computing hom(T_KC, H), v_ℓ and v_r are forced to take a common value, which determines the value at one of the endvertices of P, while the value at the other endvertex is free, which means

$$\mathrm{hom}(T_{KC}, H) = \sum_{i=1}^{k} \sum_{j=1}^{k} \ell_i\, r_i\, p^t_{i,j}. \tag{6}$$
The term ℓ_i r_i p^t_{i,i} appears in both sums for each 1 ≤ i ≤ k. Using symmetry to note p^t_{i,j} = p^t_{j,i} for 1 ≤ i < j ≤ k, we combine equations (5) and (6) to write

$$\mathrm{hom}(T_{KC}, H) - \mathrm{hom}(T, H) = \sum_{1 \leq i < j \leq k} \left( \ell_i r_i + \ell_j r_j - \ell_i r_j - \ell_j r_i \right) p^t_{i,j} = \sum_{1 \leq i < j \leq k} (\ell_j - \ell_i)(r_j - r_i)\, p^t_{i,j}. \tag{7}$$
Noting that trivially p^t_{i,j} ≥ 0, we conclude that in order to prove hom(T_KC, H) ≥ hom(T, H) it suffices to show that for each i, j we have (ℓ_j − ℓ_i)(r_j − r_i) ≥ 0.
Given any tree T and distinguished vertex v, recall that hom_{v→i}(T, H) denotes the number of H-colorings of T in which the distinguished vertex v is mapped to some specific (but arbitrarily chosen) vertex w_i of H_i, where we again note that by Lemma 3.2, hom_{v→i}(T, H) depends only on i and not on the specific choice of w_i. As in the statement of Lemma 3.3, define h(T, v) to be the k-dimensional column vector whose ith entry is hom_{v→i}(T, H). We now prove that our condition on the columns of M, the automorphic similarity matrix of H, ensures that h(T, v) is non-decreasing for any choice of T and v, so that in particular when i < j,

$$\ell_j = \mathrm{hom}_{v_\ell \to j}(L, H) \geq \mathrm{hom}_{v_\ell \to i}(L, H) = \ell_i,$$

and analogously r_j ≥ r_i.
To see that h(T, v) is always non-decreasing, we use Lemmas 3.3 and 3.4. By the proof of Lemma 3.3, h(T, v) can be computed from the all-1s column vector by a sequence of steps that involve taking Hadamard products of vectors and pre-multiplying vectors by M. Combining the facts that the all-1s vector is non-negative and non-decreasing, that the Hadamard product of non-negative, non-decreasing vectors is itself non-negative and non-decreasing, and that (by Lemma 3.4) pre-multiplying by M preserves the properties of being non-negative and non-decreasing, we see that h(T, v) is indeed non-negative and non-decreasing. Therefore, ℓ_j ≥ ℓ_i and r_j ≥ r_i, so for each i and j, (ℓ_j − ℓ_i)(r_j − r_i) ≥ 0, and we must have hom(T_KC, H) ≥ hom(T, H), as desired.
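The vector recursion invoked in this last paragraph can be checked directly on a small example. The sketch below uses the two-class target H_ind from the introduction, with the hypothetical class ordering H_1 = the unlooped vertex and H_2 = the looped vertex (so that M has the increasing columns property), and compares the recursion against brute-force enumeration; the tree and all helper names are illustrative choices of our own.

```python
import numpy as np
from itertools import product

# H_ind: vertex 0 is looped and joined to the unlooped vertex 1.
# Hypothetical class ordering: H_1 = {vertex 1}, H_2 = {vertex 0};
# the automorphic similarity matrix is then
M = np.array([[0, 1],
              [1, 1]])
H = [[1, 1], [1, 0]]   # adjacency matrix of H_ind
reps = [1, 0]          # representative H-vertex of each class

def h_vec(adj, root, parent=None):
    # Lemma 3.3-style recursion: entrywise (Hadamard) product, over the
    # children c of root, of the vectors M h(subtree rooted at c)
    v = np.ones(len(M), dtype=int)
    for c in adj[root]:
        if c != parent:
            v = v * (M @ h_vec(adj, c, root))
    return v

def hom_rooted(edges, n, root, image):
    # brute force: H-colorings sending root to the fixed H-vertex `image`
    return sum(all(H[f[a]][f[b]] for a, b in edges)
               for f in product(range(len(H)), repeat=n) if f[root] == image)

edges = [(0, 1), (0, 2), (0, 3), (3, 4)]   # an illustrative 5-vertex tree
adj = {i: [] for i in range(5)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

hv = h_vec(adj, 0)
assert [hv[i] for i in range(2)] == [hom_rooted(edges, 5, 0, reps[i]) for i in range(2)]
assert hv[0] <= hv[1]   # non-decreasing, as the proof guarantees
```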
We now restate and prove Corollary 1.6, which gives conditions under which paths uniquely minimize hom(T, H). The proof focuses on equation (7) in the special case where T is a path, and aims to show that at least one of the summands on the right-hand side of equation (7) is (under certain circumstances) strictly positive. As an aside, we note that unlike traditional corollaries, which follow directly from a theorem, we justify Corollary 1.6 using ideas from the proof of Theorem 1.5 rather than using its statement. Nonetheless, we use the term 'corollary' to highlight the connection between the results.
Corollary 1.6. Let H be a target graph that satisfies the hypotheses of Theorem 1.5. Suppose that for each t ≥ 2, there exist two classes, H_{a(t)} and H_{b(t)}, such that there is at least one H-coloring of P_t that sends one endvertex of P_t to a vertex in H_{a(t)} and the other endvertex to a vertex in H_{b(t)}. Suppose further that for all s ≥ 2, it holds that hom_{v→b(t)}(P_s, H) > hom_{v→a(t)}(P_s, H), where v is one of the endvertices of P_s. Then H is strongly Hoffman-London.
Proof. Let P^{KC}_n be obtained from P_n by a KC move. Suppose that the bare path involved in this move has t vertices. We have

$$\mathrm{hom}(P^{KC}_n, H) - \mathrm{hom}(P_n, H) = \sum_{1 \leq i < j \leq k} (\ell_j - \ell_i)(r_j - r_i)\, p^t_{i,j} \tag{8}$$

$$\geq \left( \ell_{b(t)} - \ell_{a(t)} \right)\left( r_{b(t)} - r_{a(t)} \right) p^t_{a(t),b(t)}. \tag{9}$$

Equation (8) is an instance of equation (7), while inequality (9) uses that (ℓ_j − ℓ_i)(r_j − r_i) p^t_{i,j} ≥ 0 for all i, j, which comes from the proof of Theorem 1.5.
Note that p^t_{a(t),b(t)} counts the number of H-colorings of P_t that send one endvertex of P_t to something in H_{a(t)} and the other end to something in H_{b(t)}; by hypothesis there is at least one such H-coloring, so p^t_{a(t),b(t)} > 0.

We have that ℓ_{b(t)} = hom_{v→b(t)}(L, H) and ℓ_{a(t)} = hom_{v→a(t)}(L, H). As L is a path, we have by hypothesis that ℓ_{b(t)} > ℓ_{a(t)}. Similarly, r_{b(t)} > r_{a(t)}. We conclude that

$$\left( \ell_{b(t)} - \ell_{a(t)} \right)\left( r_{b(t)} - r_{a(t)} \right) p^t_{a(t),b(t)} > 0.$$
It follows from inequality (9) that hom(P^{KC}_n, H) − hom(P_n, H) > 0, and so for any T_n obtained from P_n by a sequence of KC moves, hom(P_n, H) < hom(P^{KC}_n, H) ≤ hom(T_n, H). Since all trees on n vertices can be thus obtained, the result follows.
## 4 Applications of Theorem 1.5
One goal of this paper is to use Theorem 1.5 to contribute towards the resolution of Problem 1.2. Before we state our results in this direction, we summarize some pertinent results from the literature. We then show how Theorem 1.5 can reprove and sometimes strengthen some of these results, before moving on to establishing some new Hoffman-London families.
## 4.1 Two previous results on Hoffman-London graphs
In inequality (1), we saw that the path minimizes the number of independent sets among trees, i.e. that the target graph H ind is Hoffman-London, where, recall, H ind consists of two vertices joined by an edge with a loop at one vertex. Engbers and Galvin [15] gave a significant generalization of this result by showing that target graphs H formed by adding looped dominating vertices (looped vertices that are adjacent to all other vertices) to a regular graph are Hoffman-London, and they characterized the strongly Hoffman-London target graphs in this family.
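Inequality (1) can be spot-checked by brute force: enumerate the labeled trees on a few vertices via Prüfer sequences and compare their H_ind-coloring counts (equivalently, their independent set counts). The sketch below, with helper names of our own, confirms that the path attains the minimum for n = 6.

```python
import heapq
from itertools import product

def prufer_to_edges(seq, n):
    # decode a Prüfer sequence into the edge list of a labeled tree on n vertices
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def hom(edges, n, H):
    # brute-force count of H-colorings of the n-vertex graph with these edges
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

# H_ind: vertex 0 looped and joined to the unlooped vertex 1;
# hom(G, H_ind) counts the independent sets of G
H_ind = [[1, 1], [1, 0]]
n = 6
path = [(i, i + 1) for i in range(n - 1)]
best = min(hom(prufer_to_edges(s, n), n, H_ind)
           for s in product(range(n), repeat=n - 2))
assert hom(path, n, H_ind) == best == 21   # F_8 = 21 independent sets of P_6
```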
Theorem 4.1 (Engbers, Galvin [15]) . If H is obtained from a regular graph by adding any number of looped dominating vertices, then H is Hoffman-London. If H is not regular (equivalently, if H is not a fully looped complete graph), then H is strongly Hoffman-London.
A special case of Theorem 4.1, that H is strongly Hoffman-London when it is obtained from an empty (edgeless) graph by adding any non-zero number of looped dominating vertices, or equivalently when it is the join of an empty graph and a fully looped complete graph, can also be easily derived from earlier work of Wingard [43, Theorem 5.1 and Theorem 5.2] on independent sets of fixed size in trees.
Csikvári and Lin [7] showed that paths and stars are Hoffman-London.
Theorem 4.2 (Csikvári, Lin [7]). If H is either a path or a star (on any number of vertices), then for all n and all T_n ∈ T_n we have

$$\mathrm{hom}(P_n, H) \leq \mathrm{hom}(T_n, H).$$
Note that Csikvári and Lin do not comment on when paths and stars are strongly Hoffman-London. We will address this question for paths in Section 4.3, specifically after the proof of Theorem 4.6. For stars, note that Theorem 2.6 shows that S_k is not strongly Hoffman-London for any k.
## 4.2 Target graphs with two automorphic similarity classes
In this section we specialize Theorem 1.5 to the case in which H has exactly two automorphic similarity classes and is not a regular graph. (Note that by Observation 2.3, if H is regular, then hom( T n , H ) is independent of T n , so H is Hoffman-London but not strongly Hoffman-London.) Recall that automorphic similarity classes partition the vertex set, and so in this case all vertices of H are in either H 1 or H 2 . We have the following corollary of Theorem 1.5 and Corollary 1.6.
Corollary 4.3. Let H be a target graph that is not regular and that has two automorphic similarity classes, H_1 and H_2, ordered so that the (common) degree of any vertex in H_1 is smaller than the (common) degree of any vertex in H_2. If m_{1,2} ≤ m_{2,2}, then H is Hoffman-London. If, further, there is a vertex in H that has a neighbor in H_1 and a neighbor in H_2, then H is strongly Hoffman-London.
Proof. To see that H is Hoffman-London, note that for any vertices x ∈ H_1 and y ∈ H_2 we have

$$m_{1,1} + m_{1,2} = \deg(x) < \deg(y) = m_{2,1} + m_{2,2}. \tag{10}$$

Together with m_{1,2} ≤ m_{2,2}, inequality (10) shows that the automorphic similarity matrix of H satisfies the increasing columns property, and so H is Hoffman-London by Theorem 1.5.
We now use Corollary 1.6 to establish that if H has a vertex that is adjacent to vertices in both H_1 and H_2, then H is strongly Hoffman-London. Observe that for 0 < a ≤ b we have
$$M \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} m_{1,1}\,a + m_{1,2}\,b \\ m_{2,1}\,a + m_{2,2}\,b \end{bmatrix} = \begin{bmatrix} a' \\ b' \end{bmatrix}, \tag{11}$$
and our hypotheses that m_{1,1} + m_{1,2} < m_{2,1} + m_{2,2} and 0 < a ≤ b imply that 0 < a′ < b′. We use this fact to show that for all s ≥ 2, we have hom_{v→1}(P_s, H) < hom_{v→2}(P_s, H). As in the statement of Lemma 3.3, define h(T, v) to be the k-dimensional column vector whose ith entry is hom_{v→i}(T, H). When s = 1, the ith entry of h(P_1, v) is hom_{v→i}(P_1, H), the count of H-colorings of P_1 in which v is mapped to a representative of H_i, so h(P_1, v) = [1 1]^T, where v is the unique vertex of P_1. Then for s ≥ 2, we have h(P_s, v) = M h(P_{s−1}, v), so by equation (11), we conclude hom_{v→1}(P_s, H) < hom_{v→2}(P_s, H).
To apply Corollary 1.6 and conclude that H is strongly Hoffman-London, it remains to show that for all t ≥ 2 there is an H-coloring of P_t that sends one endvertex to H_1 and the other to H_2. Let x ∈ H be the vertex that has a neighbor y ∈ H_1 and a neighbor z ∈ H_2, noting that x, y, and z need not be distinct. Suppose x ∈ H_1. If t is even, then we can map the vertices of P_t alternately to x and z; if t is odd, then we can map the first vertex of P_t to y, the second to x, and then proceed as in the even case. The argument for x ∈ H_2 follows analogously.
We can use Corollary 4.3 to recover a portion of Theorem 4.1. Specifically, Corollary 4.3 yields Theorem 4.1 in the case when looped dominating vertices are added to a vertex-transitive (and so necessarily regular) graph. Although the result below is weaker than Theorem 4.1, we include it as it makes our classification of target graphs on three vertices (see Section 5) self-contained.
Proof of special case of Theorem 4.1. Let H be obtained from a vertex-transitive graph, say H_0, by adding some number of looped dominating vertices. If either the number of looped dominating vertices added is zero or H_0 is a fully looped complete graph, then H is regular, and so by Observation 2.3, H is Hoffman-London but not strongly Hoffman-London. For the remainder of the proof, we assume that H_0 is regular but not a fully looped complete graph and that at least one looped dominating vertex is added. Let A be the vertex set of H_0 and let B be the set of added looped dominating vertices. Let k be the degree of the vertices in H_0; note k < |A|.
It is easy to check that H has two automorphic similarity classes, namely H_1 = A and H_2 = B. Let x and y be arbitrary vertices in H_1 and H_2, respectively. We have

$$\deg(x) = k + |B| < |A| + |B| = \deg(y)$$

and

$$m_{1,2} = |B| = m_{2,2}.$$
We apply Corollary 4.3 to conclude that H is Hoffman-London. Furthermore, because any vertex in H 2 is adjacent to itself and to all vertices in H 1 , Corollary 4.3 also shows that H is strongly Hoffman-London.
We note that Theorem 1.5 alone could not possibly imply the full strength of Theorem 4.1, which we demonstrate in Example 4.4.
Example 4.4. Consider the Folkman graph [16], a 4-regular graph on 20 vertices constructed from a K 5 by first subdividing each edge and then cloning each of the vertices of the original K 5 ; see Figure 4 where the vertices of the original K 5 are white and those from subdivided edges are black.
The Folkman graph is bipartite and has two automorphic similarity classes, specifically the two bipartition classes. By Theorem 4.1, adding a looped dominating vertex to the Folkman graph produces an H that is Hoffman-London. This H has three automorphic similarity classes: the two bipartition classes of the Folkman graph, each of size ten, which we call A and B , and the looped dominating vertex, which we call C . Vertices in A (respectively, B ) have no neighbors in A (respectively, B ), four neighbors in B (respectively, A ), and one neighbor in C . The vertex in C has ten neighbors in A , ten in B and one in C . So if we set A = H 1 , B = H 2 (or A = H 2 , B = H 1 )
Figure 4: The Folkman graph.
and C = H 3 , the corresponding automorphic similarity matrix is
$$M = \left[ \begin{array}{ccc} 0 & 4 & 1 \\ 4 & 0 & 1 \\ 10 & 10 & 1 \end{array} \right] ,$$
which fails the increasing columns property of Theorem 1.5. Setting C = H 1 or C = H 2 also produces automorphic similarity matrices that fail the increasing columns property, meaning that Theorem 1.5 cannot be used to conclude that H is Hoffman-London.
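The failure claimed here is mechanical to verify: the following sketch checks the increasing columns property (in terminal-sum form) for all six orderings of the three classes, permuting rows and columns of M simultaneously. Helper names are our own.

```python
from itertools import permutations

def increasing_columns(M):
    # terminal-sum form of the increasing columns property
    T = [[sum(row[j:]) for j in range(len(row))] for row in M]
    return all(x <= y for r1, r2 in zip(T, T[1:]) for x, y in zip(r1, r2))

# similarity matrix for the class ordering (A, B, C) of the Folkman graph
# with a looped dominating vertex added
M = [[0, 4, 1],
     [4, 0, 1],
     [10, 10, 1]]

# permute rows and columns simultaneously to realize every class ordering
assert not any(
    increasing_columns([[M[p[i]][p[j]] for j in range(3)] for i in range(3)])
    for p in permutations(range(3))
)
```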
As an example of the utility of Corollary 4.3 for obtaining new families of Hoffman-London graphs, consider the family of constraint graphs H(a, b, ℓ), defined for a, b ≥ 1 and ℓ ≥ 0. Start with a clique on b vertices. At each vertex v of the clique, append ℓ copies of a clique on a vertices, with v the only vertex in common to the ℓ appended cliques; see Figure 5. This family generalizes many well-known families of graphs; for example, H(2, 1, ℓ) is the star graph, H(2, 2, ℓ) is the balanced double star (an edge with ℓ edges appended to each endpoint), and H(3, 1, ℓ) is the fan graph, a collection of triangles sharing a single common vertex. More generally, H(a, 1, ℓ) is a collection of ℓ cliques on a vertices each, all sharing a single common vertex; following standard notation from topology, we call this a bouquet of ℓ a-cliques.
For some choices of parameters, it is straightforward to verify whether H(a, b, ℓ) is Hoffman-London. In the cases ℓ = 0; ℓ ≥ 1 and a = 1; or ℓ = 1 and b = 1, we see H(a, b, ℓ) is just a complete graph, which is regular and so Hoffman-London but not strongly Hoffman-London. If ℓ ≥ 2, a = 2, and b = 1, then H(a, b, ℓ) is, as observed above, a star, and so is Hoffman-London by Theorem 4.2, although, as we saw in Theorem 2.6, it is not strongly Hoffman-London. For many other choices of parameters, we can use Corollary 4.3.
Corollary 4.5. For all a, b ≥ 2 and ℓ ≥ 1, the target graph H(a, b, ℓ) is strongly Hoffman-London.
Proof. The graph H(a, b, ℓ) has two automorphic similarity classes: H_2, consisting of the b vertices of the initial clique, and H_1, consisting of all bℓ(a − 1) added vertices. We have m_{1,2} = 1 ≤ b − 1 = m_{2,2}, and if x is any vertex in H_1 and y any vertex in H_2, we have deg(x) = a − 1 < b − 1 + ℓ(a − 1) = deg(y). We conclude that H is Hoffman-London by Corollary 4.3.
Figure 5: A visualization of the family of graphs H(a, b, ℓ).
Any vertex in H 2 is adjacent to at least one other vertex in H 2 and at least one vertex in H 1 , and so Corollary 4.3 also demonstrates that H is strongly Hoffman-London.
The graphs H(a, b, ℓ) that are not covered by Corollary 4.5 or the preceding discussion are those with a ≥ 3, b = 1, and ℓ ≥ 2. In these cases, as mentioned above, H is a bouquet of ℓ a-cliques. Such target graphs have two automorphic similarity classes: one that includes only the central vertex and another that includes all other vertices. For this collection of target graphs, we cannot apply Theorem 1.5: if we choose the central vertex to be H_1, then the associated automorphic similarity matrix has ℓ(a − 1) as its (1,2)-entry and a − 2 as its (2,2)-entry, and since ℓ(a − 1) > a − 2 it fails the increasing columns property, while if we choose the central vertex to be H_2, then the associated automorphic similarity matrix has 1 as its (1,2)-entry and 0 as its (2,2)-entry, and so again fails the increasing columns property. It is unknown which graphs of this subfamily, if any, are Hoffman-London.
## 4.3 Paths and looped paths as target graphs
In this section, we consider paths and fully looped paths as target graphs. We begin by using Theorem 1.5 to recover part of Theorem 4.2, specifically the case when H is a path with an even number of vertices. We also strengthen this portion of Theorem 4.2 by using Corollary 1.6 to show that target graphs H in this case with at least four vertices are strongly Hoffman-London.
Theorem 4.6. The path graph P_2m is Hoffman-London for all m ≥ 1 and strongly Hoffman-London for all m ≥ 2.
Proof. Let H = P_2m for some m ≥ 1, specifically with vertices u_1, ..., u_2m and edges u_i u_{i+1} for 1 ≤ i ≤ 2m − 1. We claim that P_2m admits exactly two automorphisms: the identity and the reversal map that sends u_i to u_{2m+1−i} for i = 1, ..., 2m. To verify this claim, first note that an automorphism of P_2m must preserve degrees, and thus leaves must be mapped to leaves. If u_1 is mapped to u_1, then u_2, the unique neighbor of u_1, must be mapped to u_2, u_3 must be mapped to u_3, and so on, resulting in the identity map. If u_1 is instead mapped to u_2m, then u_2 must be mapped to u_{2m−1}, u_3 must be mapped to u_{2m−2}, and so on, resulting in the reversal map. It follows that P_2m has m automorphic similarity classes, each consisting of a pair of vertices {u_i, u_j} such that i + j = 2m + 1.
Set H i = { u i , u 2 m +1 -i } for i = 1 , . . . , m . The automorphic similarity matrix of H with respect to this ordering of the automorphic similarity classes has the form
$$\begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \end{bmatrix}$$
That is, it has 1s on the first subdiagonal, 1s on the first superdiagonal, a 1 in the ( m,m ) position, and 0s everywhere else. It is straightforward to check that all terminal sums of the i th row are less than or equal to the corresponding terminal sums of the ( i + 1) th row for all i < m , and so this matrix satisfies the increasing columns property, establishing that P 2 m is Hoffman-London by Theorem 1.5.
We now turn to establishing that P_2m is strongly Hoffman-London for m ≥ 2. We seek to apply Corollary 1.6. Towards this goal, we will show by induction on t that
$$\mathrm{hom}_{v \to 1}(P_t, P_{2m}) < \mathrm{hom}_{v \to 2}(P_t, P_{2m}) \leq \mathrm{hom}_{v \to 3}(P_t, P_{2m}) \leq \cdots \leq \mathrm{hom}_{v \to m}(P_t, P_{2m}), \tag{12}$$
where v is an endvertex of P t .
When t = 2, inequality (12) holds because hom_{v→1}(P_2, P_2m) = 1 and hom_{v→i}(P_2, P_2m) = 2 for 2 ≤ i ≤ m. For t > 2, let v′ be the unique neighbor of v in P_t. Using equation (4) from the proof of Lemma 3.3, we have the recurrence relations

$$\begin{aligned} \mathrm{hom}_{v \to 1}(P_t, P_{2m}) &= \mathrm{hom}_{v' \to 2}(P_{t-1}, P_{2m}), \\ \mathrm{hom}_{v \to i}(P_t, P_{2m}) &= \mathrm{hom}_{v' \to i-1}(P_{t-1}, P_{2m}) + \mathrm{hom}_{v' \to i+1}(P_{t-1}, P_{2m}) \quad (2 \leq i \leq m-1), \\ \mathrm{hom}_{v \to m}(P_t, P_{2m}) &= \mathrm{hom}_{v' \to m-1}(P_{t-1}, P_{2m}) + \mathrm{hom}_{v' \to m}(P_{t-1}, P_{2m}). \end{aligned}$$
By induction, we have
$$\mathrm{hom}_{v' \to 1}(P_{t-1}, P_{2m}) < \mathrm{hom}_{v' \to 2}(P_{t-1}, P_{2m}) \leq \cdots \leq \mathrm{hom}_{v' \to m}(P_{t-1}, P_{2m}). \tag{13}$$
When m = 2, we use the fact that hom_{v′→1}(P_{t−1}, P_4) > 0 to say

$$\mathrm{hom}_{v \to 1}(P_t, P_4) = \mathrm{hom}_{v' \to 2}(P_{t-1}, P_4) < \mathrm{hom}_{v' \to 1}(P_{t-1}, P_4) + \mathrm{hom}_{v' \to 2}(P_{t-1}, P_4) = \mathrm{hom}_{v \to 2}(P_t, P_4),$$
which establishes inequality (12).
For m ≥ 3, we first establish hom_{v→1}(P_t, P_2m) < hom_{v→2}(P_t, P_2m). By inequality (13), we see that hom_{v′→2}(P_{t−1}, P_2m) ≤ hom_{v′→3}(P_{t−1}, P_2m). Then, again using hom_{v′→1}(P_{t−1}, P_2m) > 0, we have

$$\mathrm{hom}_{v \to 1}(P_t, P_{2m}) = \mathrm{hom}_{v' \to 2}(P_{t-1}, P_{2m}) \leq \mathrm{hom}_{v' \to 3}(P_{t-1}, P_{2m}) < \mathrm{hom}_{v' \to 1}(P_{t-1}, P_{2m}) + \mathrm{hom}_{v' \to 3}(P_{t-1}, P_{2m}) = \mathrm{hom}_{v \to 2}(P_t, P_{2m}).$$
Next, we show hom_{v→i}(P_t, P_2m) ≤ hom_{v→i+1}(P_t, P_2m) for 2 ≤ i ≤ m − 2 by using inequality (13) to say

$$\mathrm{hom}_{v \to i}(P_t, P_{2m}) = \mathrm{hom}_{v' \to i-1}(P_{t-1}, P_{2m}) + \mathrm{hom}_{v' \to i+1}(P_{t-1}, P_{2m}) \leq \mathrm{hom}_{v' \to i}(P_{t-1}, P_{2m}) + \mathrm{hom}_{v' \to i+2}(P_{t-1}, P_{2m}) = \mathrm{hom}_{v \to i+1}(P_t, P_{2m}).$$
Finally, we demonstrate hom_{v→m−1}(P_t, P_2m) ≤ hom_{v→m}(P_t, P_2m) by once again using inequality (13):

$$\mathrm{hom}_{v \to m-1}(P_t, P_{2m}) = \mathrm{hom}_{v' \to m-2}(P_{t-1}, P_{2m}) + \mathrm{hom}_{v' \to m}(P_{t-1}, P_{2m}) \leq \mathrm{hom}_{v' \to m-1}(P_{t-1}, P_{2m}) + \mathrm{hom}_{v' \to m}(P_{t-1}, P_{2m}) = \mathrm{hom}_{v \to m}(P_t, P_{2m}),$$
which establishes inequality (12).
Finally, we use inequality (12) to show that P_2m is strongly Hoffman-London. We consider the cases m = 2 and m > 2 separately. For m = 2 and t ≥ 2, set a(t) = 1 and b(t) = 2. From inequality (12), we have hom_{v→a(t)}(P_s, P_4) < hom_{v→b(t)}(P_s, P_4) for all s ≥ 2. To apply Corollary 1.6 to show that P_4 is strongly Hoffman-London, it remains to show that for all t ≥ 2 there is a P_4-coloring of P_t that sends one endvertex v of P_t to some vertex in H_1 and the other endvertex w to some vertex in H_2. For t even, such a coloring is given by sending v to u_1 ∈ H_1, then sending the vertices of P_t alternately to u_2 and u_1, ending by sending w to u_2 ∈ H_2. For t odd, such a coloring is given by sending v to u_1 ∈ H_1, then sending the vertices of P_t alternately to u_2 and u_1 except for w, which is sent to u_3 ∈ H_2 instead of u_1.
For m > 2, set a(t) = 1 for all t ≥ 2, and set b(t) = 2 for even t ≥ 2 and b(t) = 3 for odd t ≥ 2. From inequality (12), we again have hom_{v→a(t)}(P_s, P_2m) < hom_{v→b(t)}(P_s, P_2m) for all s ≥ 2. To exhibit a coloring of P_t that sends v to a vertex in H_{a(t)} and w to a vertex in H_{b(t)}, we proceed exactly as in the m = 2 case: send v to u_1 ∈ H_1 and alternate sending the vertices of P_t to u_2 and u_1, ending by sending w to u_2 ∈ H_2 when t is even, and deviating by sending w to u_3 ∈ H_3 instead of u_1 when t is odd.
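As a sanity check on the first half of this proof, the matrix described above (1s on the first sub- and superdiagonal, plus a 1 in position (m, m)) can be generated for a range of m and tested against the terminal-sum form of the increasing columns property; the Python sketch below does this, with function names of our own.

```python
def increasing_columns(M):
    # terminal-sum form of the increasing columns property
    T = [[sum(row[j:]) for j in range(len(row))] for row in M]
    return all(x <= y for r1, r2 in zip(T, T[1:]) for x, y in zip(r1, r2))

def path_similarity_matrix(m):
    # automorphic similarity matrix of P_{2m}: 1s on the first
    # sub/superdiagonal and a 1 in the (m, m) position
    M = [[0] * m for _ in range(m)]
    for i in range(m - 1):
        M[i][i + 1] = M[i + 1][i] = 1
    M[m - 1][m - 1] = 1
    return M

assert path_similarity_matrix(3) == [[0, 1, 0], [1, 0, 1], [0, 1, 1]]
assert all(increasing_columns(path_similarity_matrix(m)) for m in range(1, 12))
```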
Theorem 4.6 shows that P_ℓ is strongly Hoffman-London for all even ℓ ≥ 4, and Observation 2.3 establishes that P_2 is not strongly Hoffman-London, because all trees admit the same number of P_2-colorings. What can be said of P_ℓ when ℓ is odd? As demonstrated in Theorem 2.6, P_3 is not strongly Hoffman-London, but in a different manner than P_2: trees with a balanced bipartition, including paths but not all trees, minimize hom(T_n, P_3). We know that paths are among the minimizers of hom(T_n, P_ℓ) for odd ℓ due to Csikvári and Lin [7], but the precise set of trees that minimize the number of P_ℓ-colorings for odd ℓ ≥ 5 remains open.
The fully looped path P°_n is the graph on vertex set u_1, ..., u_n with edges u_i u_{i+1} for 1 ≤ i ≤ n − 1 and u_i u_i for each 1 ≤ i ≤ n. Unlike for unlooped paths, for which we were only able to apply our techniques when the number of vertices is even, we can completely classify all fully looped paths.
Theorem 4.7. The fully looped path P°_n is Hoffman-London for all n ≥ 1 and strongly Hoffman-London for all n ≥ 3.
Proof. For n = 1 or 2, P°_n is a fully looped complete graph, and so Observation 2.3 tells us that P°_n is Hoffman-London but not strongly Hoffman-London. When n = 3, P°_n is an instance of a regular graph with a looped dominating vertex added, and so applying Theorem 4.1 establishes that P°_3 is strongly Hoffman-London.
For n ≥ 4, we start by considering n even, say n = 2m. Exactly as for P_2m, we find that the automorphic similarity classes of P°_2m are H_i = {v_i, v_{2m+1−i}}, i = 1, ..., m, each of size 2, and the associated automorphic similarity matrix has the form
$$\begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \\ 0 & 0 & 0 & 0 & \cdots & 1 & 2 \end{bmatrix}$$

(i.e., it is the automorphic similarity matrix of P_2m modified by the addition of the identity matrix I_m). It is straightforward to check that all terminal sums of the ith row are less than or equal to the corresponding terminal sums of the (i+1)th row for all i < m, and so this matrix satisfies the increasing columns property and P°_2m is Hoffman-London.
For n ≥ 5 and odd, say n = 2m − 1, the automorphic similarity classes of P°_{2m−1} are H_i = {v_i, v_{2m−i}}, i = 1, ..., m − 1, each of size 2, together with H_m = {v_m} (of size 1). The structure of the corresponding automorphic similarity matrix is the same as in the even case, except that in the final row the 1 and the 2 are flipped. That M satisfies the increasing columns condition follows almost exactly as it did in the even case.
We have established that P°_n is Hoffman-London for all n and strongly Hoffman-London for n = 3. To establish that P°_n is strongly Hoffman-London for n ≥ 4, we will show that Corollary 1.6 can be applied. We first observe that for all s ≥ 2 we have hom_{v→1}(P_s, P°_n) < hom_{v→2}(P_s, P°_n), where v is one of the endvertices of P_s. We can see this via a non-surjective injection from Hom_{v→1}(P_s, P°_n) into Hom_{v→2}(P_s, P°_n). For concreteness, we take a fixed x ∈ H_1 (necessarily one of the endvertices of P°_n) to be the representative of H_1 that v is mapped to, and take its unique neighbor x′ ∈ H_2 to be the representative of H_2 that v is mapped to. The injection is simple: given f ∈ Hom_{v→1}(P_s, P°_n) with f(v) = x, map f to f′ by changing the value at v to x′. That f′ ∈ Hom_{v→2}(P_s, P°_n) (i.e., that f′ is actually a homomorphism) follows from the fact that f′ maps v′ (the unique neighbor of v in P_s) to a neighbor of x, so either x or x′, and both of these are neighbors of x′. To see that the injection is not a surjection, consider the f′ ∈ Hom_{v→2}(P_s, P°_n) that sends v to x′ and all other vertices of P_s to x″, the neighbor of x′ other than x and x′ itself. There is no f ∈ Hom_{v→1}(P_s, P°_n) that maps onto f′ via our injection, since x is not adjacent to x″ in P°_n.
We next observe that for all t β₯ 2 there is a P β¦ n -coloring of P t that sends one endvertex to a vertex in H 1 and the other to a vertex in H 2 -simply send one endvertex to x and all the others to x β² . That P β¦ n ( n β₯ 4) is strongly Hoffman-London now follows from Corollary 1.6.
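For small cases, the conclusion above can be confirmed by exhaustive computation. The sketch below (names are ours) enumerates all labeled trees on five vertices and counts P ◦ 4 -colorings by brute force, checking that the path attains the minimum.

```python
from itertools import combinations, product

def looped_path(n):
    # adjacency matrix of P◦_n: a path with a loop at every vertex
    return [[int(abs(i - j) <= 1) for j in range(n)] for i in range(n)]

def labeled_trees(n):
    # all labeled trees on n vertices: the (n-1)-edge subsets with no
    # cycle, detected with a small union-find
    all_edges = list(combinations(range(n), 2))
    for es in combinations(all_edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v in es:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            yield es

def hom(edges, n, H):
    # brute-force count of adjacency-preserving maps V(G) -> V(H)
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

n, H = 5, looped_path(4)
counts = {es: hom(es, n, H) for es in labeled_trees(n)}
path = tuple((i, i + 1) for i in range(n - 1))
# every labeled tree on 5 vertices admits at least as many P◦_4-colorings
assert counts[path] == min(counts.values())
```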
We end our discussion of paths with a quick corollary of Theorem 4.7 concerning the family of ladder graphs . The ladder graph L n has vertices u 1 , . . . , u n , v 1 , . . . , v n and edges u i u i+1 and v i v i+1 for 1 ≤ i ≤ n − 1, and u i v i for 1 ≤ i ≤ n .
Corollary 4.8. For n ≥ 1 , the ladder graph L n is Hoffman-London. For n ≥ 3 it is strongly Hoffman-London.
Proof. We have L n = P ◦ n × K 2 , so the corollary follows from Observation 2.5 and Theorem 4.7.
## 4.4 Loop threshold graphs
Here we define the family of loop threshold graphs and prove that they are Hoffman-London. Denote by K 1 the graph on one vertex with no edges, and by K ◦ 1 the graph on one vertex with a loop. The family of loop threshold graphs is the minimal (by inclusion) family of graphs that includes K 1 and K ◦ 1 , is closed under adding isolated vertices, and is closed under adding looped dominating vertices. Loop threshold graphs were introduced by Cutler and Radcliffe [13], who showed them to be a natural family to consider in the context of H -coloring; one of the results of [13] is that if H is loop threshold, then among n -vertex, m -edge graphs there is a threshold graph that maximizes the number of H -colorings. For further work on loop threshold graphs, see also [11] (regarding H -coloring) and [21] (regarding enumeration).
The following characterization of loop threshold graphs is stated in [13], where the authors suggest adapting a proof from [8]. For completeness, we present the details here.
Lemma 4.9. For a graph H on n vertices, possibly with loops, the following are equivalent.
1. H is loop threshold.
2. There is an ordering of the vertices of H , say v 1 , v 2 , . . . , v n , with the property that
$$N(v_1) \subseteq N(v_2) \subseteq \cdots \subseteq N(v_n).$$
We refer to an ordering of vertices satisfying the second condition in Lemma 4.9 as a nested neighborhood ordering .
Proof. We begin by showing that if H is a loop threshold graph, then it admits a nested neighborhood ordering. Let O be the family of graphs for which a nested neighborhood ordering exists. Trivially, K 1 , K ◦ 1 ∈ O . Suppose now that H ∈ O . Let v 1 , . . . , v n be a nested neighborhood ordering for H . If H ′ is obtained from H by adding an isolated vertex, u , then since N H′ ( u ) = ∅
we have that u, v 1 , . . . , v n is a nested neighborhood ordering for H ′ , so H ′ ∈ O . If instead H ′ is obtained from H by adding a looped dominating vertex, u , then since N H′ ( u ) = { v 1 , . . . , v n , u } , we have that v 1 , . . . , v n , u is a nested neighborhood ordering for H ′ , so H ′ ∈ O . It follows from the definition of the family of loop threshold graphs that all loop threshold graphs are in O .
For the reverse implication, let H be a graph on n vertices that admits a nested neighborhood ordering v 1 , . . . , v n . We show by induction on n that H is loop threshold. When n = 1 this is clear, so we assume n > 1.
If N ( v n ) = { v 1 , . . . , v n } , then v n is a looped dominating vertex in H . Let H β² be obtained from H by deleting v n . We have that
$$N_{H'}(v_i) = N_H(v_i) \setminus \{v_n\} \quad \text{for } 1 \le i \le n-1,$$
and so v 1 , . . . , v n -1 is a nested neighborhood ordering for H β² . By induction, H β² is a loop threshold graph, so H , being obtained from H β² by adding a looped dominating vertex, is also loop threshold.
Otherwise there is v i in H that is not in N ( v n ). Then v i must be isolated: if there were v j with v i ∼ v j , then v i ∈ N ( v j ) ⊆ N ( v n ), a contradiction. So for each j we have N ( v j ) \ { v i } = N ( v j ), meaning that if H ′ is obtained from H by deleting v i , then H ′ admits the nested neighborhood ordering v 1 , . . . , v i−1 , v i+1 , . . . , v n . By induction, H ′ is a loop threshold graph, and as we obtain H by adding v i , an isolated vertex, H is also loop threshold.
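Lemma 4.9 yields a simple recognition procedure for loop threshold graphs: sort the vertices by degree and check that the neighborhoods form a chain. (The degree order is forced up to ties, since N ( u ) ⊆ N ( v ) implies | N ( u ) | ≤ | N ( v ) | , and equal-degree vertices in a nested chain have identical neighborhoods.) A sketch, with hypothetical names:

```python
def has_nested_neighborhood_ordering(adj):
    # adj: dict mapping each vertex to its neighbor set (a loop at v is
    # recorded as v in adj[v]).  Sort by degree and check the chain
    # N(v_1) ⊆ N(v_2) ⊆ ... ⊆ N(v_n) of Lemma 4.9.
    order = sorted(adj, key=lambda v: len(adj[v]))
    return all(adj[u] <= adj[v] for u, v in zip(order, order[1:]))

# H_ind (an unlooped vertex joined to a looped vertex) is loop threshold...
assert has_nested_neighborhood_ordering({1: {2}, 2: {1, 2}})
# ...but the unlooped edge K_2 is not: N(1) = {2} and N(2) = {1}
assert not has_nested_neighborhood_ordering({1: {2}, 2: {1}})
```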
Using Lemma 4.9, we now prove that loop threshold graphs are Hoffman-London, and we characterize the strongly Hoffman-London loop threshold graphs.
Theorem 4.10. Each loop threshold graph is Hoffman-London. Furthermore, if H is a loop threshold graph, then H is strongly Hoffman-London if and only if, after removing all isolated vertices from H , the resulting graph is neither empty nor a fully looped complete graph.
Note that we consider a vertex v isolated if N ( v ) ⊆ { v } . In other words, a single vertex with a loop is still an isolated vertex.
Proof. Let H be a loop threshold graph. To establish the Hoffman-London property, we seek to apply Theorem 1.5. By Lemma 4.9, there is an ordering of the vertices of H , say v 1 , v 2 , . . . , v n , with the property that
$$N(v_1) \subseteq N(v_2) \subseteq \cdots \subseteq N(v_n).$$
Suppose v i ∈ H a and v i+1 ∈ H b for some a ≠ b . We claim v j ∉ H a for all j > i . Suppose otherwise: assume v i ∈ H a , v i+1 ∈ H b , and v j ∈ H a for some j > i . As v i and v j both belong to H a , we have | N ( v i ) | = | N ( v j ) | , and as i < j , N ( v i ) ⊆ N ( v j ), so N ( v i ) = N ( v j ). Then
$$N(v_i) \subseteq N(v_{i+1}) \subseteq N(v_j) = N(v_i),$$
so N ( v i+1 ) = N ( v i ) as well. Thus the function on V ( H ) that swaps v i and v i+1 and fixes all other vertices is an automorphism of H , contradicting that v i+1 ∉ H a . We conclude that this ordering of the vertices induces an ordering of the automorphic similarity equivalence classes of H , say H 1 , . . . , H k , such that if u ∈ H i and v ∈ H j for i < j then N ( u ) ⊆ N ( v ). If we use this ordering of the automorphic similarity classes to define the automorphic similarity matrix M , we see that each column of M is non-decreasing: for any 1 ≤ j ≤ k and 1 ≤ i ≤ k − 1, m i,j ≤ m i+1,j because for any u ∈ H i and v ∈ H i+1 , N ( u ) ⊆ N ( v ), so each w ∈ H j is adjacent to u only if it is adjacent to v . Sums of non-decreasing vectors are also non-decreasing, so this ordering of the automorphic similarity classes of H satisfies the hypothesis of Theorem 1.5. We conclude that H is Hoffman-London, as claimed.
We now turn to the characterization of loop threshold graphs that are strongly Hoffman-London. For n ≥ 2, each vertex of any tree on n vertices is incident to an edge, and so cannot be mapped to an isolated vertex of H (this is essentially the content of Observation 2.2). Thus if H is loop threshold and H ′ is the graph resulting from removing the isolated vertices from H , we have
hom( T n , H ) = hom( T n , H ′ ) for all n ≥ 2. If H ′ is empty, then hom( T n , H ′ ) = 0 for all T n ∈ 𝒯 n . If H ′ is a fully looped complete graph, it is regular, so each T n ∈ 𝒯 n admits the same number of H ′ -colorings by Observation 2.3.
Now assume that upon deletion of all isolated vertices, H is non-empty and not a fully looped complete graph. Note that the resulting graph is also loop threshold: because only isolated vertices were removed, the neighborhoods of the remaining vertices are unchanged, so the resulting graph still admits a nested neighborhood ordering. Thus it suffices to assume that H has no isolated vertices and is not a fully looped complete graph. To show that H is strongly Hoffman-London, we use Corollary 1.6.
We begin by showing that H has at least two automorphic similarity classes. If every vertex of H is looped, then H must have been constructed by adding looped dominating vertices at every step, contradicting that H is not a fully looped complete graph. Let u be a vertex of H without a loop. Because u was not removed, it is not isolated, so there is v ∼ u . We claim that u comes before v in the nested neighborhood ordering of H ; if not, then u ∈ N ( v ) ⊆ N ( u ), which contradicts our claim that u does not have a loop. By similar reasoning, v must have a loop: v ∈ N ( u ) ⊆ N ( v ). Because u does not have a loop but v does, no automorphism of H can send u to v . We conclude that H has at least two distinct automorphic similarity classes.
Now for each t ≥ 2, let a ( t ) = i , where H i is the class containing u , and let b ( t ) = j , where H j is the class containing v . We first observe that there is an H -coloring of P t that sends the first vertex of the path to some vertex in H a(t) and the last vertex of the path to some vertex in H b(t) . Indeed, mapping the first vertex of P t to u and all subsequent vertices to v yields such a coloring.
To apply Corollary 1.6 to conclude that H is strongly Hoffman-London, it remains to show that for each t ≥ 2 and s ≥ 2, hom w→b(t) ( P s , H ) > hom w→a(t) ( P s , H ), where w is one of the endvertices of P s . In what follows, when enumerating hom w→a(t) ( P s , H ) (respectively, hom w→b(t) ( P s , H )) we will take u (respectively, v ) to be the specific vertex of H a(t) (respectively, H b(t) ) to which w is mapped in an H -coloring. There is a simple injection from Hom w→a(t) ( P s , H ) into Hom w→b(t) ( P s , H ): change the color at w from u to v . That this is a valid map comes from the fact that N ( u ) ⊆ N ( v ). To see that this injection is not a bijection, and so hom w→b(t) ( P s , H ) > hom w→a(t) ( P s , H ), consider the H -coloring of P s that sends w to v , that sends the unique neighbor of w in P s to u , and that sends all other vertices of P s (if there are any) to v . This coloring is in Hom w→b(t) ( P s , H ), but, since there is not a loop at u in H , it is not the image of any coloring in Hom w→a(t) ( P s , H ).
We conclude our discussion of loop threshold graphs with an application of Theorem 4.10 to a family of target graphs that has arisen both in statistical physics and in communications networks. For an integer C ≥ 1, define a capacity C independent set in a graph G to be a function f : V ( G ) → { 0 , 1 , 2 , . . . , C } satisfying f ( x ) + f ( y ) ≤ C for all xy ∈ E ( G ). Standard independent sets are capacity 1 independent sets. This notion was introduced by Mazel and Suhov [33] as a statistical physics model of a random surface and was subsequently studied in [18, 35] in the context of multicast communications networks. Capacity C independent sets can be encoded as H -colorings by taking H = H C to be the graph on vertex set { 0 , 1 , . . . , C } with edge set {{ a, b } : a + b ≤ C } . The graph H C admits a nested neighborhood ordering, namely C, C − 1 , . . . , 1 , 0, since N H C ( i ) = { 0 , 1 , . . . , C − i } , so by Lemma 4.9 H C is loop threshold, and the following result is a corollary of Theorem 4.10.
Corollary 4.11. For C ≥ 1 and n ≥ 1 , if T n is a tree on n vertices that is different from P n , then P n admits strictly fewer capacity C independent sets than T n .
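For a concrete instance of Corollary 4.11, capacity- C independent sets of small trees can be counted by brute force. The sketch below (names are ours) compares the path and the star on five vertices at C = 2.

```python
from itertools import product

def capacity_graph(C):
    # H_C: vertex set {0,...,C}, with edge {a, b} (loops allowed) iff a + b <= C
    return [[int(a + b <= C) for b in range(C + 1)] for a in range(C + 1)]

def hom(edges, n, H):
    # brute-force count of adjacency-preserving maps V(G) -> V(H)
    return sum(all(H[f[u]][f[v]] for u, v in edges)
               for f in product(range(len(H)), repeat=n))

n, C = 5, 2
path = [(i, i + 1) for i in range(n - 1)]
star = [(0, i) for i in range(1, n)]
# the path admits strictly fewer capacity-2 independent sets than the star
assert hom(path, n, capacity_graph(C)) < hom(star, n, capacity_graph(C))
```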
## 4.5 Adding looped dominating vertices to a target graph
In this section we establish that the set of graphs H for which Theorem 1.5 is applicable is closed under adding looped dominating vertices and present an application that will be useful later.
Proposition 4.12. Suppose that H is a target graph that has an automorphic similarity matrix satisfying the increasing columns property. Let H̃ be obtained from H by adding some number of looped dominating vertices. Then H̃ is Hoffman-London.
Proof. We begin by considering the case where H does not have any looped dominating vertices. Let the automorphic similarity classes of H be H 1 , . . . , H k , presented in such an order that the associated automorphic similarity matrix M satisfies the increasing columns property, and let H̃ be obtained from H by adding b looped dominating vertices.
Any automorphism of H̃ must be an extension of an automorphism of H obtained by permuting the looped dominating vertices of H̃ , and so H̃ has automorphic similarity classes H̃ 1 , . . . , H̃ k , H̃ k+1 , where H̃ i = H i for i ≤ k and H̃ k+1 is the set of b added looped dominating vertices. The associated automorphic similarity matrix M̃ is obtained from M by adding a new row at the bottom whose j th entry ( j = 1 , . . . , k ) is | H j |, and then adding a new column at the far right, all of whose entries are b . This last column is constant and so non-decreasing.
For i and j with 1 ≤ i ≤ k − 1 and 1 ≤ j ≤ k , the sum S̃ i,j of the entries of row i of M̃ starting from the j th entry is b greater than S i,j , the sum of the entries of row i of M starting from the j th entry, and similarly S̃ i+1,j = S i+1,j + b , so S̃ i,j ≤ S̃ i+1,j follows from S i,j ≤ S i+1,j .
Finally, the ( k, j )-entry (1 ≤ j ≤ k ) of M̃ , which counts the number of neighbors an element of H k has in H j , is trivially bounded above by | H j |, which is the ( k +1 , j )-entry of M̃ ; this, together with the fact that the ( k, k +1)th and ( k +1 , k +1)th entries of M̃ agree, shows that S̃ k,j ≤ S̃ k+1,j . We conclude that M̃ has the increasing columns property and that H̃ is Hoffman-London.
We now consider the case where H has some looped dominating vertices. The collection of looped dominating vertices in H forms an automorphic similarity class. Moreover, if the automorphic similarity classes of H are H 1 , . . . , H k , presented in such an order that the associated automorphic similarity matrix M satisfies the increasing columns property, then the class H k must be the one consisting of all the looped dominating vertices. To see this, note that the increasing columns property applied to the sum of all k columns of M simply says d 1 ≤ d 2 ≤ · · · ≤ d k , where d i is the common degree of the vertices in H i . Looped dominating vertices have the maximum possible degree, namely | H |, and any vertex of degree | H | is a looped dominating vertex. It follows that d k = | H | and that H k consists of the looped dominating vertices.
Now once again let H̃ be obtained from H by adding b looped dominating vertices. The automorphic similarity classes of H̃ are H̃ 1 , H̃ 2 , . . . , H̃ k , where H̃ i = H i for i < k and H̃ k is the union of H k and the b added looped dominating vertices. The associated automorphic similarity matrix M̃ is obtained from M by adding b to every entry in the final column of M . In a similar manner to the previous case, it follows that M̃ has the increasing columns property and that H̃ is Hoffman-London.
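The matrix manipulation in the first case of the proof can be sketched directly (helper names are ours; we again take m i,j to count the neighbors a vertex of H i has in H j , and append the new class's column on the right so that each terminal row sum gains b ). Starting from the one-class matrix of K 1 and adding a single looped dominating vertex produces the matrix of H ind :

```python
def add_dominating(M, class_sizes, b):
    # ASM after adding b looped dominating vertices to a target with no
    # looped dominating vertex: a new rightmost column of b's (every old
    # vertex gains b dominating neighbors) and a new bottom row
    # (|H_1|, ..., |H_k|, b) for the new class.
    return [row + [b] for row in M] + [class_sizes + [b]]

def increasing_columns(M):
    k = len(M)
    return all(sum(M[i][j:]) <= sum(M[i + 1][j:])
               for i in range(k - 1) for j in range(k))

# K_1 (one unlooped vertex) has matrix [[0]]; one looped dominating vertex
# turns it into H_ind, whose matrix is [[0, 1], [1, 1]]
M = add_dominating([[0]], [1], 1)
assert M == [[0, 1], [1, 1]] and increasing_columns(M)
```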
Before presenting an application of Proposition 4.12, we discuss blow-ups of graphs.
Definition 4.13. Let H be a target graph on vertex set { v 0 , v 1 , . . . , v k } without multiple edges but perhaps with loops. Let n = ( n 0 , n 1 , . . . , n k ) be a vector of positive integers. The blow-up of H by n , denoted H n , has vertex set ⋃ k i=0 V i , where V i is a set of size n i and the V i s are pairwise
disjoint. If v i ∼ H v j (with either i = j or i ≠ j ), then there is an edge in H n between every x ∈ V i and y ∈ V j , and there are no other edges.
In other words, H n is obtained from H by 'blowing up' each vertex of H to a cluster of vertices, replacing unlooped vertices with empty graphs, looped vertices with fully looped complete graphs, and edges with complete bipartite graphs. As an example, the join of an empty graph and a fully looped complete graph is a blow-up of H ind , the target graph that encodes independent sets; see Figure 6.
Figure 6: An example of a blow-up with n = (3 , 2) where the unlooped vertex of H = H ind is labeled v 0 and the looped vertex is labeled v 1 .
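Definition 4.13 translates directly into an adjacency-matrix construction. The sketch below (function name is ours) builds the blow-up from the adjacency matrix of H , and reproduces the blow-up of H ind by n = (3 , 2) shown in Figure 6.

```python
def blow_up(H, sizes):
    # H: 0/1 adjacency matrix with loops on the diagonal.  Each vertex i
    # is replaced by a cluster of sizes[i] vertices; x and y are adjacent
    # in the blow-up iff their originating vertices are adjacent in H.
    origin = [i for i, s in enumerate(sizes) for _ in range(s)]
    N = len(origin)
    return [[H[origin[x]][origin[y]] for y in range(N)] for x in range(N)]

# Figure 6: H_ind (unlooped v0 adjacent to looped v1) blown up by (3, 2)
H_ind = [[0, 1], [1, 1]]
B = blow_up(H_ind, [3, 2])
assert len(B) == 5
assert B[0][1] == 0              # the v0-cluster stays an empty graph
assert B[3][3] == B[3][4] == 1   # the v1-cluster becomes a looped K◦_2
assert B[0][3] == 1              # adjacent clusters are completely joined
```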
We now show, as a corollary of Proposition 4.12, that blow-ups of fully looped stars (stars augmented by putting a loop at every vertex) are Hoffman-London. We will present an application of this corollary, to the Widom-Rowlinson model from statistical physics, in Section 4.6.
Corollary 4.14. Let S ◦ t+1 be the fully looped star on t + 1 vertices v 0 , v 1 , . . . , v t , with v 0 the center of the star. Let n = ( n 0 , n 1 , . . . , n t ) be any vector of positive integers. The blow-up of S ◦ t+1 by n is Hoffman-London.
Proof. We start by considering the graph H ( n 1 , . . . , n t ), which is the disjoint union of t fully looped complete graphs K ◦ 1 , . . . , K ◦ t on n 1 , . . . , n t vertices, respectively. Let S 1 , . . . , S k be the partition of { 1 , . . . , t } induced by the equivalence relation i ≡ j if and only if n i = n j , and let s i = ∑ j∈S i n j . Without loss of generality, we may assume that s 1 ≤ s 2 ≤ · · · ≤ s k . For example, if ( n 1 , n 2 , n 3 , n 4 , n 5 , n 6 ) = (3 , 10 , 4 , 3 , 10 , 10), then we would take S 1 = { 3 } with s 1 = 4, S 2 = { 1 , 4 } with s 2 = 3 + 3 = 6 and S 3 = { 2 , 5 , 6 } with s 3 = 10 + 10 + 10 = 30. The automorphic similarity classes of H are H i = ⋃ j∈S i V ( K j ) and the corresponding automorphic similarity matrix is diag { s 1 , . . . , s k } . By our choice of ordering on the s i , this satisfies the increasing columns property.
It follows from Theorem 1.5 that H ( n 1 , . . . , n t ) is Hoffman-London, and therefore it follows from Proposition 4.12 that the graph H ( n 0 , n 1 , . . . , n t ) obtained from H ( n 1 , . . . , n t ) by adding n 0 looped dominating vertices is also Hoffman-London. But H ( n 0 , n 1 , . . . , n t ) is exactly the blow-up of S ◦ t+1 by n , and this observation completes the proof.
## 4.6 Connection to statistical physics
In this section we briefly discuss a connection between graph homomorphisms and statistical physics models, and we use some of our results to derive consequences for the partition functions of some of these models. For more thorough (though still accessible) treatments of the connection, see e.g. [1, 2, 24].
One goal of statistical physics is to understand the global behavior of models that are defined by local rules. In a hard constraint spin model , space is modeled by a graph G . The vertices of G represent sites, each of which is occupied by at most one particle. Each particle has a certain spin, from among some specified collection of possible spins. The edges of G encode pairs of sites that are close enough that the particles at those sites interact. The interaction rule in a hard constraint model is that only certain pairs of spins are allowed to appear on adjacent sites. This can be encoded by a graph H whose vertices are the possible spins, with an edge between two vertices exactly when the corresponding spins are allowed to appear on adjacent sites. A valid configuration of spins on the vertices of G is thus exactly an H -coloring of G . Note that in this context it is very natural to allow H to have loops: a loop at vertex v ∈ V ( H ) encodes the fact that two particles on adjacent sites of G are allowed to both have spin v .
Typically, a hard-constraint spin model, with constraint graph H on vertex set { v 0 , . . . , v k } , comes with a vector of positive activities λ = ( λ 0 , λ 1 , . . . , λ k ) which (informally) measure how frequently particles of each spin typically occur. Here λ i is the activity associated with v i . The weight of a configuration, or equivalently the weight of an H -coloring f : V ( G ) → V ( H ), is given by
$$w_{\lambda}(f) = \prod_{i=0}^{k} \lambda_i^{|f^{-1}(v_i)|} \quad (14)$$
where f −1 ( v i ) denotes the set of vertices in G that are mapped to v i by f . The partition function of the model is the sum
$$Z_{\lambda}(G, H) = \sum_{f \in \operatorname{Hom}(G, H)} w_{\lambda}(f).$$
The partition function is the normalizing constant used to turn the assignment of weights into an assignment of probabilities, making the model stochastic. The partition function encodes significant information about the model.
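As a sanity check on the definitions, the partition function can be computed by brute force for small G and H . In the sketch below (names are ours), we use H ind , the target graph encoding independent sets, on the path P 3 ; the partition function recovers the independence polynomial 1 + 3 λ + λ 2 of P 3 .

```python
from itertools import product

def partition_function(edges, n, H, lam):
    # Z_λ(G, H): sum over H-colorings f of prod_i lam[i]^{|f^{-1}(v_i)|}
    Z = 0
    for f in product(range(len(H)), repeat=n):
        if all(H[f[u]][f[v]] for u, v in edges):
            w = 1
            for c in f:
                w *= lam[c]
            Z += w
    return Z

# H_ind with looped v_out (spin 0) and unlooped v_in (spin 1), activities
# (1, lam): on the path P_3 this sums lam^{|I|} over independent sets I
H_ind = [[1, 1], [1, 0]]
lam = 2
assert partition_function([(0, 1), (1, 2)], 3, H_ind, [1, lam]) == 1 + 3 * lam + lam ** 2
```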
We present three specific examples; these are the three for which we later present results.
Example 4.15. The hard-core or lattice gas model is a model of the occupation of space by atoms that have non-zero volume, with a 'hard core' around their boundary, meaning that if an atom sits at a site, no other atom can sit at any nearby site. Dating back at least to the investigations of Gaunt and Fisher and of Runnels [22, 36], this model can be represented using two spins, v out representing 'unoccupied' and v in representing 'occupied,' with the occupation rule being that unoccupied sites can be adjacent to each other, unoccupied sites can be adjacent to occupied sites, and occupied sites cannot be adjacent to each other. Since the possible collections of occupied sites in this model correspond exactly to the independent sets in G , the hard-core model can be encoded using the target graph H ind , where v out is the looped vertex of H ind and v in is the unlooped vertex. Traditionally, the activity associated with v out in this model is 1 and the activity associated with v in is some λ > 0. If λ is small, then the model favors sparser configurations of occupied vertices, while if λ is large, then it favors denser configurations. If λ = 1, then the model gives all configurations equal weight. See Figure 7 for an example.
Example 4.16. The Widom-Rowlinson model is a model of the occupation of space by k mutually repulsive particles. This model was introduced by Widom and Rowlinson [41] as a model of liquid-vapor phase transition. There are k + 1 spins, v 0 through v k . Empty space is represented by v 0 ,
Figure 7: The constraint graph encoding the hard-core model (left) and a hard-core configuration on a tree (right). If v out has activity 1 and v in has activity λ , then the weight of the illustrated configuration is λ^4 .
and the remaining spins are used to represent k different particles. A site occupied by a particle of type i can be adjacent to empty space or to other particles of type i , but not to particles of type j for any j ≠ i . Empty space comes with no restriction. A valid configuration of particles is thus modeled by an H -coloring of G where H is the fully looped star on vertex set { v 0 , . . . , v k } with v 0 as the central vertex. A vertex of G being mapped to v 0 represents 'unoccupied' and being mapped to v i represents 'occupied by a particle of type i .' Traditionally, the activity associated with v 0 in this model is 1 and for each i > 0 the activity associated with v i is some λ i > 0. See Figure 8 for an example with two particles, denoted red and blue.
Figure 8: The constraint graph encoding the Widom-Rowlinson model (left) and a Widom-Rowlinson configuration on a tree (right). If v 0 has activity 1, v R has activity λ R , and v B has activity λ B , then the weight of the illustrated configuration is λ_R^3 λ_B^4 .
Example 4.17. The capacity C independent set or capacity C multicast model was introduced in Section 4.4. Recall that configurations in this model are encoded as H -colorings by taking H = H C to be the graph on vertex set { 0 , 1 , . . . , C } with edge set {{ a, b } : a + b ≤ C } . At C = 1 the model reduces to the hard-core model of Example 4.15. Traditionally, in this model the activity assigned to vertex i is λ^i for some λ > 0. See Figure 9 for an example.
The problem of maximizing or minimizing the partition function of a weighted hard-constraint spin model with constraint graph H is very closely related to enumeration of H n -colorings, as we shall now see. Consider a weighted hard-constraint spin model on a graph G with H on vertex set
Figure 9: The constraint graph encoding the capacity 3 independent set model (left) and a capacity 3 independent set on a tree (right). If spin i has activity λ^i , then the configuration has weight λ^12 .
$\{v_0, v_1, \ldots, v_k\}$ encoding the constraints on the spins. Let $\lambda = (\lambda_0, \lambda_1, \ldots, \lambda_k)$ be an assignment of positive rational activities to the spins. Let $M$ be an integer such that $M\lambda_i = n_i \in \mathbb{N}$ for each $i$. For any H-coloring $f : V(G) \to V(H)$ we have, from equation (14),
$$M^{n} \prod_{i=0}^{k} \lambda_i^{|f^{-1}(v_i)|} = \prod_{i=0}^{k} n_i^{|f^{-1}(v_i)|}$$
where $n$ is the number of vertices of $G$. Summing over all $f \in \operatorname{Hom}(G,H)$ and recalling the definition of the blow-up graph, we get
$$M^{n} Z_{\lambda}(G, H) = \hom(G, H_{\mathbf{n}}).$$
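The relation between the weighted count $Z_\lambda(G,H)$ and the homomorphism count into the blow-up can be checked by brute force on a small instance. A minimal sketch (the target $H_{\mathrm{ind}}$, the activities, and the path $P_4$ are illustrative choices):

```python
from itertools import product
from fractions import Fraction

def hom_count(G_edges, G_n, H_adj):
    """Count adjacency-preserving maps f: V(G) -> V(H) by brute force."""
    return sum(
        all(H_adj[f[u]][f[v]] for (u, v) in G_edges)
        for f in product(range(len(H_adj)), repeat=G_n)
    )

def Z(G_edges, G_n, H_adj, lam):
    """Weighted count: sum over homomorphisms f of prod_v lam[f(v)]."""
    total = Fraction(0)
    for f in product(range(len(H_adj)), repeat=G_n):
        if all(H_adj[f[u]][f[v]] for (u, v) in G_edges):
            w = Fraction(1)
            for i in f:
                w *= lam[i]
            total += w
    return total

def blow_up(H_adj, n_vec):
    """Replace vertex i of H by n_vec[i] copies; copies inherit the
    adjacencies (and loops) of the original vertices."""
    verts = [i for i, m in enumerate(n_vec) for _ in range(m)]
    return [[H_adj[a][b] for b in verts] for a in verts]

# H_ind: looped vertex 0 ("unoccupied") adjacent to unlooped vertex 1
H = [[1, 1], [1, 0]]
lam = [Fraction(1), Fraction(3, 2)]    # activities; M = 2 clears denominators
M, n_vec = 2, [2, 3]                   # n_i = M * lam_i
P4 = [(0, 1), (1, 2), (2, 3)]          # the path on 4 vertices
lhs = M ** 4 * Z(P4, 4, H, lam)
rhs = hom_count(P4, 4, blow_up(H, n_vec))
assert lhs == rhs                      # both sides equal hom(P_4, H_n)
```

Here both sides count homomorphisms into the blow-up: scaling each activity by $M$ turns the rational weights into the copy-multiplicities of the blown-up target.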
This idea leads to the following lemma.
Lemma 4.18. Let $H$ be an arbitrary graph on vertex set $\{v_0, v_1, \ldots, v_k\}$, perhaps with loops but without multiple edges. If $G_1$ and $G_2$ are graphs on the same number of vertices with the property that $\hom(G_1, H_{\mathbf{n}}) \leq \hom(G_2, H_{\mathbf{n}})$ for every vector $\mathbf{n}$ of positive integers of length $k+1$, then for every assignment of positive real (not necessarily rational) activities $\lambda$ to the vertices of $H$, we have $Z_{\lambda}(G_1, H) \leq Z_{\lambda}(G_2, H)$.
Proof. When the $\lambda_i$ are all rational, the result follows directly from equation (15). If $(\lambda_i)_{i=0}^{k}$ is not a rational $(k+1)$-tuple, we can consider a sequence of rational $(k+1)$-tuples converging to $(\lambda_i)_{i=0}^{k}$. The rational case, together with the continuity of $Z_{\lambda}(G,H)$ as a function of $\lambda$, allows us to conclude that $Z_{\lambda}(G_1, H) \leq Z_{\lambda}(G_2, H)$ in this case as well.
The question of which graphs in various families maximize and minimize the partition function of various hard-constraint models has a well-developed literature; see e.g. [9, 12, 38] for the Widom-Rowlinson model, [26, 44, 14] for the hard-core model, and [11, 12, 13, 20, 17, 37] for general models.
We can use Lemma 4.18 to draw conclusions about the hard-core model, the Widom-Rowlinson model, and the capacity C independent set model on trees.
Theorem 4.19. Let $n \geq 1$ be arbitrary and let $T_n \in \mathcal{T}_n$.
- (a) For the hard-core model with $\lambda = (1, \lambda)$ for arbitrary $\lambda > 0$, we have
$$Z_{\lambda}(P_n, H_{\mathrm{ind}}) \leq Z_{\lambda}(T_n, H_{\mathrm{ind}}).$$
- (b) For the $k$-state Widom-Rowlinson model with $\lambda = (1, \lambda_1, \ldots, \lambda_k)$ for arbitrary $\lambda_i > 0$, we have
$$Z_{\lambda}(P_n, H_{WR(k)}) \leq Z_{\lambda}(T_n, H_{WR(k)}).$$
- (c) For the capacity $C$ independent set model with $\lambda = (1, \lambda, \ldots, \lambda^C)$ for arbitrary $\lambda > 0$, we have
$$Z_{\lambda}(P_n, H_C) \leq Z_{\lambda}(T_n, H_C).$$
Proof. All three upper bounds follow from Theorem 1.1 and Lemma 4.18. For the lower bounds, it suffices to establish that arbitrary positive integer blow-ups of $H_{\mathrm{ind}}$, $H_{WR(k)}$, and $H_C$ are Hoffman-London, and then appeal to Lemma 4.18. For $H_{\mathrm{ind}}$ and $H_C$, we use that arbitrary blow-ups of loop threshold graphs are still loop threshold and appeal to Theorem 4.10, and we use Corollary 4.14 for $H_{WR(k)}$.
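Part (a) can be verified directly for small $n$. The following brute-force sketch (an illustration, not the proof's method) compares the hard-core partition function at $\lambda = 2$ across the three trees on five vertices:

```python
from itertools import product
from fractions import Fraction

def Z_hardcore(edges, n, lam):
    """Hard-core partition function: sum over independent sets I of lam^|I|."""
    total = Fraction(0)
    for occ in product([0, 1], repeat=n):
        # an occupation pattern is valid iff no edge has both ends occupied
        if all(not (occ[u] and occ[v]) for (u, v) in edges):
            total += lam ** sum(occ)
    return total

lam = Fraction(2)
# The three trees on 5 vertices: path, broom, star.
path  = [(0, 1), (1, 2), (2, 3), (3, 4)]
broom = [(0, 1), (1, 2), (2, 3), (2, 4)]
star  = [(0, 1), (0, 2), (0, 3), (0, 4)]
zs = [Z_hardcore(t, 5, lam) for t in (path, broom, star)]
assert zs[0] == min(zs)   # the path attains the minimum
```

The same loop can be rerun for other rational $\lambda$; the path remains the minimizer, consistent with part (a).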
## 5 Classification of target graphs with at most three vertices
There are twenty-eight graphs (with or without loops, not necessarily connected) on three or fewer vertices. These are shown in Figure 10. For each $H$ on this list, each $n$, and each $T_n \in \mathcal{T}_n$, it holds that $\hom(P_n, H) \leq \hom(T_n, H)$. The collection of results we have presented thus far establishes this fact, and also allows us, for each $H$, to completely specify which other $T_n \in \mathcal{T}_n$, if any, also minimize the count of $H$-colorings. As $\hom(T_1, H) = |V(H)|$ for every $H$, we confine attention to trees on $n \geq 2$ vertices.
We begin with those cases in which each tree on $n \geq 2$ vertices admits the same number of $H$-colorings, using Observation 2.2 without additional mention to discuss sets of target graphs that behave identically. For unions of isolated vertices, namely $H_1$, $H_3$, and $H_9$, there are no $H$-colorings of any such tree. We apply Observation 2.3 to those $H$ whose non-isolated components are regular to find our trees have unique $H_2$-, $H_4$-, and $H_{10}$-colorings, two $H_6$- and $H_{13}$-colorings, $2^n$ $H_8$- and $H_{17}$-colorings, $3^n$ $H_{28}$-colorings, and finally $3 \cdot 2^{n-1}$ $H_{23}$- and $H_{25}$-colorings. Then we use Observation 2.1 to consider target graphs with regular components to see each of these trees has two $H_5$-colorings, two $H_{11}$-colorings, three $H_{12}$-colorings, three $H_{14}$-colorings, and $2^n + 1$ $H_{18}$-colorings.
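The closed-form counts for regular targets are easy to confirm by brute force. In this sketch the triangle $K_3$ and the fully looped edge stand in for the targets with counts $3 \cdot 2^{n-1}$ and $2^n$ (a hedged illustration, not tied to the specific labels $H_i$):

```python
from itertools import product

def hom(edges, n, H):
    """Brute-force count of adjacency-preserving maps into target H."""
    return sum(
        all(H[f[u]][f[v]] for (u, v) in edges)
        for f in product(range(len(H)), repeat=n)
    )

K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # triangle (no loops)
looped_edge = [[1, 1], [1, 1]]           # edge with loops on both ends
path = [(0, 1), (1, 2), (2, 3)]          # P_4
star = [(0, 1), (0, 2), (0, 3)]          # K_{1,3}
for T in (path, star):
    assert hom(T, 4, K3) == 3 * 2 ** 3       # 3 * 2^(n-1) with n = 4
    assert hom(T, 4, looped_edge) == 2 ** 4  # 2^n: every map is valid
```

Both counts are independent of the tree's shape, as the regularity arguments predict.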
Because H 19 is a complete bipartite graph with unequal parts, Theorem 2.6 tells us the trees minimizing hom( T n , H 19 ) are precisely those with a balanced bipartition.
Theorem 4.1 demonstrates that regular graphs to which looped dominating vertices are added are strongly Hoffman-London, including $H_7$, $H_{21}$, $H_{24}$, and $H_{26}$. Furthermore, $H_{15}$ is strongly Hoffman-London due to Theorem 4.1 and Observation 2.2, and $H_{16}$ is strongly Hoffman-London due to Theorem 4.1 and Observation 2.1.
Note that $H_{20} \times K_2 = P_6$. That $\hom(P_n, H_{20}) \leq \hom(T_n, H_{20})$ for all $n$ and $T_n \in \mathcal{T}_n$ now follows from Observation 2.5 and Theorem 4.2, and that $\hom(P_n, H_{20}) < \hom(T_n, H_{20})$ for all $n$ and $T_n \in \mathcal{T}_n$ different from $P_n$ follows from Theorem 4.6. Finally, $H_{22}$ and $H_{27}$ are loop threshold and thus strongly Hoffman-London by Theorem 4.10.
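The identity $H_{20} \times K_2 = P_6$ can be confirmed computationally, assuming $H_{20}$ denotes the path on three vertices with a loop on one endpoint (the structure consistent with the tensor-product claim):

```python
from itertools import permutations, product

# H: path 0-1-2 with a loop on endpoint 0 (assumed structure of H_20)
H = [[1, 1, 0], [1, 0, 1], [0, 1, 0]]
K2 = [[0, 1], [1, 0]]

# tensor (categorical) product: (x,i) ~ (y,j) iff x~y in H and i~j in K2
verts = list(product(range(3), range(2)))
T = [[H[x][y] * K2[i][j] for (y, j) in verts] for (x, i) in verts]

# adjacency matrix of the path P_6
P6 = [[int(abs(i - j) == 1) for j in range(6)] for i in range(6)]

# brute-force isomorphism check over all 6! vertex bijections
iso = any(
    all(T[p[i]][p[j]] == P6[i][j] for i in range(6) for j in range(6))
    for p in permutations(range(6))
)
assert iso
```

The loop on one endpoint is what joins the two bipartite halves of the product into a single path of six vertices.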
For convenience, we summarize these results in Table 1. In cases where all trees on n vertices admit the same number of H -colorings, that number is given.
Figure 10: All target graphs on up to three vertices.
## 6 Open problems
We conclude with a few questions that remain open and which we find of interest.
Table 1: Classification of tree minimizers of $\hom(T_n, H)$ for $n \geq 2$.

| Identifier | Minimizer | Identifier | Minimizer | Identifier | Minimizer |
|--------------|------------------|--------------|---------------------|--------------|--------------------------|
| $H_1$ | All trees (0) | $H_{11}$ | All trees (2) | $H_{20}$ | Paths |
| $H_2$ | All trees (1) | $H_{12}$ | All trees (3) | $H_{21}$ | Paths |
| $H_3$ | All trees (0) | $H_{13}$ | All trees (2) | $H_{22}$ | Paths |
| $H_4$ | All trees (1) | $H_{14}$ | All trees (3) | $H_{23}$ | All trees ($3 \cdot 2^{n-1}$) |
| $H_5$ | All trees (2) | $H_{15}$ | Paths | $H_{24}$ | Paths |
| $H_6$ | All trees (2) | $H_{16}$ | Paths | $H_{25}$ | All trees ($3 \cdot 2^{n-1}$) |
| $H_7$ | Paths | $H_{17}$ | All trees ($2^n$) | $H_{26}$ | Paths |
| $H_8$ | All trees ($2^n$) | $H_{18}$ | All trees ($2^n + 1$) | $H_{27}$ | Paths |
| $H_9$ | All trees (0) | $H_{19}$ | Trees with balanced bipartitions | $H_{28}$ | All trees ($3^n$) |
| $H_{10}$ | All trees (1) | | | | |

In addition to Problem 1.2, Csikvári and Lin propose a weaker variant in which 'all $n$' is replaced by 'all sufficiently large $n$.' To consider this version of the problem, we introduce the following terminology.
Definition 6.1. We say a target graph $H$ is eventually Hoffman-London if, for all sufficiently large $n$,
$$\hom(P_n, H) = \min_{T_n \in \mathcal{T}_n} \hom(T_n, H).$$
If it holds that $\hom(P_n, H) < \hom(T_n, H)$ for all sufficiently large $n$ and $T_n \in \mathcal{T}_n \setminus \{P_n\}$, we say that $H$ is eventually strongly Hoffman-London.
At present, no target graphs are known to be eventually Hoffman-London but not Hoffman-London (nor eventually strongly Hoffman-London but not strongly Hoffman-London). In other words, there is no known target graph $H$ for which some non-path tree $T_n$ admits fewer $H$-colorings than $P_n$ for a specific $n$, but for which the path minimizes $\hom(T_n, H)$ whenever $n$ is sufficiently large. To this end, we introduce the following problem.
Problem 6.2. Does there exist a target graph $H$ and integers $\alpha < \beta$ such that
$$\hom(P_{\alpha}, H) > \min_{T \in \mathcal{T}_{\alpha}} \hom(T, H)$$
but
$$\hom(P_{n}, H) = \min_{T \in \mathcal{T}_{n}} \hom(T, H)$$
for each $n \geq \beta$?
Theorem 4.6 proves that paths on an even number of vertices other than P 2 are strongly Hoffman-London. As mentioned, Theorem 2.6 demonstrates that the case is more complicated for P 3 as the set of trees with minimal P 3 -colorings is neither just the path nor all trees. For paths with a larger odd number of vertices, the question remains open.
Problem 6.3. For $m \geq 2$, classify the $T \in \mathcal{T}_n$ that minimize $\hom(T, P_{2m+1})$.
Theorem 2.6 shows that K a,b is Hoffman-London, but it remains open whether complete multipartite graphs H with more than two parts need be Hoffman-London.
Problem 6.4. Let $r \geq 3$ and let $H$ be a complete $r$-partite graph. Does it follow that $H$ is Hoffman-London?
Note that $K_{a_1,\ldots,a_q}$-colorings may be viewed as weighted $q$-colorings, so this question is equivalent to asking whether there exist weights such that some tree on $n$ vertices has fewer weighted $q$-colorings than $P_n$.
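This equivalence can be made concrete: a $K_{a_1,\ldots,a_q}$-coloring is a proper $q$-coloring in which each vertex of color $i$ contributes a factor $a_i$. A brute-force sketch (the part sizes and the tree are illustrative):

```python
from itertools import product
from math import prod

def hom(edges, n, H):
    """Brute-force homomorphism count into target with adjacency matrix H."""
    return sum(
        all(H[f[u]][f[v]] for (u, v) in edges)
        for f in product(range(len(H)), repeat=n)
    )

def weighted_colorings(edges, n, parts):
    """Proper q-colorings where each vertex of color i carries weight parts[i]."""
    q = len(parts)
    return sum(
        prod(parts[c[v]] for v in range(n))
        for c in product(range(q), repeat=n)
        if all(c[u] != c[v] for (u, v) in edges)
    )

# K_{2,1,1}: parts {0,1}, {2}, {3}; vertices adjacent iff in different parts
part_of = [0, 0, 1, 2]
K211 = [[int(part_of[i] != part_of[j]) for j in range(4)] for i in range(4)]
path = [(0, 1), (1, 2)]   # P_3
assert hom(path, 3, K211) == weighted_colorings(path, 3, [2, 1, 1])
```

Collapsing each part of the complete multipartite target to a single weighted color is exactly the blow-up correspondence used earlier.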
As discussed in Section 4.2, we proved that each graph in the family $H(a, b, \ell)$, in which each vertex of a $b$-clique is replaced with a bouquet of $\ell$ $a$-cliques, is Hoffman-London except in the case $a \geq 3$, $b = 1$, and $\ell \geq 2$.

Problem 6.5. For which values of $a \geq 3$ and $\ell \geq 2$, if any, is $H(a, 1, \ell)$ Hoffman-London?
Finally, we proved in Theorem 4.10 that loop threshold graphs are Hoffman-London. There are three closely related classes of graphs for which it would be interesting to prove an analogous result:
1. Threshold graphs: the graphs in the smallest family containing K 1 that is closed under adding isolated vertices and (unlooped) dominating vertices.
2. Quasi-loop threshold graphs: the graphs in the smallest family containing $K_1$ and $K_1^{\circ}$ that is closed under adding isolated vertices, adding looped dominating vertices, and taking disjoint unions.
3. Quasi-threshold graphs: the graphs in the smallest family containing K 1 that is closed under adding isolated vertices, adding (unlooped) dominating vertices, and taking disjoint unions.
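For experimenting with these classes, graphs of the first two kinds can be generated from a creation sequence. A small sketch (the `'i'`/`'d'` encoding and the function name are my own; `'d'` adds a looped dominating vertex, as in the loop threshold construction, so dropping the loop line gives ordinary threshold graphs):

```python
def build(seq):
    """Build an adjacency matrix from a creation sequence.
    'i' adds an isolated vertex; 'd' adds a looped dominating vertex."""
    adj = []
    for step in seq:
        n = len(adj)
        dom = 1 if step == 'd' else 0
        for row in adj:
            row.append(dom)            # connect old vertices to the new one
        new_row = [dom] * n
        new_row.append(dom)            # the loop on a dominating vertex
        adj.append(new_row)
    return adj

# 'i' then 'd': an isolated vertex, then a looped dominating vertex
G = build('id')
assert G == [[0, 1], [1, 1]]
```

Enumerating all sequences over a fixed length gives a quick way to test conjectures about these families on small vertex counts.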
There are threshold (and so quasi-threshold) graphs on four vertices that the methods and results discussed in this paper cannot handle, such as the triangle with a pendant edge. There are also quasi-loop threshold graphs on four vertices whose status remains open, including the graph obtained from the union of a loop and two isolated vertices by adding a dominating vertex.
## Acknowledgments
This work was started at the workshop 'Graph Theory: structural properties, labelings, and connections to applications' hosted by the American Institute of Mathematics (AIM), Pasadena, CA, July 22-26, 2024. The authors thank AIM and the organizers of the workshop for facilitating their collaboration.
## References
- [1] Graham R. Brightwell and Peter Winkler, Graph homomorphisms and phase transitions, Journal of Combinatorial Theory Series B 77 (1999), 415-435.
- [2] Graham R. Brightwell and Peter Winkler, Hard constraints and the Bethe lattice: adventures at the interface of combinatorics and statistical physics, Proceedings of the International Congress of Mathematicians, Vol. III (Beijing, 2002), 605-624. Higher Education Press, Beijing, 2002.
- [3] George D. Birkhoff, A Determinant Formula for the Number of Ways of Coloring a Map, Annals of Mathematics 14 (1912/1913), 42-46.
- [4] G. D. Birkhoff and D. C. Lewis, Chromatic Polynomials, Transactions of the American Mathematical Society 60 (1946), 355-451.
- [5] Péter Csikvári, On a poset of trees, Combinatorica 30 (2010), 125-137.
- [6] Péter Csikvári and Zhicong Lin, Graph homomorphisms between trees, Electronic Journal of Combinatorics 21 (2014), article #P4.9.
- [7] Péter Csikvári and Zhicong Lin, Homomorphisms of trees into a path, SIAM Journal on Discrete Mathematics 29 (2015), 1406-1422.
- [8] Václav Chvátal and Peter L. Hammer, Aggregation of inequalities in integer programming, in Studies in Integer Programming, P. L. Hammer, E. L. Johnson, B. H. Korte and G. L. Nemhauser editors, (Proceedings of the Workshop on Integer Programming, Bonn, 1975), Annals of Discrete Mathematics 1 (1977), 145-162.
- [9] Emma Cohen, Will Perkins, and Prasad Tetali, On the Widom-Rowlinson occupancy fraction in regular graphs, Combinatorics, Probability & Computing 26 (2017), 183-194.
- [10] Jonathan Cutler, Coloring graphs with graphs: a survey, Graph Theory Notes of New York 63 (2012), 7-16.
- [11] Jonathan Cutler and Nicholas Kass, Homomorphisms into loop-threshold graphs, Electronic Journal of Combinatorics 27 (2020), Article Number P2.38.
- [12] Jonathan Cutler and A. J. Radcliffe, Extremal graphs for homomorphisms, Journal of Graph Theory 67 (2011), 261-284.
- [13] Jonathan Cutler and A. J. Radcliffe, Extremal Graphs for Homomorphisms II, Journal of Graph Theory 76 (2014), 42-59.
- [14] Ewan Davies, Matthew Jenssen, Will Perkins, and Barnaby Roberts, Independent sets, matchings, and occupancy fractions, Journal of the London Mathematical Society 96 (2017), 47-66.
- [15] John Engbers and David Galvin, Extremal H-colorings of trees and 2-connected graphs, Journal of Combinatorial Theory Series B 122 (2017), 800-814.
- [16] Jon Folkman, Regular Line-Symmetric Graphs, Journal of Combinatorial Theory 3 (1967), 215-232.
- [17] David Galvin, Bounding the partition function of spin systems, Electronic Journal of Combinatorics 13 (2006), #R72.
- [18] David Galvin, Fabio Martinelli, Kavita Ramanan, and Prasad Tetali, The multistate hard core model on a regular tree, SIAM Journal on Discrete Mathematics 25 (2011), 894-915.
- [19] David Galvin, Emily McMillon, JD Nir, and Amanda Redlich, Long paths need not minimize H -colorings among trees, arXiv:2510.18770.
- [20] David Galvin and Prasad Tetali, On weighted graph homomorphisms, DIMACS Series in Discrete Mathematics and Theoretical Computer Science 63 (2004), 97-104.
- [21] David Galvin, Greyson Wesley, and Bailee Zacovic, Enumerating threshold graphs and some related graph classes, Journal of Integer Sequences 25 (2022), Article 22.2.7.
- [22] David S. Gaunt and Michael E. Fisher, Hard-Sphere Lattice Gases. I. Plane-Square Lattice, Journal of Chemical Physics 43 (1965), 2840-2863.
- [23] Chris Godsil and Gordon Royle, Algebraic Graph Theory, Springer, 2001.
- [24] P. de la Harpe and V. F. R. Jones, Graph Invariants Related to Statistical Mechanical Models: Examples and Problems, Journal of Combinatorial Theory, Series B 57 (1993), 207-227.
- [25] A. J. Hoffman, Three observations on nonnegative matrices, Journal of Research of the National Bureau of Standards 71B (1967), 39-41.
- [26] Jeff Kahn, An Entropy Approach to the Hard-Core Model on Bipartite Graphs, Combinatorics, Probability & Computing 10 (2001), 219-237.
- [27] Julian Keilson and Adri Kester, Monotone matrices and monotone Markov processes, Stochastic Processes and their Applications 5 (1977), 231-241.
- [28] A. K. Kelmans, On graphs with randomly deleted edges, Acta Mathematica Academiae Scientiarum Hungaricae 37 (1981), 77-88.
- [29] A. M. Leontovich, The number of mappings of graphs, the ordering of graphs and the Muirhead theorem, Problemy Peredachi Informatsii 25 (1989), 91-104; translation in Problems Inform. Transmission 25 (1989), 154-165.
- [30] N. Linial, Legal colorings of graphs, Combinatorica 6 (1986), 49-54.
- [31] David London, Two inequalities in nonnegative symmetric matrices, Pacific Journal of Mathematics 16 (1966), 515-536.
- [32] László Lovász, Large Networks and Graph Limits, American Mathematical Society Colloquium Publications volume 60, Providence, Rhode Island, 2012.
- [33] A. E. Mazel and Yu. M. Suhov, Random surfaces with two-sided constraints: an application of the theory of dominant ground states, Journal of Statistical Physics 64 (1991), 111-134.
- [34] Helmut Prodinger and Robert F. Tichy, Fibonacci numbers of graphs, Fibonacci Quarterly 20 (1982), 16-21.
- [35] Kavita Ramanan, Anirvan Sengupta, Ilze Ziedins, and Partha Mitra, Markov random field models of multicasting in tree networks, Advances in Applied Probability 34 (2002), 1-27.
- [36] L. K. Runnels, Hard-Square Lattice Gas, Physical Review Letters 15 (1965), 581-584.
- [37] Ashwin Sah, Mehtaab Sawhney, David Stoner, and Yufei Zhao, A reverse Sidorenko inequality, Inventiones mathematicae 221 (2020), 665-711.
- [38] Luke Sernau, Graph operations and upper bounds on graph homomorphism counts, Journal of Graph Theory 87 (2018), 149-163.
- [39] A. F. Sidorenko, Proof of London's assumption on sums of elements of nonnegative matrices (in Russian), Matematicheskie Zametki 38 (1985), 376-377, 475; translation in Proof of London's conjecture on sums of elements of positive matrices, Mathematical notes of the Academy of Sciences of the USSR 38 (1985), 716-717.
- [40] Alexander Sidorenko, A partially ordered set of functionals corresponding to graphs, Discrete Mathematics 131 (1994), 263-277.
- [41] B. Widom and J. S. Rowlinson, New Model for the Study of Liquid-Vapor Phase Transitions, Journal of Chemical Physics 52 (1970), 1670-1684.
- [42] Herbert S. Wilf, Backtrack: An O (1) expected time algorithm for the graph coloring problem, Information Processing Letters 18 (1984), 119-121.
- [43] George Clifton Wingard, Properties and applications of the Fibonacci polynomial of a graph, Ph.D. thesis, University of Mississippi (1995).
- [44] Yufei Zhao, The Number of Independent Sets in a Regular Graph, Combinatorics, Probability & Computing 19 (2010), 315-320.
- [45] Yufei Zhao, Extremal regular graphs: independent sets and graph homomorphisms, American Mathematical Monthly 124 (2017), 827-843.