
MSc Mathematics

Master Thesis

Finding zero-free regions for the

partition function of the k-state

ferromagnetic Potts model through

matroid theory

Author: Vincent Schmeits
Supervisor: dr. G. Regts

Examination date: January 19, 2021

Abstract

When trying to determine a zero-free region for the partition function of the k-state ferromagnetic Potts model on a graph G, it is helpful to write it as an evaluation of the k-tension polynomial Tk or the k-flow polynomial Fk of G. A zero-free region for

either polynomial then provides a zero-free region for the partition function. Recently, a theorem on zero-free regions for the so-called block polynomial of G has been used to improve existing results on the size of a zero-free region for Fk. If G is a planar graph,

then, as tensions are dual to flows, a zero-free region for Tk can be found by applying

said theorem to a planar dual of G. In this thesis, we generalize the theorem to a result on what we call the component polynomial of a matroid. This new result can be applied to the dual M∗(G) of the cycle matroid M (G) in order to find a zero-free region for Tk,

regardless of G being planar. However, it remains to be seen whether the new method can be used to improve existing results on zero-free regions for Tk.

Title: Finding zero-free regions for the partition function of the k-state ferromagnetic Potts model through matroid theory

Author: Vincent Schmeits, vincent.schmeits@student.uva.nl, 10816674
Supervisor: dr. G. Regts
Second Examiner: prof. dr. J.A. Ellis-Monaghan
Examination date: January 19, 2021

Korteweg-de Vries Institute for Mathematics
University of Amsterdam
Science Park 105-107, 1098 XG Amsterdam
http://kdvi.uva.nl

Contents

Introduction

1 Preliminary notions from graph theory

2 Tensions, flows, and the Potts model
2.1 The k-tension polynomial of a graph
2.2 Finding a zero-free region for the k-flow polynomial of a graph
2.3 Tension-flow duality for planar graphs

3 A brief introduction to matroid theory
3.1 Basic notions
3.2 Duality
3.3 Minors
3.4 Connectedness

4 Zero-free regions for the component polynomial of a matroid
4.1 The component polynomial of a matroid
4.2 Splitting the elements of the collection A_F \ A_F̂
4.3 A result on zero-free regions for the component polynomial of a matroid

5 Finding a zero-free region for the k-tension polynomial
5.1 Writing T_k as an evaluation of the component polynomial of M*(G)
5.2 Bounding the left-hand side of (5.2)
5.3 The collections B(F, f) in the original graph G

6 Discussion and further research
6.1 On bounding b_m(F, f) by a function of the 'maxmaxflow'
6.2 Further research

Introduction

When studying the properties of graphs, for instance their number of independent sets or proper vertex colorings, it can often be insightful to encode information about these properties in the form of graph polynomials. Such polynomials may give a better handle on the properties they represent, since they allow for a more (linear) algebraic approach. In this thesis, we will see how graph polynomials based on what we call tensions and flows of graphs are related to the partition function of a model used in statistical mechanics, called the Potts model. These polynomials are closely related to the Tutte polynomial (or the dichromate of a graph, as Tutte himself calls it [13]), a graph polynomial that can be recursively defined via the contraction and deletion of the edges of a graph. For reference, see e.g. [14]. Although we will not be discussing the Tutte polynomial directly, the contraction and deletion of edges will play an important role in this thesis.

In statistical mechanics, the Potts model provides a way to describe how the local behavior of a complex system affects the global behavior of the system. The partition function of the Potts model is defined on a graph G = (V, E) that can represent a multitude of physical systems. For instance, G could represent a crystalline material, where the vertices of G represent the atoms or molecules, and the edges represent the chemical bonds between these particles. Or the elements of V could represent the particles of a gas, and the elements of E the interactions between these particles on a local level. Beaudin et al. [2] give a friendly introduction to the partition function of the Potts model (and its relation to the Tutte polynomial) from a graph theory perspective. In the following description of the model, we borrow heavily from their explanation. For a more comprehensive introduction to statistical mechanics, see e.g. [5].

A state of the graph G = (V, E) is an assignment of an s ∈ S to each vertex v of G, where S is a finite set of elements called spins. This can be seen as a (not necessarily proper) coloring of the vertices of G. The Hamiltonian of G gives a measure of the energy of a state σ : V → S, and is defined by

\[
  h(\sigma) = -J \sum_{ij \in E} \delta_{\sigma(i), \sigma(j)}.
\]

Here, the value J is called the interaction energy of G, and makes it so that h(σ) is essentially a weighted count of all the edges of G of which the endpoints have equal spins. More generally, one could bring J inside the sum and replace it with Jij to put

varying weights on the edges ij of G, but we consider a fixed weight J for every edge. Moreover, we assume that J > 0, in which case the model is called ferromagnetic.

The partition function of the k-state ferromagnetic Potts model is defined by
\[
  P_k(G; T) = \sum_{\sigma : V \to S} e^{-h(\sigma)/(k_B T)}, \tag{I.1}
\]
where T ∈ (0, ∞) represents the temperature of the system and k_B ≈ 1.38 · 10^{-23} J/K is the Boltzmann constant. This partition function is the normalization factor for the Boltzmann distribution: for a fixed temperature T, the probability of the system being in the state σ is proportional to e^{-h(σ)/(k_B T)}, meaning that this probability is given by
\[
  p(\sigma) = \frac{e^{-h(\sigma)/(k_B T)}}{P_k(G; T)}.
\]

At low temperatures, the system favors the lower energy states: as T tends to zero, the probabilities of the constant states (meaning the states in which all vertices have the same spin) dominate the probabilities of all the other states. At higher temperatures, on the other hand, the Hamiltonian of each state carries less weight in the exponent, so that the distribution tends to a uniform distribution as T → ∞.
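To make this concrete, here is a small brute-force sketch (my own illustration, not part of the thesis): it evaluates the Hamiltonian, the partition function (I.1) and the Boltzmann probabilities on an arbitrarily chosen 4-cycle with two spins, working in units where k_B = 1 so that the exponents stay small.

```python
import itertools, math

# Toy instance: a 4-cycle with two spins and ferromagnetic coupling J > 0.
# We work in units where k_B = 1 so that the exponents stay small.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
J, kB = 1.0, 1.0
S = [0, 1]  # the set of spins

def hamiltonian(sigma):
    # h(sigma) = -J * (number of edges whose endpoints receive equal spins)
    return -J * sum(1 for (i, j) in E if sigma[i] == sigma[j])

def partition_function(T):
    # P_k(G; T): sum of the Boltzmann weights over all |S|^|V| states
    return sum(math.exp(-hamiltonian(s) / (kB * T))
               for s in itertools.product(S, repeat=len(V)))

def probability(sigma, T):
    return math.exp(-hamiltonian(sigma) / (kB * T)) / partition_function(T)

constant = (0, 0, 0, 0)
for T in (0.2, 100.0):
    # at low T the two constant states dominate; at high T every one of the
    # 16 states has probability close to 1/16
    print(T, round(probability(constant, T), 4))
```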

The reduced free energy per vertex is given by
\[
  f_k(G; T) = \frac{1}{|V|} \log(P_k(G; T)).
\]

This expression is analytic as a function of T, because the partition function P_k does not have real-valued roots for any fixed graph G (as it is a sum of positive terms for any T). However, if {G_n}_{n∈ℕ} is an appropriate family of graphs (usually a family of growing subgraphs of some infinite lattice graph), the limit
\[
  \lim_{n \to \infty} f_k(G_n; T) \tag{I.2}
\]

is not necessarily analytic. Values of T for which the analyticity of (I.2) fails are ‘critical’ temperatures in the sense that at such values, when dealing with a large system, a small change in temperature can lead to an abrupt change in the physical properties of the system. Yang & Lee [15] showed that if there is a complex region R that contains a segment of the positive real axis such that the roots of Pk(Gn; T ) lie outside R for all n,

then the limit (I.2) is analytic on this segment.

In [1], Barvinok & Regts give a relation between Pk and the k-tension polynomial (as

we will call it), which they describe in the form of a weight of a linear code. A more general result [1, Theorem 1.6] on zero-free regions for weight functions of linear codes is then applied to find a zero-free region for the partition function. Their result can also be applied to the k-flow polynomial, a similar polynomial, which can be written in the form of a weight of a linear code as well. The zero-free region for the k-flow polynomial that can be obtained in this way is improved by Stam [11]. The main ingredient for this improvement is a theorem [11, Theorem 2.18] on zero-free regions for so-called block polynomials, inspired by some cluster expansion techniques that Bissacot et al. [3] describe.

It is well-known that tensions and flows are related by duality for planar graphs. As a consequence, a zero-free region for the k-tension polynomial of a graph G can be found by applying Stam's method to a planar dual of G. In an attempt to improve the size of the region that is obtained in [1], we will expand on this idea of using duality in such a way that it can provide a zero-free region for the k-tension polynomial of any graph (rather than just planar graphs). In order to do so, we will extend Theorem 2.18 of [11] into the realm of matroid theory because, unlike graphs, all matroids have duals.

In the first chapter of this thesis, we concisely give some basic notions from graph theory that will be used throughout. In Chapter 2, we define k-flows and k-tensions, and we show how the k-tension polynomial and the k-flow polynomial are related to the partition function of the k-state ferromagnetic Potts model. We also describe the method that Stam [11] uses for finding a zero-free region for the k-flow polynomial, and show how the duality between tensions and flows allows for the application of this method to the k-tension polynomial of a planar graph. Chapter 3 provides a short introduction to matroids by collecting the relevant parts of the first few chapters of Oxley's book [9] on matroid theory. In Chapter 4, we use the tools presented in Chapter 3 to prove our main result (Theorem 4.12) about a zero-free region for the component polynomial of a matroid. In Chapter 5, we then write the k-tension polynomial of a graph G as an evaluation of the component polynomial of the matroid M*(G), the dual of the cycle matroid of G, and we explain how and under what conditions this could be used to find a zero-free region for the k-tension polynomial. Finally, in Chapter 6, we discuss some of the questions that remain unanswered, and options for further research.

1 Preliminary notions from graph theory

In this chapter, we briefly discuss the notions from graph theory that will be used in this thesis. For a more comprehensive background on graph theory, see e.g. [4].

A (simple) graph is a pair G = (V, E), where V is a set and E is a collection of unordered pairs ij := {i, j} ⊆ V with i ≠ j. The elements of V are the vertices of G, and the elements of E are called the edges of G. If e = ij is an edge of G, then we say that the vertices i and j are adjacent, and that they are the endpoints of e. Throughout, we will assume that V is a finite set. If unspecified, the sets of vertices and edges of a graph G are denoted by V(G) and E(G) respectively.

If we allow E to be a multiset, a pair of vertices can have multiple edges between them (edges with the same pair of endpoints are said to be parallel). If in addition we allow i = j for an edge e = ij (we say that e is a loop), then G is called a multigraph.

For a graph G = (V, E) and a subset U ⊆ V , define

E(U ) = {e ∈ E : both endpoints of e are in U }

A subgraph of G is a pair (U, F ) with U ⊆ V (G) and F ⊆ E(U ). Moreover, we write G[U ] = (U, E(U )). Similarly, for F ⊆ E, we define

V (F ) = {v ∈ V : v is an endpoint of some e ∈ F } and write G[F ] = (V (F ), F ).

A walk in a graph is a sequence of vertices v_1 v_2 … v_n such that v_i v_{i+1} ∈ E for all 1 ≤ i < n. If necessary, we may also write down the edges that correspond to each pair of subsequent vertices in the walk (i.e. v_1 e_1 v_2 e_2 … e_{n−1} v_n), for example if we want to concisely denote the edges of the walk right away. A path is a walk in which no vertex appears more than once. If the first vertex of a walk is the same as the last, we say that the walk is closed. A cycle is a closed walk in which every vertex appears no more than once, except for the first vertex, which appears once more as the final vertex of the cycle.

A graph G is connected if there is a walk from u to v for every pair of vertices u, v of G. The relation of a pair of vertices being 'connected' via a walk is clearly an equivalence relation on V(G). The connected components of G are the subgraphs G[X_1], …, G[X_k], where X_1, …, X_k are the equivalence classes of V(G) (with respect to said relation). The number of connected components of G is denoted by κ(G).

A tree is a connected graph without cycles. A subgraph of a connected graph G that is a tree and contains all vertices of G is a spanning tree of G. More generally, any (not necessarily connected) graph without cycles is a forest. A subgraph H of a graph G is a spanning forest if every connected component of H is a spanning tree of a connected component of G (and if all components of G are covered in this way). The number of edges of any spanning forest H of G is equal to |V(G)| − κ(G). This number is referred to as the rank of the graph G and it is denoted by r(G).

If e is an edge of the graph G = (V, E), we can delete e from G by removing it from E. The resulting graph (V, E\{e}) is denoted by G\e. We can also contract the edge e from G by removing e and identifying the endpoints u, v of e into a single new vertex w. Edges that had u or v as an endpoint in G, have w as an endpoint in the new graph, denoted by G/e. Note that even if G is a simple graph, G/e may be a multigraph.

It is not difficult to see that the order in a sequence of deletions and contractions is not important. Hence for a set F1 of edges that we want to delete and a set F2 of edges

that we want to contract, the notation G\F1/F2 is unambiguous, and it is the same as

G/F2\F1. The resulting graph is called a minor of G. Additionally, the graph G\F1 is

a deletion minor and the graph G/F2 is a contraction minor of G.
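As a side illustration (not from the thesis), deletion and contraction are easy to implement on a multigraph stored as a list of labelled edges; the helper names below are ad-hoc choices. The example shows that contracting an edge of a simple graph may create parallel edges.

```python
# A multigraph stored as a vertex set and a list of labelled edges (label, u, v);
# loops (u == v) and parallel edges are allowed.

def delete(vertices, edges, label):
    """G \\ e: remove the edge carrying the given label."""
    return vertices, [e for e in edges if e[0] != label]

def contract(vertices, edges, label):
    """G / e: remove e and identify its endpoints into a single vertex."""
    (_, u, v), = [e for e in edges if e[0] == label]
    new_vertices = {u if x == v else x for x in vertices}  # reuse u as the merged vertex
    new_edges = []
    for lab, a, b in edges:
        if lab == label:
            continue  # the contracted edge disappears
        a = u if a == v else a
        b = u if b == v else b
        new_edges.append((lab, a, b))  # this may create loops or parallel edges
    return new_vertices, new_edges

# Contracting one edge of a triangle leaves two parallel edges.
V = {1, 2, 3}
E = [("a", 1, 2), ("b", 2, 3), ("c", 1, 3)]
print(contract(V, E, "a"))  # ({1, 3}, [('b', 1, 3), ('c', 1, 3)])
```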

If G\e has more connected components than G, then e is a bridge. Note that if T is a tree, then every edge e = ij of T is a bridge, because otherwise there is a path P from i to j in T that does not include e, meaning that if we add e to P, it becomes a cycle. Similarly, a vertex v is a cut-vertex of G if G[V\{v}] has more connected components than G. We say that G is 2-connected if it has no cut-vertices. If a subgraph H of G either is maximal 2-connected without loops, or consists of a single loop of G and its endpoint, then H is called a block of G. Note that G[{e}] is a block if e is a bridge, and G[{v}] is a block if v is an isolated vertex. The edge sets of the blocks of a graph G form a partition of E(G). Moreover, if H1 and H2 are distinct blocks of G, then H1 and H2 share at most one vertex.

2 Tensions, flows, and the Potts model

In this chapter we show how the partition function of the k-state ferromagnetic Potts model on a graph G can be written in terms of what are called the k-tension polynomial (Section 2.1) and the k-flow polynomial (Section 2.2) of G. In Section 2.3, we show how tensions and flows are dual notions. The definition of the partition function Pk that we

will use differs slightly from the one given in the introduction:

Definition 2.1. We define the partition function of the k-state ferromagnetic Potts model on a graph G = (V, E) by
\[
  P_k(G; y) = \sum_{f : V \to \{1, \dots, k\}} \;\; \prod_{\substack{uv \in E \\ f(u) = f(v)}} y. \tag{2.1}
\]

Note that for y > 1, this corresponds with the definition (I.1) after the substitution y = e^{J/(k_B T)}.

2.1 The k-tension polynomial of a graph

We begin by defining k-tensions on graphs and show how they can be turned into (vertex) k-colorings (and vice versa), which we then use to write (2.1) as an evaluation of the k-tension polynomial (with a prefactor). This section is essentially a more detailed explanation of a discussion by Barvinok & Regts [1, Section 2.5]. The notion of a tension is somewhat well-known, although tensions are rarely discussed without mentioning flows, and how these concepts are related by duality for planar graphs (see Section 2.3). In such a discussion it often makes more sense to relate flows on a planar graph to vertex colorings of its planar dual directly, without mentioning tensions at all (as is done in e.g. [4] and [14]).

Definition 2.2. Let G = (V, E) be a graph and suppose that an orientation has been assigned to every edge of G, turning G into a directed graph $\vec{G}$. Let k ≥ 2 be an integer. A map ψ : E → Z_k is called a k-tension (or occasionally just a tension) on $\vec{G}$ if for each cycle C = v_0 e_1 v_1 e_2 ⋯ v_{n−1} e_n v_n (where v_n = v_0) of G, the following equality holds:
\[
  \sum_{i=1}^{n} d_i \psi(e_i) = 0, \qquad \text{where } d_i = \begin{cases} 1 & \text{if } e_i = (v_{i-1}, v_i); \\ -1 & \text{if } e_i = (v_i, v_{i-1}). \end{cases} \tag{2.2}
\]
In other words, when traversing the edges of any cycle, the sum of the values on the edges that are directed in one way must equal the sum of the values on the edges that are directed in the opposite way. We will call (2.2) the tension condition.

Figure 2.1: The 'same' 4-tension for different orientations of the same graph.

One could define tensions for any finite abelian group (instead of just for Z_k), but it will

turn out that for our purposes, only the size of the group is relevant, hence we will stick to groups of the form Z_k. Note that if ψ is a k-tension on $\vec{G}$, and if $\vec{G}'$ has the same underlying undirected graph G, but with a different orientation on the edges, then the map
\[
  \psi' : E \to \mathbb{Z}_k : e \mapsto \begin{cases} \psi(e) & \text{if } e \text{ has the same orientation in } \vec{G}' \text{ as it has in } \vec{G}; \\ -\psi(e) & \text{otherwise,} \end{cases}
\]
is a k-tension on $\vec{G}'$ (see Figure 2.1). The operation ψ ↦ ψ′ is clearly a bijection between the collections of tensions on $\vec{G}$ and $\vec{G}'$. This means that values like the number of k-tensions or the number of nowhere-zero k-tensions (as described below) are really invariants of the undirected graph G, as are all other notions depending on tensions that are relevant for this thesis. For this reason, we will usually talk about k-tensions on G and assume that some arbitrary orientation has been assigned to its edges. The definition of a tension trivially extends to multigraphs, but dealing with multiple edges makes defining tensions somewhat tedious, so this has been omitted for brevity.

By the following proposition, 'closed walk' and 'cycle' can be used interchangeably in Definition 2.2. Working with closed walks makes slightly more sense when relating tensions to vertex colorings.

Proposition 2.3. For a graph G = (V, E), the map ψ : E → Z_k is a k-tension if and only if the tension condition (2.2) holds for every closed walk W = v_0 e_1 v_1 e_2 ⋯ v_{n−1} e_n v_n in G.

Proof. One of the implications is trivial, since any cycle is a closed walk. For the other implication, assume first that W does not have a cycle as a subwalk. It is not difficult to see that in this case, W is a closed walk on a path P from v_0 to some v_i. This means that every time some edge e is traversed in W, it is traversed in the reverse direction as well, as the walk cannot return to v_0 otherwise. Hence the tension condition is satisfied, because the contribution of any edge is cancelled out by the contribution of its reverse traversal.

If W does have a cycle C = v_k e_{k+1} v_{k+1} e_{k+2} ⋯ v_{l−1} e_l v_l (with v_k = v_l) as a subwalk, then W is of the form W_1 C W_2, where the tension condition holds for the edges of C by assumption, and where the concatenation W_1 W_2 (the walk obtained from W by removing the cycle C) is again a closed walk. Since W_1 W_2 is shorter than W, it follows by induction that W_1 W_2 also satisfies the tension condition, hence
\[
  \sum_{i=1}^{n} d_i \psi(e_i) = \sum_{i=1}^{k} d_i \psi(e_i) + \sum_{i=k+1}^{l} d_i \psi(e_i) + \sum_{i=l+1}^{n} d_i \psi(e_i) = 0.
\]

Definition 2.4. A k-tension ψ on G is called nowhere-zero if ψ(e) ≠ 0 for all edges e of G. The number of nowhere-zero k-tensions on G is denoted by τ_k(G).

The number of nowhere-zero k-tensions is strongly related to the number of proper k-colorings of the vertices of G, which is defined by
\[
  \pi_k(G) = \left| \{ f : V \to \{1, \dots, k\} \;:\; f(u) \neq f(v) \text{ for all } uv \in E \} \right|.
\]

In fact, as we will see in the proof of the following proposition, every (nowhere-zero) k-tension corresponds to precisely k^{κ(G)} (proper) k-colorings.

Proposition 2.5. Let G = (V, E) be a graph and suppose that k ≥ 2. Then
\[
  \pi_k(G) = k^{\kappa(G)} \tau_k(G),
\]
where κ(G) is the number of connected components of G.

Proof. Assume at first that G is connected (and that it has an arbitrary orientation on each of its edges) and suppose that ψ is a nowhere-zero k-tension on G. We will construct a coloring f from ψ. Let u ∈ V, pick a color a ∈ {1, …, k} and define f(u) = a. Then if v is a neighbor of u, let e be the edge with endpoints u and v and define
\[
  f(v) = \begin{cases} f(u) + \psi(e) & \text{if } e = (u, v); \\ f(u) - \psi(e) & \text{if } e = (v, u). \end{cases}
\]

Because G is connected, we will encounter every vertex of G in this way, meaning that the choice of a ∈ {1, . . . , k} fixes the coloring. Since ψ is nowhere-zero, a pair of adjacent vertices will receive distinct colors. We need to show that f is well-defined: if we have walks W1 and W2 from u to any v, and if we subsequently color the vertices of W1 and

W2 according to the formula above, v should have received the same color at the end of

both walks. We can prove this by using the tension condition: write W_1 = p_0 e_1 p_1 e_2 … p_{k−1} e_k p_k and W_2 = q_0 f_1 q_1 f_2 … q_{m−1} f_m q_m (so p_0 = q_0 = u and p_k = q_m = v). Then if we concatenate W_1 and the reverse of W_2, we get a closed walk W = u e_1 p_1 e_2 … p_{k−1} e_k v f_m q_{m−1} … f_2 q_1 f_1 u, meaning that, by the tension condition,
\[
  \sum_{i=1}^{k} b_i \psi(e_i) + \sum_{j=1}^{m} -c_j \psi(f_j) = 0, \tag{2.3}
\]

where
\[
  b_i = \begin{cases} 1 & \text{if } e_i = (p_{i-1}, p_i) \\ -1 & \text{if } e_i = (p_i, p_{i-1}) \end{cases}
  \qquad \text{and} \qquad
  c_j = \begin{cases} 1 & \text{if } f_j = (q_{j-1}, q_j) \\ -1 & \text{if } f_j = (q_j, q_{j-1}). \end{cases}
\]

Now if we apply the formula for constructing f, the vertex v receives the value $a + \sum_{i=1}^{k} b_i \psi(e_i)$ if we go along W_1 and the value $a + \sum_{j=1}^{m} c_j \psi(f_j)$ if we go along W_2, which by (2.3) are equal values.

Since the choice of a fixes the whole coloring, the tension ψ corresponds to k different vertex-colorings of G. Note that every coloring can arise in this way: if f′ : V → Z_k is any proper vertex-coloring, then ψ : E → Z_k defined by ψ(e) = f′(v) − f′(u), where e = (u, v), is nowhere-zero, and it is a tension because for any cycle C = v_0 e_1 v_1 e_2 ⋯ v_{k−1} e_k v_k, we have
\[
  \sum_{i=1}^{k} d_i \psi(e_i) = \sum_{i=1}^{k} d_i \cdot d_i \bigl( f'(v_{i-1}) - f'(v_i) \bigr) = \sum_{i=1}^{k} f'(v_{i-1}) - f'(v_i) = f'(v_0) - f'(v_k) = 0.
\]

Clearly, if we construct f from ψ by defining f(v_0) = f′(v_0) and using the formula from before, we get f′ = f back, so any coloring is indeed related to a tension via this construction. Furthermore, a pair of different tensions can clearly never produce the same coloring. This shows that every tension corresponds to k unique colorings, so that π_k(G) = k · τ_k(G). Finally, if G is disconnected, we can follow the procedure for every component of G, and we need to pick a vertex and a starting value for each component, so that every tension corresponds to k^{κ(G)} unique colorings. We conclude that π_k(G) = k^{κ(G)} τ_k(G).
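The correspondence used in this proof can be checked by brute force on a small example. The sketch below (my own illustration; the graph, orientation and k are arbitrary choices) generates every k-tension as a 'potential difference' ψ(e) = f(v) − f(u), exactly as in the construction above, and verifies π_k(G) = k^{κ(G)} τ_k(G).

```python
import itertools

# Small connected graph with a fixed orientation (u, v) on every edge.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 0), (2, 3)]
k = 3
kappa = 1  # this example graph is connected

def tensions():
    """Every k-tension arises as psi(e) = f(v) - f(u) for some f : V -> Z_k."""
    seen = set()
    for f in itertools.product(range(k), repeat=len(V)):
        seen.add(tuple((f[v] - f[u]) % k for (u, v) in E))
    return seen

num_proper = sum(1 for f in itertools.product(range(k), repeat=len(V))
                 if all(f[u] != f[v] for (u, v) in E))
num_nowhere_zero = sum(1 for psi in tensions() if 0 not in psi)

assert num_proper == k ** kappa * num_nowhere_zero
print(num_proper, num_nowhere_zero)  # 12 and 4 for this graph and k = 3
```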

Now we define the k-tension polynomial Tk. Using the proposition above, we then

show how to express the partition function Pk in terms of Tk.

Definition 2.6. The k-tension polynomial of a graph G = (V, E) is defined as follows:
\[
  T_k(G; x) = \sum_{\psi \text{ a } k\text{-tension on } G} x^{|\{ e \in E \,:\, \psi(e) \neq 0 \}|}.
\]

We can rewrite T_k(G; x) by grouping together some of the terms, namely those corresponding to tensions that vanish on the same set of elements:
\[
  T_k(G; x) = \sum_{F \subseteq E} \tau_k(G/F^c)\, x^{|F|}. \tag{2.4}
\]

Here, τ_k(G/F^c) is the number of nowhere-zero k-tensions on the graph G/F^c, obtained from G by contracting all the edges that are not in F. We can do this because ψ : E → Z_k is a k-tension on G if and only if its restriction to {e ∈ E : ψ(e) ≠ 0} is a (necessarily nowhere-zero) k-tension on the graph obtained from G by contracting the edges on which ψ vanishes.

Figure 2.2: A 4-tension on G constructed from a nowhere-zero 4-tension on G/e.

The tension polynomial is related to the partition function as follows:

Proposition 2.7. For a graph G = (V, E) and an integer k ≥ 2, we have
\[
  P_k(G; y) = k^{\kappa(G)} y^{|E|}\, T_k(G; y^{-1}). \tag{2.5}
\]

Proof. We can write
\[
\begin{aligned}
  P_k(G; y) &= \sum_{f : V \to \{1, \dots, k\}} \;\; \prod_{\substack{uv \in E \\ f(u) = f(v)}} y
             = \sum_{f : V \to \{1, \dots, k\}} y^{|E|} \cdot \prod_{\substack{uv \in E \\ f(u) \neq f(v)}} y^{-1} \\
            &= y^{|E|} \sum_{F \subseteq E} \pi_k(G/F^c)\, y^{-|F|}
             = k^{\kappa(G)} y^{|E|} \sum_{F \subseteq E} \tau_k(G/F^c)\, y^{-|F|}
             = k^{\kappa(G)} y^{|E|}\, T_k(G; y^{-1}),
\end{aligned}
\]

where the third equality holds because any map f : V → {1, . . . , k} is a proper vertex-coloring for the graph G/J , with J = {uv ∈ E : f (u) = f (v)} the set of edges that are contracted. The fourth equality holds by Proposition 2.5.
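Proposition 2.7 is easy to sanity-check numerically. In the sketch below (mine, not from the thesis), P_k is computed directly from Definition 2.1 and T_k from Definition 2.6, with the k-tensions again enumerated as potential differences of vertex maps; the example graph and the value of y are arbitrary.

```python
import itertools

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 0), (0, 3)]
k, y = 3, 2.0
kappa = 1  # the example graph is connected

def P(y):
    # Definition 2.1: one factor y for every monochromatic edge
    return sum(y ** sum(1 for (u, v) in E if f[u] == f[v])
               for f in itertools.product(range(k), repeat=len(V)))

def T(x):
    # Definition 2.6, with the k-tensions generated as potential differences
    tensions = {tuple((f[v] - f[u]) % k for (u, v) in E)
                for f in itertools.product(range(k), repeat=len(V))}
    return sum(x ** sum(1 for value in psi if value != 0) for psi in tensions)

lhs = P(y)
rhs = k ** kappa * y ** len(E) * T(1 / y)
assert abs(lhs - rhs) < 1e-9 * lhs  # (2.5) holds on this example
```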

Since the trivial map ψ : E → Z_k : e ↦ 0 is a k-tension, we have T_k(G; 0) = 1 ≠ 0. Hence by continuity there exists some r > 0 such that T_k(G; x) ≠ 0 for all x ∈ ℂ that satisfy |x| ≤ r. Then by Proposition 2.7, P_k(G; y) ≠ 0 whenever |y| ≥ r^{-1}. When we view this in the context of statistical mechanics and recall that we substituted y for e^{J/(k_B T)} in (I.1), we see that large (real) values of y represent low temperatures T in the Potts model. As discussed in the introduction, in light of the result from Yang & Lee [15], it is hence of interest to find a fixed r > 0 such that for an appropriate¹ family {G_n}_{n∈ℕ} of graphs, we have T_k(G_n; x) ≠ 0 for all n ∈ ℕ whenever |x| ≤ r. Then we can be sure that there is no phase transition (for sufficiently large n) at temperatures T that satisfy
\[
  T \leq \frac{J}{k_B \log(r^{-1})}.
\]

¹ Here, 'appropriate' essentially means that the limit $\lim_{n\to\infty} |V(G_n)|^{-1} \log(P_k(G_n; T))$, as mentioned in the introduction, exists.

For an existing result on such an r, consider a basis C for the cycle space of G, let t be the maximum number of edges in a member of C, and let s be the maximum number of members of C that share a given edge. Barvinok & Regts use a theorem [1, Theorem 1.6] on the weight of a linear code (after all, (2.2) is essentially a system of linear equations) to show that T_k(G; x) ≠ 0 for
\[
  |x| \leq \frac{0.46}{(k - 1)\, t \sqrt{s}}. \tag{2.6}
\]
In the next section, we give the definition of a k-flow, and we describe a method from [11] that is used to find a zero-free region for the k-flow polynomial.
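To see what the bound (2.6) gives on a concrete graph, the sketch below (my own; the example graph is arbitrary) builds the fundamental cycle basis of a BFS spanning tree, reads off t and s, and evaluates 0.46/((k − 1) t √s). A fundamental basis only yields one admissible pair (t, s); other bases may give a larger radius.

```python
import math
from collections import deque

V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 0), (1, 3), (3, 4), (4, 1), (2, 4)]
k = 3

# BFS spanning tree rooted at 0, stored as parent pointers.
adj = {v: [] for v in V}
for (u, v) in E:
    adj[u].append(v)
    adj[v].append(u)
parent, queue = {0: None}, deque([0])
while queue:
    u = queue.popleft()
    for w in adj[u]:
        if w not in parent:
            parent[w] = u
            queue.append(w)

def root_path(u):
    """Tree edges on the path from u up to the root."""
    path = []
    while parent[u] is not None:
        path.append(frozenset((u, parent[u])))
        u = parent[u]
    return path

tree_edges = {frozenset((w, p)) for w, p in parent.items() if p is not None}
basis = []
for (u, v) in E:
    if frozenset((u, v)) not in tree_edges:
        # fundamental cycle: the non-tree edge plus the symmetric difference
        # of the two root paths (their common part cancels)
        cycle = set(root_path(u)) ^ set(root_path(v))
        cycle.add(frozenset((u, v)))
        basis.append(cycle)

t = max(len(c) for c in basis)
s = max(sum(1 for c in basis if frozenset(e) in c) for e in E)
print(t, s, 0.46 / ((k - 1) * t * math.sqrt(s)))  # 4 2 0.0406...
```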

2.2 Finding a zero-free region for the k-flow polynomial of a graph

Very similar to tensions on graphs is the notion of a flow. In this section, we will explain a method that is described in [11] for finding zero-free regions for the k-flow polynomial, a graph polynomial that was first introduced by Tutte [12]. Flows are well-known in the literature, see e.g. [4] and [14].

Definition 2.8. Let G = (V, E) be a graph and suppose that an orientation has been assigned to every edge of G, turning G into a directed graph $\vec{G}$. Let k ≥ 2 be an integer. A map φ : E → Z_k is called a k-flow (or occasionally just a flow) on $\vec{G}$ if for each vertex v, the following equality holds:
\[
  \sum_{e \,:\, v \in e} d_e \varphi(e) = 0, \qquad \text{where } d_e = \begin{cases} 1 & \text{if } e = (v, u) \text{ for some } u \in V; \\ -1 & \text{if } e = (u, v) \text{ for some } u \in V. \end{cases} \tag{2.7}
\]
In other words, the sum of the values on the edges that are directed out of v must equal the sum of the values on the edges that are directed into v. We will call (2.7) the flow condition. If φ(e) ≠ 0 for all e ∈ E, then we say that φ is a nowhere-zero k-flow. The number of nowhere-zero k-flows of G is denoted by ξ_k(G).

It is good to note that what we call a k-flow is often referred to as a Zk-flow in the

literature. Like with tensions, values such as the number of k-flows or the number of nowhere-zero k-flows are invariants of the undirected graph G. Therefore in this case, we will usually talk about k-flows on G as well, rather than k-flows on some directed graph $\vec{G}$.

Definition 2.9. The k-flow polynomial of a graph G = (V, E) is defined as follows:
\[
  F_k(G; x) = \sum_{\varphi \text{ a } k\text{-flow on } G} x^{|\{ e \in E \,:\, \varphi(e) \neq 0 \}|}.
\]

Figure 2.3: The 'same' 4-flow for different orientations of the same graph.

Just like with k-tensions, we can write P_k in terms of F_k (see Corollary 2.9 of [11], which can be obtained by writing both P_k and F_k as an evaluation of the Tutte polynomial, e.g. by using the 'recipe theorem' from Oxley and Welsh [10]):
\[
  P_k(G; y) = k^{|V| - |E|} (y - 1 + k)^{|E|}\, F_k\!\left( G; \frac{y - 1}{y - 1 + k} \right). \tag{2.8}
\]
Again, we have F_k(G; 0) = 1 ≠ 0, meaning that F_k(G; x) ≠ 0 whenever |x| is sufficiently

small. Then by (2.8), we find that P_k(G; y) ≠ 0 for y sufficiently close to 1. To view this in the context of statistical mechanics once more, assume that {G_n}_{n∈ℕ} is an appropriate family of graphs, and that there is an r > 0 such that F_k(G_n; x) ≠ 0 for all n and all |x| ≤ r. Then P_k(G_n; y) ≠ 0 if
\[
  \left| \frac{y - 1}{y - 1 + k} \right| \leq r,
\]
which means that for real y > 1 − k, we must have
\[
  \frac{1 - r(k - 1)}{1 + r} \leq y \leq \frac{1 + r(k - 1)}{1 - r}. \tag{2.9}
\]

1 − r . (2.9) When considering the substitution y = eJ/(kBT ), we are only interested in values of y > 1

because T > 0 (meaning that the left inequality of (2.9) is always satisfied). Hence again, Yang & Lee’s theorem [15] tells us that we can be sure that there is no phase transition in the model whenever

T ≥ J

kB(log(1 + r(k − 1)) − log(1 − r))

.

The conclusion is that on the one hand, a zero-free region around 0 ∈ C for the k-tension polynomial provides a region free of phase transitions for temperatures around 0, whereas on the other hand, a zero-free region around 0 ∈ C for the k-flow polynomial provides a similar region for temperatures ‘around’ ∞ in the Potts model.
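Before turning to the method of [11], identity (2.8) can be verified by brute force on a small graph. The sketch below (mine, not from [11]) enumerates all maps φ : E → Z_k, keeps those satisfying the flow condition (2.7), evaluates F_k as in Definition 2.9, and compares with P_k computed from Definition 2.1; the example graph and the value of y are arbitrary.

```python
import itertools

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 2)]  # each pair is an oriented edge
k, y = 3, 2.0

def is_flow(phi):
    # flow condition (2.7): at every vertex the outgoing values minus the
    # incoming values sum to 0 modulo k
    for v in V:
        out_v = sum(phi[i] for i, (a, b) in enumerate(E) if a == v)
        in_v = sum(phi[i] for i, (a, b) in enumerate(E) if b == v)
        if (out_v - in_v) % k != 0:
            return False
    return True

def F(x):
    # Definition 2.9: sum over all k-flows of x^(number of edges with nonzero value)
    return sum(x ** sum(1 for value in phi if value != 0)
               for phi in itertools.product(range(k), repeat=len(E))
               if is_flow(phi))

def P(y):
    return sum(y ** sum(1 for (u, v) in E if f[u] == f[v])
               for f in itertools.product(range(k), repeat=len(V)))

lhs = P(y)
rhs = k ** (len(V) - len(E)) * (y - 1 + k) ** len(E) * F((y - 1) / (y - 1 + k))
assert abs(lhs - rhs) < 1e-6 * lhs  # (2.8) holds on this example
```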

To find a zero-free region around 0 ∈ ℂ for F_k, Stam [11] first writes F_k as an evaluation

of what is called the block polynomial of G:

Definition 2.10. Suppose that G = (V, E) is a graph and define the collection R(G) = {B : B is a block of the graph G[F ] for some F ⊆ E}.

The block polynomial of the graph G is the multivariate polynomial defined by
\[
  Z_G(w) = \sum_{F \subseteq E} \;\; \prod_{B \text{ block of } G[F]} w_B,
\]
where w ∈ ℂ^{R(G)} is the array of variables.

By Lemma 2.12 of [11], the number of nowhere-zero k-flows is multiplicative over the blocks of G:
\[
  \xi_k(G) = \prod_{B \text{ block of } G} \xi_k(B).
\]
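This multiplicativity is easy to check on a graph whose blocks are known by construction. In the sketch below (my own example), the graph consists of two triangles sharing a single cut-vertex, so its blocks are the two triangles; the number of nowhere-zero k-flows of the whole graph is compared with the product over the blocks.

```python
import itertools

def nowhere_zero_flows(V, E, k):
    """Count the maps phi : E -> Z_k that are nowhere zero and satisfy (2.7)."""
    count = 0
    for phi in itertools.product(range(1, k), repeat=len(E)):
        balanced = True
        for v in V:
            net = sum(x for x, (a, b) in zip(phi, E) if a == v) \
                - sum(x for x, (a, b) in zip(phi, E) if b == v)
            if net % k != 0:
                balanced = False
                break
        if balanced:
            count += 1
    return count

k = 4
# Two triangles glued at the cut-vertex 0; the blocks are the two triangles.
triangle1 = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
triangle2 = ([0, 3, 4], [(0, 3), (3, 4), (4, 0)])
whole = ([0, 1, 2, 3, 4], triangle1[1] + triangle2[1])

product_over_blocks = (nowhere_zero_flows(*triangle1, k)
                       * nowhere_zero_flows(*triangle2, k))
assert nowhere_zero_flows(*whole, k) == product_over_blocks
print(product_over_blocks)  # (k - 1)^2 = 9
```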

This means that we can rewrite F_k as follows [11, Prop. 2.13]:
\[
\begin{aligned}
  F_k(G; x) &= \sum_{\varphi \text{ a } k\text{-flow on } G} x^{|\{ e \in E \,:\, \varphi(e) \neq 0 \}|} \\
            &= \sum_{F \subseteq E} \xi_k(G[F])\, x^{|F|}
             = \sum_{F \subseteq E} \;\; \prod_{B \text{ block of } G[F]} \xi_k(B)\, x^{|E(B)|}
             = \sum_{F \subseteq E} \;\; \prod_{B \text{ block of } G[F]} w_B
             = Z_G(w),
\end{aligned} \tag{2.10}
\]
if we take w_B = ξ_k(B) x^{|E(B)|} for all B ∈ R(G). The second equality follows from the fact that any flow φ becomes a nowhere-zero flow if it is restricted to F = {e ∈ E : φ(e) ≠ 0}; there is a bijection between flows on G and nowhere-zero flows on subgraphs of G. So we can sum over all F ⊆ E instead and count nowhere-zero flows on these subsets to get the right contributions of x^{|F|}.

The theorem below provides a zero-free region for the block polynomial of a graph G, provided that the absolute value of each wB is sufficiently small. Before we can state

the theorem, we need the definition of a ‘block path’:

Definition 2.11 (Section 8.1 of [7]). Suppose that G = (V, E) is a graph. If B is a block of G, then we call B an isolated block of G if it has no cut-vertices. If B contains exactly one cut-vertex, it is an end block of G. Finally, for x, y ∈ V with x ≠ y, we call the graph H an xy-block path if H is either an isolated block by itself, or else if H has exactly two end blocks, say B_1 and B_2, with the additional requirement that x is not a cut-vertex of B_1 and y is not a cut-vertex of B_2.

Theorem 2.12 (Thm. 2.18 of [11]). Suppose that G = (V, E) is a connected graph and let w ∈ ℂ^{R(G)}. Assume that there exists some a ∈ ℝ^V_{>0} such that the inequality
\[
  \sum_{u \in U} \; \sum_{A \in \mathcal{B}_{u,v}(U)} e^{\sum_{y \in V(A) \setminus \{u\}} a_y} \prod_{B \text{ block of } G[A]} |w_B| \;\leq\; e^{a_v} - 1 \tag{2.11}
\]
is satisfied for all U ⊆ V such that G[U] is connected, and all v ∈ V\U such that G[U ∪ {v}] is connected, where
\[
  \mathcal{B}_{u,v}(U) = \{ A \subseteq E \setminus E(U) : G[A] \text{ is a } vu\text{-block path and } V(A) \cap U = \{u\} \}.
\]
Then Z_G(w) ≠ 0.

Figure 2.4: Illustration of a block path from the collection B_{u,v}(U).

By using (2.10), we can formulate a corollary that gives a zero-free region for the k-flow polynomial. Since we substitute ξ_k(B) x^{|E(B)|} for |w_B|, we may use the multiplicativity of ξ_k over blocks to simplify the product in (2.11) again. Moreover, we let a_v = a for

some fixed a > 0 for all v ∈ V , so that the terms of the inner sum no longer depend on u, and we can get rid of the outer sum (this is inequality (3.1) of [11]):

Corollary 2.13. Suppose that G = (V, E) is a connected graph and let x ∈ ℂ and k ≥ 2. Assume that there exists some a > 0 such that the inequality
\[
  \sum_{A \in \mathcal{B}_{v}(U)} (e^{a})^{|V(A)| - 1} \cdot \xi_k(G[A]) \cdot |x|^{|A|} \;\leq\; e^{a} - 1 \tag{2.12}
\]
is satisfied for all U ⊆ V such that G[U] is connected, and all v ∈ V\U such that G[U ∪ {v}] is connected, where
\[
  \mathcal{B}_{v}(U) = \{ A \subseteq E \setminus E(U) : \exists u \in U \text{ s.t. } G[A] \text{ is a } vu\text{-block path and } V(A) \cap U = \{u\} \}.
\]
Then F_k(G; x) ≠ 0.

Stam [11] gives two methods for finding an explicit zero-free region for F_k around 0 ∈ ℂ that depend on the maximum degree of the graph, by bounding the left-hand side of (2.12). The first method relies on a result from Jackson & Sokal [7, Proposition 8.1] that can be used to bound a certain generating function for |B_v(U)|. The other method improves this result for k = 2 by the observation that the block path G[A] has a unique nowhere-zero 2-flow if and only if G[A] is an Eulerian graph.

Unfortunately, this approach will not work for the k-tension polynomial, or at least not directly. We cannot write T_k as a block polynomial like we did in (2.10) for F_k: whereas a flow φ on G[F] remains a flow if we extend it to φ′ on G by letting φ′(e) = 0 if e ∉ F, the graph G can have cycles that G[F] did not have, hence the tension condition

Figure 2.5: This tension on G\e does not (trivially) extend to a tension on G.

may become violated if we try to extend a tension on G[F] to a tension on G in the same way (see Figure 2.5). However, we can write T_k in a way similar to (2.10), by rewriting it as we did in (2.4). Moreover, the number of nowhere-zero tensions is multiplicative over blocks as well:
\[
  T_k(G; x) = \sum_{F \subseteq E} \tau_k(G/F^c)\, x^{|F|}
            = \sum_{F \subseteq E} \;\; \prod_{B \text{ block of } G/F^c} \tau_k(B)\, x^{|E(B)|}. \tag{2.13}
\]

The problem is that the expression (2.13) is not a block polynomial, because the blocks of G/Fc are generally not the same as the blocks of G[F ]. In the next section, we will describe how Tk can be written as a block polynomial (or more specifically as a k-flow

polynomial) in the case of G being a planar graph.

2.3 Tension-flow duality for planar graphs

In this final section we will show how tensions and flows are related via duality for planar graphs. The definitions and arguments in this section are rather informal and lack a lot of detail; for a more comprehensive background on the subject of planar graphs and duality, see for instance [8].

Definition 2.14. Suppose that G is a graph. If there exists an embedding of the graph G in the plane R2 in such a way that the edges of G do not cross, then we call G a planar graph. Whenever we talk about the plane graph G, we are talking about the graph G in combination with a fixed embedding of G in the plane.

A plane graph G divides the plane into regions that we call the faces of the plane graph G. By definition, every edge of G is adjacent to at most two faces.

Definition 2.15. The dual of a plane graph G is a graph G* that is defined as follows:
• the faces of G are the vertices of G*;
• there is an edge e* between distinct vertices of G* if the corresponding faces in G share an edge e;
• for every bridge f of G, there is a loop f* on the vertex of G* that corresponds to the face adjacent to f (in a way, a face is adjacent to itself if there is a bridge that lies inside this face).

Figure 2.6: A plane graph G and its dual.

If we construct G∗ like in Figure 2.6, we can see that G∗ is again a plane graph. In turn, every vertex of G corresponds to a unique face of G∗, and if two faces of G share an adjacent edge e, then the vertices that correspond to those faces are connected by the edge e∗ in G∗. Hence the dual of G∗ is G. It is clear that G and G∗ have bijective edge sets; every edge e of G corresponds to a unique edge e∗ of G∗.

As can be seen in Figure 2.7, different embeddings of the same graph G can lead to dual plane graphs that are non-isomorphic. Any graph G∗ that has a planar embedding that is dual to a planar embedding of G, will be called a planar dual of G.

In Figure 2.8, we can see that deleting an edge e in a graph G is dual to contracting e∗in G∗. This relation between deletion and contraction in graphs that are planar duals of each other is well-known:

Proposition 2.16. Suppose that G = (V, E) is a planar graph and let G∗ = (V∗, E∗) be a planar dual of G. Then G∗/F∗ is a planar dual of G\F for all F ⊆ E.

Now suppose that G = (V, E) is a plane graph, and that an orientation has been assigned to its edges, turning G into a directed plane graph $\vec{G}$. For a dual G* = (V*, E*) of G, we will assign an orientation to every edge as well. If e = (u, v) is a directed edge of G, then the edge e* of G* crosses e when we look at the planar embeddings of G and G*. Let the orientation of e* be such that when traversing the edge e from u to v, the edge e* is directed from the right of the crossing to the left of the crossing (see Figure 2.9). We will call the resulting directed graph a directed planar dual of $\vec{G}$, and denote it by $\vec{G}^*$.

Figure 2.8: Deletion and contraction are dual operations.

The duality between flows and tensions was first suggested by Tutte [13]. A comprehensive proof of the following result is for example given by Diestel [4, Lemma 6.5.2] (Diestel talks about circulations rather than flows, a clever notion that does not rely on the orientation of the graph).

Lemma 2.17. Suppose that G = (V, E) is a directed planar graph, and suppose that G∗= (V∗, E∗) is a directed planar dual of G. The map ψ : E → Zk is a k-tension on G

if and only if the map φ : E∗→ Zk defined by φ(e∗) = ψ(e) is a k-flow on G∗.

See Figure 2.9 for an example. As a direct consequence of Lemma 2.17, we have the following:

Corollary 2.18. Suppose that G is a planar graph and let G∗ be a planar dual of G. Then

Tk(G; x) = Fk(G∗; x)

for all x ∈ C.

This means that we can apply Corollary 2.13 to G*, and use the method from [11] to find a zero-free region for T_k.²

The problem is that this only works for planar graphs: non-planar graphs do not have these duals. However, at the cost of some of its structure, any graph can be turned into what is called a matroid and, as we will see, every matroid does have a ‘dual’. Taking such a dual of a matroid that is based on a planar graph, corresponds with taking the dual of the planar graph first, and then turning this dual graph into a matroid. In some sense, this provides a way to extend the notion of duality to all graphs. In the remainder of this thesis, we will generalize Theorem 2.12 in such a way that it can be applied to matroids (Chapter 4). We then use this extension, in combination with matroid duality, to formulate a corollary similar to Corollary 2.13, but for the k-tension polynomial rather

² A caveat to this approach is that the zero-free region for F_k(G*; x) that [11] provides depends on the maximum degree ∆ of G*, which is not an invariant of G. In addition to [7, Proposition 8.1] (which is used in the method of [11]), Jackson & Sokal provide other results that rely on what is called the 'maxmaxflow' of the graph, rather than its maximum degree. The maxmaxflow depends on the cocycles of the graph, which means that the maxmaxflow of G* is an invariant of G. It is not clear if these other results from [7] can be used to adapt the method from [11] in order to overcome this problem.

Figure 2.9: A 4-tension on G is the same as a 4-flow on G*.

than the k-flow polynomial (Chapter 5). First, in the next chapter, we will give a short introduction to matroid theory.


3 A brief introduction to matroid theory

This chapter mainly consists of definitions and results that are given in the book Matroid Theory by James G. Oxley [9]. Readers who are familiar with matroid theory and consider skipping this chapter are advised to at least read Lemmas 3.13 and 3.14, and Proposition 3.29. These results, although quite straightforward, are not explicitly mentioned in Oxley's book.

3.1 Basic notions

Notation. In the remainder of this thesis, when dealing with set operations, we will often be required to add or remove singletons; we will use the notation A ∪ x as shorthand for A ∪ {x} and we write A − x instead of A\{x} or A − {x}.

Definition 3.1. Suppose that E is a finite set, and that C is a collection of subsets of E. The pair M = (E, C) is a matroid on E if the following conditions on C are satisfied:
(C1) ∅ ∉ C;
(C2) if C_1, C_2 ∈ C and if C_1 ⊆ C_2, then C_1 = C_2;
(C3) if C_1, C_2 ∈ C such that C_1 ≠ C_2, and if e ∈ C_1 ∩ C_2, then there is a C_3 ∈ C such that C_3 ⊆ (C_1 ∪ C_2) − e.

We call E the ground set of the matroid M and the members of C are called circuits. When dealing with multiple matroids, or when E and C are not explicitly specified for some matroid M , we will denote these collections by E(M ) and C(M ).

One of the most straightforward classes of matroids arises from graphs: for a graph G = (V, E), take E as the ground set, and let C be the collection of cycles of G (or to be more precise, let every member of C consist of the edges of a cycle of G). It is not difficult to see that (C1) and (C2) are satisfied. By considering some basic examples, it should be intuitive that (C3) also holds, although it requires a small argument, see [9, Proposition 1.1.7]. The resulting matroid is called a cycle matroid and it is denoted by M (G). Note that different graphs may produce the same cycle matroid (see Figure 3.1). The condition (C3) is called the Circuit Elimination Axiom. In this thesis, we will make use of the following variation on (C3) (this is Proposition 1.4.11 of [9]):

Proposition 3.2 (Strong Circuit Elimination Axiom). If C is the collection of circuits of a matroid M , then the following condition is satisfied:

Figure 3.1: A pair of non-isomorphic graphs that have the same cycle matroid.

(C1)’ if C1, C2 ∈ C and e ∈ C1∩ C2 and f ∈ C1\C2, then there exists a circuit C3 such

that f ∈ C3⊆ (C1∪ C2) − e.

Because (C3)’ is stronger than (C3), the above proposition implies that we can replace (C3) by (C3)’ in the definition of a matroid.

If M1 and M2 are matroids, then M1 and M2 are isomorphic if there is a bijection

φ : E(M1) → E(M2) such that X ⊆ E(M1) is a circuit of M1 if and only if φ(X) is a

circuit of M2. A matroid M is called graphic if it is isomorphic to the cycle matroid of

some graph. In particular, every graphic matroid is isomorphic to the cycle matroid of a connected graph: after all, if M ≅ M(G), then we can construct a connected graph G′ by selecting a vertex from each component of G and identifying these vertices as a single vertex, so that M(G′) = M(G). For an example of a class of matroids that are not (necessarily) graphic, see the class of uniform matroids as described in Section 1.2 of [9]. Like multigraphs, matroids have loops and parallel elements:

Definition 3.3. For a matroid M , let e ∈ E(M ). If {e} is a circuit, then e is called a loop. If f, g ∈ E(M ) are such that {f, g} is a circuit (of size 2), then f and g are parallel in M .

As a consequence of (C3), the relation of being parallel is a transitive relation.

Definition 3.4. For a matroid M , a set I ⊆ E(M ) is called independent if it does not contain any circuits of M as a subset. The collection of independent sets of M is denoted by I(M ). Subsets that are not in this collection are called dependent.

In a cycle matroid M(G), the independent sets correspond to the forests of G. The concept of a matroid generalizes the notion of linear independence. To illustrate this, we can build a matroid M[A] on a matrix A over some field F, by letting E(M) be the set that contains all the columns of A. Then any subset of linearly independent columns is an independent set of M, and the collection of circuits of M consists of all subsets of columns that are minimally dependent. Such a matroid M[A] is called a vector matroid. Any matroid that is isomorphic to such a matroid M[A] is called F-representable.

Proposition 3.5. The collection of independent sets I of a matroid M satisfies the following properties:

(I1) ∅ ∈ I;
(I2) if I ∈ I and if I′ ⊆ I, then I′ ∈ I;
(I3) if I, J ∈ I, and if |I| < |J|, then there is some e ∈ J\I such that I ∪ e ∈ I.

Here, (I1) and (I2) are trivial. For (I3), see the proof of Theorem 1.1.4 of [9]. It turns out that for a finite set E and a collection of subsets C, the pair (E, C) is a matroid if and only if the collection I = {I ⊆ E : C ⊈ I for all C ∈ C} satisfies the conditions (I1)–(I3), hence we can also define a matroid by its independent sets rather than by its circuits (see Section 1.1 of [9]). Note that X ⊆ E(M) is a circuit if and only if X is minimally dependent, that is to say, X − e is independent for every e ∈ X.

Definition 3.6. A basis of a matroid M is a maximal independent subset of E(M ). The collection of bases of M is denoted by B(M ).

In a cycle matroid M (G), the bases correspond to the spanning forests of G.

Proposition 3.7. The collection of bases B of a matroid M satisfies the following properties:

(B1) B is non-empty;

(B2) if B1, B2 ∈ B, then for every element x ∈ B1\B2, there exists a y ∈ B2\B1 such

that (B1− x) ∪ y ∈ B.

Clearly (B1) follows from (I1). For (B2), see Lemma 1.2.2 of [9]. Note that the members of B(M) are equicardinal: for any two bases B_1 and B_2, we have |B_1| = |B_2| (as a consequence of (B2) and the maximality of each basis). Like with the independent sets, it turns out that for a finite set E and a collection of subsets C, the pair (E, C) is a matroid if and only if the collection
\[
  \mathcal{B} = \{ B \subseteq E : C \nsubseteq B \text{ for all } C \in \mathcal{C} \text{ and } B \text{ is maximal} \}
\]
satisfies (B1) and (B2), hence we can also define a matroid by its bases rather than by its circuits (see Section 1.2 of [9]).

Given a matroid M and a subset X, write

C|X = {C ⊆ X : C ∈ C(M )}.

We define M |X := (X, C|X) to be the restriction of M to X; it is easy to check that this is again a matroid. When we are talking about a basis B of M |X, we often just say that B is a basis of X.

Definition 3.8. For a matroid M, the rank r(X) of a subset X ⊆ E(M) is equal to the size of any basis B_X of X. The function r : 2^{E(M)} → ℤ_{≥0} : X ↦ r(X) is called the rank function of M, and we occasionally write r_M instead of r to avoid confusion whenever multiple matroids are involved. The value r(M) := r(E(M)) is the rank of the matroid M.

Note that the rank function is well-defined because the bases of M|X are equicardinal. In a cycle matroid M(G), the rank of a subset of edges X corresponds with the 'rank' of the graph G[X] in the graph-theoretical sense: it is equal to |V(G[X])| − κ(G[X]). The following proposition is a collection of properties of the rank function, given in Sections 1.3 and 1.4 of [9].

Proposition 3.9. For a matroid M , let E = E(M ). Then (i) if X ⊆ Y ⊆ E, then r(X) ≤ r(Y );

(ii) if X ⊆ E and x ∈ E, then r(X) ≤ r(X ∪ x) ≤ r(X) + 1;

(iii) if C ⊆ E is a circuit, then r(C − e) = r(C) = |C| − 1 for all e ∈ C.

Definition 3.10. For a matroid M , the closure of a subset X ⊆ E(M ) is defined by cl(X) = {x ∈ E : r(X ∪ x) = r(X)}.

The following proposition is a collection of results from Section 1.4 of [9].

Proposition 3.11. For a matroid M, let E = E(M). Then
(i) if X ⊆ E, then X ⊆ cl(X);
(ii) if X ⊆ Y ⊆ E, then cl(X) ⊆ cl(Y);
(iii) if X ⊆ E, then cl(cl(X)) = cl(X);
(iv) if X ⊆ E and x ∈ cl(X), then cl(X ∪ x) = cl(X);
(v) if X ⊆ E, x ∈ E, and y ∈ cl(X ∪ x)\cl(X), then x ∈ cl(X ∪ y);
(vi) if X ⊆ E, then cl(X) = X ∪ {x ∈ E : there is a circuit C ⊆ X ∪ x such that x ∈ C}.

By the last property, for a cycle matroid M(G), the closure of a subset of edges X can be constructed by including every edge e that creates a cycle when it is added to G[X].

Definition 3.12. Let X ⊆ E(M) for some matroid M. We will use the notation X^c = E(M)\X for the complement of X in M.

A flat of a matroid M is any F ⊆ E(M ) for which cl(F ) = F . By Proposition 3.9(ii), if F is a flat, then r(F ∪ x) = r(F ) + 1 whenever x ∈ Fc. We finish this section with two short lemmas about flats that can be deduced from the properties given above.

Lemma 3.13. Suppose that M is a matroid, let F be a flat of M and f ∈ E(M). Then cl(F ∪ f) = cl(F ∪ x) for every x ∈ cl(F ∪ f)\F.

Proof. Since

x ∈ cl(F ∪ f )\F = cl(F ∪ f )\ cl(F ),

we have f ∈ cl(F ∪ x) by Proposition 3.11(v). By using properties (ii) and (iii) from the same proposition, we then find

cl(F ∪ x) = cl(F ∪ {x, f }) = cl(F ∪ f ).

Lemma 3.14. Let F be a flat of a matroid M, and let F̂ = cl(F ∪ f) for some f ∈ F^c. Then if x, y ∈ F̂\F are two distinct elements, there is a circuit C ⊆ F ∪ {x, y} that contains both x and y.

Proof. Suppose that x, y ∈ F̂\F. Let B_F be a basis of F. Then B_F ∪ x is a basis of F ∪ x, because x ∉ F and F is a flat. Now using the properties from Proposition 3.9 and Proposition 3.11, we find
\[
  r(F \cup \{x, y\}) \leq r(\hat{F}) = r(\mathrm{cl}(F \cup f)) = r(F \cup f) = r(F) + 1,
\]
which implies that B_F ∪ {x, y} is dependent, hence there is a circuit C such that
\[
  y \in C \subseteq B_F \cup \{x, y\}
\]
(it must contain y because B_F ∪ x is independent). But since B_F ∪ y is independent, C cannot be contained in B_F ∪ y. Hence we must also have that x ∈ C. So C is a circuit in F ∪ {x, y} that contains both x and y.

3.2 Duality

For a matroid M , define

B∗(M ) = {Bc: B ∈ B(M )}.

It turns out that B∗(M ) satisfies (B1) and (B2). As mentioned above, a matroid can be defined by its collection of bases, so B∗(M ) is the collection of bases of a matroid on E(M ) (see Section 2.1 of [9]).

Definition 3.15. The dual M∗ of a matroid M is the unique matroid with ground set E(M ) and B∗(M ) as its collection of bases. The members of B∗(M ) are called the cobases of M . Similarly, C∗(M ) := C(M∗) is the collection of cocircuits, and I∗(M ) := I(M∗) is the collection of coindependent sets of M .

Naturally, the dual of M* is M** = M. The notion of duality for matroids coincides with duality for planar graphs: if G is a planar graph, and G* is a planar dual of G, then M*(G) ≅ M(G*). See [9, Section 2.3] and Figure 3.2: if T = (V, X) is a spanning tree of G (hence X is a basis of M(G)), then T* := G*[X^c] is a spanning tree of G* (hence X^c is a basis of M(G*)).

Figure 3.2: The complement of a spanning tree of G is a spanning tree of G*.

Definition 3.16. The corank function of a matroid M is the rank function of its dual M∗ and it is denoted by r∗.

The rank and corank of a matroid M are related to each other as follows:

Proposition 3.17 (Prop. 2.1.9 of [9]). Let M be a matroid and let X ⊆ E(M ). Then r∗(X) = |X| − r(M ) + r(Xc).
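For a cycle matroid these quantities are easy to compute: the rank of an edge set X is the size of a spanning forest of (V, X). The sketch below (my own illustration on an arbitrary small graph) computes r with a union-find forest count, obtains r* from Proposition 3.17, and checks that the bases of the dual are exactly the complements of the bases, as the definition of B*(M) prescribes.

```python
from itertools import combinations

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
ground = frozenset(E)

def rank(X):
    """Rank in the cycle matroid M(G): size of a spanning forest of (V, X)."""
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    r = 0
    for (u, w) in X:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
            r += 1
    return r

def corank(X):
    # Proposition 3.17: r*(X) = |X| - r(M) + r(E \ X)
    return len(X) - rank(ground) + rank(ground - frozenset(X))

subsets = [frozenset(c) for n in range(len(E) + 1) for c in combinations(E, n)]
bases = {X for X in subsets if rank(X) == len(X) == rank(ground)}
cobases = {X for X in subsets if corank(X) == len(X) == corank(ground)}
assert cobases == {ground - B for B in bases}
```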

Finally, we will be interested in what a flat of a matroid looks like in the dual of the matroid. For this, we have the following proposition:

Proposition 3.18 (Exercise 2.1.13 of [9]). Suppose that M is a matroid. A subset F ⊆ E(M ) is a flat of M if and only if Fc is a (possibly empty) union of cocircuits of M .

Proof. We have
\[
\begin{aligned}
  F \text{ is a flat of } M
  &\iff r(F \cup x) = r(F) + 1 \text{ for all } x \in F^c \\
  &\iff r^*((F \cup x)^c) = r^*(F^c) \text{ for all } x \in F^c \\
  &\iff r^*(F^c - x) = r^*(F^c) \text{ for all } x \in F^c \\
  &\iff \text{every } x \in F^c \text{ is contained in a cocircuit } C^* \subseteq F^c \\
  &\iff F^c \text{ is a union of cocircuits of } M.
\end{aligned}
\]

Here the second step follows from Proposition 3.17. For the forward implication of the fourth step, suppose that B* is a cobasis of F^c − x; then B* is a cobasis of F^c since r*(F^c − x) = r*(F^c) by assumption. This implies that B* ∪ x contains a cocircuit, which must necessarily contain x. For the other implication, pick any x ∈ F^c; then x is an element of a cocircuit C* ⊆ F^c. Since C* − x is coindependent, it is a subset of some cobasis B* of F^c that necessarily does not contain x. Then B* is also a cobasis of F^c − x, which means that r*(F^c − x) = r*(F^c).


3.3 Minors

Like in graphs, we can delete and contract elements of a matroid M :

Definition 3.19. If M is a matroid and if X ⊆ E(M ), the deletion of X from M is the matroid given by M \X := M |Xc. The contraction of X from M is the matroid given by M/X := (M∗\X)∗.

So by definition, deletion and contraction are dual operations. Since the deletion of X from M is nothing more than the restriction of M to the complement of X, the collection of circuits of M \X is given by

\[
  \mathcal{C}(M \backslash X) = \{ C \in \mathcal{C}(M) : C \subseteq X^c \}. \tag{3.1}
\]
The collection of circuits of M/X on the other hand is the result of Proposition 3.1.26 of [9]:
\[
  \mathcal{C}(M/X) = \{ C \backslash X : C \in \mathcal{C}(M) \text{ and } C \backslash X \text{ is non-empty and minimal} \}.
\]
Here, the additional conditions on C\X (non-emptiness and minimality) guarantee that (C1) and (C2) are satisfied. After all, even though for C_1, C_2 ∈ C(M) we cannot have C_1 ⊊ C_2, it may be that C_1\X ⊊ C_2\X; however, this would mean that C_2\X is not minimal, hence it is not a member of C(M/X) in that case.

By the following proposition, the order in which we delete and contract subsets is irrelevant:

Proposition 3.20 (Prop. 3.1.26 of [9]). Suppose that M is a matroid and let X and Y be disjoint subsets of E(M ). Then

(i) (M\X)\Y = M\(X ∪ Y) = (M\Y)\X;
(ii) (M/X)/Y = M/(X ∪ Y) = (M/Y)/X;
(iii) (M\X)/Y = (M/Y)\X.

This means that any sequence of (disjoint) contractions and deletions from a matroid M can be written in the forms M \X/Y or M/Y \X, where X ⊆ E(M ) is the disjoint union of all subsets that have been deleted, and Y ⊆ E(M ) is the disjoint union of all subsets that have been contracted. Any matroid of this form is called a minor of M . Furthermore, it is important to note that if N = M \X/Y , then N∗= M∗/X\Y .

As explained in Section 3.2 of [9], the deletion/contraction of a subset X in a cycle matroid M (G) corresponds with the deletion/contraction of the subset of edges X in the graph G = (V, E); we have that

M (G)\X/Y = M (G\X/Y ) whenever X and Y are disjoint subsets of E.

The rank function on a deletion minor M\T of a matroid M is clearly given by
\[
  r_{M \backslash T}(X) = r_M(X) \tag{3.2}
\]

for all X ⊆ Tc. After all, the matroid (M \T )|X is the same as the matroid M |X. The rank function on a contraction minor can be derived from (3.2) by using the fact that deletion and contraction are dual operations:

Proposition 3.21 (Prop. 3.1.6 of [9]). If M/T is a contraction minor of a matroid M , then for all X ⊆ Tc:

rM/T(X) = rM(X ∪ T ) − rM(T ).

3.4 Connectedness

Define the following relation on the elements of a matroid M : for x, y ∈ E(M ), we say x ∼ y if and only if x = y or if there is a circuit of M that contains both x and y. Proposition 3.22 (Prop. 4.1.2 of [9]). The relation ∼ is an equivalence relation on E(M ).

Definition 3.23. The equivalence classes of E(M ) with respect to the relation ∼ are called the components of M . We say that the matroid M is 2-connected, or often just connected, if E(M ) is a component of M . That is to say, M is 2-connected if and only if for any pair of distinct elements x and y of E(M ), there is a circuit that contains both x and y.

Note that a non-empty subset B ⊆ E(M ) is a component of M if and only if M |B is 2-connected and any circuit of M either lies completely inside B or completely outside B. The reason that we talk about 2-connectedness is that it more or less coincides with the notion of 2-connectedness in graph theory:

Proposition 3.24 (Prop. 4.1.1 of [9]). Let G be a loopless graph without isolated vertices. If G has at least three vertices, then G is 2-connected if and only if, for every pair of distinct edges of G, there is a cycle containing both.

Every 2-connected graph is a block of itself because it is connected, maximal, and has no cut-vertices. In combination with the proposition above, this means that a graph G is a block if and only if M(G) is 2-connected (for this, see our definition of a block in Chapter 1; note that we include loops in our definition). More generally, the (edge sets of the) blocks of G are precisely the components of M(G).

As mentioned before, there is no meaningful interpretation of the (1-)connectedness of a graph in the corresponding cycle matroid: every cycle matroid M(G) of a disconnected graph G is also the cycle matroid of a connected graph G′. This means that there cannot be any ambiguity whenever we talk about a 'connected matroid' instead of a '2-connected matroid'.


Proposition 3.25 (Cor. 4.2.8 of [9]). Suppose that M is a matroid. Then M is connected if and only if M∗ is connected.

This result follows directly from the following lemma:

Lemma 3.26 (Prop. 4.2.9 of [9]). If x and y are distinct elements of a circuit C of a matroid M , then M has a cocircuit C∗ such that C ∩ C∗= {x, y}.

We actually want to prove something stronger: that the components of M and M∗ are the same. We will make use of the following definition:

Definition 3.27. A union of components of a matroid M is called a separator. Clearly ∅ and E(M ) are separators. Any other separator is a non-trivial separator.

Alternatively, we could say that T is a separator of M if and only if C(M ) = C(M |T ) ∪ C(M |Tc),

that is, any circuit of M lies either in T or in Tc.

Proposition 3.28 (Exercise 4.2.1 of [9]). For a matroid M , let T ⊆ E(M ). Then the following are equivalent:

(i) T is a separator of M ;

(ii) M/T = M\T ;

(iii) M∗/T = M∗\T ;

(iv) T is a separator of M∗.

Proof. It suffices to show that (i) ⇔ (ii), because (ii) ⇔ (iii) follows directly from duality since (M/T)∗ = M∗\T and (M\T)∗ = M∗/T, and (iii) ⇔ (iv) is the same as (i) ⇔ (ii), but for the dual matroid.

Assume first that T is a separator of M. We will show that C(M/T) = C(M\T). If C ∈ C(M/T), then C = D\T for some D ∈ C(M). Since C is nonempty, we must have D ⊆ Tc because T is a separator. So C = D ∈ C(M\T).

Now pick any C ∈ C(M\T). Then C ∈ C(M) and C ⊆ Tc. So C\T = C. If there is a circuit D ∈ C(M) such that D\T ⊊ C (so if C = C\T is not minimal), then D ∩ T ≠ ∅ because otherwise D ⊊ C. But then D ⊆ T because T is a separator, hence D\T is empty. This means that C ∈ C(M/T). We conclude that C(M/T) = C(M\T), and hence M/T = M\T.

For the other implication, assume that M/T = M\T. If there is a circuit C that intersects both T and Tc, then C\T is non-empty. So there is a circuit D ∈ C(M) such that

∅ ⊊ D\T ⊆ C\T,

and such that D\T ∈ C(M/T). We must have D ∩ T ≠ ∅, as otherwise D ⊊ C. But then D\T ⊊ D, so D\T ∉ C(M\T), which is a contradiction. We conclude that any circuit of M is either a subset of T or a subset of Tc. Hence T is a separator of M. This completes the proof.
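The equivalence of (i) and (ii) can also be verified mechanically on a small example. The Python sketch below is a hypothetical illustration with ad hoc names; it uses the standard description of C(M/T) as the minimal non-empty sets of the form C\T with C a circuit of M (the same description used in the proof above), and checks for every T in the cycle matroid of two triangles joined by a bridge that T is a separator exactly when C(M\T) = C(M/T).

```python
from itertools import combinations

# Hypothetical check of Proposition 3.28 (i) <=> (ii) on the cycle matroid of
# two triangles joined by a bridge.
EDGES = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
E = list(range(len(EDGES)))

def rank(X):
    verts = {v for i in X for v in EDGES[i]}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = len(verts)
    for i in X:
        a, b = find(EDGES[i][0]), find(EDGES[i][1])
        if a != b:
            parent[a] = b
            comps -= 1
    return len(verts) - comps

dep = [frozenset(X) for r in range(1, len(E) + 1)
       for X in combinations(E, r) if rank(X) < len(X)]
CIRCUITS = [C for C in dep if not any(D < C for D in dep)]

def is_separator(T):
    return all(C <= T or C.isdisjoint(T) for C in CIRCUITS)

def circuits_delete(T):
    """C(M\\T): the circuits of M that avoid T."""
    return {C for C in CIRCUITS if C.isdisjoint(T)}

def circuits_contract(T):
    """C(M/T): the minimal non-empty sets of the form C \\ T with C a circuit of M."""
    diffs = {C - T for C in CIRCUITS if C - T}
    return {D for D in diffs if not any(D2 < D for D2 in diffs)}

for t in range(len(E) + 1):
    for T in map(frozenset, combinations(E, t)):
        assert is_separator(T) == (circuits_delete(T) == circuits_contract(T))
print("T is a separator  <=>  M\\T = M/T, for every T in this example.")
```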


With this, we can prove the final result:

Proposition 3.29. Suppose that M is a matroid and let B ⊆ E(M). Then B is a component of M if and only if B is a component of M∗.

Proof. Assume that B is a component of M . We will show that it is then a component of M∗ as well. Naturally, the other implication will follow by duality. By definition, B is a separator of M . By Proposition 3.28, B is a separator of M∗ as well, in addition to being non-empty. Therefore, to show that B is a component of M∗, it suffices to show that M∗|B is connected. Let x and y be distinct elements of B. Then by definition there is a circuit C ∈ C(M |B) that contains x and y. Using Lemma 3.26, we find that there must be a cocircuit C∗ ∈ C∗(M |B) that contains both x and y. Using duality, we find

C∗(M |B) = C∗(M \Bc) = C∗(M/Bc) = C(M∗\Bc) = C(M∗|B),

where the second equality follows from Proposition 3.28 because Bc is also a separator of M . This means that C∗ is a circuit of M∗|B that contains x and y, hence M∗|B is connected, completing the proof.
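Proposition 3.29 lends itself to the same kind of brute-force check: since r∗(X) = |X| + r(E\X) − r(E), the circuits of M∗ can be computed as the minimal dependent sets for the dual rank, without constructing M∗ explicitly. The hypothetical Python sketch below (ad hoc names, small example only) confirms that M and M∗ have the same components for the cycle matroid of two triangles joined by a bridge.

```python
from itertools import combinations

# Hypothetical check of Proposition 3.29: the components of M and of M* coincide.
EDGES = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
E = frozenset(range(len(EDGES)))

def rank(X):
    verts = {v for i in X for v in EDGES[i]}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = len(verts)
    for i in X:
        a, b = find(EDGES[i][0]), find(EDGES[i][1])
        if a != b:
            parent[a] = b
            comps -= 1
    return len(verts) - comps

def dual_rank(X):
    """r*(X) = |X| + r(E \\ X) - r(E)."""
    return len(X) + rank(E - frozenset(X)) - rank(E)

def circuits(rk):
    """Minimal dependent sets for the rank function rk."""
    dep = [frozenset(X) for r in range(1, len(E) + 1)
           for X in combinations(E, r) if rk(X) < len(X)]
    return [C for C in dep if not any(D < C for D in dep)]

def components(circs):
    comp = {x: frozenset({x}) for x in E}
    for C in circs:
        merged = frozenset().union(*(comp[x] for x in C))
        for x in merged:
            comp[x] = merged
    return set(comp.values())

print(components(circuits(rank)) == components(circuits(dual_rank)))   # True
```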


4 Zero-free regions for the component polynomial of a matroid

In this chapter, we generalize Theorem 2.12 to matroids. The main result is presented in Section 4.3. Most of the work will go into formulating the actual statement of the theorem.

4.1 The component polynomial of a matroid

In Chapter 2 we encountered the block polynomial of a graph. For matroids, this generalizes to the ‘component polynomial’:

Definition 4.1. Suppose that M is a matroid, let E = E(M) and write

R(M) = {K ⊆ E : K is a component of M|F for some F ⊆ E}.

The component polynomial of M is a multivariate polynomial defined by

Z_M(w) = Σ_{X⊆E} Π_{K cpts. of X} w_K,

where w = (w_K)_{K∈R(M)} is the array of variables. Note that ‘cpts. of X’ is shorthand for ‘components of M|X’.
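To make the definition concrete, the hypothetical Python sketch below evaluates Z_M by brute force for the cycle matroid of the triangle K3, with every variable w_K specialized to a common value w. The components of M|X are read off from the circuits; with all weights equal one finds Z_{M(K3)}(w) = 1 + 4w + 3w², so the printed value at w = 2 is 21. The sketch is only an illustration of the definition and all names are ad hoc.

```python
from itertools import combinations

# Hypothetical brute-force evaluation of the component polynomial of M(K3),
# with every variable w_K specialized to the same value.
EDGES = [(0, 1), (1, 2), (0, 2)]
E = list(range(len(EDGES)))

def rank(X):
    verts = {v for i in X for v in EDGES[i]}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = len(verts)
    for i in X:
        a, b = find(EDGES[i][0]), find(EDGES[i][1])
        if a != b:
            parent[a] = b
            comps -= 1
    return len(verts) - comps

def circuits_inside(X):
    dep = [frozenset(Y) for r in range(1, len(X) + 1)
           for Y in combinations(X, r) if rank(Y) < len(Y)]
    return [C for C in dep if not any(D < C for D in dep)]

def components_of(X):
    """Components of M|X: classes of the circuit relation; coloops give singleton classes."""
    comp = {x: frozenset({x}) for x in X}
    for C in circuits_inside(X):
        merged = frozenset().union(*(comp[x] for x in C))
        for x in merged:
            comp[x] = merged
    return set(comp.values())

def Z(weight):
    """Z_M(w) = sum over X of the product of weight(K) over the components K of M|X."""
    total = 0
    for r in range(len(E) + 1):
        for X in combinations(E, r):
            prod = 1
            for K in components_of(X):
                prod *= weight(K)
            total += prod
    return total

print(Z(lambda K: 2))   # 21 = 1 + 4w + 3w^2 at w = 2
```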

If G is a simple graph, then Z_G = Z_{M(G)}. Our goal is to extend Theorem 2.12 to Z_M for general matroids M. The main result (Theorem 4.12) will be about loopless matroids, so throughout this chapter we will always assume that a matroid M is loopless. The greatest challenge will be to define an analogue to the collections B_{u,v}(U) of block paths as described in Theorem 2.12. As in [11], we prove our main theorem by induction. To be able to do this induction, we need to define a slightly different polynomial Z_{M,F} for subsets F of the matroid M. We first define a collection A_F of certain subsets of E(M) that do not intersect F:

Definition 4.2. For a matroid M and a subset F ⊆ E(M), define

A_F = {A ⊆ Fc : A is a separator of M|(F ∪ A)},

or in other words, A_F consists of all subsets A, disjoint from F, such that any circuit C ⊆ F ∪ A lies either completely inside F or completely inside A. Another way of saying this would be that C meets only F or only A.


Note that F1 ⊆ F2 implies A_{F2} ⊆ A_{F1}: if A ∈ A_{F2}, then for any circuit C ⊆ F1 ∪ A we have C ⊆ F2 ∪ A, hence C lies completely inside A, or C ∩ A = ∅ (so C ⊆ F1), meaning that A ∈ A_{F1}. By the following proposition, we can restrict our attention to subsets F that are flats:

Proposition 4.3. If M is a (loopless) matroid and if F ⊆ E(M), then A_F = A_{cl(F)}.

Proof. We already know from the discussion above that A_{cl(F)} ⊆ A_F. For the other inclusion, suppose that A ∈ A_F. Then A ⊆ Fc and we want to show that also A ⊆ cl(F)c. If A contains an element f of cl(F), then f is contained in a circuit C ⊆ F ∪ f. Because we assume that M has no loops, C must contain more than one element, meaning that C meets both A and F, which contradicts the fact that A ∈ A_F. We conclude that indeed A ⊆ cl(F)c.

Now assume that there exists a circuit C1 ⊆ cl(F) ∪ A that contains some x ∈ A, such that C1 also meets cl(F), and assume that among all such circuits, C1 is such that |C1 ∩ (cl(F)\F)| is minimal. We know that C1 ∩ (cl(F)\F) ≠ ∅ because A ∈ A_F. Pick y ∈ C1 ∩ (cl(F)\F), then there is a circuit C2 such that y ∈ C2 ⊆ F ∪ y (this follows from Proposition 3.11(vi)). Clearly x ∉ C2, so by the Strong Circuit Elimination Axiom (Proposition 3.2) there is a circuit C3 that satisfies x ∈ C3 ⊆ (C1 ∪ C2) − y. Now C3 ∩ F ≠ ∅, since otherwise C3 ⊊ C1. So C3 ⊆ cl(F) ∪ A is a circuit that contains x and also meets F. But

|C3 ∩ (cl(F)\F)| < |(C1 ∪ C2) ∩ (cl(F)\F)| = |C1 ∩ (cl(F)\F)|,

which is a contradiction because we assumed that |C1 ∩ (cl(F)\F)| > 0 was minimal under these conditions. We conclude that such a circuit C1 does not exist, meaning that A ∈ A_{cl(F)}.

Definition 4.4. For a matroid M and F ⊆ E(M), define

Z_{M,F}(w) = Σ_{A∈A_F} Π_{K cpts. of A} w_K.

Since A_E = {∅} and A_∅ = 2^{E(M)}, it follows that Z_{M,E} ≡ 1 and Z_{M,∅} = Z_M. The goal is to use these Z_{M,F} to find a zero-free region inductively. In fact, we will be proving something stronger: given the conditions as stated in the theorem, we will prove that Z_{M,F}(w) ≠ 0 for all F ⊆ E(M), and we will do this by starting with F = E(M) (for which the statement trivially holds), and working our way down to F = ∅ by an induction argument on |Fc|.
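As a sanity check of these identities, the hypothetical Python sketch below computes A_F and Z_{M,F} by brute force for the cycle matroid of the triangle K3, whose only circuit is the full edge set, with every w_K specialized to w = 2. It confirms that Z_{M,∅} = Z_M and Z_{M,E} = 1, and for the flat F = {0} it finds A_F = {∅, {1}, {2}} and hence Z_{M,F} = 1 + 2w = 5. All names are ad hoc.

```python
from itertools import combinations

# Hypothetical brute-force check on M(K3); its only circuit is the full edge set {0,1,2}.
E = frozenset({0, 1, 2})
CIRCUITS = [frozenset({0, 1, 2})]
w = 2                                      # common value for all variables w_K

def subsets(S):
    S = list(S)
    return [frozenset(A) for r in range(len(S) + 1) for A in combinations(S, r)]

def components_of(X):
    """Components of M|X. For M(K3): the full circuit if it lies in X, singletons otherwise."""
    X = frozenset(X)
    comps = [C for C in CIRCUITS if C <= X]
    covered = frozenset().union(*comps) if comps else frozenset()
    return comps + [frozenset({x}) for x in X - covered]

def A_F(F):
    """All A contained in the complement of F such that every circuit inside F + A lies in F or in A."""
    F = frozenset(F)
    return [A for A in subsets(E - F)
            if all(C <= F or C <= A for C in CIRCUITS if C <= F | A)]

def Z_F(F):
    return sum(w ** len(components_of(A)) for A in A_F(F))

Z_M = sum(w ** len(components_of(X)) for X in subsets(E))
print(Z_F(frozenset()) == Z_M)     # True:  Z_{M,emptyset} = Z_M
print(Z_F(E) == 1)                 # True:  Z_{M,E} = 1
print(Z_F(frozenset({0})))         # 5  (= 1 + 2w for the flat F = {0})
```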

Since the main theorem is about loopless matroids, we may assume that F is a flat (Proposition 4.3). In order to do the induction, we aim to write Z_{M,F} in terms of certain Z_{M,X_i} for which |X_i| > |F| (and hence Z_{M,X_i}(w) ≠ 0 by induction). We will do this as follows: fix an element f ∈ Fc, write F̂ = cl(F ∪ f), and split up the sum in the definition of Z_{M,F}:

Z_{M,F}(w) = Σ_{A∈A_F̂} Π_{K cpts. of A} w_K + Σ_{A∈A_F\A_F̂} Π_{K cpts. of A} w_K
           = Z_{M,F̂}(w) + Σ_{A∈A_F\A_F̂} Π_{K cpts. of A} w_K.    (4.1)

The term Z_{M,F̂}(w) is already in the desired form, but we still need to deal with the remaining sum on the right, which will require some work. Note that if A is an element of A_F\A_F̂, then A ≠ ∅ and either

(P1) A ∩ F̂ ≠ ∅; or

(P2) A ∩ F̂ = ∅, but there is a circuit C ⊆ F̂ ∪ A that meets both F̂ and A.

Keep in mind that A ∩ F̂ = A ∩ (F̂\F), since A ∈ A_F. For the same reason, the circuit C from (P2) meets F̂\F in particular. We can say that (P1) and (P2) are the properties that make A ∈ A_F ‘problematic’ in the sense that they ‘prevent’ A from being an element of A_F̂. In the next section, we will split up each A ∈ A_F\A_F̂ by removing the components that make A problematic; these components will form the elements of a collection that is essentially a collection of block paths if M = M(G) is a cycle matroid.

4.2 Splitting the elements of the collection A_F\A_F̂

An element of A_F is problematic if it satisfies one of the conditions (P1) and (P2) stated above. The collection B(F, f) that we define here is a disjoint union of two collections of subsets: one collection of subsets corresponds to (P1), the other collection corresponds to (P2):

Definition 4.5. Suppose that F is a flat of a matroid M. Let f ∈ Fc and write F̂ = cl(F ∪ f). Define B1(F, f) to be the collection of all B ∈ A_F satisfying

• B ∩ F̂ ≠ ∅;

• B is connected,

and let B2(F, f) denote the collection that contains all B ∈ A_F\{∅} that satisfy

• B ∩ F̂ = ∅;

• for every component K of B, there is a circuit C ⊆ F̂ ∪ B that meets both F̂ and K.

Finally, define B(F, f) = B1(F, f) ∪ B2(F, f).

Note that B(F, f) ⊆ A_F\A_F̂: the elements of B1(F, f) satisfy (P1) and the elements of B2(F, f) satisfy (P2). Moreover, since B ∈ A_F implies B ∩ F = ∅, we could have written B ∩ (F̂\F) instead of B ∩ F̂ in the definition above. The second condition on elements of B2(F, f) can be expressed in multiple different ways:


Figure 4.1: In blue a member of B1(F, f), in red a member of B2(F, f).
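In the spirit of Figure 4.1, the hypothetical Python sketch below enumerates B1(F, f) and B2(F, f) for the five-edge cycle matroid used above for Proposition 4.3, with the flat F = {e0} and f = e1, so that F̂ = cl(F ∪ f) = {e0, e1, e2}. The circuits are hard-coded and all names are ad hoc; each member of B1 contains an element of F̂, while the single member {e3, e4} of B2 is a path that closes a circuit through F̂.

```python
from itertools import combinations

# Hypothetical illustration of Definition 4.5. Graph: vertices {0,1,2,3} and edges
# e0=(0,1), e1=(1,2), e2=(0,2), e3=(2,3), e4=(0,3); its three cycles are hard-coded.
E = frozenset(range(5))
CIRCUITS = [frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({0, 1, 3, 4})]

def subsets(S):
    S = list(S)
    return [frozenset(A) for r in range(len(S) + 1) for A in combinations(S, r)]

def closure(X):
    """cl(X): X together with every e for which some circuit C satisfies e in C, C <= X + {e}."""
    X = frozenset(X)
    return X | {e for e in E - X if any(e in C and C <= X | {e} for C in CIRCUITS)}

def in_A(F, A):
    """A is in A_F: every circuit inside F + A lies entirely in F or entirely in A."""
    return all(C <= F or C <= A for C in CIRCUITS if C <= F | A)

def components(X):
    comp = {x: frozenset({x}) for x in X}
    for C in CIRCUITS:
        if C <= frozenset(X):
            merged = frozenset().union(*(comp[x] for x in C))
            for x in merged:
                comp[x] = merged
    return set(comp.values())

F, f = frozenset({0}), 1
Fhat = closure(F | {f})                                   # cl({e0, e1}) = {e0, e1, e2}
A_F = [A for A in subsets(E - F) if in_A(F, A)]

B1 = [B for B in A_F if B & Fhat and len(components(B)) == 1]
B2 = [B for B in A_F if B and not (B & Fhat)
      and all(any(C <= Fhat | B and C & Fhat and C & K for C in CIRCUITS)
              for K in components(B))]

print(sorted(sorted(B) for B in B1))   # [[1], [2], [2, 3, 4]]
print(sorted(sorted(B) for B in B2))   # [[3, 4]]
```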

Lemma 4.6. Let M be a matroid and suppose that F ⊆ E(M) is a flat. Let f ∈ Fc and write F̂ = cl(F ∪ f). If B ∈ A_F\{∅} and B ∩ F̂ = ∅, then the following are equivalent:

(i) for every component K of B, there is a circuit C ⊆ F̂ ∪ B that meets both F̂ and K;

(ii) for every component K of B, there is a circuit C ⊆ F̂ ∪ B that meets both F̂\F and K;

(iii) B ∪ (F̂\F) is contained in a single component of F̂ ∪ B;

(iv) for every x ∈ B and g ∈ F̂\F, there is a circuit C ⊆ F̂ ∪ B that contains both x and g.

Proof. Clearly (i) and (ii) are equivalent because B ∈ A_F: any circuit C ⊆ F ∪ B cannot meet both F and a component K of B.

For (ii) ⇒ (iii), we know by Lemma 3.14 (and by the fact that components are equivalence classes) that F̂\F is contained in a single component L of F̂ ∪ B. Any component K of B must also be contained in L, since we know by assumption that there is a circuit C ⊆ F̂ ∪ B that meets both K and F̂\F. Hence B ∪ (F̂\F) ⊆ L.

The implication (iii) ⇒ (iv) follows by definition of elements being in the same component. Finally, (iv) ⇒ (ii) is also trivial: pick any x ∈ K, then there must be a circuit C ⊆ F̂ ∪ B that meets K in x and that meets F̂\F in some g ∈ F̂\F (which is non-empty, since f ∈ F̂\F).

These different versions of the second condition on the elements of B2(F, f) will be used throughout.

It turns out that every element of A_F\A_F̂ can be written as a (disjoint) union of an element B of B(F, f) and an element of A_{F∪B} (and vice versa). In Figure 4.1 we show what this looks like in a cycle matroid M(G) (recall that the components of M(G) are precisely the (edge sets of the) blocks of G). The set of blue (resp. red) edges is a member B of B1(F, f) (resp. B2(F, f)). For both illustrations, the set of black edges is a member of A_{F∪B}, where A ∈ A_F is the union of the colored part and the black part. To prove this for general matroids, we will make use of a small lemma:

Lemma 4.7. Let M be a matroid and suppose that F ⊆ E(M) is a flat. Let f ∈ Fc and write F̂ = cl(F ∪ f). If A ∈ A_F, and if x, y ∈ A ∩ F̂, then x and y are parallel elements.
