
Dimer Models on the Square Lattice

Tycho van Hoof

July 8, 2016

Bachelor Thesis Mathematics & Physics

Supervisors: dr. Raf Bocklandt & prof. dr. Bernard Nienhuis

Korteweg-de Vries Instituut voor Wiskunde

(2)

Abstract

The goal of this thesis is to study the dimer model on a square lattice. It begins with a more general introduction to dimer graphs. We then introduce the concept of a dimer model as it is used in statistical physics, along with the concept of a height function. Following this we shift our attention to a square lattice embedded in the torus. We define possible transfer matrices that can create a matching of this lattice row by row, although we limit ourselves to a subset of these with periodic associated edge weights. In an attempt to calculate the partition function for the dimer model we try to diagonalize the transfer matrix. To do this we first show that all the transfer matrices with periodic weights commute pairwise, using the Yang-Baxter equation. Making use of this fact we try to find a basis of eigenstates for a transfer matrix that forces most of the matched edges to point in the same direction, in what we call the anisotropic approximation. To find these eigenvectors we make use of the Bethe ansatz. Although we are not successful in showing that these eigenstates are all linearly independent, we do manage to use them to diagonalize part of a more general transfer matrix. Finding an exact expression for the partition function eventually fell outside the scope of this research.

Title: Dimer Models on the Square Lattice

Author: Tycho van Hoof, tycho van hoof@hotmail.com, 10546987
Supervisors: dr. Raf Bocklandt & prof. dr. Bernard Nienhuis
Second graders: prof. dr. Jan Wiegerinck & dr. Jasper van Wezel
Date: July 8, 2016

Korteweg-de Vries Instituut voor Wiskunde
Universiteit van Amsterdam
Science Park 904, 1098 XH Amsterdam
http://www.science.uva.nl/math


Contents

Introduction

1 Dimer Graphs
1.1 Graph theory
1.2 Dimer model
1.3 The height function

2 Application of the Bethe ansatz
2.1 Setup
2.2 The Yang-Baxter equation
2.3 Commuting transfer matrices
2.4 Anisotropic approximation
2.5 General eigenstates

Conclusion


Introduction

A dimer is a polymer that consists of two atoms. It is interesting to look at dimers that consist of two different kinds of atoms, in which atoms of one kind can only bond to atoms of the other kind. This can be viewed as a bipartite graph, with a set of bonds interpreted as a matching on this graph. A dimer graph is a bipartite graph embedded in a surface, and so it models the surface of an object consisting of two kinds of atoms. An example of this is the surface of a crystal consisting of two different ions. It is useful to be able to say something about the physics of such a system. One way to do this is to create a dimer model: a method used in classical statistical physics to look at all perfect matchings of such a dimer graph with the help of a partition function. For some dimer graphs this partition function can be expressed in terms of a transfer matrix that 'builds' matchings. The system we look at in this thesis is one of these.

The goal of this thesis is to gain a better understanding of the dimer model on a square lattice. The main objective originally was to acquire an expression for the partition function. We work on the square lattice as it is a simple, commonly occurring lattice. Because of its simplicity it is reasonable to assume that this is an integrable system.

In order to find the partition function we apply the Bethe ansatz. This is a method originally developed by Hans Bethe to solve the one-dimensional anti-ferromagnetic Heisenberg model in 1931, by finding its exact eigenvectors and eigenvalues [1]. Since then it has been adapted to find the exact solutions to many one-dimensional quantum many-body problems.

We also make use of the Yang-Baxter equation. This is an equation that, if it holds, guarantees integrability of a system. It also tells us that transfer matrices commute, which is what we will use it for in this thesis. Its name is derived from Chen Yang and Rodney Baxter, who both worked on the equation. Baxter used it to solve the eight-vertex model in the zero-field case [2].

In the first chapter we will introduce some general definitions and concepts that we will require. We look at the definitions of both dimer graphs and dimer models, and investigate the construction of the so-called height function for certain types of dimer models. We also introduce transfer matrices as a way to calculate the partition functions for some dimer models.

In the second chapter we move on to the actual dimer model on the square lattice. To do this we define the right lattice, at first embedded in a cylinder but afterwards also embedded in a torus. We introduce a specific family of transfer matrices with certain periodic properties. We also add several other definitions that we will need to work in this system. We then use the Yang-Baxter equation to prove that the different transfer matrices commute. Using this fact we move on to a transfer matrix that heavily favors edges in one specific direction, which we call the anisotropic approximation. We use the


Bethe ansatz to acquire a set of proposed eigenstates that could diagonalize this transfer matrix and other, more general transfer matrices. We then verify if these are eigenstates and look at some of the corresponding eigenvalues.

Acknowledgements

I would like to thank Raf Bocklandt and Bernard Nienhuis for supervising me in my work. I want to thank them for helping me when I got stuck, even when I was incredibly vague about what I did, and what I did not understand. I am also thankful for all the time and effort they put into supervising my project and giving me feedback.

I would also like to thank my family for supporting me when I thought there was no chance that I would finish my thesis in time.


1 Dimer Graphs

In this chapter we begin with the general definition of dimer graphs and some of their general properties. We then examine how we can model the statistical physics of perfect matchings of a dimer graph, which is also known as a dimer model.

1.1 Graph theory

In order to explain what a dimer model is, we first have to introduce several graph theoretical concepts. We will also establish some other definitions and theorems that we require later on. The definition and some basic results about the ribbon graph can be found in A Dimer ABC [3].

Ribbon graphs

We want to define the concept of a dimer graph. To do this we first need the notion of a ribbon graph.

Definition 1.1.1 (Ribbon Graph). A ribbon graph G = (H, ν, ε) is a set H along with two permutations ν, ε : H → H for which the following hold: the permutation ε has no fixed points and ε² = id, and all orbits of ε, ν and ν ◦ ε are finite.

We define the sets of vertices, edges and faces as

V := H/⟨ν⟩,  E := H/⟨ε⟩,  F := H/⟨ν ◦ ε⟩,

the orbits of the action on H generated by ν, ε and ν ◦ ε respectively.

We say that a vertex v and an edge e are incident to each other if the intersection v ∩ e is nonempty. In much the same way we say that an edge e and a face f are incident if the intersection e ∩ f is nonempty.

Because ε has order 2, it is clear that every edge connects precisely two vertices (although these can be the same).

Using a ribbon graph we can create a corresponding surface in the following way: as the orbits of ν ◦ ε are of finite size, we can take an associated n-gon for every face f, with n < ∞ the size of the orbit f. For each h ∈ f there is a unique edge e with h ∈ e, as ε is a permutation. The order in which we identify the sides is induced by the permutation ν ◦ ε. This process results in the desired surface. We can then embed the graph G consisting of the nodes V and the edges E of this ribbon graph into this surface in a natural way. We will often refer to the entire graph by only speaking of this embedded graph as G = (V, E).

Here is an example of a ribbon graph: take H = {1, 2, 3, 4}, then ν = (1 2)(3 4) and ε = (1 3)(2 4) satisfy the requirements, as ε² = id, ε has no fixed points, and all orbits are finite because H is finite. The different orbits under the action of ν are {1, 2} and {3, 4}, so there are two nodes. The action of ε also has two different orbits: {1, 3} and {2, 4}. These are the two edges. Finally we have the permutation ν ◦ ε = (1 4)(2 3), which again has two orbits: {1, 4} and {2, 3}. So this graph has two faces. If we now attempt to create the associated surface we will see that we get a sphere. This happens because we get one digon for each face, and then we glue them together as if they were two half-spheres. The embedded graph G = (V, E) then consists of two vertices, which we can take to be the north and south pole of the sphere, and two edges which form two different meridians.
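The orbit computations in this example are easy to verify mechanically. The following sketch (all helper names are ours, not the thesis') recovers V, E and F as orbits and checks the Euler characteristic:

```python
def orbits(perm, elements):
    """Partition `elements` into orbits of the permutation `perm` (a dict)."""
    seen, result = set(), []
    for start in elements:
        if start in seen:
            continue
        orbit, h = [], start
        while h not in seen:
            seen.add(h)
            orbit.append(h)
            h = perm[h]
        result.append(frozenset(orbit))
    return result

H = [1, 2, 3, 4]
nu  = {1: 2, 2: 1, 3: 4, 4: 3}       # nu  = (1 2)(3 4)
eps = {1: 3, 3: 1, 2: 4, 4: 2}       # eps = (1 3)(2 4)
nu_eps = {h: nu[eps[h]] for h in H}  # nu ◦ eps (apply eps first)

V = orbits(nu, H)       # vertices: {1, 2}, {3, 4}
E = orbits(eps, H)      # edges:    {1, 3}, {2, 4}
F = orbits(nu_eps, H)   # faces:    {1, 4}, {2, 3}
print(len(V), len(E), len(F))    # 2 2 2
print(len(V) - len(E) + len(F))  # Euler characteristic 2: a sphere
```

The Euler characteristic V − E + F = 2 is consistent with the surface being a sphere, as described above.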

Equipped with the definition of a ribbon graph we can now define a dimer graph.

Definition 1.1.2 (Dimer Graph). A dimer graph is a bipartite ribbon graph. This means that we can write the set of vertices V as the disjoint union of two sets A and B, with every edge in E being incident to one vertex in A and one vertex in B.

So a dimer graph is a bipartite graph embedded on some surface: the surface corresponding to the ribbon graph. This embedding happens in a way that partitions the surface into polygons with the edges of the graph as their boundaries. We will not work directly with the definition, but use this description to say that something is a dimer graph.

Our example ribbon graph is also a dimer graph: We can put one vertex in A, the other in B, and then this gives us a bipartite ribbon graph.

Dual graphs

We can also speak of the dual of a ribbon graph.

Definition 1.1.3 (Dual Graph). If G = (H, ν, ε) is a ribbon graph, then the dual is defined as the ribbon graph G∨ := (H, ν ◦ ε, ε).

This is again a ribbon graph, as ν ◦ ε, ε and ν ◦ ε ◦ ε = ν all have finite orbits. We will denote the sets of vertices, edges and faces of this dual graph by V∨, E∨ and F∨ respectively, where V, E and F are those corresponding to the original. It is clear from the definition that E∨ = E, but V∨ = F and F∨ = V. Because ε² = id, we also have that

G∨∨ = (H, ν ◦ ε ◦ ε, ε) = (H, ν, ε) = G,

so taking the dual of a ribbon graph twice gives the original graph once more.

Theorem 1.1.4. A ribbon graph G and its dual G∨ have the same corresponding surface. This means the dual can be interpreted as taking the surface corresponding to G, then drawing the graph on it that has vertices on every face of G, and edges connecting two vertices if the corresponding faces have an edge between them.
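The double-dual identity can be checked mechanically on the example ribbon graph above (the permutation dictionaries and helper are ours, for illustration only):

```python
H = [1, 2, 3, 4]
nu  = {1: 2, 2: 1, 3: 4, 4: 3}   # nu  = (1 2)(3 4)
eps = {1: 3, 3: 1, 2: 4, 4: 2}   # eps = (1 3)(2 4)

def compose(p, q):
    """The permutation p ◦ q (apply q first)."""
    return {h: p[q[h]] for h in H}

nu_dual = compose(nu, eps)              # vertex permutation of the dual G∨
nu_double_dual = compose(nu_dual, eps)  # vertex permutation of G∨∨
print(nu_double_dual == nu)             # True: G∨∨ = G, since (ν ◦ ε) ◦ ε = ν
```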


Periodic graphs

Because we want to talk about some properties of periodic bipartite graphs, we will formalize this concept.

Definition 1.1.5 (Periodic bipartite graph). A bipartite planar graph G = (A ⊔ B, E) is called periodic in one direction if there exists a non-zero vector v ∈ R2 such that translation of the entire graph by v returns the same graph. We also require that vertices in A are translated onto vertices in A and vertices in B onto vertices in B.

It is called periodic in two directions if there are two linearly independent vectors v1, v2 ∈ R2 such that translation of the entire graph by either v1 or v2 returns the same graph. Again we require that this maps vertices in A and B to vertices in the same set.

We will look at graphs that are periodic in two directions v1 and v2, although it is worth noting that these are in particular periodic in one direction. Examples of periodic bipartite graphs are the square, hexagonal and square-octagonal lattices.

On such a graph we can define an action of Z2, where (k, l) ∈ Z2 acts by translating the entire graph by kv1 + lv2. By definition of this type of graph this action leaves the graph invariant.

For a periodic graph we can look at the parallelogram spanned by the vectors v1 and v2,

D := {a1v1 + a2v2 | a1, a2 ∈ [0, 1)}.

Using the action of Z2 we can then tile the entire plane with translated copies of D. Because the action leaves the graph invariant, this tiling of the plane just gives our graph G, and so all information about the graph is contained in D. We call the part of the graph that sits inside D the fundamental domain. In Figure 1.1 there are several examples of fundamental domains: the first is for a hexagonal lattice, and the second and third are both for square lattices.

Figure 1.1: Three different fundamental domains.

Now we take a bipartite planar graph G that is periodic in two directions corresponding to vectors v1 and v2. We can define the quotient graph Gk,l (with respect to v1 and v2) by identifying vertices and edges that sit in the same orbit under the action of kZ × lZ ⊆ Z2. This graph can be embedded naturally in the quotient space of R2 under the same action, which we will call Sk,l.


Assuming that for our original planar graph G all faces are incident to only finitely many edges, we can see that the same will hold for Gk,l, as its faces are those of G modulo the action. In this case the graph Gk,l is a bipartite graph embedded in the surface Sk,l in such a way that all the faces are polygons with edges of the graph as their boundary. This is precisely the characterisation of a dimer graph we have given before. So the graphs Gk,l are dimer graphs for a graph G of this kind.

Some simple results from topology are that S0,0 is just the plane, that Sk,0 and S0,k are homeomorphic to the infinite cylinder for k non-zero, and that Sk,l is homeomorphic to the torus if both k and l are non-zero.

For the square lattice it is clear that every face is incident to only finitely many edges, and so the square lattice and its quotients will form dimer graphs. This is what we need in order to define a dimer model on the square lattice.

Matchings

Finally we also need to introduce the idea of a matching, as this is what a dimer model looks at.

Definition 1.1.6 ((Perfect) Matching). For a graph G = (V, E) a matching is a subset M ⊆ E such that for any vertex v ∈ V there is at most one edge in M incident to v. A matching is called a perfect matching if there is exactly one such edge for each vertex v. For a matching M we refer to vertices incident to an edge of M as matched, whereas the remaining vertices are called unmatched. We write M(G) for the set of all perfect matchings of a graph G.
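A direct translation of this definition into code, checked on a 4-cycle (the helper names are ours):

```python
def is_matching(matching):
    """At most one edge of `matching` is incident to each vertex."""
    used = []
    for u, v in matching:
        used += [u, v]
    return len(used) == len(set(used))

def is_perfect_matching(vertices, matching):
    """A matching that covers every vertex exactly once."""
    covered = {x for edge in matching for x in edge}
    return is_matching(matching) and covered == set(vertices)

V = [0, 1, 2, 3]                                   # 4-cycle: 0-1-2-3-0
print(is_matching([(0, 1)]))                       # True: vertices 2, 3 unmatched
print(is_perfect_matching(V, [(0, 1)]))            # False: not every vertex covered
print(is_perfect_matching(V, [(0, 1), (2, 3)]))    # True
print(is_matching([(0, 1), (1, 2)]))               # False: vertex 1 used twice
```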

Sometimes we only want to look at a subgraph I of the graph that we have a matching on. Using the matching we can make a distinction between different vertices in the subgraph.

Definition 1.1.7. For a graph G = (V, E), a matching M ⊆ E and a subgraph I ⊆ G, an internally matched vertex or IMV is a vertex in I that is matched to another vertex in I by this matching M .

It is clear that if M is a perfect matching and some vertex in I is not connected to any vertices in G\I then that vertex must be internally matched.

1.2 Dimer model

A dimer model is a classical statistical mechanical model to look at all the perfect matchings on a dimer graph. It involves assigning an energy to each perfect matching of the dimer graph.

To do this we will assign a variable to each edge. Take some dimer graph G = (V, E). What we need is a function ε : E → R that maps each edge to a value, which we call the energy of that edge. Sometimes we will speak of the weight of an edge, which is just the function w := e^{-ε}. Using this we can define the energy of a matching M as follows:

E(M) := \sum_{e \in M} \varepsilon(e),

so the energy of a matching is just the sum of the energies of all the edges in this specific matching. This in turn allows us to define the partition function

Z := \sum_{M \in \mathcal{M}(G)} e^{-E(M)} = \sum_{M \in \mathcal{M}(G)} \prod_{e \in M} w(e).
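For a very small graph the partition function can be computed directly from this definition. A sketch for a 4-cycle with made-up edge energies (this is our own toy example, not a graph from the thesis), summing the edge-weight products over all perfect matchings:

```python
import itertools
import math

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                        # the 4-cycle
eps   = {(0, 1): 0.3, (1, 2): 0.7, (2, 3): 0.1, (3, 0): 0.5}    # made-up energies
w     = {e: math.exp(-eps[e]) for e in edges}                   # w(e) = exp(-eps(e))
vertices = {0, 1, 2, 3}

# Z = sum over perfect matchings M of prod_{e in M} w(e):
Z = 0.0
for r in range(len(edges) + 1):
    for M in itertools.combinations(edges, r):
        covered = [x for e in M for x in e]
        if len(covered) == len(set(covered)) and set(covered) == vertices:
            Z += math.prod(w[e] for e in M)

# The 4-cycle has exactly two perfect matchings:
expected = w[(0, 1)] * w[(2, 3)] + w[(1, 2)] * w[(3, 0)]
print(abs(Z - expected) < 1e-12)   # True
```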

Using the partition function we can express the probability of our system being in a specific state (by which we mean a specific perfect matching) in the following way:

P(M) = \frac{e^{-E(M)}}{Z},

where we use the partition function as normalization.

Partition function from the transfer matrix

The goal of this thesis is to calculate the partition function for one specific case: the square lattice. This is an example of a bipartite graph that is periodic in two directions. We will now introduce a way to find the partition function for dimer models on dimer graphs of this kind.

Suppose we have some planar bipartite graph G that is periodic in two directions, v1 and v2. Without loss of generality we can choose these directions to be (1, 0) and (0, 1), because we can just apply the required linear transformation to the entire plane, including the graph. Assume that the quotient graph GL,P is a finite dimer graph embedded in the torus for some L, P > 0.

We can tile the graph GL,P with P rows of L copies of the fundamental domain. We want to look at the concept of building a perfect matching of this graph row by row. For a perfect matching on the entire graph, we can look at an individual row and see which vertices in it are internally matched. Take I ⊆ G to be the entire graph except one of the rows (it does not matter which one, as we can translate one row to get the others), then take R ⊆ I to be the row under the one that was excluded.

Definition 1.2.1. We define a state of R = (V′, E′) to be a subset S ⊆ V′.

A state is called possible if there exists a perfect matching M on GL,P such that S is the set of IMVs in R, where we view I as the interior. This means that V′\S is exactly the set of vertices in R that are matched 'upward', meaning to vertices in the row above R.

We write S for the set of all possible states.

We fix one row, and call the set containing its vertices V1. We also define the set of vertices of the row above it to be V2. For two states α and β we can now define an extension from α to β as follows: a perfect matching M of the vertices (V1\α) ∪ β. This means that we match all 'downward' matched vertices in the second row with all 'upward' matched vertices in the first, according to the two given states. The weight of an extension is given by

\prod_{e \in M} w(e).

We define the variable Tβ,α as the sum of the weights of all possible extensions from α to β.

Now we define V := C[S], the complex vector space with the possible states as its basis. The values Tβ,α then give us a matrix T describing a linear transformation of V. We call this matrix the transfer matrix.

Theorem 1.2.2. For such a transfer matrix on GL,P we have

Z = Tr(T^P).

Proof. We choose a row to be the starting row. We can now separate matchings based on their state in this row. Let Mα ⊆ M(GL,P) be the set of all perfect matchings on GL,P for which the starting row is in state α. This gives

Z = \sum_{M \in \mathcal{M}(G)} e^{-E(M)} = \sum_{\alpha \in S} \sum_{M \in \mathcal{M}_\alpha} e^{-E(M)}.   (1.1)

Now take some state α. By definition of the transfer matrix, the entry T^P_{β,α} of the P-th power of the transfer matrix is the sum over the weights of all possible extensions in P rows from α to β. In order to get a perfect matching we require that we go from α to α in P rows, as the graph is periodic with period P. So the weight of every perfect matching M ∈ Mα is counted exactly once in T^P_{α,α}. By definition of the transfer matrix the weight given to a perfect matching M is

\prod_{e \in M} w(e) = e^{-E(M)},

and so we get

\sum_{M \in \mathcal{M}_\alpha} e^{-E(M)} = \sum_{M \in \mathcal{M}_\alpha} \prod_{e \in M} w(e) = T^P_{\alpha,\alpha}.

Using this and equation 1.1 we obtain

Z = \sum_{\alpha \in S} T^P_{\alpha,\alpha} = \mathrm{Tr}(T^P).

This means that if we can diagonalize the transfer matrix then we can determine the partition function.
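Why the trace appears can be seen on any small matrix: Tr(T^P) is exactly the sum of the weights of all closed sequences of P states, which is what a perfect matching on the torus corresponds to. A quick check on an arbitrary 2×2 matrix (the numbers are made up and are not lattice weights):

```python
import itertools

T = [[1.0, 2.0],
     [0.5, 3.0]]   # T[beta][alpha]: toy extension weights
P = 4

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Tr(T^P) via matrix powers:
TP = T
for _ in range(P - 1):
    TP = matmul(TP, T)
trace = sum(TP[i][i] for i in range(len(T)))

# The same number as a sum over closed state sequences alpha_0 -> ... -> alpha_0:
closed = 0.0
for states in itertools.product(range(2), repeat=P):
    weight = 1.0
    for i in range(P):
        weight *= T[states[(i + 1) % P]][states[i]]
    closed += weight

print(abs(trace - closed) < 1e-9)   # True
```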

1.3 The height function

It is important to know how much the probability P(M) of finding the system in a given matching depends on the function ε that assigns the energy values to the edges of the graph. One way to represent a part of this dependence is by introducing variables B1 and B2, the magnetic field in different directions. These then correlate the energy change of a matching with its height. But since we work on a surface there is no concept of height. In this section we will look at one way to introduce height for some bipartite graphs that are periodic in two directions and their quotients. To do this we use the so-called height function. A lot of the definitions and results in this section are sourced from the paper Dimers and Amoebae [4].

Take G a bipartite planar graph, periodic in two directions with corresponding vectors v1 and v2. Suppose that Gk,l = (V, E) is finite for some k, l ∈ Z\{0}. We fix some perfect matching M0 of Gk,l. Now for any perfect matching M of Gk,l we can look at the subgraph of Gk,l that contains all the edges in M0 and M.

Theorem 1.3.1. If the graph Gk,l is finite then for any perfect matching M of Gk,l the graph G̃ = (V, M ∪ M0) consists of a disjoint union of cyclic graphs and lone edges.

Proof. Take any vertex v ∈ V. Then v has degree one (in G̃) if the edges in M and M0 incident to vertex v are the same, or degree two if they are not.

If the vertex has degree one then there is only one edge e which has it as an endpoint. We know that this edge e must sit in both M and M0, as v has degree one. This means that the other endpoint of e also has degree one, as the edges in both M and M0 that are incident to it are the same (as they are both e). So the edge e forms a lone edge disjoint from the rest of the graph, and the vertex v is part of it.

Otherwise the vertex v has degree two. By the contraposition of the previous argument we now know that it must be connected to other vertices of degree two. Take one of these vertices and call it v1. We can repeat this argument to see that this vertex must again be connected to two vertices of degree two, one of which is not the previous vertex. We call this one v2. Using induction, we see that we can repeat this construction, and so we construct, for all n ∈ N, a set

Vn := {v, v1, . . . , vn}.

But our graph G̃ is finite and so there is some n for which vn+1 ∈ Vn. In particular there exists an m that is the smallest natural number satisfying this condition. Suppose then that vm+1 = vi for some 1 ≤ i ≤ m. We know that vi−1, vi+1 and vm are all connected to vm+1, and that vi−1 ≠ vi+1. Because vm+1 is of degree two this must mean that vm ∈ {vi−1, vi+1}. If i < m this means vm ∈ Vm−1, which means that m is not the smallest natural number satisfying the property, giving a contradiction. If i = m then this means vm = vm+1, but this is also impossible, as there are no vertices connected to themselves (G̃ is bipartite and therefore without loops). The only option left is v = vm+1, so v is part of a cyclic graph (of length m).
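The degree argument in Theorem 1.3.1 is easy to check on a small case, for instance the 6-cycle with its two perfect matchings (our own toy example, not a graph from the thesis):

```python
from collections import defaultdict

cycle = [(i, (i + 1) % 6) for i in range(6)]   # 6-cycle: 0-1-2-3-4-5-0
M0 = [cycle[0], cycle[2], cycle[4]]            # matching (0,1), (2,3), (4,5)
M  = [cycle[1], cycle[3], cycle[5]]            # matching (1,2), (3,4), (5,0)

def degrees(edge_set):
    deg = defaultdict(int)
    for u, v in edge_set:
        deg[u] += 1
        deg[v] += 1
    return deg

union = set(M0) | set(M)
print(sorted(degrees(union).values()))      # all degree 2: one big cycle
print(sorted(degrees(set(M0)).values()))    # all degree 1: three lone edges (M = M0 case)
```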

If we pick a perfect matching M of Gk,l, this means that we get a set of closed loops and lone edges in our graph. These separate the torus in which Gk,l sits into several areas. However, we can also 'lift' this matching M and the reference matching M0 to periodic matchings M′ and M0′ on G (periodic in the sense that they are invariant under translations by kv1 and lv2). Then our theorem tells us that the subgraph of G with the edges M′ ∪ M0′ consists of lone edges and of lines that go on forever (as a cyclic graph on the torus is either the quotient of an infinite line graph or of a cyclic graph). We want to interpret crossing one of these lines in the plane as a change in height.

First we take a reference face f0 from the set of faces F of the planar graph G. We say this face has height 0. Now we want to define the height for some general face f ∈ F. To do this we look at the dual graph G∨ = (F, E′). We pick a path α in this graph from the face f0 to f. Recall that G is a bipartite graph, with disjoint vertex sets which we will call A and B.

Every edge in this path crosses through exactly one edge of the original graph, with one vertex from A on one side of the path and a vertex from B on the other side. If this crossed edge sits in neither M0′ nor M′, then there is no height contribution from this edge. If the edge sits in M0′ and the vertex on the right side of the path sits in A, then there is a contribution of 1, but if it sits in B then there is a contribution of −1. If the edge is part of M′ then the opposite holds: if the vertex on the right side sits in A then we have a contribution of −1, and if it is in B then the contribution is 1. These different contributions are summarized in Figure 1.2. A similar, but more specific, construction can be found in [5]. However, it uses tilings instead of matchings, which are dual to one another.

Figure 1.2: Different contributions to the height.

If we take the sum of all contributions given by edges in α, this gives us an integer value which we call hM(f). This notation is partially justified by the following theorem:

Theorem 1.3.2. The value of hM(f) is independent of the path α from f0 to f used in the construction (although it still depends on the reference matching M0 and the reference face f0).

Proof. This follows from Theorem 1.3.1 and the definitions of the contributions.

This allows us to define h : F → Z, which maps a face f to h(f ), which we call its height. We call this function the height function.

We can now define the quantities

s_1(M) := h_M(f_0 + v_1),  s_2(M) := h_M(f_0 + v_2),

where f0 + v1 and f0 + v2 denote the faces f0 is translated to under the action of (1, 0) and (0, 1). These are the slopes associated with our perfect matching M of Gk,l relative to M0 and f0. They get their name from the fact that they measure the change in height during one period, either v1 or v2, as hM(f0) = 0.

Using these slopes it is possible to define a set of probabilities on the perfect matchings of G by looking at increasingly large k and l for Gk,l.

We have talked about introducing magnetic field variables B1 and B2 to see how P(M) relates to the edge energy function ε. Using these variables and the height we can get the following weights for a matching M:

w(M) := e^{-B_1 s_1(M) - B_2 s_2(M)},

which when normalized will give a set of probabilities [4]. This also allows us to split up the partition function into sums over different slopes, which might make it easier to calculate. However, we will not use the height function when treating the dimer model on the square lattice.


2 Application of the Bethe ansatz

In this chapter we will look at the tilings of a periodic square lattice with 1 × 2 rectangles in a way that is comparable to a many-body problem in one dimension. The goal is to find the partition function for this system. We will attempt to do this by applying the Bethe ansatz. To this end we will also examine whether the Yang-Baxter equation holds for the transfer matrix belonging to this system.

2.1 Setup

We look at a square lattice rotated by 45 degrees, which is a planar bipartite graph G. We define our horizontal axis, which we will call space, in such a way that the diagonal of a square on the lattice has unit length. In the same way we define our vertical axis, which we will call time. This means that the fundamental domain, shown in Figure 2.1, is the size of the unit square. Under the action of Z2 corresponding to this fundamental domain we now look at the graph GL,0. This is the same as choosing the space axis to be periodic for the original graph G, with integer period L. So we now have a square lattice that forms a dimer graph on the cylinder. Later on we will also choose the time axis to be periodic (by dividing out another quotient and moving to GL,P), and so the resulting graph will be on the torus.

Figure 2.1: Fundamental domain for the square lattice.

As our square lattice is a dimer graph, we can look at its dual graph G∨. This dual graph is again a square lattice on the cylinder. A (complete) tiling of the original square lattice with rectangles of size 1 × 2 is then equivalent to a (perfect) matching of the dual lattice (and vice versa, giving a one-to-one correspondence). We will create a perfect matching of the dual lattice and then use this to acquire a complete tiling of the original lattice. From this point on, we will be working exclusively on the dual lattice G∨ and refer to it as the lattice.

Since this lattice G∨ is again bipartite we have a partition of the set of nodes into two sets A and B such that there are no edges between nodes in the same set. For A we take the nodes of the lattice that have an integer time coordinate, and for B the ones with a half-integer time coordinate:

A = {(x, t) | x, t ∈ Z},  B = {(x + 1/2, t + 1/2) | x, t ∈ Z}.

Now we intend to construct a matching of the lattice row by row. To this end we assume that, for a fixed time t ∈ Z, we have a matching that satisfies the following: every node in G∨ with time coordinate less than t has to be matched, and every node with time coordinate greater than t has to remain unmatched. In other words it gives a perfect matching of the graph 'up to time t'.

It is clear that if we have such a matching up to time t we can restrict it to a matching up to time t0 < t by removing all edges from the matching which contain a vertex with time coordinate larger than t0.

For a given matching up to time t the nodes at time t are either matched or not matched. This gives us a state of the line at time t.

Definition 2.1.1. A state is a vector of length L with entries in {0, 1}. Here an entry 0 corresponds to an unmatched vertex and 1 to a matched vertex. This gives a total of 2L different states. We denote the set of all states with S.

Figure 2.2: The state corresponding to a perfect matching up until a time t0 is given by the matched and unmatched vertices on the line for time t0.
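Concretely, for L = 5 the states of Definition 2.1.1 are just bit vectors:

```python
import itertools

# All states for L = 5 sites: vectors of length L with entries in {0, 1}.
L = 5
states = list(itertools.product((0, 1), repeat=L))
print(len(states))                 # 32 = 2**5 states in total
print((1, 1, 0, 1, 0) in states)   # True: the state shown in Figure 2.2
```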

Take α a state at time t and β a state at time t + 1. If we now take any matching up to time t corresponding to state α, we can look at the ways to 'extend' it to a state corresponding to β. By this we mean the number of ways that we can add edges to it for it to become a matching up to time t + 1 corresponding to β. This does not depend on the matching we choose to represent the state α, as it only depends on the border of the already matched area, which is just given by α. If we assign weights to the different edges in our graph, then we can assign a weight to this extension by taking the product of the weights of the edges in the extension. We choose these weights to be periodic in both time and space, with period 1 in both directions. This means that if we specify the weights on one 1 × 1 square as in Figure 2.3 then they are given for the entire lattice. We will denote the vector of these weights as

w = (a, b, c, d).

Figure 2.3: Weights for the edges in a unit square, uniquely defining all weights in the graph.

We now construct the transfer matrix in the same way as in our more general introduction, but specialized to the square lattice. For two states α, β ∈ S we can now define Tβ,α to be the sum of the weights of all possible extensions from α to β. This is well-defined because the weights are periodic in the time direction. We define V := C[S] to be the complex vector space with basis S. Now the Tβ,α give us the transfer matrix. As this matrix depends on the weights we assign to the different edges, we will sometimes write T(w). We view this as the Hamiltonian of this system (in discrete time). Our goal is to calculate the partition function of the system that is also periodic in the time direction, with period P. We have seen in Theorem 1.2.2 that this is the same as taking the trace of T^P. To calculate this we want to diagonalize the transfer matrix T.

On this space V we can define another operator N that gives us the number of unmatched vertices in a state. This means that it maps a state α to nα, where n is the number of vertices in the state that have not been matched. This is always finite, as the total number of vertices for an integer time is always L.

We define the subspaces V_n with 0 ≤ n ≤ L to be the eigenspaces of N at the corresponding eigenvalue n. These are generated by all states with n unmatched vertices, so

dim V_n = \binom{L}{n}.

Because every state has to be in V_n for some n, and together all the states generate V, it is clear that

V = \bigoplus_{n=0}^{L} V_n.
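This decomposition can be made concrete in a few lines of Python (a sketch of my own, not from the thesis; L is an arbitrary choice). A state is determined by which of the L sites carry an unmatched vertex, so the dimensions of the V_n are binomial coefficients and they add up to 2^L:

```python
from itertools import combinations
from math import comb

L = 6  # hypothetical lattice width
# A state records which of the L sites at a given time hold an unmatched vertex.
states = [c for n in range(L + 1) for c in combinations(range(L), n)]

# dim V_n = C(L, n), and V is the direct sum of the V_n, so dim V = 2^L.
for n in range(L + 1):
    assert sum(1 for s in states if len(s) == n) == comb(L, n)
assert len(states) == 2 ** L
```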

Theorem 2.1.2. The operators T and N commute. In other words, applying the transfer matrix preserves the number of unmatched vertices in a state.

Proof. Take some state α ∈ S. We now pick a β such that T_{β,α} ≠ 0. This coefficient being nonzero means that there exists some extension from α to β. This means that if we take state β at time t + 1 then there exists a matching M up to time t + 1 that corresponds to state β, and which corresponds to state α if restricted to time t. Suppose now that state α has n unmatched vertices. We now look at the vertices at time t + 1/2. In the matching M there are n of these vertices that must be matched to the unmatched vertices in state α. This leaves L − n vertices that cannot possibly be matched to vertices at time t, and must therefore be matched to vertices at time t + 1. So state β contains exactly L − n matched vertices, and so it has n unmatched vertices.

Using this we can then write

NTα = \sum_{β ∈ S} T_{β,α} Nβ = \sum_{β ∈ S, T_{β,α} ≠ 0} T_{β,α} Nβ = n \sum_{β ∈ S, T_{β,α} ≠ 0} T_{β,α} β = nTα = TNα,

so the operators commute.

Corollary 2.1.3. The transfer matrix T restricts to an operator T : V_n → V_n.

As the number of unmatched vertices is constant in time, we can view these as indistinguishable particles moving through space. From this point on we will often refer to the unmatched vertices as particles and the matched vertices as empty sites.

We can now split up the problem of diagonalizing the transfer matrix of the entire system into diagonalizing the transfer matrix restricted to the space containing n particles, V_n, for each possible number of particles 0 ≤ n ≤ L.

For a state of n particles we can choose some particle to be the first particle, with spatial coordinate x1. We can then choose coordinates x2, . . . , xn such that

x_1 < x_2 < \cdots < x_n < x_1 + L,

are coordinates for the other occupied lattice sites. Such a state we write as |x_1, \ldots, x_n⟩.

This way to write a state is not unique, as we can choose any particle to be the first and its coordinate is only unique up to a multiple of L (a particle at x_1 is the same thing as a particle at x_1 + L). However, we can always uniquely specify a state by demanding that 1 ≤ x_1 < x_2 < \cdots < x_n ≤ L. The special case of the state with no particles will be denoted with |0⟩.

Because the lattice is periodic we have the identity

|x_1, x_2, \ldots, x_n⟩ = |x_2, \ldots, x_n, x_1 + L⟩.

Any two sets of x_i that describe the same state can be acquired by repeatedly applying this identity. This is because this identity simply cycles through which particle is referred to as the first, and applying it n times simply increases or decreases all coordinates by L.

2.2 The Yang-Baxter equation

We want to find out for which weights the transfer matrices commute. To do this we will check when the weights satisfy the Yang-Baxter equation as shown in Figure 2.4.


Figure 2.4: A graphical representation of the Yang-Baxter equation. Here 1 and 2 represent the different transfer matrices and 3 is an auxiliary operator.

We will now describe what this equation means. We have three numbered rhombi. Each of these contains four vertices on its boundary and one vertex in the center. We will call the vertices on the boundary A vertices, and those in the center B vertices. These names are related to the A and B lattices of the square lattice that we defined before, and the connection will become clear later.

We look at the hexagon, which we call the interior, as part of some external graph. This means that if we have a perfect matching of this external graph then we can specify if the vertices on the boundary of the hexagon are IMVs. For a given state of the boundary, we sum over all compatible matched states of the hexagon on either side of the equation. The equation then says that the sum over all weights on the left hand side will be the same as that on the right hand side, for any possible state of the boundary. Each rhombus in the hexagon contains one B vertex, which must be matched with one of the four possible vertices on the rhombus. So each rhombus i has four possible states, which we will give the weights a_i, b_i, c_i and d_i. The ordering of these weights is such that on the left hand side of the equation the weight a_i will correspond to the line from the center of the rhombus to the center of the hexagon, and then we continue in alphabetical order counter-clockwise around the center of the rhombus. This gives us, for example, the weights for rhombus 1 displayed in Figure 2.5.

Figure 2.5: A diagram showing the weights attached to the different possible edges connecting the lattice point at the center of rhombus 1 to one of the vertices of the rhombus.

Now we look at the possible states of the boundary of the hexagon. Of the three B vertices contained in the hexagon, we know that exactly one must be matched with the center of the hexagon. It follows that the other two must then be matched with outer vertices of the hexagon, and so these outer vertices are IMVs (the only ones on the boundary). We note that the center vertex in a hexagon is always an IMV. This gives us three kinds of boundary states: the IMVs on the boundary are either next to each other, with one other vertex in between, or on opposite ends of the hexagon. These types of boundary states are illustrated in Figure 2.6. The other possible boundary states not illustrated can be acquired from these by rotations over multiples of 60 degrees. There are six of each of the first and second type, and three of the third type. We will look at all three of the cases separately, and see what conditions they give for the weights of rhombus 3.

Figure 2.6: Examples of the three types of possible boundary states

Adjacent IMVs

We first look at the case where the IMVs are adjacent. One of the IMVs must then border on only one rhombus, and must therefore be matched to the B vertex at the center of this rhombus. This then leaves only one B vertex for the other, adjacent IMV to connect to. It follows that the center must be connected to the remaining unmatched B vertex. This means that a boundary state of this type uniquely specifies the state of the interior. Examples of these uniquely specified matchings are shown in Figure 2.7, on either side of the equation.

Each of the six boundaries of this kind gives us a unique matching on either side of the equation. This means we get six equations for our rhombus weights by looking at the weights of these matchings, which are:

a_1c_2b_3 = a_1c_2b_3,   d_1c_2a_3 = d_1c_2a_3,   c_1b_2a_3 = c_1b_2a_3,
                                                                        (2.1)
c_1a_2d_3 = c_1a_2d_3,   b_1a_2c_3 = b_1a_2c_3,   a_1d_2c_3 = a_1d_2c_3,

but it is clear that these are empty constraints on our weights, because they all trivially hold.


Figure 2.7: The only possible matchings on either side of the equation if the top two vertices are internally matched

IMVs separated by one vertex

We know there are six different boundary states where the IMVs are separated by another vertex. When looking at possible matchings associated with such a boundary, we distinguish between two cases.

First there is the case where the IMVs both border only one rhombus, which means that they must both be matched with the center of their respective rhombi. The third B lattice vertex must then be matched to the center of the hexagon, which means the matching is uniquely defined. We see one of these matchings in Figure 2.8.

Figure 2.8: The only possible kind of matching if the internally matched vertices are separated by another vertex and each border only one rhombus.

In the second case we have both IMVs bordering on two rhombi each, one of which they share. For the B lattice vertex in this shared rhombus we have three options: it can


be connected to the center of the hexagon or to either of the IMVs. It is easily verified that each possible choice gives a unique and valid matching on the hexagon. Figure 2.9 displays the three possible matchings for one of these boundary matchings.

Figure 2.9: The three possible kinds of matchings if the internally matched vertices are separated by another vertex and each border two rhombi.

If the configuration is of the first type on the left hand side of the equation, then it is of the second type on the right hand side and vice versa. So on one side of the equation we will always have three different matchings, whereas the other side will have only one. So the six equations that we get for the weights are:

d_1a_2b_3 + d_1d_2a_3 + a_1b_2b_3 = a_1c_2a_3,
a_1b_2d_3 + d_1a_2d_3 + b_1b_2a_3 = c_1a_2a_3,
b_1d_2a_3 + a_1d_2d_3 + b_1a_2b_3 = a_1a_2c_3,
                                                  (2.2)
c_1c_2a_3 = d_1b_2c_3 + c_1b_2b_3 + d_1c_2d_3,
c_1a_2c_3 = b_1c_2d_3 + b_1b_2c_3 + c_1d_2d_3,
a_1c_2c_3 = c_1d_2b_3 + b_1c_2b_3 + d_1d_2c_3.

Opposite IMVs

Finally we have the boundary states where the two IMVs sit on opposite ends of the hexagon. There are three of these, as there are three pairs of opposing vertices. Each of these states will have one vertex bordering on one rhombus, and one bordering on two. There is only one option for the vertex that only touches one rhombus, but the other IMV can be matched to the center of either adjacent rhombus. Since both of these choices give a valid matching, we get two different matchings per boundary state, as displayed in Figure 2.10.

The three different states give us the equations:

c_1a_2b_3 + c_1d_2a_3 = a_1b_2c_3 + a_1c_2d_3,
a_1b_2c_3 + d_1a_2c_3 = b_1c_2a_3 + c_1d_2a_3,   (2.3)
b_1c_2a_3 + a_1c_2d_3 = c_1a_2b_3 + d_1a_2c_3.

Solving the Yang-Baxter equation

Combining equations 2.1, 2.2 and 2.3 gives us a total of nine non-trivial equations on the weights, as all equations in 2.1 are trivially true.


Figure 2.10: The two kinds of possible matchings if the internally matched vertices sit on opposite ends of the hexagon.

We view these nine equations as linear equations in a_3, b_3, c_3 and d_3, and we bring all terms to one side of the equation. This means that our weights satisfy the Yang-Baxter equation if and only if the following matrix equation holds:

\begin{pmatrix}
-a_1c_2 + d_1d_2 & a_1b_2 + d_1a_2 & 0 & 0 \\
b_1b_2 - c_1a_2 & 0 & 0 & a_1b_2 + d_1a_2 \\
b_1d_2 & b_1a_2 & -a_1a_2 & a_1d_2 \\
-c_1c_2 & c_1b_2 & d_1b_2 & d_1c_2 \\
0 & 0 & b_1b_2 - c_1a_2 & b_1c_2 + c_1d_2 \\
0 & b_1c_2 + c_1d_2 & -a_1c_2 + d_1d_2 & 0 \\
c_1d_2 & c_1a_2 & -a_1b_2 & -a_1c_2 \\
-b_1c_2 - c_1d_2 & 0 & a_1b_2 + d_1a_2 & 0 \\
b_1c_2 & -c_1a_2 & -d_1a_2 & a_1c_2
\end{pmatrix}
\begin{pmatrix} a_3 \\ b_3 \\ c_3 \\ d_3 \end{pmatrix} = 0.

Plugging this into Mathematica tells us that

\begin{pmatrix} a_3 \\ b_3 \\ c_3 \\ d_3 \end{pmatrix} =
\begin{pmatrix} a_1b_2 + d_1a_2 \\ a_1c_2 - d_1d_2 \\ b_1c_2 + c_1d_2 \\ -b_1b_2 + c_1a_2 \end{pmatrix}   (2.4)

is always a solution for this equation, regardless of the values of the weights w_1 and w_2.

So there are no requirements on the weights for the Yang-Baxter equation to hold.
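As a numerical sanity check (a sketch of my own, not part of the thesis), the following Python snippet verifies that the vector in 2.4 satisfies all nine non-trivial equations from 2.2 and 2.3 for randomly chosen weights w_1 and w_2:

```python
import random

random.seed(0)
a1, b1, c1, d1 = (random.uniform(-2, 2) for _ in range(4))  # random weights w1
a2, b2, c2, d2 = (random.uniform(-2, 2) for _ in range(4))  # random weights w2

# The proposed solution (2.4) for the weights of rhombus 3:
a3 = a1*b2 + d1*a2
b3 = a1*c2 - d1*d2
c3 = b1*c2 + c1*d2
d3 = -b1*b2 + c1*a2

# Equations 2.2 and 2.3 with all terms moved to one side:
eqs = [
    d1*a2*b3 + d1*d2*a3 + a1*b2*b3 - a1*c2*a3,
    a1*b2*d3 + d1*a2*d3 + b1*b2*a3 - c1*a2*a3,
    b1*d2*a3 + a1*d2*d3 + b1*a2*b3 - a1*a2*c3,
    c1*c2*a3 - d1*b2*c3 - c1*b2*b3 - d1*c2*d3,
    c1*a2*c3 - b1*c2*d3 - b1*b2*c3 - c1*d2*d3,
    a1*c2*c3 - c1*d2*b3 - b1*c2*b3 - d1*d2*c3,
    c1*a2*b3 + c1*d2*a3 - a1*b2*c3 - a1*c2*d3,
    a1*b2*c3 + d1*a2*c3 - b1*c2*a3 - c1*d2*a3,
    b1*c2*a3 + a1*c2*d3 - c1*a2*b3 - d1*a2*c3,
]
assert all(abs(e) < 1e-9 for e in eqs)
```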

2.3 Commuting transfer matrices

We choose the weights for our transfer matrices to always be periodic in the spatial direction with period 1. Because of this we can, using the weights of a transfer matrix, look at the operator that maps the state of two adjacent vertices to the state of the two adjacent vertices in the row above by picking which of the four vertices is attached to the B lattice vertex in the center of the enclosed square. We can view the transfer matrix as being composed of L copies of this local operator, put together in some compatible way (such that every B lattice vertex is assigned exactly one edge), as illustrated in Figure 2.11.



Figure 2.11: The transfer matrix decomposed into local operators.

We take two transfer matrices T (w1) and T (w2) with weights

w1 = (a1, b1, c1, d1),

w2 = (d2, a2, b2, c2).

Now we want to see under which conditions they commute.

If we now apply the transfer matrix T (w2) after T (w1) we can display this as in Figure

2.12, where we tilt the squares belonging to different transfer matrices to make it clear they contain different weights.


Figure 2.12: Visual representation of applying two different transfer matrices.

This gives us an operator T̃ := T(w_2)T(w_1) from V to itself. The entry T̃_{β,α} in this matrix is

\sum_{γ ∈ S} T(w_2)_{β,γ} T(w_1)_{γ,α},

which corresponds to the sum of the weights of all possible extensions from some matching corresponding to α at time t to some matching corresponding to β at time t + 2 (summing over all possible states that can occur at time t + 1).

We can add one vertex to each lattice, both at time t + 1, along with changing some edges, as in Figure 2.13. We give these four edges connected to the new B lattice vertex the four weights

w3 = (a3, b3, c3, d3),

assigning these in a counter-clockwise fashion around the center of the rhombus (as shown in Figure 2.14), starting with the edge to the right.

If we want to find an extension from α to β on this new graph we only have to specify which edge we use for each B lattice vertex between times t and t + 2. First we label


Figure 2.13: Adding one vertex to either lattice in the shape of a rhombus. The thick lines represent the fact that the connected vertices are actually the same vertex.

Figure 2.14: The new rhombus with weights on the different edges and labels associated to its vertices.

the vertices of the new rhombus with p, q, r and s as in Figure 2.14. We also write i for the corresponding index of q and s in α and β respectively.

In creating an extension from α to β we start by picking some edges for the two B vertices that form the centers of the two rhombi to the right of the new rhombus, compatible with our states α and β. Now we know whether vertex p is matched to a vertex on its right or not. We call this the state of p. Depending on the state of q given by α and the state of s given by β we can now see what edges can be matched in the rhombus. This then tells us if vertex r is matched to a vertex on its right or not. Just like for p we call this the state of r.

We write |1_v⟩ for the state where a vertex v is matched to another vertex on its right and |0_v⟩ for the state where it is not. We define W_v := C[{|1_v⟩, |0_v⟩}] to be the vector space with these states as its basis. For given states α and β, we can now see the rhombus as a linear operator

R(α_i, β_i) : W_p → W_r,

where the matrix coefficients are the sum of the weights of all possible edges that satisfy the corresponding states for both p and r. This is a function of α_i and β_i, as the only part of the states α and β that this operator depends on is whether these vertices are matched or unmatched.

If vertex p is not matched to a vertex on its right, it must be matched to the vertex at the center of the rhombus, with weight a_3. This then means that it is impossible for vertex r to be matched with a vertex to its right. So we will, independently of α_i and β_i, have

R(α_i, β_i)|0_p⟩ = a_3 |0_r⟩.

Now we look at the possibilities when p is matched to a vertex on its right. We know that it is not possible for the edge with weight a_3 to be used, as it matches p to a vertex on its left. The possibilities for this state do depend on α_i and β_i, so we separate some cases.

Vertex q matched and vertex s unmatched (α_i = 1, β_i = 0)

This means that s must be matched to something above it, and q to something below it. Because of this neither of them will be connected to the center of the rhombus. So the only possible edge in this situation is the one corresponding to c_3, and r will always end up connected to a vertex to its right. We get

R(1, 0)|1_p⟩ = c_3 |1_r⟩,

and so the matrix representation of R(1, 0) becomes

R(1, 0) = \begin{pmatrix} a_3 & 0 \\ 0 & c_3 \end{pmatrix}.

Vertex q unmatched and vertex s matched (α_i = 0, β_i = 1)

Now s must be matched to a vertex below it and q to a vertex above it. So the possible edges are those with weights b_3, c_3 and d_3. Again we see that if c_3 is a part of the matching then r is matched with a vertex on its right. However, if it is b_3 or d_3 then r will not be. We have

R(0, 1)|1_p⟩ = b_3 |0_r⟩ + c_3 |1_r⟩ + d_3 |0_r⟩,

which gives us the matrix

R(0, 1) = \begin{pmatrix} a_3 & b_3 + d_3 \\ 0 & c_3 \end{pmatrix}.

Both vertices matched (α_i = 1, β_i = 1)

In this case both s and q have to be matched to a vertex below them. This means that the only possible edges are those with weights b_3 and c_3. If the one with weight c_3 sits in the matching then r is matched to a vertex on its right, and otherwise the matching includes the one with weight b_3 and r is not matched with a vertex on its right. In matrix form this gives

R(1, 1) = \begin{pmatrix} a_3 & b_3 \\ 0 & c_3 \end{pmatrix}.


Both vertices unmatched (αi = 0, βi= 0)

Here both s and q have to be matched to a vertex above them. This is the same thing as the last case, but flipped upside down. If we change the weights correspondingly, this means that the matrix for this case becomes

R(0, 0) =a3 d3 0 c3

 .

Conditions for commuting

It is clear that these are all invertible if and only if

a_3 ≠ 0,   c_3 ≠ 0.   (2.5)

If we now add another rhombus right after the first one with weights

w_4 = \left( \frac{1}{a_3}, -\frac{b_3}{a_3c_3}, \frac{1}{c_3}, -\frac{d_3}{a_3c_3} \right),

then we see that if we define the corresponding matrices in the same way, labeled R'(α_i, β_i), we have

R(α_i, β_i)R'(α_i, β_i) = I = R'(α_i, β_i)R(α_i, β_i)

for every state α and β. We conclude that R' = R^{-1}. So if we add a rhombus with weights w_3 followed by a rhombus with weights w_4, the extension will have the same weight as if the rhombi were not part of it. Now if we take T', the transfer matrix corresponding to the two rows with both rhombi added, we can see that it is the same as T̃ with the following calculation:

T̃_{β,α} = \sum_{γ ∈ S} T(w_2)_{β,γ_1,\ldots,γ_L} T(w_1)_{γ_1,\ldots,γ_L,α}
       = \sum_{γ ∈ S} \sum_{γ'=0}^{1} T(w_2)_{β,γ_1,\ldots,γ_i,\ldots,γ_L} δ_{γ_i,γ'} T(w_1)_{γ_1,\ldots,γ',\ldots,γ_L,α}
       = \sum_{γ ∈ S} \sum_{γ',γ''=0}^{1} T(w_2)_{β,γ_1,\ldots,γ_i,\ldots,γ_L} R_{γ_i,γ''} (R^{-1})_{γ'',γ'} T(w_1)_{γ_1,\ldots,γ',\ldots,γ_L,α}
       = T'_{β,α},

where the indices γ' and γ'' can be interpreted as the different states of the two vertices that are added to the graph. This last expression is equal to the transfer matrix of the changed graph because we take some external states α and β and sum over all the possible internal configurations, which are encoded in T(w_1), T(w_2), R and R^{-1}.
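The inverse relation can also be checked numerically. The sketch below (my own, not from the thesis; the weights are random, with a_3 and c_3 chosen nonzero as condition 2.5 requires) builds the four 2×2 matrices R(α_i, β_i) derived above, together with their counterparts for w_4, and verifies that they multiply to the identity:

```python
import random

random.seed(1)
a3, b3, c3, d3 = (random.uniform(0.5, 2.0) for _ in range(4))  # a3, c3 nonzero
a4, b4, c4, d4 = 1/a3, -b3/(a3*c3), 1/c3, -d3/(a3*c3)          # the weights w4

def R(a, b, c, d, alpha, beta):
    # 2x2 matrix of the rhombus operator W_p -> W_r in the basis (|0>, |1>);
    # only the upper-right entry depends on the case (alpha_i, beta_i)
    off = {(1, 0): 0.0, (0, 1): b + d, (1, 1): b, (0, 0): d}[(alpha, beta)]
    return [[a, off], [0.0, c]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for case in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    P = matmul(R(a3, b3, c3, d3, *case), R(a4, b4, c4, d4, *case))
    assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))
```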


We want this inverse rhombus to exist, and for the Yang-Baxter equation to hold. Combining the conditions 2.4 and 2.5 for these two things, we require

a_1b_2 + d_1a_2 ≠ 0,
b_1c_2 + c_1d_2 ≠ 0.

If these conditions hold then we can add the two rhombi into our two rows. Because the Yang-Baxter equation holds we can replace any occurrence of the hexagon on the left hand side of the equation with the one on the right hand side (for any boundary state). Applying this L times and using the periodicity of our lattice we can then show that our transfer matrices commute. If we represent rhombi corresponding to the weights w_i with the number i then this is done as follows:

[Diagram: rhombi 3 and 4 are inserted into the two rows of rhombi 1 and 2; the Yang-Baxter equation is applied L times to move rhombus 3 through the rows, swapping each adjacent pair 1, 2 into 2, 1; finally rhombi 3 and 4 cancel against each other, turning T(w_2)T(w_1) into T(w_1)T(w_2).]


In the diagrams we again use thick lines to represent that connected vertices are the same.

Originally we chose our second transfer matrix T(w_2) to have weights (d_2, a_2, b_2, c_2) to make it easier to work with the Yang-Baxter equation. If we choose the more intuitive w_2 = (a_2, b_2, c_2, d_2), our conditions for commuting transfer matrices become

a_1c_2 + d_1b_2 ≠ 0,
b_1d_2 + c_1a_2 ≠ 0,

which we can turn into the matrix equation

\begin{pmatrix} 0 & d_1 & a_1 & 0 \\ c_1 & 0 & 0 & b_1 \end{pmatrix}
\begin{pmatrix} a_2 \\ b_2 \\ c_2 \\ d_2 \end{pmatrix} ≠ 0.   (2.6)

This means the transfer matrices T(w_1) and T(w_2) commute if the vector w_2 is not in the null space of the matrix in equation 2.6. Suppose now that w_2 is in this null space. If the null space has dimension 4 then the matrix must be the zero matrix, which means w_1 = 0, and this in turn gives T(w_1) = 0. We get

[T(w_1), T(w_2)] = [0, T(w_2)] = 0,

so the transfer matrices commute. However, if the null space has dimension less than 4 (and at least 2, as the matrix has rank at most 2) then the null space is a strict subspace of R^4. Because of this there is a sequence of non-zero vectors {v_n}_{n∈N} ⊆ R^4 such that none of the v_n sit in the null space of the matrix and such that the limit of the sequence is 0. We see that

\begin{pmatrix} 0 & d_1 & a_1 & 0 \\ c_1 & 0 & 0 & b_1 \end{pmatrix} (w_2 + v_n) =
\begin{pmatrix} 0 & d_1 & a_1 & 0 \\ c_1 & 0 & 0 & b_1 \end{pmatrix} v_n ≠ 0

for each v_n in the sequence, and so T(w_1) and T(w_2 + v_n) commute. As the entries of the

transfer matrix are polynomials in the weights, the transfer matrices vary continuously with respect to a change in the weights. Because taking the commutator of two matrices is also continuous, we see that

[T(w_1), T(w_2)] = [T(w_1), T(w_2) + \lim_{n→∞} v_n] = \lim_{n→∞} [T(w_1), T(w_2 + v_n)] = \lim_{n→∞} 0 = 0,

and so the transfer matrices will also commute in this case.

We conclude that all possible transfer matrices of the kind we look at (with only four weights and periodic) commute with one another.
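As a small sanity check on the matrix form in 2.6 (my own sketch, not in the thesis), one can confirm that the two entries of the matrix-vector product are exactly the expressions required to be nonzero, for arbitrary weights:

```python
import random

random.seed(2)
a1, b1, c1, d1 = (random.uniform(-1, 1) for _ in range(4))  # hypothetical w1
a2, b2, c2, d2 = (random.uniform(-1, 1) for _ in range(4))  # hypothetical w2

M = [[0, d1, a1, 0],
     [c1, 0, 0, b1]]
w2 = [a2, b2, c2, d2]
product = [sum(M[i][j] * w2[j] for j in range(4)) for i in range(2)]

# the two entries are the commuting conditions a1*c2 + d1*b2 and b1*d2 + c1*a2
assert abs(product[0] - (a1*c2 + d1*b2)) < 1e-12
assert abs(product[1] - (b1*d2 + c1*a2)) < 1e-12
```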

2.4 Anisotropic approximation

Now we know that two transfer matrices T(w_1) and T(w_2) corresponding to different weights commute. If these matrices are both diagonalizable then, because they commute, they are simultaneously diagonalizable. We want to now look at a transfer matrix with specific weights and diagonalize this one. Then we want to see if the eigenstates that we find also diagonalize other, more general transfer matrices.

The transfer matrix we look at in this section is T = T(w) with

w = (u, 1, u, 1)

as its weights, where u ≪ 1 is infinitesimally small. Because of this we will work to first order in u. These weights only distinguish between the different directions that the edges can have. If an edge goes towards the right in increasing time, which we refer to as 'towards the right', then we give it weight 1. Otherwise it will go to the left in increasing time, or 'towards the left', and have weight u. We work to first order in u, so we only have to look at extensions from one state to another that contain at most one edge towards the left. Every extension now has either weight 1 or weight u, depending on the number of edges towards the left. We call this process the anisotropic approximation, as it heavily favors the edges in the matching all being in the same direction.

We now look at the transfer matrix T restricted to the different Vn.

No particles (n = 0)

First we consider what this transfer matrix does to the state without any particles. As T leaves V_0 invariant we know T|0⟩ = λ|0⟩, so |0⟩ is an eigenstate. Because we allow at most one edge to be towards the left, the only allowed extension is the one shown in Figure 2.15. If L = 1 then there would be another possible extension, but we disregard this case as it is very easy to diagonalize the transfer matrix if L = 1, and so it is not interesting to use our approximation. The only possible extension has weight 1^L = 1, so λ = 1.


Figure 2.15: The only possible extension (in blue) from |0⟩ to itself with at most one leftward edge.

A single particle (n = 1)

The next step is to look at the states with one particle. It is sufficient to look at the state |1⟩ with one particle at x = 1, as the system is translationally invariant. There are three possible extensions from |1⟩: one each to |1⟩, |2⟩ and |3⟩. These are shown in Figure 2.16. To find all possible extensions we use the fact that a part of the lattice without particles can only have edges matched towards the right. This happens because if there is an edge towards the left not near a particle, then the B vertex to its left must also be matched towards the left. However, we only allow one edge matched towards the left, so such extensions are excluded.


Figure 2.16: The three possible extensions (in blue) starting from |1⟩ with at most one edge towards the left.

The weights of these extensions are, in order, u, 1 and u. For the more general single particle state |x⟩ we now get the expression

T|x⟩ = u|x⟩ + |x + 1⟩ + u|x + 2⟩.

It is clear that the single particle states do not form eigenstates on their own. In an attempt to find the eigenstates we apply the Bethe ansatz, as in [6]. We assume that the eigenstates are some superposition of the states in the following way:

\sum_{x} e^{ikx} |x⟩,

where k is some complex number which can be interpreted as the momentum of the particle. Now we introduce z := e^{ik}, the exponentiated momentum. Accordingly we define

|z⟩ := \sum_{x} z^{x} |x⟩.

We want these expressions to be well-defined. It is sufficient for the coefficients of the states on either side of equation 2.1 to be the same in order for periodicity to hold. This is because given one set of coordinates for a state, repeated application of 2.1 is able to give us every other set of coordinates for that state. For our case where n = 1 these states are |x⟩ and |L + x⟩. This gives us the requirement

z^{x} = z^{x+L},

i.e. z^L = 1, which means that z is any L-th root of unity. The Bethe ansatz only proposes possible eigenstates. So the next step is checking if they are eigenstates of T:

T|z⟩ = \sum_{x} z^{x} T|x⟩ = \sum_{x} z^{x} (u|x⟩ + |x + 1⟩ + u|x + 2⟩)
     = \sum_{x} u z^{x}|x⟩ + \sum_{x} z^{x-1}|x⟩ + \sum_{x} u z^{x-2}|x⟩
     = (u + z^{-1} + u z^{-2}) \sum_{x} z^{x}|x⟩ = (u + z^{-1} + u z^{-2}) |z⟩,

which shows that |z⟩ is an eigenstate, with eigenvalue u + z^{-1} + u z^{-2}.

We also want to show that the L eigenstates we get in this way (as there are L different L-th roots of unity) form a basis for V_1. To do this we define the shift operator S : V_1 → V_1 by requiring

S|x⟩ = |x + 1⟩

and extending this linearly. If we now apply this to one of our eigenstates we get

S|z⟩ = \sum_{x} z^{x} S|x⟩ = \sum_{x} z^{x}|x + 1⟩ = \sum_{x} z^{x-1}|x⟩ = z^{-1}|z⟩.

Now we see that all the different |z⟩ are eigenstates of the shift operator at different eigenvalues, which means they must be linearly independent. Now we have L linearly independent states, and the dimension of V_1 is L, so they form a basis. We have now diagonalized T : V_1 → V_1.
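The one-particle result can be confirmed numerically. The sketch below (my own, not from the thesis; L and u are arbitrary choices) builds T on V_1 from the rule T|x⟩ = u|x⟩ + |x+1⟩ + u|x+2⟩ with indices taken mod L, and checks that every |z⟩ with z^L = 1 is an eigenvector with eigenvalue u + z^{-1} + u z^{-2}:

```python
import cmath

L, u = 8, 0.05                         # hypothetical size and small weight
# column x of T encodes T|x> = u|x> + |x+1> + u|x+2>, with indices mod L
T = [[0.0] * L for _ in range(L)]
for x in range(L):
    T[x][x] += u
    T[(x + 1) % L][x] += 1.0
    T[(x + 2) % L][x] += u

for m in range(L):
    z = cmath.exp(2j * cmath.pi * m / L)           # an L-th root of unity
    ket = [z ** x for x in range(L)]               # |z> = sum_x z^x |x>
    lam = u + z ** -1 + u * z ** -2
    Tket = [sum(T[r][c] * ket[c] for c in range(L)) for r in range(L)]
    assert all(abs(Tket[r] - lam * ket[r]) < 1e-12 for r in range(L))
```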

Two particles (n = 2)

Now we look at states that contain two particles. We want to distinguish two cases: The case where there is at least one site in between them, and the case where the particles occupy adjacent lattice sites.

First we choose the particles to be separated by an empty site. Now because of translational invariance we can take the state to be |1, i⟩ with 3 ≤ i ≤ L − 1. Because of the limited number of possible edges towards the left that can be in the matching (as most of them require another edge towards the left to be in the matching), it is possible to look at all possible extensions from any such state. It turns out that there are five different extensions. These are shown in Figure 2.17 for the case where i = 3. For larger i the extensions remain mostly the same, although there will be additional edges towards the right in between the particles.



Figure 2.17: The five possible extensions (in blue) starting from |1, 3⟩ with at most one edge towards the left.

If we take the weight of all these extensions then we get, for the more general state of two particles separated by an empty site, that

T|x_1, x_2⟩ = u|x_1, x_2 + 1⟩ + u|x_1 + 1, x_2⟩ + |x_1 + 1, x_2 + 1⟩ + u|x_1 + 1, x_2 + 2⟩ + u|x_1 + 2, x_2 + 1⟩.

The particles 'move' independently from each other: these are the exact results we would get if the two particles both moved like in the n = 1 case, followed by removing all terms that are quadratic in u.

Next we take the state with two adjacent particles. Again the possible extensions are extremely limited by the possible edges towards the left that can be in the matching. Because the particles are adjacent there are only three possibilities. We get

T|x_1, x_2⟩ = u|x_1, x_2 + 1⟩ + |x_1 + 1, x_2 + 1⟩ + u|x_1 + 1, x_2 + 2⟩.

If there are two indistinguishable particles then the Bethe ansatz suggests looking at states of the form

\sum_{x_1 < x_2} \left( \sum_{σ ∈ S_2} A_σ e^{i k_{σ(1)} x_1} e^{i k_{σ(2)} x_2} \right) |x_1, x_2⟩.

Here we use the notation where S_n is the symmetric group on n elements. Because S_2 contains only 2 elements, and again introducing exponentiated momenta z_j := e^{i k_j}, we get the expression

|z_1, z_2⟩ := \sum_{x_1 < x_2} \left( A_{id} z_1^{x_1} z_2^{x_2} + A_{(12)} z_2^{x_1} z_1^{x_2} \right) |x_1, x_2⟩.

Before we worry about for which z_i this expression is well-defined, we want to make sure that it is actually an eigenstate. For this we will forget about the periodicity of the lattice for now. To make notation easier we will also define v(x_1, x_2) to be the coefficient of |z_1, z_2⟩ corresponding to the state |x_1, x_2⟩.

Now we take some state |x_1, x_2⟩ where the particles are not adjacent. We also define an inner product on the vector space V by simply saying the states S form an orthonormal basis. Now we can calculate the component of T|z_1, z_2⟩ corresponding to our state |x_1, x_2⟩, by using the formulas we derived above:

⟨x_1, x_2| T |z_1, z_2⟩ = \sum_{1 ≤ y_1 < y_2 ≤ L} v(y_1, y_2) ⟨x_1, x_2| T |y_1, y_2⟩
  = u(v(x_1, x_2 - 1) + v(x_1 - 1, x_2) + v(x_1 - 1, x_2 - 2) + v(x_1 - 2, x_2 - 1)) + v(x_1 - 1, x_2 - 1)
  = (u(z_1^{-1} + z_2^{-1} + z_1^{-1} z_2^{-2} + z_1^{-2} z_2^{-1}) + z_1^{-1} z_2^{-1}) v(x_1, x_2)
  =: λ v(x_1, x_2).

Apparently the coefficient belonging to |x_1, x_2⟩ in T|z_1, z_2⟩ is λ times the coefficient in |z_1, z_2⟩. This looks a lot like an eigenstate. For this we would want the exact same equation to hold in the case where the particles are adjacent. So we now look at the state |x, x + 1⟩. Then

⟨x, x + 1| T |z_1, z_2⟩ = \sum_{1 ≤ y_1 < y_2 ≤ L} v(y_1, y_2) ⟨x, x + 1| T |y_1, y_2⟩
  = u(v(x - 2, x) + v(x - 1, x + 1)) + v(x - 1, x).

We want this to be λ v(x, x + 1). So we calculate the difference between these two things and set it to zero:

0 = λ v(x, x + 1) - u(v(x - 2, x) + v(x - 1, x + 1)) - v(x - 1, x)
  = u A_{id} (z_1^{x} z_2^{x} + z_1^{x-1} z_2^{x-1}) + u A_{(12)} (z_1^{x} z_2^{x} + z_1^{x-1} z_2^{x-1}),

as most terms cancel out. These terms are caused by the fact that a state |x_1, x_2⟩ where the particles are not adjacent can be extended from the states |x_1, x_2 - 1⟩ and |x_1 - 1, x_2 - 2⟩. However, in this case these states do not exist, as two particles would be in the same lattice site.

This equation will hold if we take A_{(12)} = -A_{id}. Under these conditions the state |z_1, z_2⟩ becomes an eigenstate, as we now get T|z_1, z_2⟩ = λ|z_1, z_2⟩. With this condition we can see that if z_1 = z_2, the state vanishes. To make notation easy we also choose A_{id} = 1, so A_{(12)} = -1.

Earlier on we decided to forget about the periodicity of the lattice, so we look at this now. From equation 2.1 we see that in this case the coefficients v(x_1, x_2) and v(x_2, x_1 + L) must be the same for the formula to satisfy the periodicity. So

v(x_1, x_2) = v(x_2, x_1 + L)
z_1^{x_1} z_2^{x_2} - z_2^{x_1} z_1^{x_2} = z_1^{x_2} z_2^{x_1 + L} - z_2^{x_2} z_1^{x_1 + L}
z_1^{x_1} z_2^{x_2} (1 + z_1^{L}) = z_2^{x_1} z_1^{x_2} (1 + z_2^{L})

must hold for all possible x_1, x_2. One way to satisfy this condition is to choose z_1 = z_2, but then |z_1, z_2⟩ = 0. Instead we take both z_i to be L-th roots of -1, as then both sides of the equation are 0.
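The following small sketch (not from the thesis; L and the particular roots are arbitrary choices) confirms that with both z_i an L-th root of −1 the coefficients respect the periodicity identity v(x_1, x_2) = v(x_2, x_1 + L):

```python
import cmath

L = 6
z1 = cmath.exp(1j * cmath.pi * 1 / L)   # z1^L = -1
z2 = cmath.exp(1j * cmath.pi * 5 / L)   # a different L-th root of -1

def v(x1, x2):
    # antisymmetric Bethe coefficient with A_id = 1, A_(12) = -1
    return z1 ** x1 * z2 ** x2 - z2 ** x1 * z1 ** x2

for x1, x2 in [(1, 3), (2, 5), (0, 4)]:
    assert abs(v(x1, x2) - v(x2, x1 + L)) < 1e-12
```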

We now have eigenstates in V_2 with eigenvalues u(z_1^{-1} + z_2^{-1} + z_1^{-1} z_2^{-2} + z_1^{-2} z_2^{-1}) + z_1^{-1} z_2^{-1}. It is worth noting that this is the product of two eigenvalues for the single particle system, up to first order in u, as

(u + z_1^{-1} + u z_1^{-2})(u + z_2^{-1} + u z_2^{-2}) ≈ z_1^{-1} z_2^{-1} + u z_1^{-1} + u z_2^{-1} + u z_1^{-1} z_2^{-2} + u z_1^{-2} z_2^{-1}.
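Numerically (a sketch with arbitrary parameters, not from the thesis) the difference between the two-particle eigenvalue and the product of one-particle eigenvalues is indeed of order u²:

```python
import cmath

L, u = 8, 1e-4
z1 = cmath.exp(1j * cmath.pi / L)     # L-th roots of -1
z2 = cmath.exp(5j * cmath.pi / L)

lam = u * (z1**-1 + z2**-1 + z1**-1 * z2**-2 + z1**-2 * z2**-1) + z1**-1 * z2**-1
product = (u + z1**-1 + u * z1**-2) * (u + z2**-1 + u * z2**-2)
# the dropped terms are quadratic in u, so the difference is O(u^2)
assert abs(product - lam) < 100 * u ** 2
```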

Because we have to choose two different L-th roots of -1 and there are L such roots, we get \binom{L}{2} eigenstates in this way. This is the same as the dimension of V_2, so if we could show that they are all linearly independent then they would form a basis.

Similarly to the one particle case we can define a shift operator S : V_2 → V_2 by demanding

S|x_1, x_2⟩ = |x_1 + 1, x_2 + 1⟩

and then extending linearly. We can then see that

S|z_1, z_2⟩ = z_1^{-1} z_2^{-1} |z_1, z_2⟩,

so the eigenstates are also eigenstates of the shift operator. This tells us that there are at least L linearly independent eigenstates of the type that we are looking at, as there are L different products of L-th roots of -1 (as these form the different L-th roots of 1). In the end I was not successful in showing that they actually form a basis for the entire space V_2.

More particles (n ≥ 3)

When there are three or more particles the solutions are still very similar to the solution for two particles. Again we distinguish between two different kinds of states: those where no two particles are adjacent, and those where there are some adjacent particles.

The Bethe ansatz suggests the following eigenstates for n particles:

\sum_{x_1 < \cdots < x_n} \left( \sum_{σ ∈ S_n} A_σ \prod_{j=1}^{n} e^{i k_{σ(j)} x_j} \right) |x_1, \ldots, x_n⟩.

Introducing the exponentiated momenta zj := eikj will again make our notation easier.

We define |z1, . . . , zni := X |x1,...,xni X σ∈Sn Aσ n Y i=1 zxi σ(i) ! |x1, . . . , xni .
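Anticipating the choice A_σ = ε(σ) made later in this section, these coefficients can be computed directly; the sketch below (my own helper names, not code from the thesis) sums over permutations, which for this choice of A_σ is just a determinant:

```python
import itertools
import numpy as np

# Sketch of the n-particle Bethe coefficient with A_sigma = sign(sigma):
# v(x1,...,xn) = sum_sigma sign(sigma) * prod_i z_{sigma(i)}^{x_i},
# which equals the determinant of the matrix M[i][j] = z_j^{x_i}.

def sign(perm):
    """Sign of a permutation given as a tuple of indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def bethe_coeff(zs, xs):
    """Coefficient v(x1,...,xn) of |x1,...,xn> in |z1,...,zn>."""
    n = len(zs)
    return sum(sign(p) * np.prod([zs[p[i]] ** xs[i] for i in range(n)])
               for p in itertools.permutations(range(n)))
```

As a sanity check, the permutation sum agrees with `numpy.linalg.det` applied to the matrix of powers.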

We define v(x_1, ..., x_n) to be the component of this state corresponding to the state |x_1, ..., x_n⟩. If we look at some state without adjacent particles |x_1, ..., x_n⟩ we find that

T |x_1, ..., x_n⟩ = |x_1 + 1, ..., x_n + 1⟩ + u Σ_{i=1}^n ( |x_1 + 1, ..., x_i, ..., x_n + 1⟩ + |x_1 + 1, ..., x_i + 2, ..., x_n + 1⟩ ).   (2.7)

This shows that the particles still move independently (except we drop all the terms of higher order in u).
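As an illustration (my own sketch, not code from the thesis), the action of T in the anisotropic approximation can be implemented literally from equation 2.7 on positions taken modulo L, dropping any term that would put two particles on the same site:

```python
# Illustrative implementation of the transfer matrix action (2.7) to first
# order in u: every particle moves one step; in addition, for each i there is
# one term where particle i stays put and one where it jumps two steps, and
# any term that would place two particles on the same lattice site is dropped.

def apply_T(xs, L, u):
    """Return {sorted configuration: amplitude} for T|x1,...,xn>, positions mod L."""
    n = len(xs)
    out = {}

    def add(positions, amp):
        pos = tuple(sorted(p % L for p in positions))
        if len(set(pos)) == n:                          # drop colliding terms
            out[pos] = out.get(pos, 0) + amp

    add([x + 1 for x in xs], 1)                         # all particles move one step
    for i in range(n):
        stay = [x + 1 for x in xs]; stay[i] = xs[i]     # particle i stays
        jump = [x + 1 for x in xs]; jump[i] = xs[i] + 2  # particle i jumps by 2
        add(stay, u)
        add(jump, u)
    return out
```

For a state with no adjacent particles this reproduces all 2n + 1 terms of equation 2.7; for adjacent particles the colliding terms disappear, e.g. `apply_T((0, 1, 5), 8, 0.1)` has only five entries instead of seven.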

If a state has two particles adjacent, say at locations x_{k−1} and x_k = x_{k−1} + 1, then we get the same result as above, except that the terms involving particles k − 1 and k,

u ( |x_1 + 1, ..., x_{k−1} + 1, x_k, ..., x_n + 1⟩ + |x_1 + 1, ..., x_{k−1} + 2, x_k + 1, ..., x_n + 1⟩ ),

drop from the sum. This is not so strange, as these transitions are impossible: since x_k = x_{k−1} + 1, both of these states would have two particles on the same lattice site. If a state has multiple pairs of adjacent particles, then the corresponding terms all drop from the sum according to this rule, and nothing else changes. This means that multiple particles being adjacent, which looks like a many-body interaction, can be interpreted as a sum of two-body interactions.

We take a state |x_1, ..., x_n⟩ with no adjacent particles. We can now, in much the same way as for two particles, look at the component for this state in our suggested eigenstate. To do this we use equation 2.7 and the rule we described about adjacent particles:

⟨x_1, ..., x_n| T |z_1, ..., z_n⟩ = v(x_1 − 1, ..., x_n − 1) + u Σ_{i=1}^n ( v(x_1 − 1, ..., x_i, ..., x_n − 1) + v(x_1 − 1, ..., x_i − 2, ..., x_n − 1) )
= ( Π_{i=1}^n z_i^{-1} + u Σ_{i=1}^n Π_{j≠i} z_j^{-1} + u Σ_{i=1}^n z_i^{-2} Π_{j≠i} z_j^{-1} ) v(x_1, ..., x_n) =: λ v(x_1, ..., x_n).

So the possible eigenvalue that |z_1, ..., z_n⟩ has is

λ = Π_{i=1}^n z_i^{-1} + u Σ_{i=1}^n Π_{j≠i} z_j^{-1} + u Σ_{i=1}^n z_i^{-2} Π_{j≠i} z_j^{-1}.
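Transcribed directly as a small helper (the function name is my own), the eigenvalue reads:

```python
# Direct transcription of the eigenvalue formula above (my own helper, not
# from the thesis): lambda = prod_i z_i^{-1}
#                          + u * sum_i prod_{j != i} z_j^{-1}
#                          + u * sum_i z_i^{-2} prod_{j != i} z_j^{-1}.

def eigenvalue(zs, u):
    prod_inv = 1
    for z in zs:
        prod_inv *= 1 / z                 # product of all z_i^{-1}
    lam = prod_inv
    for z in zs:
        others = prod_inv * z             # product of z_j^{-1} over j != i
        lam += u * others                 # term where particle i stays
        lam += u * others / z ** 2        # term where particle i jumps by 2
    return lam
```

For n = 1 this reduces to the single-particle eigenvalue u + z^{-1} + u z^{-2}.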

Just like before we now want to use the A_σ to make sure that the components of T |z_1, ..., z_n⟩ and of λ |z_1, ..., z_n⟩ also agree on the states with adjacent particles. For such a state |x_1, ..., x_n⟩ we look at the difference between these two things:

λ v(x_1, ..., x_n) − ⟨x_1, ..., x_n| T |z_1, ..., z_n⟩
= u Σ_{k: x_{k+1} = x_k + 1} Σ_{σ∈S_n} ( A_σ z_{σ(k)} z_{σ(k+1)} Π_{i=1}^n z_{σ(i)}^{x_i − 1} + A_σ Π_{i=1}^n z_{σ(i)}^{x_i − 1} )
= u Σ_{k: x_{k+1} = x_k + 1} Σ_{σ∈S_n} A_σ ( z_{σ(k)} z_{σ(k+1)} + 1 ) Π_{i=1}^n z_{σ(i)}^{x_i − 1}.

So we get one term for every pair of adjacent particles. We also count k = 1 in the sum if x_1 = x_n + 1 − L, because of periodicity. We will now look closer at one of these terms, for adjacent particles at x_k and x_k + 1.

For n ≥ 2, we know that the alternating subgroup A_n ⊆ S_n has index 2, and so we can write S_n = A_n ⊔ A_n (k k+1), because the cycle (k k+1) does not sit in A_n. It is also clear that z_{σ(k)} z_{σ(k+1)} = z_{σ(k k+1)(k+1)} z_{σ(k k+1)(k)}. If we now take A_σ = ε(σ) to be the sign of the permutation, then the term for any k becomes

Σ_{σ∈S_n} ε(σ) ( z_{σ(k)} z_{σ(k+1)} + 1 ) Π_{i=1}^n z_{σ(i)}^{x_i − 1}
= Σ_{σ∈A_n} ( z_{σ(k)} z_{σ(k+1)} + 1 ) Π_{i=1}^n z_{σ(i)}^{x_i − 1} − Σ_{σ∈A_n (k k+1)} ( z_{σ(k)} z_{σ(k+1)} + 1 ) Π_{i=1}^n z_{σ(i)}^{x_i − 1}
= Σ_{σ∈A_n} ( z_{σ(k)} z_{σ(k+1)} − z_{σ(k k+1)(k)} z_{σ(k k+1)(k+1)} ) Π_{i=1}^n z_{σ(i)}^{x_i − 1}
= Σ_{σ∈A_n} 0 · Π_{i=1}^n z_{σ(i)}^{x_i − 1} = 0.

So under the condition that A_σ = ε(σ) we will have eigenstates |z_1, ..., z_n⟩.

If we have a state where two exponentiated momenta z_j and z_k are the same, then using this condition we can conclude that it must be zero: in such a case we can use the permutation (j k) to split all the coefficients into

Σ_{σ∈A_n} Π_{i=1}^n z_{σ(i)}^{x_i} − Σ_{σ∈(j k)A_n} Π_{i=1}^n z_{σ(i)}^{x_i} = Σ_{σ∈A_n} ( Π_{i=1}^n z_{σ(i)}^{x_i} − Π_{i=1}^n z_{(j k)σ(i)}^{x_i} ) = 0.

The difference between these products is zero because we know z_j = z_k. As all its coefficients are zero, such a state will vanish.
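With A_σ = ε(σ) each coefficient is a determinant with one column per z_i, so two equal momenta give two equal columns; a quick numerical illustration (my own, with arbitrary sample values):

```python
import itertools
import numpy as np

# Illustration (my own, hypothetical sample values): if two exponentiated
# momenta coincide, every coefficient of |z1,...,zn> with A_sigma = sign
# vanishes -- it is a determinant with two equal columns.

def coeff(zs, xs):
    return np.linalg.det(np.array([[z ** x for z in zs] for x in xs]))

zs = [0.7, 1.3, 1.3]          # z2 == z3
for xs in itertools.combinations(range(6), 3):
    assert abs(coeff(zs, xs)) < 1e-9
```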

Periodicity now demands (see equation 2.1) that the coefficients in |z_1, ..., z_n⟩ satisfy v(x_1, ..., x_n) = v(x_2, ..., x_n, x_1 + L). Write τ := (1 2 ... n) for the cyclic permutation of all indices. Using this we obtain

Σ_{σ∈S_n} ε(σ) Π_{i=1}^n z_{σ(i)}^{x_i} = Σ_{σ∈S_n} ε(σ) z_{σ(n)}^L Π_{i=1}^n z_{σ(i)}^{x_{τ(i)}}
= Σ_{ρ∈S_n} ε(ρτ) z_{ρτ(n)}^L Π_{i=1}^n z_{ρτ(i)}^{x_{τ(i)}}
= Σ_{ρ∈S_n} ε(ρ) ε(τ) z_{ρτ(n)}^L Π_{i=1}^n z_{ρ(i)}^{x_i}.

Here we substitute ρ = στ^{-1} and we use the fact that if we take a product over all τ(i) then this is the same as a product over all indices i. As τ is a cycle of length n it has sign (−1)^{n−1}. Subtracting the right-hand side from the left-hand side gives

Σ_{σ∈S_n} ( 1 + (−1)^n z_{στ(n)}^L ) ε(σ) Π_{i=1}^n z_{σ(i)}^{x_i} = 0.

We see that this equality holds if we take all our z_i to be L-th roots of (−1)^{n+1}, because then the term in parentheses becomes 1 + (−1)^n z_{στ(n)}^L = 1 + (−1)^n (−1)^{n+1} = 1 − 1 = 0.
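All the ingredients above can be combined into one numerical sanity check (my own construction, with hypothetical names; not part of the thesis): for a small L, n = 3 and distinct L-th roots of (−1)^{n+1}, the Bethe vector should be an exact eigenvector of the first-order transfer matrix.

```python
import cmath
import itertools
import numpy as np

# End-to-end check: build the Bethe vector with A_sigma = sign(sigma) (as a
# determinant), apply the transfer matrix of equation 2.7 with the adjacency
# rule, and confirm T|z> = lambda |z>.

L, n, u = 6, 3, 0.3
basis = list(itertools.combinations(range(L), n))   # sorted n-particle configs
index = {c: i for i, c in enumerate(basis)}

def apply_T_vec(vec):
    """Apply the first-order transfer matrix to a coefficient vector on V_n."""
    out = np.zeros(len(basis), dtype=complex)
    for c, amp in zip(basis, vec):
        def add(pos, a):
            pos = tuple(sorted(p % L for p in pos))
            if len(set(pos)) == n:                  # drop colliding terms
                out[index[pos]] += a
        add([x + 1 for x in c], amp)                # every particle steps once
        for i in range(n):
            stay = [x + 1 for x in c]; stay[i] = c[i]
            jump = [x + 1 for x in c]; jump[i] = c[i] + 2
            add(stay, u * amp)
            add(jump, u * amp)
    return out

# n = 3, so (-1)^(n+1) = 1 and the z_i are distinct L-th roots of unity
zs = [cmath.exp(2j * cmath.pi * m / L) for m in (0, 1, 2)]

def coeff(xs):
    """v(x_1,...,x_n) = sum_sigma sign(sigma) prod_i z_sigma(i)^x_i as a determinant."""
    return np.linalg.det(np.array([[z ** x for z in zs] for x in xs]))

vec = np.array([coeff(c) for c in basis])
lam = np.prod([1 / z for z in zs]) * (1 + u * sum(z + 1 / z for z in zs))
assert np.allclose(apply_T_vec(vec), lam * vec)
```

Here lambda is the eigenvalue formula from the previous subsection, rewritten as (Π z_i^{-1})(1 + u Σ_i (z_i + z_i^{-1})).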

One way to find independent eigenstates is to define shift operators for arbitrary numbers of particles, and then to look at eigenstates with different shift eigenvalues. If we manage to show that all the eigenvectors are linearly independent, then we have diagonalized the transfer matrix: there are L!/(n!(L − n)!) different eigenvectors in V_n, and this is precisely the dimension of that space. So all the eigenvectors for all n together would diagonalize the entire transfer matrix T in the anisotropic approximation.

Showing that all these eigenstates are linearly independent eventually fell outside the scope of my project due to the complexity involved and the time available.
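As a quick counting aside (my own check, not from the thesis): the candidate eigenvectors number L!/(n!(L − n)!) in each sector V_n, and summed over all particle numbers n these binomial coefficients add up to 2^L, the dimension of the full space on which the transfer matrix acts.

```python
from math import comb

# Counting check: sum over n of binom(L, n) equals 2^L, the dimension of the
# full space, consistent with the claim that the eigenvectors of all sectors
# together would diagonalize the entire transfer matrix.

L = 10
total = sum(comb(L, n) for n in range(L + 1))
print(total == 2 ** L)   # True
```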

2.5 General eigenstates

In this section we shift our attention to a more general transfer matrix with weights

w = ( u  v )
    ( u  v ),    u, v ∈ R,

which means that we say that matched edges pointing in the same direction have the same weight. Those edges that go towards the left have weight u and those that go towards the right have weight v. We want to verify whether the eigenstates that we found in the anisotropic approximation are also eigenstates for transfer matrices of this type. Just like in the anisotropic approximation it helps to split this into several cases for different numbers of particles.

No particles (n = 0)

For the space generated by the states with no particles, V0, there was only one eigenstate
