Percolation and the Ising model in the hyperbolic plane


Track: Theoretical Physics

MASTER THESIS

Percolation and the Ising model

in the hyperbolic plane

by

Daan Mulder

10275037

July 2017

60 ECTS

Supervisor:

Prof. dr. Bernard Nienhuis

Second examiner:

Dr. Edan Lerner


We investigate percolation and the Ising model on the hyperbolic plane by means of Monte Carlo simulations and the Corner Transfer Matrix Renormalization Group (CTMRG), respectively.

Percolation on Euclidean lattices has a single critical point $p_c$. At the critical point, the mean cluster size scales with the system size $N$ as a power law. For example, for a two-dimensional lattice, the mean cluster size scales as $N^{43/48}$. The exponent does not depend on the type of lattice (hexagonal, square, triangular), only on the dimension of the lattice. This is an example of universality. Other quantities have the same scaling behaviour, albeit with different critical exponents. These exponents are related through scaling laws. In the hyperbolic plane, there are two dual critical points, $p_c$ and $p_u$. In between, the exponent of the mean cluster size varies continuously from 0 to 1. We have tried to find other quantities that scale as a power of $N$, in order to establish scaling relations in the hyperbolic plane. The results are not as similar to the Euclidean case as expected.

We review and numerically evaluate a framework presented in [47]. It gives an explanation for the occurrence of two transition points, based on the exponential growth rate of the hyperbolic plane.

Since percolation can be seen as the limit case of a class of models called Potts models, it seems natural to extend the framework of [47] to Potts models in general. We have studied the 2-state Potts model, which corresponds to the Ising model, by using the CTMRG algorithm for hyperbolic lattices, as presented in [60]. CTMRG is a numerical method that approximates the partition sum and expectation values by iteratively extending the lattice, while keeping the dimension of the configuration space fixed. This is done by projecting on the eigenvectors of the Corner Transfer Matrix that correspond to the largest eigenvalues, i.e. the states with largest probability. We introduce this method in the most elementary way we could conceive of. The critical temperature $T_c$ is determined by fixing the boundary in one of the possible states, and calculating at which temperature this starts influencing a spin in the bulk. Using the analysis of [47], we were able to determine a second critical temperature $T_u$, which is found by fixing only a single spin on the boundary. Using the duality relation for temperatures in the Potts model, we verify that the value of $T_u$ is indeed dual to $T_c$.


Contents

Introduction
1 The hyperbolic plane
  1.1 Fundamental properties
  1.2 Metrics
  1.3 Regular tilings
  1.4 Lattice size
  1.5 Construction in C++
2 Graph topology
3 Introduction to percolation theory
  3.1 Mathematical definitions
  3.2 Duality
  3.3 Critical exponents
  3.4 Physics approaches
  3.5 Percolation on the Cayley tree
4 Phase transitions, renormalization and scaling
  4.1 Renormalization
  4.2 Scaling
5 Fundamental mechanism of the double phase transition
  5.1 Theoretical framework
  5.2 Numerical results
6 More critical exponents in the hyperbolic plane
  6.1 Relation between ψ and ψ̄
  6.2 Perimeter
  6.3 Paths
  6.4 Radius of gyration
7 Percolation and the Potts model
  7.1 The q-state Potts model
  7.2 Duality in the Potts model
8 Hyperbolic lattices as tensor networks
  8.1 Building blocks of tensor networks
  8.2 Constructing a hyperbolic tensor network
  8.3 First phase transition for the 2-state Potts model
  8.4 A second phase transition in the 2-state Potts model
  8.5 Duality of the phase transitions


Introduction

In this thesis, we investigate percolation on the hyperbolic plane. It turns out that there are both intriguing differences and similarities between percolation on the hyperbolic plane and percolation in Euclidean spaces. A model for percolation can be obtained as the limit of the q-state Potts model for q → 1. The Ising model is another example, with q = 2. We will compare the behaviour of the percolation model to that of the Ising model, in order to better understand the latter.

The problem of percolation is as easy to describe as it is rich in nature. A typical question is stated as follows. Given a lattice, we consider an edge as open with a chance p, and closed with a chance 1 − p. Clusters are then formed by the vertices that are connected by open edges. The question we would like to answer is the following: if we pick a vertex at random, how big is the cluster it is in? How does the cluster size depend on p? And how does it scale if we change the system size N? As an illustration, we have drawn this set-up for a square lattice with p = 0.5 in figure 1: only open bonds are drawn. The cluster around a random point is drawn in red. There are other forms of percolation, which we only touch on in passing in this thesis. We introduce percolation in greater detail in Chapter 3.

It is no coincidence that this problem has attracted the attention not only of mathematicians, but of statistical physicists as well. In most of the cases where percolation is studied, there is a sharp transition from a phase where the system consists of small clusters, which have a typical size not depending on the size of the system, to another phase, where a single cluster emerges that grows linearly with the system size. In the latter case, the system is said to percolate. The point where this transition happens is called the critical point and can be studied by means of the so-called Renormalization Group (RG), much used in statistical physics. This is not a group in the mathematical sense of the word, but consists of various approaches to gain insight into systems that show scale invariance. This means that the partition sum remains invariant under renormalization


Figure 1: Percolation on the square lattice with p = 0.5. A randomly picked cluster is drawn in red.

Figure 2: A drawing of the hyperbolic plane by M.C. Escher. The properties of the hyperbolic plane become clear: all angels and demons are actually of the same size.

transformations, which zoom out on the system by regarding multiple sites as one. At the critical point, physical quantities like the mean cluster size $s_0$ or the correlation length $\xi$ become power laws of the size of the system, e.g. $s_0 \sim N^{43/48}$. This type of relation immediately follows from the existence of renormalization transformations, but the values of the exponents (e.g. 43/48) are model-specific. The multiple exponents of a model are always related through so-called scaling relations, which are universal. We will review this theory in Chapter 4.

If we consider percolation on the hyperbolic plane, the description given above requires alteration. The hyperbolic plane is a space of constant negative curvature, famously drawn by M.C. Escher (figure 2). The curvature radically changes the geometry: if we look at figure 2, the number of demons that can be reached within k steps (measured in number of demons) from the center grows exponentially with k, instead of as $k^2$. We derive these, and more, properties in Chapter 1.

When we perform percolation on these lattices, we do not find a single critical point. Instead, Benjamini and Schramm [10] prove that there are two distinct critical points, called $p_c$ and $p_u$: for $p \in (0, p_c]$ there are no infinite clusters, for $p \in (p_c, p_u)$ there are infinitely many infinite clusters, and for $p \in [p_u, 1]$ the infinite cluster is unique. Müller, Broman and Tykesson have studied this problem further and found that the exponent of the mean cluster size is a continuous function that ranges from 0 at $p_c$ to 1 at $p_u$ (Müller, personal communication, 24th November 2016). Instead of a single critical point, this leaves us with a critical phase, which has $p_c$ and $p_u$ as endpoints.

From the point of view of renormalization theory, this is an extremely intriguing find. On the one hand, the existence of a critical exponent is a scaling law, which we normally find from scale invariance. On the other hand, the hyperbolic plane is itself not scale invariant: there is a length scale associated to the exponential growth of the surface. This means we cannot simply use the methods of renormalization. This raises several questions: are the exponents still related through scaling relations? Do the exponents and scaling relations even exist? And if so, where do the scaling relations stem from, if not from the renormalization transformation? We will give some attempts to answer these questions in Chapter 6.

The interest in percolation on hyperbolic backgrounds is further sparked by the results of network theorists. Hyperbolic lattices share a property with complex networks (e.g. the internet, infrastructure, social networks): the shortest path between two points grows as the logarithm of the system size. A breakthrough in network theory is the realization that complex networks can be described geometrically using an underlying 'hidden' hyperbolic metric [35]. Using this insight, various complex systems such as trade markets, the internet and cell metabolism have already been studied [24], [14], [1]. Research on interacting systems on complex networks is a likely candidate to answer important questions about the phenomena that emerge in these structures [11]. Percolation on hyperbolic lattices could be a benchmark for further study of these systems.

Some research has already been done on percolation in the hyperbolic plane. As mentioned earlier, the mathematicians Benjamini and Schramm rigorously proved some properties of percolation in the hyperbolic plane, notably the existence of three distinguishable phases and the relation between the critical points of a system and its dual lattice ($p_c + \bar p_u = 1$). Duality is explained in Chapter 3. Given this duality relation, it is rather strange that, according to Baek, Minnhagen and Kim [3], it does not hold. Based on Monte Carlo simulations, they instead propose an inequality, $p_c + \bar p_u < 1$, for hyperbolic lattices [4]. Nogawa and Hasegawa [46] criticized their method of determining $p_u$ and subsequently used simulations to argue in favour of Benjamini and Schramm's equality. We will examine both lines of argument in Chapter 3 and conclude that Nogawa and Hasegawa are indeed right.


Nogawa and Hasegawa have also presented a theoretical framework to explain the double phase transition, which we will go over and endorse with our own Monte Carlo simulations in Chapter 5. We will argue that this framework can be used for a more general class of statistical physical models on the hyperbolic plane, namely q-state Potts models. Percolation can be seen as the limit q → 1 of these models. The Ising model, which corresponds to q = 2, is another example. As in percolation, a duality relation holds in general Potts models. The relation between percolation and Potts models, and this duality relation, will be the subject of Chapter 7.

Research on Potts models on the hyperbolic plane has already been performed, most notably by Rietman, Oitmaa and Nienhuis [52] and Ueda, Krcmar, Gendiar and Nishino [60]. The former found that the critical temperature $T_c$ of the Potts model on a self-dual lattice does not agree with the self-dual value. This means there must be another critical point $T_u$. However, the authors were not able to find it. This also holds for the authors of [60], who do not seem to be aware of this fact. Combining the numerical method of [60], called the Corner Transfer Matrix Renormalization Group (CTMRG), with the analysis of [47], we are able to find a second critical point $T_u$, dual to $T_c$. In Chapter 8, we will explain both the numerical method and the way we found the second critical point.

I want to thank my supervisor, prof. dr. Bernard Nienhuis for our lively weekly discussion sessions. I have learned very much from his fundamental and sincere approach to problems.


Chapter 1

The hyperbolic plane

This chapter reviews the main properties of the hyperbolic plane. The hyperbolic plane is the two-dimensional space of constant negative curvature. An easy way of familiarizing oneself with the concept of curvature is by thinking of a sheet of paper with a cut in it. If we make the two sides of this cut overlap, the paper will fold inward. This amounts to positive curvature. If we were able to do the opposite, adding more paper so that the two sides move further apart, the paper would fold outwards. This is negative curvature.

1.1 Fundamental properties

A natural way for physicists to encounter hyperbolic coordinates is in special relativity: a hyperboloid describes all the points in space-time where an event can end up if the observer at the origin changes his or her velocity. Also, an object travelling with constant proper acceleration moves on a hyperbolic curve in Minkowski space-time. The curved plane we get by rotating the hyperbolic curve around the time (in our case, $z$) axis obeys the following constraint:

$$x^2 + y^2 - z^2 = -R^2, \qquad (1.1)$$

where R is the radius of the hyperbolic plane, and the metric is

$$ds^2 = dx^2 + dy^2 - dz^2. \qquad (1.2)$$

Unlike a positively curved sphere, the hyperbolic plane is not finite. Instead, the radius plays a role like the focal length of a parabola and serves as a measure of the curvature. The smaller the value of $R$, the "more curved" the space is.


To gain insight into the properties of the hyperbolic plane, we parametrize the movement on the hyperboloid along $y = 0$ as $p(t) = (x(t), z(t))$. Differentiating the constraint, we find $x(t)x'(t) - z(t)z'(t) = 0$, where the prime denotes differentiation with respect to time. This is nothing else than the Minkowskian inner product between $p(t)$ and $p'(t)$, the velocity, from which we derive that they must be orthogonal: $(x'(t), z'(t)) = k(t)(z(t), x(t))$, where $k(t)$ is some time-dependent prefactor. Notice that $x(t)$ and $z(t)$ are swapped on the right-hand side. To determine the length along our arc, we impose that the speed of $p(t)$ is one, which implies $|p'(t)| = |k(t)|\sqrt{z(t)^2 - x(t)^2} = |k(t)|R = 1$, which means $|k(t)| = 1/R$.

We can solve $(z'(t), x'(t)) = (x(t), z(t))/R$, subject to the constraint (1.1), as $x(t) = R\sinh(t/R)$ and $z(t) = R\cosh(t/R)$, from which we find that
$$x = R\sinh(t/R)\cos(\phi), \qquad y = R\sinh(t/R)\sin(\phi), \qquad z = R\cosh(t/R) \qquad (1.3)$$
is the positive solution for all three coordinates [17].
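As a quick consistency check, the radial curve with $\phi$ fixed satisfies both the constraint (1.1) and the unit-speed condition:
$$x^2 + y^2 - z^2 = R^2\sinh^2(t/R) - R^2\cosh^2(t/R) = -R^2, \qquad \dot{x}^2 + \dot{y}^2 - \dot{z}^2 = \cosh^2(t/R) - \sinh^2(t/R) = 1.$$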

From this analysis, we learn two important properties of the hyperbolic plane. If we move radially from the origin, we see $t = r$ is the actually travelled distance, since our parametrization has unit speed. From equation (1.3), we thus learn that circles of radius $r$ do not have a circumference of $2\pi r$, as in the Euclidean case, but of $2\pi R \sinh(r/R) \sim e^{r/R}$. Instead of as a polynomial, the space grows exponentially. By integrating over circumferences, we see that the area of a circle on the hyperbolic plane is equal to $2\pi R^2(\cosh(r/R) - 1)$, which scales as $\sim e^{r/R}$ as well. This means the area and the circumference grow equally fast. The coordinates that use this actual length scale $r$ as a radial coordinate are called the native coordinates, introduced in [35]. For Euclidean $d$-dimensional systems, the boundary scales as the bulk to the power $1 - \frac{1}{d}$. For the hyperbolic plane $d = \infty$, so we would say it is infinite-dimensional, or has infinite Hausdorff dimension.

1.2 Metrics

Multiple coordinate systems are in use to map the hyperbolic plane. Since they are all related by conformal mappings, we pick the so-called Poincaré disk model to learn more about the symmetries of the hyperbolic plane, and give a representation of the symmetry group of hyperbolic lattices. To calculate the geodesics, we will go to the Poincaré half-plane model. The coordinate systems are related by conformal transformations.

Figure 1.1: Projection of the point $(x_1, y_1, z_1)$ in Minkowski space to the point $(u_1, v_1)$ on the unit disk using the point $(0, 0, -R)$. Taken from [37].

The hyperboloid described by equation (1.1) lies within the cone described by $x^2 + y^2 = z^2$. This means we can project any point $p = (x_1, y_1, z_1)$ of the hyperboloid onto the disk $D_R = \{(u, v, 0) \mid u^2 + v^2 < R^2\}$ by checking where the line between $p$ and $(0, 0, -R)$ crosses $D_R$. One can verify by basic geometry that $(u_1, v_1) = \left( \frac{R x_1}{R + z_1}, \frac{R y_1}{R + z_1} \right)$. Writing this in terms of the solution we found at (1.3), we get $u_1 = R r \cos(\phi)$ and $v_1 = R r \sin(\phi)$, where $r = \tanh(t/2R)$ [37]. Clearly, $0 \le r < 1$. We can map these coordinates to the unit disk by scaling the coordinates by $1/R$. We then write $z = r e^{i\phi}$. Using the formula for metric transformations,
$$g'_{\mu\nu} = \frac{dx^\alpha}{dx'^\mu}\frac{dx^\beta}{dx'^\nu}\, g_{\alpha\beta}, \qquad (1.4)$$

we see that the metric transforms as follows:

   1 0 0 0 1 0 0 0 −1    (x,y,z) → 1 0 0 R2sinh(t/R) ! (t,φ) → 4R2 (1−r2)2 0 0 (1−r4R2r22)2 ! (r,φ) → 0 2R2 (1−z ¯z)2 2R2 (1−z ¯z)2 0 ! (z,¯z) . (1.5)

Working in the $(z, \bar z)$-coordinates, called the Poincaré disk model, gives a lot of insight into the symmetries of the hyperbolic plane. Explicitly calculating the geodesics is easier in the Poincaré half-plane model. We therefore perform a new transformation $w = f(z) = i\frac{1+z}{1-z}$, mapping the disk to the upper half-plane. The inverse of $f$ is $z = f^{-1}(w) = \frac{w-i}{w+i}$. This means $f^{-1}(\bar w) = \frac{\bar w - i}{\bar w + i} = 1/\bar z$. Since these are conformal transformations, we know they map straight lines and circles to either straight lines or circles. Under these mappings, the metric transforms once again. We write the complex coordinate $w$ as $a + ib$. The metric transforms as
$$
\begin{pmatrix} 0 & \frac{2R^2}{(1-z\bar z)^2} \\ \frac{2R^2}{(1-z\bar z)^2} & 0 \end{pmatrix}_{(z,\bar z)}
\;\to\;
\begin{pmatrix} 0 & \frac{R^2}{2\,\mathrm{Im}(w)^2} \\ \frac{R^2}{2\,\mathrm{Im}(w)^2} & 0 \end{pmatrix}_{(w,\bar w)}
\;\to\;
\begin{pmatrix} \frac{R^2}{b^2} & 0 \\ 0 & \frac{R^2}{b^2} \end{pmatrix}_{(a,b)}.
\qquad (1.6)
$$

From this last metric, we can calculate the Christoffel symbols. The only ones that are non-zero are
$$\Gamma^a_{ab} = \Gamma^a_{ba} = -\frac{1}{b}, \qquad \Gamma^b_{aa} = \frac{1}{b}, \qquad \Gamma^b_{bb} = -\frac{1}{b}. \qquad (1.7)$$
This means the geodesic equation
$$\frac{d^2 x^\mu}{dt^2} + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{dt}\frac{dx^\beta}{dt} = 0 \qquad (1.8)$$
becomes
$$\ddot a\, b = 2\dot a \dot b, \qquad \ddot b\, b = \dot b^2 - \dot a^2. \qquad (1.9)$$

Since the equations do not depend on $a$, we notice that if $(a, b)$ is a solution to the geodesic equation, so is $a \to a + c$, $b \to b$, where $c$ is a real constant. If we assume $a$ is constant, we find that $\ddot b\, b = \dot b^2$, which is easily solved by $b = c_1 e^{c_2 t}$, where $c_1$ and $c_2$ are integration constants. This parametrizes a line running up from the $a$-axis to infinity. For $c_2 = 1$, $g_{\mu\nu}U^\mu U^\nu = 1$, so the path is travelled at unit velocity in that case. The $b$-coordinate thus blows up as we move further from the $a$-axis, as a function of the actually travelled distance. If we want to find other geodesics, we expect them to cross the $a$-axis twice, not only at $t = -\infty$. This means that the $b$-coordinate is 0 for $t = \infty$ and $t = -\infty$. A function that has


this property (and does not blow up for any value of t) is b = sech(t) = 1/ cosh(t). If we are lucky, this parametrizes the other geodesics. Plugging this assumption into equation (1.9), we find a general solution of the form a = r tanh(t) + c, b = r sech(t). These describe circle arcs in the upper half plane with radius r and center c [39].
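One can verify this ansatz directly: with $a = r\tanh(t) + c$ and $b = r\,\mathrm{sech}(t)$ we have
$$\dot a = r\,\mathrm{sech}^2(t), \quad \dot b = -r\,\mathrm{sech}(t)\tanh(t), \quad \ddot a = -2r\,\mathrm{sech}^2(t)\tanh(t), \quad \ddot b = r\,\mathrm{sech}(t)\big(\tanh^2(t) - \mathrm{sech}^2(t)\big),$$
so that $\ddot a\, b = 2\dot a \dot b$ and $\ddot b\, b = \dot b^2 - \dot a^2$, while $(a - c)^2 + b^2 = r^2$ confirms that the curve is a circle arc of radius $r$ around $c$.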

The line $a = 0$ in the half-plane corresponds to the line that goes from $-1$ to $1$ in the disk model; other values of $a$ yield circles tangent to this line at the point $z = 1$. Straight lines that cross the origin in the disk are mapped to circles that cross the point $w = i$ in the upper half-plane.

Keeping things simple, we can transform the solution of the form a = r tanh(t), b = r sech(t) back to the unit disk. In the unit disk, these are circle arcs that cut the unit circle at right angles and can be mirrored in the real axis. For r < 1, they lie to the left of the imaginary axis, for r > 1, they lie to the right of it.

This gives the following coordinates in the plane $z = x + iy$:
$$(x, y) = \left( \frac{(r^2-1)\cosh(t)}{(r^2+1)\cosh(t) + 2r},\; \frac{-2r\sinh(t)}{(r^2+1)\cosh(t) + 2r} \right). \qquad (1.10)$$
By some algebra, we can see this describes the circle with center $\frac{r^2+1}{r^2-1}$ and radius $\frac{2r}{r^2-1}$. We can use this information to draw figures like figure 1.2 and figure 1.3: the straight lines between two points are circle arcs of circles with center $\frac{r^2+1}{r^2-1}$ and radius $\frac{2r}{r^2-1}$. These circles make a right angle with the horizon $|z| = 1$.

1.3 Regular tilings

The hyperbolic plane can be tiled by placing polygons on it. The regular lattices can be characterized by two integers: $p$, the number of corners each polygon has, and $q$, the number of polygons that meet at every vertex. Each tiling is referred to by its Schläfli symbol $\{p, q\}$. If $(p-2)(q-2)$ is larger than 4, the lattice is hyperbolic. To be able to determine the complex coordinates of these vertices, projected on the unit disk, we construct the group of rotational symmetries of the lattice. We follow [52]. We notice that we can rotate the plane by $\alpha = 2\pi/q$ around one of the vertices, and by $\beta = 2\pi/p$ around the center of a polygon. The symmetry group is generated by these two actions, denoted $a$ and $b$ respectively. In addition, we notice that we can swap two adjacent vertices by rotating around one of the polygons they are both on, and then rotating around one of the


Figure 1.2: The $\{7,3\}$-lattice, conformally projected with a site as center.

Figure 1.3: The $\{4,5\}$-lattice, conformally projected with a face as center.

vertices. Since performing this action twice must give the identity, we have the following three rules for the symmetry group:

$$a^q = e, \qquad (1.11)$$
$$b^p = e, \qquad (1.12)$$
$$(ab)^2 = e, \qquad (1.13)$$

where $a$ is a rotation around a vertex, $b$ is a rotation around an adjacent face and $e$ is the identity. These rotations form a discrete subgroup of the group of rotations of the unit disk, $PSU(1,1) = SU(1,1)/\{-I, I\}$. The action of any element of this group can be written as
$$M(z) = \begin{pmatrix} s & t \\ t^* & s^* \end{pmatrix}(z) = \frac{sz + t}{t^* z + s^*}, \qquad |s|^2 - |t|^2 = 1. \qquad (1.14)$$


$M$ and $-M$ perform the same action, another way of expressing that we consider $PSU(1,1)$ rather than $SU(1,1)$. From this continuous case, we can infer the discrete symmetry group of the $\{p,q\}$-lattice. The cleanest way to do so is by considering a face-centered lattice, with an adjacent vertex on the real axis at $z = r$, as in figure 1.3. In that case, we find the representation
$$\rho(b) = \pm \begin{pmatrix} e^{i\beta/2} & 0 \\ 0 & e^{-i\beta/2} \end{pmatrix} \qquad (1.15)$$
and
$$\rho(a) = \pm \frac{1}{1 - r^2} \begin{pmatrix} e^{i\alpha/2} - r^2 e^{-i\alpha/2} & -r(e^{i\alpha/2} - e^{-i\alpha/2}) \\ r(e^{i\alpha/2} - e^{-i\alpha/2}) & e^{-i\alpha/2} - r^2 e^{i\alpha/2} \end{pmatrix}. \qquad (1.16)$$

This expression for $a$ can be deduced by translating the vertex at $r$ to the origin using
$$\rho(T) = \pm \frac{1}{\sqrt{1 - r^2}} \begin{pmatrix} 1 & -r \\ -r & 1 \end{pmatrix}, \qquad (1.17)$$
which is in $PSU(1,1)$, rotating over an angle $\alpha$ and inverting the translation [52]. To satisfy equation (1.13), we must have
$$r^2 = \frac{\cos((\alpha + \beta)/2)}{\cos((\alpha - \beta)/2)}. \qquad (1.18)$$

Symmetry groups of other orientations of the lattice, like the site-centered one of figure 1.2, can be found by translating and rotating the representations of a and b.
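As an illustration of how these representations can be used in practice, the sketch below (a minimal example of ours, with assumed variable names, not code from the thesis) builds the two generators as Möbius maps (1.14)-(1.16) for a face-centered $\{7,3\}$ lattice and applies them to the vertex at $z = r$:

```cpp
#include <cmath>
#include <complex>
#include <cstdio>

using cd = std::complex<double>;

// A PSU(1,1) element acting as the Moebius map M(z) = (s z + t)/(conj(t) z + conj(s)),
// cf. equation (1.14).
struct Moebius {
    cd s, t;
    cd operator()(cd z) const { return (s * z + t) / (std::conj(t) * z + std::conj(s)); }
};

int main() {
    // Example: a face-centered {7,3} lattice.
    const int p = 7, q = 3;
    const double pi = std::acos(-1.0);
    const double alpha = 2.0 * pi / q;   // rotation angle around a vertex
    const double beta  = 2.0 * pi / p;   // rotation angle around a face center
    // Distance of the nearest vertex to the face center, equation (1.18).
    const double r = std::sqrt(std::cos((alpha + beta) / 2.0) / std::cos((alpha - beta) / 2.0));

    // Generator b: rotation by beta around the origin (the face center), equation (1.15).
    Moebius b{std::polar(1.0, beta / 2.0), cd(0.0, 0.0)};
    // Generator a: rotation by alpha around the vertex at z = r, equation (1.16).
    const cd ea = std::polar(1.0, alpha / 2.0);
    Moebius a{(ea - r * r * std::conj(ea)) / (1.0 - r * r),
              -r * (ea - std::conj(ea)) / (1.0 - r * r)};

    // The orbit of the vertex z = r under b gives the p corners of the central face.
    cd z = r;
    for (int k = 0; k < p; ++k) {
        std::printf("corner %d: (%+.4f, %+.4f)\n", k, z.real(), z.imag());
        z = b(z);
    }
    // Rotating the central face center around the vertex at r gives the center of an adjacent face.
    const cd adjacentFaceCenter = a(cd(0.0, 0.0));
    std::printf("adjacent face center: (%+.4f, %+.4f)\n",
                adjacentFaceCenter.real(), adjacentFaceCenter.imag());
    return 0;
}
```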

Since the boundary of an area on the hyperbolic lattice grows linearly with the area's size, we expect that boundary effects play an important role in the thermodynamic limit. One could get rid of these effects by closing the hyperbolic plane into a Riemann surface, comparable to the way that periodic boundary conditions on the Euclidean plane create a torus. Due to the negative curvature, we would expect to find a Riemann surface of higher genus in the hyperbolic case. Since this must mean that the symmetry group becomes finite, we expect to find an extra group rule next to equations (1.11)-(1.13). We did not find a systematic way to implement this that enables us to take the thermodynamic limit.


1.4 Lattice size

From drawings like figures 1.2 and 1.3, we see that the hyperbolic lattice is built up from layers. We call each layer a generation. We can count either the sites or the faces. In the case we count the sites, we think of the center site as the zeroth layer. The sites that share a face with the center site then form the first generation. The second generation is then formed by the sites that share a face with the first layer. If the lattice does not have a center site, we think of the sites on the center face as the first generation. The same line of thought holds mutatis mutandis for the counting of faces.

To calculate the growth rate of the hyperbolic lattice in terms of the number of sites, we notice that the hyperbolic lattice has a recursive structure. To make this explicit, we will count the number of faces. The number of sites can be counted in the same manner. We will first assume $q > 3$, and come back to the case $q = 3$ later. If we were to count sites instead of faces, the case $p = 3$ (the triangular tiling) would be the exception.

We notice from figure 1.3 that, apart from the center face, there are two types of faces: faces that share an edge with the previous generation, and faces that do not. We will denote such faces in the $n$'th generation by $I_n$ and $E_n$, respectively. It is clear that an $I_n$-face generates $p - 3$ $I_{n+1}$-faces. An $E_n$-face has an extra edge on the exterior and thus generates $p - 2$ $I_{n+1}$-faces. An $I_n$-face generates $(p-3)(q-3) - 1$ $E_{n+1}$-faces, $q - 3$ for every exterior vertex we can associate to it. Notice that this is not $p - 2$, since every exterior vertex is only associated to one face. Only one of the $p - 3$ vertices is connected to the previous generation, so we have to subtract 1 since this one only generates $q - 4$ $E_{n+1}$-faces. By the same argument, an $E_n$-face generates $(p-2)(q-3) - 1$ $E_{n+1}$-faces. This can be summarized by the set of equations
$$\begin{pmatrix} I_{n+1} \\ E_{n+1} \end{pmatrix} = \begin{pmatrix} p-3 & p-2 \\ (p-3)(q-3)-1 & (p-2)(q-3)-1 \end{pmatrix} \begin{pmatrix} I_n \\ E_n \end{pmatrix}. \qquad (1.19)$$

If a site is centered, $I_1 = 0$ and $E_1 = q$; if a face is centered, $I_1 = p$ and $E_1 = p(q-3)$.

By calculating the largest eigenvalue $\lambda_{\max}$ of this matrix, we find the growth rate of the lattice: the number of faces (and hence sites) in the $n$'th generation grows as $\lambda_{\max}^n$, where
$$\lambda_{\max} = \frac{1}{2}(p-2)(q-2) - 1 + \left\{ \left( \frac{1}{2}(p-2)(q-2) - 1 \right)^2 - 1 \right\}^{1/2}. \qquad (1.21)$$

Notice that this formula is symmetric in $p$ and $q$. It should be, since a lattice should grow as fast as its so-called dual lattice. We will explain this relation in the next chapter. In the case that $q = 3$, as in figure 1.2, there are also two types of faces: $I_n$, which is connected to the previous generation by two edges, and $E_n$, which is only connected by one edge. Once again, we can enumerate the offspring of every kind of face, which leads to
$$\begin{pmatrix} I_{n+1} \\ E_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ p-6 & p-5 \end{pmatrix} \begin{pmatrix} I_n \\ E_n \end{pmatrix}, \qquad (1.22)$$
which leads to
$$\lambda_{\max} = \frac{1}{2}(p-4) + \frac{1}{2}\left( (p-4)^2 - 4 \right)^{1/2}. \qquad (1.23)$$

Notice that this is equal to equation (1.21) with q = 3.
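As a small numerical illustration (our own sketch, not thesis code), one can iterate the recursion (1.19) and watch the generation-to-generation ratio converge to $\lambda_{\max}$ of (1.21):

```cpp
#include <cmath>
#include <cstdio>

// Iterate the face-counting recursion (1.19) for a {p,q} lattice with q > 3 and
// compare the growth ratio per generation with lambda_max from equation (1.21).
// We start from the face-centered initial condition I_1 = p, E_1 = p*(q-3).
int main() {
    const double p = 4, q = 5;                 // the {4,5} lattice of figure 1.3
    double I = p, E = p * (q - 3);             // first generation
    const double a = 0.5 * (p - 2) * (q - 2) - 1.0;
    const double lambdaMax = a + std::sqrt(a * a - 1.0);   // equation (1.21)

    double total = I + E;
    for (int n = 1; n <= 10; ++n) {
        const double Inext = (p - 3) * I + (p - 2) * E;
        const double Enext = ((p - 3) * (q - 3) - 1.0) * I + ((p - 2) * (q - 3) - 1.0) * E;
        const double nextTotal = Inext + Enext;
        std::printf("generation %2d: faces = %12.0f, ratio = %.6f (lambda_max = %.6f)\n",
                    n + 1, nextTotal, nextTotal / total, lambdaMax);
        I = Inext; E = Enext; total = nextTotal;
    }
    return 0;
}
```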

1.5 Construction in C++

In C++, we construct the hyperbolic lattice in a spiral-like fashion, working from the inside out. Every vertex of the lattice is numbered, starting from the center vertex. The lattice will be stored as a vector of vectors called lattice, where the i'th entry of the lattice-vector represents the i'th vertex. This i'th entry is itself a vector, containing the q (or fewer, if vertex i is on the boundary) integers that correspond to the vertices that vertex i is connected to. Vectors have the advantage over arrays that their length need not be declared in advance. At first, the lattice-vector is empty. We add (push back) an empty vector to this vector, which represents the center site. The first face is constructed by adding p − 1 more sites, and connecting them by pushing back the right integers to each vector. The algorithm uses two counting integers: the pointer ptr and the total number of points N, which is the length of the vertex lattice. We build the lattice by constructing faces around the vertex ptr, which is 1 when we start. N is equal to p and increases by 1 every time we add a vertex. The algorithm then constructs faces by starting from vertex N, the last vertex we have added to the lattice, adding a chain of p − 2 new vertices to vertex N, and connecting the last


vertex of this chain to vertex ptr. This continues until the number of neighbours of ptr is equal to q. Then, we connect a chain of p − 3 vertices to ptr + 1 and move to ptr + 1 by declaring ptr = ptr + 1. Here, the procedure starts again.

There are a few things we must pay attention to: if p = 3, we connect the vertex N directly to ptr + 1 if the number of neighbours of ptr equals q. If q = 3, it might be the case that ptr + 1 already has 3 neighbours when we want to connect a chain to it (see figure 1.2). In that case, we connect a chain of p − 4 vertices to ptr + 2 and move to ptr + 2.

Since we want to construct a lattice based on the number of generations, and not the number of lattice points, we need to count the number of generations. We do so by tracking the generation marker. When the algorithm starts, the second vertex is the generation marker, since it is the first point in the first generation. When the number of neighbours of this vertex is q, we have completed the first generation. The next generation marker then becomes the last vertex that was connected to this vertex, plus p − 3. One can see that this point plays the same role in the next generation. Once again, we have to pay attention to the case q = 3. It can happen that the generation marker will already have 3 neighbours while the generation is not yet completed. In fact, this will happen every time if we choose the next generation marker p − 3 steps from the vertex last added to the former generation marker, since this is a vertex that connects to the previous generation, not to the next. We solve this problem by always choosing the vertex p − 4 edges from this last-added vertex as the generation marker if q = 3.

Having built the lattice, we can now close edges with a chance 1 − p. We visit each vertex i once and only look at the edges that connect vertex i to vertices j with j > i. This way we visit each edge only once. For each such edge we generate a random number between 0 and 1. If it is larger than p, we delete the integer i from vector j, and we delete the integer j from vector i in the vector lattice.
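A minimal sketch of this dilution step, assuming the vector-of-vectors adjacency structure described above (the function and variable names are ours):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Bond dilution as described above: every edge of the pre-built lattice is kept
// (open) with probability prob and removed (closed) otherwise.
void diluteBonds(std::vector<std::vector<int>>& lattice, double prob, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    for (int i = 0; i < static_cast<int>(lattice.size()); ++i) {
        // Only consider neighbours j > i, so that every edge is visited exactly once.
        for (std::size_t k = 0; k < lattice[i].size(); /* increment inside */) {
            const int j = lattice[i][k];
            if (j > i && uniform(rng) > prob) {
                // Close the edge: remove j from i's list and i from j's list.
                lattice[i].erase(lattice[i].begin() + k);
                auto& other = lattice[j];
                other.erase(std::find(other.begin(), other.end(), i));
            } else {
                ++k;
            }
        }
    }
}
```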

To determine the size of the root cluster, we use a search algorithm that is described in [29]. We define a new vector rootcluster that will contain the root cluster when the algorithm finishes. We add the integer 1 to the root cluster, since it corresponds to the root point. We then take a look at the vector lattice and add all the neighbours of the root point. We then mark the root point as visited and proceed to the next integer in the vector rootcluster. We add the neighbours of this vertex that have not yet been visited to rootcluster. We mark these as visited and move to the next vertex in the vector. The algorithm stops when there is no next vertex in rootcluster to check: that means all the neighbours of all the points in the root cluster have been added.
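A sketch of this search in the same spirit (again with our own names, not the thesis code):

```cpp
#include <vector>

// Breadth-first search for the root cluster, following the description above:
// we keep appending unvisited neighbours to rootcluster and advance a read
// position through it until no new vertices appear. Returns the cluster as a
// list of vertex indices; root is the index of the root vertex.
std::vector<int> rootCluster(const std::vector<std::vector<int>>& lattice, int root) {
    std::vector<bool> visited(lattice.size(), false);
    std::vector<int> rootcluster{root};
    visited[root] = true;
    for (std::size_t pos = 0; pos < rootcluster.size(); ++pos) {
        for (int neighbour : lattice[rootcluster[pos]]) {
            if (!visited[neighbour]) {
                visited[neighbour] = true;
                rootcluster.push_back(neighbour);
            }
        }
    }
    return rootcluster;   // rootcluster.size() is the root-cluster size s0
}
```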


It should be noted that this procedure can be made more efficient by deleting edges while the search algorithm runs. If the root cluster becomes isolated, the algorithm finishes. This avoids determining the status of all edges, even though most of them are not used in calculating the size of the root cluster. We have not implemented this procedure.


Chapter 2

Graph topology

In studying the properties of lattices, some jargon is in use. We will summarize it briefly in this chapter.

Note that we will use the terms vertex and site as synonyms, as we do with edge and bond.

A graph is called tree-like if it has no loops. In graph theory, assuming a graph is tree-like is equivalent to using a mean-field approximation in physics: the correlation between neighbours is neglected. Two examples of tree-like graphs are the Cayley tree and the Bethe lattice. A Bethe lattice of degree q is a lattice where every vertex has degree q and there are no loops. Starting from a random point, every neighbour has q − 1 other neighbours, and so on ad infinitum. An impression of the Bethe lattice is given in figure 2.1. The Cayley tree of degree q and depth L is simply the Bethe lattice, starting from a random point, and discarding all vertices that are further than L edges from the random point. The Bethe lattice is not the thermodynamic limit of the Cayley tree, as is pointed out in [50]: in the thermodynamic limit, a finite fraction of the Cayley tree consists of boundary points, whereas the Bethe lattice has no boundary points by definition.

Consider a lattice or graph $G$ with vertices $V$ and edges $E$. The edge-isoperimetric constant, or Cheeger constant, $h(G)$ is defined as
$$h(G) = \inf\left\{ \frac{\partial S}{S} \;\middle|\; S \subset V,\ S < \infty \right\}, \qquad (2.1)$$


Figure 2.1: Impression of the Bethe lattice of degree 3.

Figure 2.2: The {7, 3}-lattice in black, and its dual lattice, the {3, 7}-lattice, in red.

where $\partial S$ denotes the boundary of the subset $S$: the vertices in $S$ that are connected by one or more edges to $V \setminus S$. If $h(G) = 0$, $G$ is said to be amenable. This means we can choose a limit such that the boundary can be neglected. This is the case for Euclidean lattices: if the dimension of our lattice is $d$, we can choose our subsets such that the boundary scales as $L^{d-1}$, whereas the size of the subset scales as $L^d$. If we take $L$ to infinity, the Cheeger constant becomes 0. Hyperbolic graphs have a positive Cheeger constant, since both the number of points in the boundary and the number of points in the set scale exponentially with the set's diameter. The same holds for any Cayley tree, as well as its thermodynamic limit. It also holds for the Bethe lattice, contrary to what Baek, Minnhagen and Kim have stated in [3], where they conclude that a Bethe lattice is amenable since "it lacks boundary points".

A graph is called transitive if the automorphism group acts transitively on its vertices: for any two vertices $v_1$ and $v_2$ there is an automorphism $f$ such that $f(v_1) = v_2$. In physics language, this means there is an action (reflection, rotation, translation) we can perform on the graph so that $v_1$ will move to $v_2$. Globally speaking, this means every vertex in the graph plays the same role.


Another way of distinguishing graphs is by looking at their number of ends. This could be mathematically formalized as the number of equivalence classes of infinite paths. In this thesis, we will come across two cases: either the graph is tree-like (e.g. the Bethe lattice), which means that the number of ends is infinite, or becomes infinite in the thermodynamic limit. In the other case, like a regular hyperbolic lattice, the vertices in a generation are mutually connected, so that the number of ends is 1. Another example of this kind of lattice is the Extended Binary Tree (EBT): it is a Cayley tree of degree 3, where the sites in the same generation that share a 'face' (which can be defined after projecting on a plane) are connected (notice in figure 2.1 that these faces are in fact infinite-sided). The number of points $N$ for an EBT with $L$ generations is $2^L - 1$, so that $L \sim \log(N)$, as in regular hyperbolic lattices. However, it is not transitive, since the central site plays a special role.

We call a graph planar if we can draw it on a plane without any of the edges crossing. These graphs have an important property that will be exploited in this thesis: for a planar lattice $G = (V, E)$, we can define a dual lattice $\bar G = (\bar V, \bar E)$. The vertices of $\bar G$ are located on the faces of graph $G$, and the dual edges cross the edges of the original graph, as shown in figure 2.2. This means the faces of the dual lattice are located on the vertices of the original lattice. It is clear that a $\{p, q\}$ lattice becomes a $\{q, p\}$ lattice after a duality transformation. This means lattices of the form $\{4,4\}$, $\{5,5\}$, $\{6,6\}$, etc., have a special property: they are called self-dual.


Chapter 3

Introduction to percolation theory

An easy way to state the problem of percolation is the following. Consider a graph $G(V, E)$ consisting of a set of vertices $V$ and a set of edges $E$ between these vertices. In most of the cases where we study percolation, this graph has some regularity, so we can properly speak of enlarging or shrinking the graph's size $N$. We could think of the Euclidean square lattice, or some hyperbolic $\{p,q\}$-lattice. In bond percolation, each of the edges is open with a chance $p$, and closed with a chance $1 - p$. We call $p$ the percolation chance and think of neighbouring vertices as connected if the edge that links them is open. A cluster of vertices is formed by all vertices that are connected to each other by paths of open edges. Percolation theory tries to answer questions concerning the properties of these clusters, as a function of $p$ and of the lattice size $N$. In site percolation, the role of vertices and edges is swapped. Both forms of percolation are also referred to as Bernoulli($p$)-percolation, since the state of each edge/vertex is Bernoulli($p$)-distributed. This simply means that all bonds are independently and identically distributed, open with a chance $p$. We will mainly be concerned with bond percolation, and pay a short visit to site percolation in Chapter 6.

3.1 Mathematical definitions

In order to be able to study percolation, we will need a definition. We will review how different authors have defined the critical percolation point and how these points, which coincide in the Euclidean case, become distinct for the hyperbolic plane. We start with the definitions as given in [25]. A principal quantity is the percolation probability $\theta(p)$, defined as


$$\theta(p) = P_p(s_0 = \infty), \qquad (3.1)$$

the probability that $s_0$, a cluster that originates from a random vertex $v_0$, is unbounded in size. In practice, we take the origin of our lattice as the random point; therefore $s_0$ is often referred to as the root cluster. Contrary to the common mathematical practice of writing the size of a cluster $C$ as $|C|$, we will leave out the absolute value signs for brevity. Hence, we assume that $C$ refers to both the cluster itself and its size. We can define the critical value $p_c$ such that

$$\theta(p) \begin{cases} = 0 & \text{if } p < p_c \\ > 0 & \text{if } p \ge p_c. \end{cases} \qquad (3.2)$$

We could also take a look at the mean cluster size, defined as

$$\chi(p) = E_p(s_0). \qquad (3.3)$$

Notice that this is the mean of the cluster size, averaged over the vertices, not over the clusters. This can also be made clear by using a definition from [18]. We write $n_s(p)$ for the number of clusters containing exactly $s$ sites, divided by the total number of sites of the lattice. We divide by the number of sites to avoid infinities. Then $\sum_s n_s$ is the total number of clusters per site, $\sum_s s\, n_s / \sum_s n_s$ would then be the mean of the cluster sizes averaged over the clusters, whereas $\sum_s s^2 n_s / \sum_s s\, n_s$ is the mean cluster size, since a cluster with $s$ points has an $s$ times larger chance of providing the root of the cluster. Notice that we assume the number of clusters to be an extensive quantity.
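The difference between the two averages is easy to see numerically; the toy example below (ours, with made-up cluster sizes) computes both:

```cpp
#include <cstdio>
#include <vector>

// Given the sizes of all clusters in one configuration, the cluster-averaged size
// differs from the site-averaged "mean cluster size", which weights every cluster
// by its size.
int main() {
    std::vector<double> clusterSizes = {1, 1, 2, 3, 5, 20};   // hypothetical configuration

    double n = 0, sn = 0, s2n = 0;
    for (double s : clusterSizes) { n += 1; sn += s; s2n += s * s; }

    std::printf("average over clusters: %.3f\n", sn / n);     // sum_s s n_s / sum_s n_s
    std::printf("mean cluster size    : %.3f\n", s2n / sn);   // sum_s s^2 n_s / sum_s s n_s
    return 0;
}
```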

Clearly, if $p > p_c$, $\chi(p) = \infty$. It is not so obvious that $p < p_c$ implies $\chi(p) < \infty$, but this is the case [25]. Even further abusing notation, we will simply write $s_0$ for the mean cluster size $\chi(p)$ if $p$ is fixed. When $s_0$ is infinite, we say that the system percolates. It was proven by Burton and Keane that in the Euclidean case (a $\mathbb{Z}^d$-lattice), the percolating cluster is unique [15]. We could also define $p_u$ as the value where this unique percolating cluster emerges:
$$p_u = \inf\big\{\, p : P_p\{\text{there exists exactly one percolating cluster}\} = 1 \,\big\}, \qquad (3.4)$$
and say that in the case of Euclidean lattices $p_c = p_u$.

The fundamental difference between the Euclidean and the hyperbolic case is that in the latter case, $p_c < p_u$ holds. This is clearly true for tree-like lattices. For a Bethe lattice of degree $z$, $p_c = 1/(z-1)$, whereas the infinite cluster is only unique at $p = 1$, so that $p_u = 1$. We can get an intuition for this by noting that closing a single edge cuts the Bethe lattice into two infinite clusters, which does not happen in a Euclidean lattice. For regular hyperbolic $\{p, q\}$-lattices, the situation is different again: Benjamini and Schramm proved that for any transitive, planar, non-amenable graph with one end, we have $0 < p_c < p_u < 1$ [10]. In this situation, there is no infinite cluster for $p = p_c$, and there is a unique infinite open cluster for $p = p_u$ [9]. In between, there is an infinite number of infinite clusters. Nogawa and Hasegawa [47] have proposed a mechanism to explain this from a physical point of view. We will review that framework in Chapter 5.

We can also define percolation without using a lattice. This is the field of continuum percolation theory. There are several forms, but the core idea is the same: a background is 'sprinkled' at a certain density with disks of radius $d$. The density of this Poisson process is $\rho$. The disks can have two colors, say black or white. Clusters are formed by the areas of the same color. Monte Carlo simulations make it plausible that continuum percolation is in the same universality class as percolation on lattices [22]. In the Euclidean case, due to scale invariance, one can fix $\rho$ and choose $d$ as a variable or vice versa. In the hyperbolic case, due to the curvature radius $R$ of the plane, this is not the case: the ratio $d/R$ should stay fixed, and one has to choose $\rho$ as a variable. Tykesson [59] proved a result equivalent to that of Benjamini and Schramm for Bernoulli($p$) percolation: in the hyperbolic plane, there are two distinct critical densities $\rho_c$ and $\rho_u$. For $\rho < \rho_c$, there is no infinite cluster; for $\rho_c < \rho < \rho_u$, there are infinitely many infinite clusters; and for $\rho_u < \rho$, there is only one infinite cluster.

3.2 Duality

In [10], Benjamini and Schramm also prove a duality relation for percolation on the hyperbolic lattice: if there is a unique percolating cluster in a lattice for some value of $p$, there is no percolating cluster in its dual for the value $1 - p$. This is based on the fact that a configuration of open and closed bonds admits both a description and a dual description: a dual bond is open when the bond it crosses is closed and vice versa. This also holds for


Figure 3.1: Percolation on the square lattice with p = 0.47. A random cluster in red, a random dual cluster in yellow.

Figure 3.2: Percolation on the square lattice with p = 0.53. A random cluster in red, a random dual cluster in yellow.

Euclidean lattices. We illustrate this in figure 3.1: we draw the open bonds in gray and leave the closed bonds blank. Connected vertices in the dual lattice can be seen as faces of the original lattice that are not separated by a gray line. Since the non-percolating phase is dual to the unique-percolating-cluster phase, this immediately fixes the critical value for the square lattice as $p_c = \frac{1}{2}$, since it is self-dual and Euclidean. In the hyperbolic case, there is a phase in between the two critical points, where there are infinitely many percolating clusters, in both the graph and its dual. There is a duality relation $p_c + \bar p_u = 1$, where the bar denotes the corresponding quantity on the dual lattice.

For hyperbolic lattices, we can sum up the situation as follows

$$(k, \bar k) = \begin{cases} (0, 1) & \text{if } p \le p_c \\ (\infty, \infty) & \text{if } p_c < p < p_u \\ (1, 0) & \text{if } p \ge p_u, \end{cases} \qquad (3.5)$$

where $k$ denotes the number of unbounded clusters in the graph, and $\bar k$ that in the dual graph. In Chapter 6, we will introduce a way of thinking about clusters that makes the duality explicit.


3.3 Critical exponents

In the Euclidean case, the behaviour of functions like equations (3.1) and (3.3) is studied using scaling theory. The fundamental observation is that the functions behave as a power law in $|p_c - p|$ around the critical point, and as a power law in the system size at the critical point. We can express the system size in terms of the total number of sites $N$, or the length of the system $L$. Since $N = L^d$, where $d$ is the dimension of the system, the scaling assumption holds independently of choosing either $L$ or $N$. This immediately lays bare the difficulty of this assumption in the hyperbolic case, where $L \sim \log(N)$. We will thus have to be careful about which elements of this scaling theory hold in the hyperbolic plane, and why. We will come back to this in Chapters 4 and 5.

For now, we notice that in the Euclidean case, the percolation probability $\theta(p)$ scales as $|p - p_c|^\beta$ for $p \downarrow p_c$, where $\beta = \frac{5}{36}$. The mean cluster size $\chi(p)$ scales as $|p - p_c|^{-\gamma}$, where $\gamma = \frac{43}{18}$, for $p \uparrow p_c$. In the supercritical phase, we can condition the cluster size on the cluster being finite and obtain a quantity $\chi_f$, which has the same critical exponent for $p \downarrow p_c$. For the Euclidean plane, these exponents were first calculated by Nienhuis [41] and Den Nijs [44] by means of the so-called Coulomb Gas representation of the percolation problem. A review can be found in [42]. Some mathematically rigorous proofs of critical exponents can be found in [58].

We will derive the exponents for the Cayley tree in section 3.5 and find that $\gamma = \beta = 1$. Another important quantity is the correlation length $\xi(p)$: in the Euclidean plane, the chance that two points at a distance $r$ are connected decays as $e^{-r/\xi(p)}$ for $p < p_c$. The length scale $\xi(p)$ diverges as $|p - p_c|^{-\nu}$, with $\nu = \frac{4}{3}$, for $p \uparrow p_c$. For $p > p_c$, we can use the size of the dual clusters to define a correlation length. Only at $p = p_c$ is the correlation length infinite.

Furthermore, we can determine how $s_0$ behaves as a function of the lattice size. In the Euclidean case, $s_0 \sim N^{\psi(p)}$, where
$$\psi(p) \begin{cases} = 0 & \text{if } p < p_c \\ = \frac{43}{48} & \text{if } p = p_c \\ = 1 & \text{if } p > p_c. \end{cases} \qquad (3.6)$$
It is common to define this exponent in terms of $L$, but to show the analogy with the hyperbolic lattice (equation (3.7)), we choose to work with $N$.


It is rather remarkable that we can capture this behaviour so easily. The value of the critical point $p_c$ depends on whether we choose a triangular, square or hexagonal lattice, but the critical exponents do not. This is a phenomenon referred to as universality. The critical exponents only change if we change the dimension of our system, or change from percolation to e.g. an Ising model. We are then in a different universality class. For every universality class, there is a multitude of exponents we can define. The exponents turn out to be related by so-called scaling relations, which hold for all universality classes. This means that for a given universality class, there is only a small number of independent critical exponents. We will go into these relations in Chapter 4. It is one of the major triumphs of 20th-century physics that the different critical behaviours of many types of models have been determined and classified.

In the hyperbolic case, the critical exponents work differently, since we have two critical points $p_c$ and $p_u$. From personal communication with dr. Tobias Müller of Utrecht University (24th November 2016), we know that $s_0 \sim N^{\psi(p)}$, with
$$\psi(p) \begin{cases} = 0 & \text{if } p < p_c \\ = f(p) & \text{if } p_c \le p \le p_u \\ = 1 & \text{if } p > p_u, \end{cases} \qquad (3.7)$$
where $f(p)$ is a continuous, strictly increasing function with $f(p_c) = 0$ and $f(p_u) = 1$.

It was proven by Schonmann that the exponents $\beta$ and $\gamma$ for the critical behaviour around $p_c$ are equal to those in the Cayley tree [54]. This implies that this is a mean-field transition, where we can neglect the role loops play. We will examine the Cayley tree in section 3.5. It will turn out that the role of the correlation length is rather different.

3.4 Physics approaches

More physically inclined authors have used other definitions to distinguish between the three phases that percolation exhibits on the hyperbolic plane.

In [3], Baek, Minnhagen and Kim define the quantities $b$, $b/B$ and $s_2/s_1$ to measure percolation on hyperbolic lattices using Monte Carlo simulations. They work with a finite system size $N$ and take the thermodynamic limit $N \to \infty$. In their definition, $b$ is the number of points on the boundary that are connected to the central point of the lattice. $B$ is the total number of boundary points. $s_1$ is the size of the largest cluster in the system, and $s_2$ the size of the second largest. Clearly, if $b$ is positive in the thermodynamic limit, the cluster reaches the boundary. This order parameter would give the first transition point $p_c$. If $b/B$ is positive in the thermodynamic limit, the cluster reaches a finite fraction of the boundary, which would be an order parameter for $p_u$.

Baek, Minnhagen and Kim argue that if $s_2/s_1 = 0$ in the thermodynamic limit, the largest cluster is unique. This claim seems dubious, since one can easily conceive of a counterexample: $s_1$ and $s_2$ could both grow as an unbounded function of the system size. If $s_1$ grows faster than $s_2$, the limit of their ratio might be 0. This does not imply that the unbounded cluster is unique, since both $s_1$ and $s_2$ are infinitely large. This means that $p_u$ might be greater than Baek, Minnhagen and Kim suppose.

The simulations in [3] lead to the values $p_c = 0.53$, $p_u = 0.72$ for the $\{7,3\}$-lattice, and $p_c = 0.2$ and $p_u = 0.37$ for its dual, the $\{3,7\}$-lattice. These values do not fit the relation proven by Benjamini and Schramm. Instead of reviewing their argument, the authors hypothesize that the two critical points should be related through an inequality: $p_c + \bar p_u < 1$. This is nonsensical, since the proof of Benjamini and Schramm is mathematically rigorous, whereas their work is just based on numerical simulations. Therefore, it seems plausible that the argument of Baek, Minnhagen and Kim is flawed. Due to their definition of $p_u$, it is likely that they estimate $p_u$ too low, and that their estimation of $p_c$ is correct.

We provide some extra numerical indications that $s_2/s_1$ is not an order parameter. If it were, then so would be $s_3/s_1$, the size of the third-largest cluster divided by that of the largest. However, by taking an average over 1000000 Monte Carlo simulations, we see that its system-size-invariant point does not agree with that of $s_2/s_1$. It is hard to see in figure 3.3, so we plotted the difference between $s_3/s_1$ for $L = 5$ and $L = 6$, and the difference between $s_2/s_1$ for $L = 5$ and $L = 6$. We see that the zeros of the graphs do not agree. An extra disadvantage of this method of calculation is that it is numerically much heavier than finding the size of the average cluster $s_0$, since one has to find the size of every cluster in the system.

Baek, Minnhagen and Kim also argued that if b/B becomes positive, the infinite cluster is unique. However, they made use of a scaling argument that was rebutted by Nogawa and Hasegawa in [46].

In [47], Nogawa and Hasegawa have proposed their own method of understanding the phase transition, which we will also review in Chapter 5. They take a look at how the size of the average cluster $s_0$ grows with the lattice size $N$ and find a relation like equation (3.7).


Figure 3.3: Average over 1000000 simulations of both $s_2/s_1$ and $s_3/s_1$, which shows that these do not have the same fixed value of $p$.

Figure 3.4: Average over 1000000 simulations of both $s_2/s_1$ and $s_3/s_1$. Plotting the difference between two sizes shows that the fixed point of $s_3/s_1$ arises for higher $p$.

Using this definition, they find values for $p_c$ and $p_u$ that do meet Benjamini and Schramm's equality. They made use of the EBT-lattice, instead of a regular hyperbolic tiling. This lattice is not exactly transitive, but it works well for physical purposes.

In [4], responding to this result of Nogawa and Hasegawa, Baek, Minnhagen and Kim have tried to argue that percolation on the EBT-lattice also obeys an inequality relation, as they had argued for regular hyperbolic lattices. They argue that the two percolation thresholds of the EBT do not obey Benjamini and Schramm's equality, since the EBT is not transitive. It is very odd that they use this argument, since, according to Baek, Minnhagen and Kim, this inequality must also hold for regular hyperbolic lattices, which are transitive. It seems safe to conclude that Baek, Minnhagen and Kim are in error.

Another way to define whether a system percolates is through crossing probabilities. One chooses segments of the system's boundary, and if these stay connected by an open cluster with positive probability in the thermodynamic limit, the system is said to percolate. In the case of percolation on the Euclidean lattice, this probability is 0 if the system is subcritical, 1 if the system is supercritical. Cardy [19] has derived expressions for the case that the system is exactly critical by using methods from conformal field theory. In the hyperbolic case, Gu and Ziff [26] have used Monte Carlo simulations to find the critical probabilities for the hyperbolic plane. Indeed, they find two distinct points: a value $p_c$ where the probability becomes non-zero and another point $p_u$ where the probability becomes one. These match the values found in [47].

3.5 Percolation on the Cayley tree

We will briefly discuss percolation on the Cayley tree (see figure 2.1), since it can serve as a reference for percolation on the hyperbolic plane. First off, we consider a Cayley tree where each vertex has $z$ bonds. If we follow a path from a random point, there are $z - 1$ ways to continue the path. If the average number of paths we continue to follow is smaller than 1, the cluster dies out. If it is larger than 1, the cluster becomes infinitely large. This determines the critical probability $p_c$:

$$p_c = \frac{1}{z - 1}. \qquad (3.8)$$

We now turn to the case of a Cayley tree with $z = 3$ ($p_c = \frac{1}{2}$) and determine how the cluster size diverges as $p$ approaches $p_c$ from below. We use the following argument: the average size of the root cluster $\chi(p)$ is equal to the origin, plus, for each of its three outgoing edges, $p$ times the average size of a branch, denoted $T$. From self-similarity (a branch is built from two branches) it follows that
$$T = 1 + 2pT, \qquad (3.9)$$
so $T = (1 - 2p)^{-1}$. We now write
$$\chi(p) = 1 + \frac{3p}{1 - 2p} = \frac{1 + p}{1 - 2p} = \frac{1 + p}{2(p_c - p)}. \qquad (3.10)$$

This means the average cluster size diverges as $(p_c - p)^{-1}$, so that $\gamma = 1$ in the case of the Cayley tree.
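Equation (3.10) is easy to check with a small Monte Carlo experiment; the sketch below (our own, not taken from the thesis) grows the root cluster as a branching process and compares the sampled mean with $(1+p)/(1-2p)$:

```cpp
#include <cstdio>
#include <random>

// Monte Carlo check of equation (3.10): grow the root cluster of a z = 3 Cayley
// tree as a branching process in which every non-root vertex has two outgoing
// edges, each open with probability p, and compare the average cluster size with
// (1 + p) / (1 - 2p) for p < pc = 1/2.
int main() {
    std::mt19937 rng(12345);

    for (double p : {0.30, 0.40, 0.45}) {
        std::binomial_distribution<int> rootEdges(3, p);   // the root has three edges
        std::binomial_distribution<int> twoEdges(2, p);    // every other vertex has two
        const int samples = 200000;
        double sum = 0.0;
        for (int i = 0; i < samples; ++i) {
            long size = 1;                        // the root itself
            long frontier = rootEdges(rng);       // vertices reached in the first layer
            while (frontier > 0 && size < 1000000) {
                size += frontier;
                long next = 0;
                for (long v = 0; v < frontier; ++v) next += twoEdges(rng);
                frontier = next;
            }
            sum += size;
        }
        std::printf("p = %.2f: simulated chi = %.3f, exact (1+p)/(1-2p) = %.3f\n",
                    p, sum / samples, (1.0 + p) / (1.0 - 2.0 * p));
    }
    return 0;
}
```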

We can use the same self-similarity argument to determine the percolation probability $\theta(p)$. We denote by $E$ the probability that a branch dies out. Clearly $E = 1$ for $p < p_c$. Then $1 - \theta(p)$, the chance that the cluster remains finite, also known as the extinction probability, equals $E$ and must be the solution of
$$E = 1 - p + pE^2. \qquad (3.11)$$


We can solve this polynomial to find that the solution we are looking for equals $\frac{1 - 2(p - \frac{1}{2})}{2p}$, for $p \ge p_c$. This means
$$\theta(p) = 1 - E = \frac{4(p - p_c)}{2p}. \qquad (3.12)$$
From this we read off that $\beta = 1$ in the case of the Cayley tree.

If we look at the definition of the correlation length from section 3.3, we see that the correlation length does not diverge at this critical point: the chance that the root is connected to a point in layer $l$ at $p = p_c$ decays as $(\frac{1}{2})^l$, so that $\xi(p_c) = 1/\log(2)$. In general, $\xi(p) = -1/\log(p)$, which hence diverges at $p = 1$; it diverges as $(1 - p)^{-1}$ for $p \uparrow 1$. We will revisit this definition of the correlation length in Chapter 5, where it turns out that on regular hyperbolic graphs, it diverges for $p_c < p < 1$.

In [25], a different definition of the correlation length is given for the case of trees, so that it does diverge at $p = p_c$. It is based on the average number of layers that separate points in the cluster from the root:
$$\xi_{\mathrm{tree}}(p) = \sqrt{\frac{\sum_{l=0}^\infty l\, n_l(p)}{\sum_{l=0}^\infty n_l(p)}} = \sqrt{\frac{\sum_{l=0}^\infty l\, n_l(p)}{\chi(p)}}, \qquad (3.13)$$

where $n_l(p)$ denotes the number of points connected to the root cluster in generation $l$ at a percolation chance $p$. In case we want to calculate these quantities using Monte Carlo simulations, it is essential to notice that we measure the ratio of the expectation values, and not the expectation value of the ratios. We can also use this definition of the correlation length in the supercritical phase, by conditioning on the fact that the path that connects to a vertex in generation $l$ does not have an infinite branch.

We calculate the behaviour around the critical point, using $n_l(p) \sim (2p)^l$, so that the sum in equation (3.13) is the derivative with respect to $p$ of $1/(2p - 1)$, and $\chi(p) \sim (\frac{1}{2} - p)^{-1}$:
$$\xi_{\mathrm{tree}}(p) \sim \sqrt{\frac{(\frac{1}{2} - p)^{-2}}{(\frac{1}{2} - p)^{-1}}} \sim (p_c - p)^{-\frac{1}{2}}, \qquad (3.14)$$

so that the exponent $\nu = \frac{1}{2}$. We will see in Chapters 4 and 5 that this value of $\nu$ does not make sense if we want to determine the scaling behaviour. We then find $\nu = 1$. Apparently, the correlation length as defined in (3.13) is not the natural length scale of the system.


We can see this by evaluating equation (3.13) up to a certain generation L at the critical point, where n_l(p_c) = 1:

ξ_tree(p_c, L) = \sqrt{\frac{\sum_{l=0}^{L} l}{\sum_{l=0}^{L} 1}} = \sqrt{\frac{\frac{1}{2}L(L + 1)}{L}} ∼ L^{1/2}.    (3.15)

Since we want our definition of correlation length to connect to the notion of the natural length scale of the system, we will redefine it. There are two obvious ways to do so:

ξ_natural(p) = ξ_tree(p)^2 = \frac{\sum_{l=0}^{∞} l\, n_l(p)}{χ(p)},    (3.16)

and

ξ_gyration(p) = \sqrt{\frac{\sum_{l=0}^{∞} l^2\, n_l(p)}{\sum_{l=0}^{∞} n_l(p)}}.    (3.17)

We see that both of these quantities scale as L and have ν = 1. We will get back to these quantities in Chapter 6.
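To illustrate the remark above that one must take the ratio of expectation values, the sketch below accumulates the sums of l·n_l, l²·n_l and χ over all samples before forming the ratios; the input format (one vector of generation counts n_l per sample) is a hypothetical choice for this example, not the data structure used in this thesis.

#include <cmath>
#include <cstddef>
#include <vector>

// Correlation-length estimators of equations (3.13), (3.16) and (3.17).
// Each sample is a vector n_l of cluster sizes per generation l. Numerators
// and denominators are averaged over samples separately, i.e. we take the
// ratio of expectation values, not the expectation value of ratios.
struct CorrelationLengths { double xiTree, xiNatural, xiGyration; };

CorrelationLengths estimate(const std::vector<std::vector<double>>& samples) {
    double sumL = 0.0, sumL2 = 0.0, chi = 0.0;   // sums of l*n_l, l^2*n_l and n_l
    for (const auto& nl : samples)
        for (std::size_t l = 0; l < nl.size(); ++l) {
            sumL  += l * nl[l];
            sumL2 += static_cast<double>(l) * l * nl[l];
            chi   += nl[l];
        }
    // The common factor 1/(number of samples) cancels in every ratio.
    return { std::sqrt(sumL / chi),     // xi_tree,     eq. (3.13)
             sumL / chi,                // xi_natural,  eq. (3.16)
             std::sqrt(sumL2 / chi) };  // xi_gyration, eq. (3.17)
}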

Another discussion of these correlation lengths and their exponents can be found in a paper by Havlin and Nossal [28]. However, they use arguments that rely on a finite dimensionality of the system. They identify ξ_tree as a radius of gyration. This probably stems from a definition of a distance function d(0, v_l) = √l on trees, where v_l is a vertex l edges away from the root. With this distance, the definition of ξ_tree is comparable to that of the radius of gyration R_g in the Euclidean case:

R_g^2 = \frac{1}{N} \sum_{i=1}^{N} (r_i − r_{mean})^2,    (3.18)

under the assumption that the origin is the mean of the cluster. Notice that it is hard to determine a ‘mean’ point on a tree or in the hyperbolic plane, since these are not vector spaces. Havlin and Nossal then find that the natural length scale of the system diverges with exponent 2ν in the mean-field case.


Chapter 4

Phase transitions, renormalization and scaling

In this chapter, we will explain how the ideas of renormalization naturally lead to the critical exponents and scaling relations of section 3.3. This analysis only holds in the Euclidean plane and is based on the notion that at the critical point, the correlation length diverges. This leads to scale invariance: we can zoom in or zoom out on the system, and it will have the same properties, provided the various observables are all scaled with the appropriate exponent. We will make this notion exact using renormalization transformations.

4.1 Renormalization

We summarize the basics of renormalization theory necessary to understand the assumption of scale invariance, closely following [43].

Consider a system described by a Hamiltonian with N degrees of freedom s, dependent on some parameters g. If we write H_N(g, s) for its Hamiltonian, we can write the partition sum as follows:

Z_N(g) = \sum_{s} e^{−H_N(g, s)}.    (4.1)

We assume we can do a renormalization transformation. This means we coarse-grain the system: we regroup multiple degrees of freedom into a single one. This leads to a system of N′ degrees of freedom. In the case of percolation, this means we regroup multiple sites


and consider them as a single site, as in figure 4.1. We assume we can alter the parameters (i.e. the value of p) so that the partition sum stays the same:

Z_N(g) = \sum_{s} e^{−H_N(g, s)} = \sum_{s′} e^{−H_{N′}(g′, s′)} = Z_{N′}(g′).    (4.2)

Figure 4.1: Example of a renormalization transformation: nine sites in the left lattice are regrouped as one site in the right lattice. Figure taken from [25].

We do not need an exact formula for g′(g), only the fact that this transformation exists and is continuously differentiable. This renormalization transformation alters the scale of the system. If the system is d-dimensional, N = ℓ^d N′ for some linear scale factor ℓ. Clearly, the correlation length is altered as well:

ξ(g) = ℓ ξ(g′(g)).    (4.3)

Since a critical value of g means that the correlation length is infinite, it follows that ξ(g′(g)) is infinite. This means that g′(g) is a critical point as well. We thus find that critical points are fixed points of the renormalization transformation.

Using this fact, we can now linearize the renormalization transformation around the fixed point by looking at the matrix of derivatives ∂g′(g)/∂g. This matrix can be diagonalized, with eigenvalues λ_i and eigenvectors u_i. By the nature of the renormalization transformation, applying a renormalization by a length scale ℓ, followed by a renormalization by a length scale ℓ′, must be equivalent to a single renormalization by a length scale ℓℓ′. This implies that every eigenvalue of the derivative matrix must be a power of ℓ, λ_i = ℓ^{y_i}. We rewrite equation (4.3) as

ξ(u_1, u_2, ..., u_n) = ℓ ξ(ℓ^{y_1} u_1, ℓ^{y_2} u_2, ..., ℓ^{y_n} u_n).    (4.4)

In the case of a one-dimensional parameter space, this immediately leads to the conclusion that, around the critical point, the correlation length should behave like


ξ(p_c − p) = (p_c − p)^{−1/y_1}.    (4.5)

In general, equation (4.4) yields this power-law behaviour. To analyse it, one has to take into account whether the eigenvalues are larger than, equal to, or smaller than 1, i.e. whether y_i is larger than, equal to, or smaller than 0. This determines whether a parameter is relevant, marginal or irrelevant, respectively.
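A standard textbook illustration of these ideas, not specific to the blocking of figure 4.1, is site percolation on the triangular lattice: blocks of three sites are declared occupied by majority rule, which gives the explicit map p′ = 3p² − 2p³ with scale factor ℓ = √3. The snippet below iterates this map and extracts ν from the linearization; it is only meant as an illustration of the relevant-eigenvalue analysis, not as a result of this thesis.

#include <cmath>
#include <iostream>

// Real-space renormalization of site percolation on the triangular lattice:
// a block of three sites is occupied if at least two of them are, giving
// p' = 3p^2 - 2p^3 with linear scale factor ell = sqrt(3).
int main() {
    auto rg = [](double p) { return 3 * p * p - 2 * p * p * p; };

    // Iterating the map: p flows to 0 below the fixed point p* = 1/2 and to 1 above it.
    for (double p0 : {0.45, 0.50, 0.55}) {
        double p = p0;
        for (int i = 0; i < 20; ++i) p = rg(p);
        std::cout << "p0 = " << p0 << "  ->  " << p << "\n";
    }

    // Linearizing around p* = 1/2: lambda = dp'/dp = 6p - 6p^2 = 3/2 > 1,
    // so p is a relevant parameter, with y = log(lambda)/log(ell) and nu = 1/y.
    const double lambda = 1.5, ell = std::sqrt(3.0);
    std::cout << "nu = " << std::log(ell) / std::log(lambda)
              << "  (exact value: 4/3)\n";
}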

It is nice that we can retrieve the power-law behaviour from two basic assumptions: the existence of a renormalization transformation, and the fact that the correlation length diverges at the critical point.

4.2 Scaling

The fact that quantities exhibit power-law behaviour can also be used in numerical simulations. In that case, the system is finite. We can insert this finiteness explicitly into the equations in the following way. Assume we have a quantity A with an associated exponent ζ, while the correlation length ξ diverges with exponent −ν. We rewrite the behaviour around the critical point as follows:

A ∼ |p − p_c|^{−ζ} ∼ ξ^{ζ/ν}.    (4.6)

The main idea of scaling (or the scaling ansatz) is that if the system has a finite length L, this length takes over the role of the correlation length whenever the correlation length exceeds the system length:

A ∼ L^{ζ/ν}  for L ≪ ξ.    (4.7)

We can capture the behaviour of both equation (4.6) and (4.7) when we assume

A = L^{ζ/ν} f(L/ξ),    (4.8)


where

f(x) \begin{cases} = \text{const.} & \text{if } |x| ≪ 1 \;(L ≪ ξ),\\ ∼ x^{−ζ/ν} & \text{if } x ≫ 1 \;(L ≫ ξ). \end{cases}    (4.9)

We can rewrite equation (4.9) directly in terms of |p − p_c| instead of ξ, which defines the scaling function for A, denoted Ã:

A_L(p) = L^{ζ/ν} Ã(L^{1/ν}(p − p_c)),    (4.10)

where

Ã(x) \begin{cases} = \text{const.} & \text{if } |x| ≪ 1,\\ ∼ x^{−ζ} & \text{if } |x| ≫ 1. \end{cases}    (4.11)
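In practice, equation (4.10) is used by mapping every measured value A_L(p) onto the pair (x, y) = (L^{1/ν}(p − p_c), A_L(p)/L^{ζ/ν}); with the correct exponents and p_c, the points for all system sizes fall on the single curve Ã. A minimal sketch of this rescaling step (the data points and the values of p_c, ζ and ν are placeholders, not measurements from this thesis):

#include <cmath>
#include <iostream>
#include <vector>

// Minimal finite-size-scaling collapse of equation (4.10):
// plot A_L(p)/L^{zeta/nu} against L^{1/nu}(p - p_c). With the correct
// p_c, zeta and nu, the data for all L fall on a single curve.
struct Point { int L; double p, A; };                 // one measurement A_L(p)

int main() {
    const double pc = 0.53, zeta = 1.0, nu = 1.0;     // placeholder values
    std::vector<Point> data = { {5, 0.50, 8.1},  {6, 0.50, 9.7},
                                {5, 0.52, 15.2}, {6, 0.52, 22.4} };   // dummy data
    for (const auto& d : data) {
        double x = std::pow(d.L, 1.0 / nu) * (d.p - pc);   // rescaled distance to p_c
        double y = d.A / std::pow(d.L, zeta / nu);         // rescaled observable
        std::cout << "L = " << d.L << "   x = " << x << "   y = " << y << "\n";
    }
}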


Chapter 5

Fundamental mechanism of the double phase transition

It is proven in [10] that percolation on the hyperbolic plane has two critical points, p_c and p_u, as described in section 3.1. In [47], Nogawa and Hasegawa give a physical argument for the appearance of this phenomenon, using the Enhanced Binary Tree (EBT) as an example. We will see that we can use their argument for a general hyperbolic {p, q}-lattice, and support their results using Monte Carlo simulations.

5.1 Theoretical framework

To describe the two transition points in the hyperbolic plane, we start by choosing an arbitrary vertex as the root of the cluster. We make the assumption that, for small p, the probability that a vertex in the l-th generation is connected to the root point v_0 decays exponentially. This probability can be interpreted as a correlation function C_0:

C_0(l) ∝ e^{−l/ξ(p)},    (5.1)

where ξ(p) is a correlation length that depends on p. In the case of percolation on a tree, as we saw in section 3.5, the correlation length diverges only at p = 1. On the EBT or a hyperbolic {p, q}-lattice, the correlation length diverges at some value p < 1, due to the loops in the lattice. Notice that for a {p, q}-lattice, not every vertex in generation l is equidistant from the root point, in contrast to the EBT. The correlation function is therefore understood as an average over the generation.

To describe the double phase transition, we take a look at the number of vertices in the root cluster, s_0, and see how it scales with the number of generations L. The number


Figure 5.1: Exponent φ in the {7, 3}-lattice, determined by averaging over 100000 simulations.

Figure 5.2: Exponent φ in the {3, 7}-lattice, determined by averaging over 100000 simulations.

of vertices in the l-th generation grows as e^{cl}, where c is a lattice-dependent constant. In fact, we see from equation (1.20) that for a {p, q}-lattice, c = log(λ_max), with λ_max defined as in equation (1.21). Thus

s_0 ∝ \sum_{l=0}^{L−1} e^{cl} e^{−l/ξ} = \frac{e^{tL} − 1}{e^{t} − 1}, \qquad \text{where } t = c − \frac{1}{ξ(p)}.    (5.2)

Clearly, if t is smaller than 0, which happens if the correlation length is small enough, s_0 will converge to a finite value in the thermodynamic limit. This corresponds to the non-percolating phase. The first transition point p_c is reached when t = 0. At this point,

s_0 ∝ L ∝ log(N).    (5.3)

It was proven by Schonmann that this phase transition is of mean-field type, which means it has the same critical exponents as the Cayley tree [54].

The scaling behaviour might seem to contradict Benjamini and Schramm's result that there exists no infinite cluster at p = p_c (equation 3.5). Indeed, a more careful analysis of branching processes like these, as provided in e.g. [29], shows that the branch does die out in the limit L → ∞: the expectation value of s_0 is infinite, while the actual probability



Figure 5.3: Finite-size scaling plot of s_0 for the {7, 3}-lattice (s_0/L plotted against |p_c − p|L), for p_c = 0.53, using γ = 1 and ν = 1. The grey line is a function f such that f(x) ∼ x^{−1}. We see that the limiting behaviour for large x-values is correct. Average over 1000000 samples.

Figure 5.4: Exponent ψ in the {7, 3}-lattice and the dual {3, 7}-lattice. The probability p̄ is plotted as 1 − p̄ to show the duality relation. Average over 1000000 samples.

of the cluster being infinite is 0. This is an example of the St. Petersburg paradox in statistics.

For t > 0, the root cluster diverges as

s_0 ∝ e^{tL} ∝ N^{ψ}, \qquad \text{where } ψ = 1 − \frac{1}{c\, ξ(p)}.    (5.4)

It is clear that ξ increases with p, which means ψ is an increasing function of p. As in the Euclidean case, ξ(p) diverges at some value of p. At that point, ψ = 1 and the root cluster scales linearly with the system size. This value of p is identified with p_u.
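The three regimes can be made explicit by evaluating the geometric sum of equation (5.2) directly. A small numerical sketch (the values chosen for 1/ξ are arbitrary illustrations around c, not measured correlation lengths):

#include <cmath>
#include <iostream>

// Evaluate s_0(L) = (e^{tL} - 1)/(e^t - 1) from equation (5.2) with
// t = c - 1/xi, to illustrate the three regimes: convergence (t < 0),
// s_0 ~ L (t = 0) and s_0 ~ e^{tL} ~ N^psi (t > 0).
int main() {
    const double c = std::log((3.0 + std::sqrt(5.0)) / 2.0);   // growth rate of the {7,3}-lattice
    for (double xiInv : {1.2 * c, c, 0.5 * c}) {               // 1/xi above, at and below c
        const double t = c - xiInv;
        std::cout << "t = " << t << " :";
        for (int L : {5, 10, 20, 40}) {
            double s0 = (std::fabs(t) < 1e-12)
                            ? L                                            // limit t -> 0 gives s_0 = L
                            : (std::exp(t * L) - 1.0) / (std::exp(t) - 1.0);
            std::cout << "   s0(" << L << ") = " << s0;
        }
        std::cout << "\n";
    }
}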

5.2 Numerical results

We can support this analysis using numerical simulations. First, we verify that the root cluster scales as L for p = p_c. For both the {7, 3}-lattice and its dual, the {3, 7}-lattice, we calculate the exponent φ, defined through s_0 ∼ L^φ. We do so by calculating s_0 for different numbers of generations L and L′ and averaging over 100,000 simulations.


Figure 5.5: C_0(l) for different values of p. Average over 2000 samples.

Figure 5.6: 1/ξ as a function of p. Vertical lines are drawn at p_c = 0.53 and p_u = 0.8. A horizontal line is drawn at log((3 + √5)/2) to show that 1/ξ(p_c) = log((3 + √5)/2), as predicted by equation (5.2).

In all cases, we use L and L′ = L − 1 to find the exponent. In figures 5.1 and 5.2, we see that there is only one value of p where this exponent is size-independent (and equal to 1). This value determines p_c. For the other values of p, the scaling relation s_0 ∼ L^φ does not hold, since φ is ill-defined. We read off that p_c = 0.53 in the case of the {7, 3}-lattice and p_c = 0.2 in the case of the {3, 7}-lattice. These values are in agreement with the values for p_c as determined in [3], and have an error bar of 0.01. From our results, we would thus expect to find p_u = 0.8 for the {7, 3}-lattice and p_u = 0.47 for the {3, 7}-lattice.
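Concretely, both exponent estimates reduce to comparing averaged cluster sizes at two nearby sizes. A hedged sketch of this two-size estimator (the averaged values below are placeholders, not measured data):

#include <cmath>
#include <iostream>

// Two-size estimators used in this section:
//   phi = log(<s0>_L / <s0>_L') / log(L / L')   with L' = L - 1,
//   psi = log(<s0>_N / <s0>_N') / log(N / N').
double exponentEstimate(double sA, double sB, double xA, double xB) {
    return std::log(sA / sB) / std::log(xA / xB);
}

int main() {
    // hypothetical averaged root-cluster sizes at depths L = 8 and L' = 7
    const double s0_L = 9.1, s0_Lprime = 8.0;
    std::cout << "phi = " << exponentEstimate(s0_L, s0_Lprime, 8.0, 7.0) << "\n";

    // hypothetical averaged root-cluster sizes at system sizes N and N'
    const double s0_N = 410.0, s0_Nprime = 160.0, N = 1.2e4, Nprime = 4.6e3;
    std::cout << "psi = " << exponentEstimate(s0_N, s0_Nprime, N, Nprime) << "\n";
}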

We determine the finite-size scaling function around p_c, as defined in equation (4.11). We use the exponents γ = ν = 1 and p_c = 0.53. The result is shown in figure 5.3. We see that a so-called data collapse occurs: the data for different system sizes all fall on the same line. This line is the scaling function from equation (4.11). This only happens if the values of γ and p_c are correct. We recall the ambiguity in the definition of the correlation length, mentioned at the end of section 3.5: a scaling with ν = 1/2 does not yield a scaling function. We have drawn the line of exponent −1 in the plot, to verify that the limiting behaviour is as expected.

We can also determine ψ as log(s_0/s_0′)/log(N/N′). In this case, we use the system sizes
