
Area Dependence of Scalar Field Entanglement Entropy

Bachelor’s Thesis, 15EC

Author: Tim Bakker

Supervisor: Dr. J.P. van der Schaar

Second assessor: Dr. D.M. Hofman

Institute of Theoretical Physics
Faculty of Natural Sciences, Mathematics and Computer Science
University of Amsterdam

Conducted between: 10/04/2014 and 01/07/2014

Date of submission: 01/07/2014


Popular Science Summary

Quantum physics contains a host of phenomena that are, to human intuition, downright strange. In classical physics a system - say, a particle - can be in a certain state. This state describes everything we know about the particle, such as its position, velocity, and energy. In principle all of these properties are measurable, and the measurement itself has no influence on the particle's properties. In quantum mechanics this is different: particles have no single, unambiguous state of being, but exist in a combination of two or more possibilities: a superposition. Each possibility in this superposition has a certain probability of being measured. When we try to measure the state of such a particle, we obtain one of these possibilities as the result of the measurement. If we then prepare another particle in exactly the same superposition, the same measurement on this particle may give a completely different result! However, if we repeat the measurement on a particle that has already been measured, we get the earlier result back! Through the measurement, the particle appears to have chosen one possibility and to stick with it. We say that a measurement breaks the superposition. Of course, the particle does not actually choose in any human sense, and underlying this phenomenon is a great deal of mathematics that tells us precisely what the probabilities are of measuring a particular result. However, the physical interpretation of this quantum behaviour remains a point of discussion to this day. Incidentally, any interaction with the environment qualifies as a measurement: humans are not required.

This is a strange concept in itself, but we have only just begun. One of the most evocative examples of strange quantum mechanics is entanglement. Entanglement means that two (or more) quantum particles have become connected to each other through mutual interactions. If you know how these particles are entangled, you can sometimes find out which state one particle is in by looking only at the other. When you measure one of the two, it collapses to one of the possible states, and that state is your measurement result. Because of the entanglement, you now also know which state the other particle is in. It seems as if the other particle instantly chooses to collapse to the right state as soon as its partner is measured, even if the particles are placed light-years apart. This appears to violate special relativity, which says that information cannot travel faster than light. Einstein famously called it "spooky action at a distance". On closer inspection, entanglement turns out to behave properly after all. Nevertheless, it is a highly unintuitive process.

In this thesis, quantum entanglement is invoked as one of the explanations for the existence of black hole entropy. Entropy is a concept from thermodynamics and from information theory, of which hardly any unambiguous interpretation exists. It is often cited as a measure of the disorder in a system. From an information-theoretic point of view, a more useful description is perhaps that it is a measure of the missing information about a system.

We can see that the entropy of black holes can largely be reproduced by filling space with entangled particles and then treating a region of that space as inaccessible. This simply means that we pretend to know nothing about what happens inside this region. For a black hole, the entropy is directly proportional to its surface area. We want to show that this is no coincidence: in three dimensions it should hold for a cube, and in two dimensions for the perimeter of a square. We attempt to show that this is a consequence of the fact that these regions are the only areas shared between the inside and outside of our configurations, just as the event horizon is for a black hole.

Abstract

In this thesis, the ground state density matrix of a free massless scalar field was constructed, and traced over the degrees of freedom residing within an imaginary region of space. The resulting entanglement entropy was meant to be shown to be proportional to the size of the boundary between the two resulting territories. However, calculating such an entropy is shown to be too computationally complex to do explicitly beyond one-dimensional space. Some simple one-dimensional systems are considered analytically and numerically. It is found that the entropy is independent of whether the degrees of freedom traced over are located on oscillators inside or outside the chosen region. Finally, reasons for expecting area-dependent entropy are discussed in further detail.


Contents

1 Introduction

2 Introductory theory
2.1 Qubits
2.2 Pure and mixed states
2.3 A two-qubit density operator
2.4 The density matrix formalism
2.5 Entanglement entropy

3 Entanglement of harmonic oscillators
3.1 Two entangled oscillators
3.2 N entangled oscillators

4 From scalar fields to oscillators
4.1 Equations of motion
4.2 Modes of a free scalar field
4.3 The Hamiltonian and quantisation
4.4 Regularisation of infinities in momentum space

5 Regularisation of the field in real space
5.1 Regularisation in (1 + 1)-spacetime
5.2 Regularisation in (2 + 1)-spacetime
5.3 Regularisation in (3 + 1)-spacetime

6 Entropy of a scalar field
6.1 The (1 + 1)-spacetime K matrix
6.2 Eigenvalues of $K_m$ for general $m$
6.3 Eigenvalues of $K_m$ for specific small $m$
6.4 The need for a mass term
6.5 The (2 + 1)- and (3 + 1)-spacetime configurations

7 Discussion

8 Conclusions

A Mathematical Concepts
A.1 The tensor product
A.2 The trace
A.3 Schmidt decomposition
A.4 The Cauchy-Schwarz inequality
A.5 The Legendre transform


Chapter 1

Introduction

It has long been known that the entropy of a black hole is directly proportional to the area of its boundary (Bekenstein 1973). Indeed, this discovery and related ones directly led to the formulation of the holographic principle (Bousso 2002). Physical interpretations of this result vary, but seem to agree that black hole entropy can be explained as a result of the black hole obscuring information (Hawking 1977; Bekenstein 1980; Bekenstein 2004).

Interestingly, it has been shown that the entropy resulting from the counting of microstates of a black hole can be interpreted as arising from entanglement interactions (Brustein, Einhorn, and Yarom 2006). Building on this idea, Mark Srednicki of the University of California wrote a paper detailing how these entanglement interactions reproduce area-proportional entropies, even in the absence of a physical black hole: it is sufficient that there be an inaccessible region in space (Srednicki 1993).

To show this, Srednicki took a massless scalar field and established that such a field is equivalent to an infinite set of coupled harmonic oscillators of varying frequencies. He constructed the field’s ground state density matrix from the pure field ground state and imagined a sphere of radius R enclosing some of the oscillators. These oscillators were said to be inaccessible. A reduced density operator describing the oscillators located outside of the sphere was then calculated by taking the trace over all the oscillators on the inside. Finally, the entanglement entropy of this configuration was calculated, and shown to be proportional to the surface area of the sphere.

In this thesis I shall attempt to thoroughly explain Srednicki's methods. However, I will omit an explicit entropy calculation of the spherical configuration, in favour of one which a priori seemed mathematically cleaner. That is, I will attempt to calculate the entanglement entropy of oscillators located on a line, in a square, and in a cube in (1 + 1)-, (2 + 1)-, and (3 + 1)-spacetime respectively.

First, the required background information is provided in chapter 2. Chapter 3 will derive a procedure for calculating entanglement entropy for any system described by a harmonic oscillator Hamiltonian. In chapter 4 the assertion that a massless scalar field can be interpreted as a collection of harmonic oscillators will be justified. Chapter 5 is dedicated to reducing the field Hamiltonians of our configurations in various dimensions to the form required by the methods of chapter 3. This is done by a procedure known as regularisation. Finally, an attempt will be made to calculate the corresponding entanglement entropy in chapter 6. However, we will encounter several technical difficulties along the way, which are discussed in the conclusion.


Chapter 2

Introductory theory

2.1 Qubits

Arguably the most important concept in quantum information theory is that of a qubit. A qubit is a unit of quantum information, and it is the quantum equivalent of a classical bit. Whereas a classical bit can only be in one of two distinct orthogonal states - usually called 0 and 1 - a qubit is allowed to be in a superposition of both states. If we denote the state of the qubit by $|\chi\rangle$, then we can mathematically write this superposition as:

\[ |\chi\rangle = \alpha|0\rangle + \beta|1\rangle \tag{2.1} \]

Where $\alpha$ and $\beta$ are the (complex) probability amplitudes. This means that if we measure the qubit in this basis, the measurement outcome has probability $|\alpha|^2$ of being $|0\rangle$ and $|\beta|^2$ of being $|1\rangle$. Since the total probability of a qubit being in any state has to be 1, it follows that these probability amplitudes are subject to the following constraint:

\[ \alpha^*\alpha + \beta^*\beta = |\alpha|^2 + |\beta|^2 = 1 \tag{2.2} \]

Physically, a qubit can be any two-state quantum mechanical system, such as the spin of an electron (e.g. up and down), or the polarisation of a photon (e.g. vertical and horizontal).
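As a concrete illustration of equations 2.1 and 2.2, a minimal Python sketch (an added illustration, not part of the original text; the chosen amplitudes are arbitrary):

```python
import numpy as np

# |chi> = alpha|0> + beta|1>, with normalised complex amplitudes (eq. 2.2)
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
chi = np.array([alpha, beta])

probs = np.abs(chi) ** 2        # measurement probabilities |alpha|^2, |beta|^2
print(probs, probs.sum())       # [0.5 0.5] 1.0
```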

2.2 Pure and mixed states

Now consider two separate qubits $|\chi\rangle$ and $|\phi\rangle$. We write their states as follows:

\[ |\chi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\phi\rangle = \gamma|0\rangle + \delta|1\rangle \tag{2.3} \]

These states are called pure. A pure state is essentially any state that can be represented by a state vector in Hilbert space (e.g. $|\chi\rangle$). Thus, the fact that we can write these states explicitly this way means they are pure states. Since we know the explicit superposition of states in a pure system, there is no classical uncertainty present. Therefore, a pure state represents the maximum amount of knowledge about a quantum system (Porter 2004, chapter 4).

The combined state of the two qubits is given by the tensor product of their separate states (see Appendix A.1):

\[ |\chi\rangle \otimes |\phi\rangle = (\alpha|0\rangle + \beta|1\rangle) \otimes (\gamma|0\rangle + \delta|1\rangle) = \alpha\gamma\,|0\rangle\otimes|0\rangle + \alpha\delta\,|0\rangle\otimes|1\rangle + \beta\gamma\,|1\rangle\otimes|0\rangle + \beta\delta\,|1\rangle\otimes|1\rangle \tag{2.4} \]

In the rest of this thesis $|a\rangle \otimes |b\rangle$ shall often be abbreviated as either $|a\rangle|b\rangle$ or $|ab\rangle$. This state is itself a pure state. Furthermore, it is a product state, since it is the product of two pure states. Note that this system is now in a linear superposition of four different states, and can again be separated into its constituent qubits. Consider now the following composite pure state:

\[ |\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle|0\rangle + |1\rangle|1\rangle\big) = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big) \tag{2.5} \]

This state has been prepared in such a way that each qubit takes the value of the other. We want to separate this combined state into its constituent qubits. Comparing this state with equation 2.4 yields the following constraints:

\[ \alpha\gamma = \tfrac{1}{\sqrt{2}}, \qquad \beta\delta = \tfrac{1}{\sqrt{2}}, \qquad \alpha\delta = 0, \qquad \beta\gamma = 0 \tag{2.6} \]

Clearly, these constraints cannot be satisfied simultaneously. The conclusion is that the two-qubit system $|\psi\rangle$ cannot be split into two pure states that describe its constituent qubits separately: such a state is said to be entangled. More specifically, the qubits are correlated, and this correlation means that measuring one also gives information about the other. Operators that describe entangled qubits are represented by a statistical ensemble of multiple state vectors (Macris 2009-2010, chapter 5.1). Such a system is called mixed, and cannot be represented by a single state vector in a Hilbert space.

2.3 A two-qubit density operator

Consider the example above. Both qubits are in a mixed state: a statistical ensemble of pure states. For a system described by a pure state, there exists a quantum uncertainty as to what any measurement outcome will be (equations 2.1, 2.2). For a mixed state, there exists a classical uncertainty as to what pure state the system is in, represented by the statistical ensemble.

Both mixed and pure states can be described by an appropriate density matrix, or density operator. For any system in a pure state $|\psi\rangle$, the density operator $\rho$ of that system is defined as (Porter 2004, chapter 3):

\[ \rho \equiv |\psi\rangle\langle\psi| \tag{2.7} \]


If $|\psi\rangle$ is a composite state, the states of its constituent qubits can be described by a reduced density operator. Section 2.4 will present a justification for this statement. The reduced density operator is defined as the partial trace - over all qubits, except the one that is to be considered - of the full density operator (see Appendix A.2 for a more in-depth discussion of the trace). For example, the density operator of the state $|\psi\rangle$ in the example above can explicitly be written:

\[ \rho = \tfrac{1}{2}\big( |00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11| \big) \tag{2.8} \]

The reduced density operator $\rho_A$ that describes the first qubit A is then obtained by taking the partial trace of $\rho$ over the degrees of freedom of the second qubit B (i.e. $|0\rangle$ and $|1\rangle$). All information we could theoretically find about qubit A without also looking at B is contained in this operator, since this trace is equivalent to throwing away all information about B (Macris 2009-2010, chapter 5.3).

\[ \rho_A = \mathrm{Tr}_B[\rho] = \tfrac{1}{2}\,\mathrm{Tr}_B\big( |00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11| \big) = \tfrac{1}{2}\big( |0\rangle\langle 0| + |1\rangle\langle 1| \big) \tag{2.9} \]

Or as a matrix in the $\{|0\rangle, |1\rangle\}$ basis:

\[ \rho_A = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \tag{2.10} \]

Hence density operators are often called density matrices. The mixed state density operator in equation 2.9 is a normalised sum of projection operators. It shall be shown in the next section that mixed states in general can be described by such a sum of projectors. Note what has happened here: the full system is written as a pure state, which represents the maximum possible knowledge about a quantum state. Classically, full knowledge about a composite system means full knowledge of its constituent parts. Quantum mechanically, however, this is not true, since the reduced density matrix $\rho_A$ does not describe qubit A as a pure state, but rather as a mix of pure states: a mixed state. Thus, in describing A alone, a classical uncertainty has been introduced into the system. The same is true for qubit B, as can easily be seen by taking the trace of $\rho$ over A. Interestingly, both reduced density matrices have the same eigenvalues, which is the basis for an important argument made later on. This property is no coincidence, as is proven in Appendix A.3.
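This partial trace is easy to reproduce numerically; a minimal numpy sketch (an added illustration, not part of the thesis):

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2) in the {|00>,|01>,|10>,|11>} basis
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                  # full density matrix (eq. 2.8)

# Partial traces: reshape to (A, B, A', B') and contract the B or A indices
rho_A = np.einsum('ajbj->ab', rho.reshape(2, 2, 2, 2))   # Tr_B (eq. 2.9)
rho_B = np.einsum('iaib->ab', rho.reshape(2, 2, 2, 2))   # Tr_A

print(rho_A)                                     # [[0.5, 0], [0, 0.5]] (eq. 2.10)
# Both reduced matrices share the same eigenvalues (cf. Appendix A.3)
print(np.linalg.eigvalsh(rho_A), np.linalg.eigvalsh(rho_B))
```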

2.4 The density matrix formalism

Thus far every concept has been described in terms of qubits: two-state systems. In general, a system can be in a superposition of many states, and any orthogonal, complete basis $\{|u_n\rangle\}$ may be used to describe it. Consider now a system in an arbitrary state $|\psi\rangle$. Its expansion in the chosen basis is:

\[ |\psi\rangle = \sum_n a_n |u_n\rangle \tag{2.11} \]

The density operator for this state is:

\[ \rho = |\psi\rangle\langle\psi| = \sum_{i,j} a_j^* a_i\, |u_i\rangle\langle u_j| \tag{2.12} \]

And its matrix elements are:

\[ \rho_{nm} = \langle u_n|\rho|u_m\rangle = \sum_{i,j} a_j^* a_i\, \langle u_n|u_i\rangle\langle u_j|u_m\rangle = a_m^* a_n \tag{2.13} \]

$|\psi\rangle$ is normalised, so we obtain:

\[ 1 = \langle\psi|\psi\rangle = \sum_n |a_n|^2 = \sum_n a_n^* a_n = \sum_n \rho_{nn} = \mathrm{Tr}[\rho] \tag{2.14} \]

Furthermore, the expectation value of any observable (Hermitian operator) $Q$ on this state is:

\[ \langle Q\rangle = \langle\psi|Q|\psi\rangle = \sum_{m,n} a_m^* a_n\, \langle u_m|Q|u_n\rangle = \sum_{m,n} \rho_{nm} Q_{mn} = \sum_n [\rho Q]_{nn} = \mathrm{Tr}[\rho Q] \tag{2.15} \]

Here it becomes apparent why the density operator is defined as it is: the expectation value of any observable on a pure state is now equal to the trace of the product of the density operator and that observable. Note that density operators of pure states are idempotent:

\[ \rho^2 = |\psi\rangle\langle\psi|\psi\rangle\langle\psi| = |\psi\rangle\langle\psi| = \rho \tag{2.16} \]

The real power of the density operator, however, lies in its ability to describe statistical mixtures of states. Consider a system that is in state $|\psi_1\rangle$ with probability $p_1$, in state $|\psi_2\rangle$ with probability $p_2$, et cetera, up to state $|\psi_n\rangle$ with probability $p_n$. The probability of finding the system in any state whatsoever is 1, so the values of the $p_i$ are constrained by $\sum_i p_i = 1$, and since the $p_i$ are probabilities, also by $0 \le p_i \le 1$.

As before, the expectation value of $Q$ in any state $|\psi_i\rangle$ is $\langle Q\rangle_i = \langle\psi_i|Q|\psi_i\rangle = \mathrm{Tr}[\rho_i Q]$. The overall expectation value on the full statistical ensemble, $\langle Q\rangle$, is equal to the sum of the expectation values in each state, weighted by the probability of the mixture being in that state (Porter 2004, chapter 3). Mathematically:

\[ \langle Q\rangle = \sum_i p_i \langle Q\rangle_i = \sum_i p_i\,\mathrm{Tr}[\rho_i Q] = \mathrm{Tr}\Big[\sum_i p_i \rho_i Q\Big] = \mathrm{Tr}[\rho Q] \tag{2.17} \]

Where:

\[ \rho \equiv \sum_i p_i \rho_i \tag{2.18} \]

This is the density operator of the system: a linear combination of the pure density operators (projection operators), weighted by their probabilities in the statistical mixture. Furthermore, this definition shows why the reduced density operator defined in section 2.3 is indeed the density operator that describes the separate parts A or B of an entangled system AB. For suppose we wish to compute the expectation value of an operator $R = R_A \otimes I_B$ that does not act on part B ($I$ is the identity operator). If the full system is described by a pure state $|\psi\rangle = \sum_{a,b} \psi(a,b)\,|ab\rangle$, then we obtain (Samani 2014):

\[ \langle\psi|R|\psi\rangle = \langle\psi|R_A \otimes I_B|\psi\rangle = \sum_{a,a',b,b'} \psi^*(a',b')\,\psi(a,b)\, \langle a'b'|R_A \otimes I_B|ab\rangle = \sum_{a,a',b,b'} \psi^*(a',b')\,\psi(a,b)\, \langle a'|R_A|a\rangle\langle b'|I_B|b\rangle = \sum_{a,a',b} \psi^*(a',b)\,\psi(a,b)\, \langle a'|R_A|a\rangle \tag{2.19} \]

But the reduced density operator of A is the partial trace over B of the full density operator:

\[ \rho_A = \mathrm{Tr}_B[\rho] = \sum_{a,a',b,b',\tilde b} \psi^*(a',b')\,\psi(a,b)\, \langle\tilde b|ab\rangle\langle a'b'|\tilde b\rangle = \sum_{a,a',b} \psi^*(a',b)\,\psi(a,b)\, |a\rangle\langle a'| \tag{2.20} \]

To prove that the reduced density operator $\rho_A$ plays the role of the density operator for system A, we have to show that equation 2.17 is satisfied, with $Q \to R$ and $\rho \to \rho_A$ (Susskind 2013, chapter 7). First of all, note that:

\[ \mathrm{Tr}[\rho_A R] = \mathrm{Tr}[\rho_A (R_A \otimes I_B)] = \mathrm{Tr}[\rho_A R_A]\,\mathrm{Tr}[\rho_A I_B] = \mathrm{Tr}[\rho_A R_A] \tag{2.21} \]

Where we have used that $\mathrm{Tr}[X \otimes Y] = \mathrm{Tr}[X]\,\mathrm{Tr}[Y]$, and that density matrices have unit trace. Now we may write:

\[ \mathrm{Tr}[\rho_A R] = \mathrm{Tr}[\rho_A R_A] = \sum_{a,a',b,\tilde a} \psi^*(a',b)\,\psi(a,b)\, \langle\tilde a|a\rangle\langle a'|R_A|\tilde a\rangle = \sum_{a,a',b} \psi^*(a',b)\,\psi(a,b)\, \langle a'|R_A|a\rangle = \langle\psi|R|\psi\rangle \tag{2.22} \]

This completes the proof that the reduced density operator of A - defined as the operator that is the partial trace over B of the full density operator - has all the required properties to function as a density operator for system A alone.

Let us now go back to equation 2.18 to determine some general properties of mixed state operators. Firstly, such an operator is Hermitian, since the $p_i$ are real and the $\rho_i$ are Hermitian. Secondly, its trace is equal to 1:

\[ \mathrm{Tr}[\rho] = \mathrm{Tr}\Big[\sum_i p_i\rho_i\Big] = \sum_i p_i\,\mathrm{Tr}[\rho_i] = \sum_i p_i = 1 \tag{2.23} \]

But it is not idempotent:

\[ \rho^2 = \sum_{i,j} p_i\rho_i\, p_j\rho_j = \sum_{i,j} p_i p_j\, |\psi_i\rangle\langle\psi_i|\psi_j\rangle\langle\psi_j| \neq \rho \tag{2.24} \]

Now, using the complete basis $\{|u_n\rangle\}$, we may write:

\[ |\psi_i\rangle = \sum_n (\alpha_i)_n |u_n\rangle \tag{2.25} \]

Plugging this into equation 2.24, we obtain:

\[ \rho^2 = \sum_{i,j} p_i p_j \Big(\sum_{n,l} (\alpha_j)^*_l (\alpha_i)_n\, |u_n\rangle\langle u_l|\Big) \Big(\sum_{m} (\alpha_i)^*_m (\alpha_j)_m\Big) = \sum_{i,j,l,m,n} p_i p_j\, (\alpha_j)^*_l (\alpha_i)_n (\alpha_i)^*_m (\alpha_j)_m\, |u_n\rangle\langle u_l| \tag{2.26} \]

The trace of this operator can be evaluated by noticing that $\mathrm{Tr}[\,|u_n\rangle\langle u_l|\,] = \delta_{nl}$, and that $\langle\psi_i|\psi_j\rangle = \sum_n (\alpha_i)^*_n (\alpha_j)_n$ (Porter 2004, chapter 3).

\[ \mathrm{Tr}[\rho^2] = \sum_{i,j,m,n} p_i p_j\, (\alpha_j)^*_n (\alpha_i)_n (\alpha_i)^*_m (\alpha_j)_m = \sum_{i,j} p_i p_j\, \langle\psi_j|\psi_i\rangle\langle\psi_i|\psi_j\rangle = \sum_{i,j} p_i p_j\, |\langle\psi_i|\psi_j\rangle|^2 \le \sum_{i,j} p_i p_j\, \langle\psi_i|\psi_i\rangle\langle\psi_j|\psi_j\rangle = \sum_i p_i \sum_j p_j = 1 \tag{2.27} \]

Where the Cauchy-Schwarz inequality $|\langle x|y\rangle|^2 \le \langle x|x\rangle\langle y|y\rangle$ has been used in the fourth step. Appendix A.4 contains the proof of this inequality.

If $\mathrm{Tr}[\rho^2] = 1$, then $|\psi_i\rangle = |\psi_j\rangle$ for all $i, j$. Then $\rho$ is simply a projection operator that describes the pure state $|\psi\rangle$. Conversely, $\rho$ describes a mixed state if $\mathrm{Tr}[\rho^2] < 1$. Both of these implications are readily understood by picturing $\rho$ as a matrix: $\rho$ is Hermitian, and hence diagonalisable. The trace is basis independent, so we choose $\rho$ to be in its diagonal form. Now $\rho^2$ is the product of two diagonal matrices, and is therefore diagonal as well (Porter 2004, chapter 8). The entries of $\rho^2$ are then simply the squares of the entries of $\rho$. Suppose $\rho$ describes a pure state; then it has a single entry that is one, and zeros in all other places, and the same will be true for its square. Now suppose $\rho$ describes a mixed state: it must have more than one entry that is greater than zero, but all of them should be less than one, since the trace of a density matrix is always equal to one. The square of any number smaller than one is smaller still, thus it follows that $\mathrm{Tr}[\rho^2] < 1$.

The diagonal elements of ρ have an interesting physical interpretation:

\[ \rho_{nn} = \sum_i p_i (\rho_i)_{nn} = \sum_i p_i \langle u_n|\rho_i|u_n\rangle = \sum_i p_i \langle u_n|\psi_i\rangle\langle\psi_i|u_n\rangle = \sum_i p_i\, |(\alpha_i)_n|^2 \tag{2.28} \]

Namely, such an element gives the probability of finding the system in the state $|u_n\rangle$ (Porter 2004, chapter 3).

2.5 Entanglement entropy

Reduced density operators were introduced in section 2.3. It was shown that information about parts of an entangled system is lost by considering these parts separately. We now want to introduce a quantitative measure of the amount of information we have about any system $\rho$. We define the von Neumann entropy $S = S(\rho)$ as this measure:

\[ S(\rho) = -\mathrm{Tr}[\rho \ln(\rho)] \tag{2.29} \]

Note that with an expansion into eigenstates:

\[ \rho = \sum_i p_i\rho_i = \sum_i p_i\, |\psi_i\rangle\langle\psi_i| \tag{2.30} \]

we can simplify:

\[ \ln(\rho) = \ln\Big(\sum_i p_i\, |\psi_i\rangle\langle\psi_i|\Big) = \sum_i \ln(p_i)\, |\psi_i\rangle\langle\psi_i| \tag{2.31} \]

This is justified because for a diagonal matrix $D$:

\[ \ln(D) = \ln\begin{pmatrix} d_1 & 0 & \cdots \\ 0 & d_2 & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} \ln(d_1) & 0 & \cdots \\ 0 & \ln(d_2) & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} = \sum_i \ln(d_i)\, |\delta_i\rangle\langle\delta_i| \tag{2.32} \]

Where $|\delta_i\rangle$ is the eigenvector of $D$ corresponding to the $i$-th eigenvalue $d_i$. Using equation 2.31 the entropy can now be written:

\[ S(\rho) = -\mathrm{Tr}[\ln(\rho)\,\rho] = -\mathrm{Tr}\Big[\sum_i \ln(p_i)\, |\psi_i\rangle\langle\psi_i|\, \rho\Big] = -\sum_i \ln(p_i)\,\mathrm{Tr}[\,|\psi_i\rangle\langle\psi_i|\,\rho\,] = -\sum_{i,j} \ln(p_i)\,\mathrm{Tr}[\,p_j |\psi_i\rangle\langle\psi_i|\psi_j\rangle\langle\psi_j|\,] = -\sum_i \ln(p_i)\,\mathrm{Tr}[\,p_i |\psi_i\rangle\langle\psi_i|\,] = -\sum_i p_i \ln(p_i)\,\mathrm{Tr}[\rho_i] = -\sum_i p_i \ln(p_i) \tag{2.33} \]

Thus we see that the von Neumann entropy is the quantum mechanical extension of the classical Gibbs entropy, $S_G = -k_B \sum_i p_i \ln(p_i)$. If a system is in a pure state there exists a single $p_i$ that is one for this system, with all others zero. $S(\rho)$ is simply zero in this case, since $\ln(1) = 0$, and since:

\[ \lim_{x\to 0}[\,x\ln(x)\,] = \lim_{x\to 0}\frac{\ln(x)}{1/x} = \lim_{x\to 0}\frac{1/x}{-1/x^2} = \lim_{x\to 0}[\,-x\,] = 0 \tag{2.34} \]

Where l'Hôpital's rule has been used in the second step. Thus a pure state has zero entropy, as expected, since we know the quantum state of such a system with classical probability 1.
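Numerically, $S(\rho)$ is most conveniently evaluated from the eigenvalues of $\rho$, exactly as in equation 2.33; a minimal Python sketch (an added illustration, not from the thesis):

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -Tr[rho ln rho], evaluated via the eigenvalues (eq. 2.33)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]              # drop zero eigenvalues: x ln x -> 0 (eq. 2.34)
    return float(-np.sum(p * np.log(p)))

pure = np.diag([1.0, 0.0])        # pure state: S = 0
mixed = np.diag([0.5, 0.5])       # maximally mixed qubit: S = ln 2
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```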

Consider now a density matrix $\rho$ on a Hilbert space $\mathcal{H}$, describing a statistical ensemble of a two-fold system. The subsystems A, B live on their respective Hilbert spaces $\mathcal{H}_A$, $\mathcal{H}_B$. A reduced density operator can be defined for each subsystem, by taking the trace over all the other systems. As before, these operators contain all the information about their subsystems alone (Macris 2009-2010, chapter 5.3).

\[ \rho_A = \mathrm{Tr}_B[\rho], \qquad \rho_B = \mathrm{Tr}_A[\rho] \tag{2.35} \]

These two subsystems can be recombined into a density operator $\rho_{AB}$ on $\mathcal{H}$, which contains the same information as $\rho_A$ and $\rho_B$ together:

\[ \rho_{AB} \equiv \rho_A \otimes \rho_B \tag{2.36} \]

Note that $\rho_{AB} = \rho$ if and only if systems A and B are not entangled. That is, if $|\psi\rangle$ is separable such that:

\[ |\psi\rangle = |A\rangle \otimes |B\rangle \tag{2.37} \]

Then the density operator is:

\[ \rho = |\psi\rangle\langle\psi| = (|A\rangle \otimes |B\rangle)(\langle A| \otimes \langle B|) = |A\rangle\langle A| \otimes |B\rangle\langle B| \tag{2.38} \]

With reduced density operators:

\[ \rho_A = |A\rangle\langle A|, \qquad \rho_B = |B\rangle\langle B| \tag{2.39} \]

And it follows that:

\[ \rho_{AB} = \rho_A \otimes \rho_B = |A\rangle\langle A| \otimes |B\rangle\langle B| = \rho \tag{2.40} \]

If, however, A and B are correlated, then $\rho$ contains information about those correlations, while $\rho_{AB}$ does not. Hence, $\rho$ has the lesser entropy of the two (Porter 2004, chapter 8). Mathematically:

\[ S(\rho) \le S(\rho_{AB}) \tag{2.41} \]

Where the equality only holds if there are no correlations. Furthermore, the entropy has the interesting property that:

\[ S(\rho_{AB}) = S(\rho_A \otimes \rho_B) = S(\rho_A) + S(\rho_B) \tag{2.42} \]

Which can be proven by choosing a basis in which $\rho_A$ and $\rho_B$ are diagonal: in this basis $\rho_{AB}$ will be diagonal as well (Porter 2004, chapter 8). Denote the diagonal elements of $\rho_A$ and $\rho_B$ by $a_i$ and $b_j$ respectively, where $i \in \{1, 2, \ldots, n_A\}$ and $j \in \{1, 2, \ldots, n_B\}$, with $n_A$, $n_B$ the respective dimensions of $\mathcal{H}_A$ and $\mathcal{H}_B$. The diagonal elements of $\rho_{AB}$ are now given by $a_i b_j$. Thus:

\[ S(\rho_{AB}) = -\mathrm{Tr}[\rho_{AB}\ln(\rho_{AB})] = -\sum_i^{n_A}\sum_j^{n_B} a_i b_j \ln(a_i b_j) \tag{2.43} \]

\[ S(\rho_A) + S(\rho_B) = -\big(\mathrm{Tr}[\rho_A\ln(\rho_A)] + \mathrm{Tr}[\rho_B\ln(\rho_B)]\big) = -\Big(\sum_i^{n_A} a_i\ln(a_i) + \sum_j^{n_B} b_j\ln(b_j)\Big) = -\Big(\sum_i^{n_A}\sum_j^{n_B} a_i b_j\ln(a_i) + \sum_i^{n_A}\sum_j^{n_B} a_i b_j\ln(b_j)\Big) = -\sum_i^{n_A}\sum_j^{n_B} a_i b_j \ln(a_i b_j) = S(\rho_{AB}) \tag{2.44} \]


Chapter 3

Entanglement of harmonic oscillators

An interesting exercise is to model a real, free, massless, scalar, quantum field as an infinite collection of simple harmonic oscillators (SHO), entangled with one another. Further discussion of this idea will come in chapter 4. If we trace over the field degrees of freedom located inside an imaginary region of space, we may calculate the associated entanglement entropy S using our obtained reduced density matrix.

Of main interest is the dependence of S on the region’s dimensions: due to the conceptual similarities between this configuration and a black hole, we might expect entropy to go as the surface area. Furthermore, tracing over the degrees of freedom outside of this region gives the complementary density matrix, and according to the statement made at the end of section 2.3 both matrices should have the same eigenvalues (equation A.29), and thus the same entropy (equation 2.33). Hence, we might expect the entropy to be dependent on a region shared by both spaces. The only candidate for such a region is the surface separating them, which is another reason to expect this dependence on area (Srednicki 1993, page 1).

In this chapter we will derive the methods required to calculate entanglement entropy for configurations of SHO. Our goal is to find an expression for the entropy in terms of the Hamiltonian’s coupling matrix (refer to equation 3.22). These SHO are taken to be in their ground state, as this simplifies the required calculations (Das, Shankaranarayanan, and Sur 2008).

3.1 Two entangled oscillators

Let's first look at a system of two entangled SHO, labelled 1 and 2, described by the following Hamiltonian:

\[ H = \tfrac{1}{2}\big[\,p_1^2 + p_2^2 + k_0(x_1^2 + x_2^2) + k_1(x_1 - x_2)^2\,\big] \tag{3.1} \]

With $k_0$ and $k_1$ constants. The first three terms simply represent the Hamiltonian of two uncoupled SHO. The final term represents the coupling between the two oscillators, modelled as another oscillator with a different coupling constant. Note that the constants $k_0$, $k_1$ can be interpreted as mass times the squared angular frequency of each oscillator.

We introduce the normal coordinates:

\[ x_\pm = \tfrac{1}{\sqrt{2}}(x_1 \pm x_2) \tag{3.2} \]

Then we see that:

\[ x_\pm^2 = \tfrac{1}{2}\big(x_1^2 + x_2^2 \pm 2x_1x_2\big) \tag{3.3} \]

And we may write:

\[ x_1^2 + x_2^2 = x_+^2 + x_-^2, \qquad (x_1 - x_2)^2 = x_1^2 + x_2^2 - 2x_1x_2 = 2x_-^2 \tag{3.4} \]

Furthermore, since $p = \partial/\partial x$ (ignoring constants), we have:

\[ p_{1,2} = \frac{\partial}{\partial x_+}\frac{\partial x_+}{\partial x_{1,2}} + \frac{\partial}{\partial x_-}\frac{\partial x_-}{\partial x_{1,2}} = \tfrac{1}{\sqrt{2}}\Big(\frac{\partial}{\partial x_+} \pm \frac{\partial}{\partial x_-}\Big) \tag{3.5} \]

Where the $+$ and $-$ signs correspond to $p_1$ and $p_2$ respectively. Squaring both sides yields:

\[ p_{1,2}^2 = \frac{1}{2}\Big(\frac{\partial^2}{\partial x_+^2} + \frac{\partial^2}{\partial x_-^2} \pm 2\,\frac{\partial}{\partial x_+}\frac{\partial}{\partial x_-}\Big) \tag{3.6} \]

And we find that:

\[ p_1^2 + p_2^2 = \frac{\partial^2}{\partial x_+^2} + \frac{\partial^2}{\partial x_-^2} = p_+^2 + p_-^2 \tag{3.7} \]

Thus, these new variables simplify the Hamiltonian as follows:

\[ H = \tfrac{1}{2}\big[\,p_+^2 + p_-^2 + k_0(x_+^2 + x_-^2) + 2k_1x_-^2\,\big] = \tfrac{1}{2}\big[\,p_+^2 + p_-^2 + k_0x_+^2 + (k_0 + 2k_1)\,x_-^2\,\big] = \tfrac{1}{2}\big[\,p_+^2 + p_-^2 + \omega_+^2x_+^2 + \omega_-^2x_-^2\,\big] \tag{3.8} \]

Where we have defined $\omega_+ = k_0^{1/2}$ and $\omega_- = (k_0 + 2k_1)^{1/2}$ as the angular frequencies of the new, uncoupled oscillators. The Hamiltonian has been diagonalised. Since the Hamiltonian is now a sum of two SHO Hamiltonians, the wave functions describing this system are a product of two SHO wave functions of the appropriate frequencies. Thus, the normalised ground state wave function is given by (Griffiths 2005, chapter 2):

\[ \psi_0(x_1, x_2) = \psi(x_+)\,\psi(x_-) = \Big(\pi^{-1/4}\omega_+^{1/4}\exp\big[-\tfrac{1}{2}\omega_+x_+^2\big]\Big)\Big(\pi^{-1/4}\omega_-^{1/4}\exp\big[-\tfrac{1}{2}\omega_-x_-^2\big]\Big) = \pi^{-1/2}(\omega_+\omega_-)^{1/4}\exp\big[-\tfrac{1}{2}(\omega_+x_+^2 + \omega_-x_-^2)\big] \tag{3.9} \]

Our idea is now to imagine an inaccessible region, which encloses one of our oscillators, but not the other. Mathematically, we construct a density matrix and trace over the 'inside' oscillator 1 to obtain a reduced density matrix for the 'outside' oscillator 2. First, write $\psi_0(x_1, x_2)$ as a state vector in Hilbert space. Choosing the position basis:

\[ |\psi_0\rangle = \int_{-\infty}^{+\infty} dx_1\,dx_2\; \psi_0(x_1, x_2)\, |x_1\rangle|x_2\rangle \tag{3.10} \]

An expression for the ground state density matrix is:

\[ \rho = |\psi_0\rangle\langle\psi_0| = \int dx_1\,dx_2\,dx_1'\,dx_2'\; \psi_0(x_1, x_2)\,\psi_0^*(x_1', x_2')\, |x_1\rangle|x_2\rangle\langle x_1'|\langle x_2'| \tag{3.11} \]

And the reduced density matrix of the outside oscillator is given by (Samani 2014):

\[ \rho_{\mathrm{out}} = \mathrm{Tr}_{\mathrm{in}}[\rho] = \int dq\,dx_1\,dx_2\,dx_1'\,dx_2'\; \psi_0(x_1,x_2)\,\psi_0^*(x_1',x_2')\, \langle q|x_1\rangle\langle x_1'|q\rangle\, |x_2\rangle\langle x_2'| = \int dq\,dx_1\,dx_2\,dx_1'\,dx_2'\; \psi_0(x_1,x_2)\,\psi_0^*(x_1',x_2')\, \delta(x_1 - q)\,\delta(x_1' - q)\, |x_2\rangle\langle x_2'| = \int dx_1\,dx_2\,dx_2'\; \psi_0(x_1,x_2)\,\psi_0^*(x_1,x_2')\, |x_2\rangle\langle x_2'| \tag{3.12} \]

Thus, a matrix element $\rho_{\mathrm{out}}(x_2, x_2')$ is represented as:

\[ \rho_{\mathrm{out}}(x_2, x_2') = \langle x_2|\rho_{\mathrm{out}}|x_2'\rangle = \int dx_1\,d\tilde x_2\,d\tilde x_2'\; \psi_0(x_1,\tilde x_2)\,\psi_0^*(x_1,\tilde x_2')\, \langle x_2|\tilde x_2\rangle\langle\tilde x_2'|x_2'\rangle = \int dx_1\; \psi_0(x_1,x_2)\,\psi_0^*(x_1,x_2') \tag{3.13} \]

The reduced density matrix elements are labelled by the same variables on which they depend, so equation 3.13 gives a complete description of the oscillator on the outside. To solve this integral, insert equation 3.9 and expand in terms of $x_1$ and $x_2$. Then separate terms where possible, and finally note that the integral is over $x_1$:

\[ \rho_{\mathrm{out}}(x_2, x_2') = \frac{1}{\pi}(\omega_+\omega_-)^{1/2} \int_{-\infty}^{+\infty} dx_1\, \exp\Big[-\frac{1}{4}\big(A\,x_1^2 + B\,x_1 + C\big)\Big] \tag{3.14} \]

With:

\[ A = 2\,\omega_+ + 2\,\omega_-, \qquad B = 2\,\omega_+(x_2 + x_2') - 2\,\omega_-(x_2 + x_2'), \qquad C = \omega_+(x_2^2 + x_2'^2) + \omega_-(x_2^2 + x_2'^2) \tag{3.15} \]

The integral can be calculated using Mathematica. The frequencies $\omega_+$ and $\omega_-$ are positive, so $\mathrm{Re}(A) > 0$, and the integral evaluates to:

\[ \int_{-\infty}^{+\infty} dx\, \exp\Big[-\frac{1}{4}\big(A\,x^2 + B\,x + C\big)\Big] = 2\Big(\frac{\pi}{A}\Big)^{1/2} \exp\Big[-\frac{1}{4}\Big(C - \frac{B^2}{4A}\Big)\Big] \tag{3.16} \]

With some algebra then, equation 3.14 becomes (Srednicki 1993, page 2):

\[ \rho_{\mathrm{out}}(x_2, x_2') = \pi^{-1/2}(\gamma - \beta)^{1/2} \exp\Big[-\frac{\gamma}{2}\big(x_2^2 + x_2'^2\big) + \beta x_2 x_2'\Big] \tag{3.17} \]

The constants are:

\[ \gamma - \beta = \frac{2\,\omega_+\omega_-}{\omega_+ + \omega_-}, \qquad \beta = \frac{1}{4}\frac{(\omega_+ - \omega_-)^2}{\omega_+ + \omega_-}, \qquad \gamma = \frac{1}{2}(\omega_+ + \omega_-) - \frac{1}{4}\frac{(\omega_+ - \omega_-)^2}{\omega_+ + \omega_-} \tag{3.18} \]

According to equation 2.33 the entropy of $\rho_{\mathrm{out}}$ is most easily calculated when its eigenvalues $p_n$ are known. To find these eigenvalues, we have to solve the following continuous eigensystem equation simultaneously for all $n$ (which runs from zero to infinity):

\[ \int_{-\infty}^{+\infty} dx'\, \rho_{\mathrm{out}}(x, x')\, f_n(x') = p_n f_n(x) \tag{3.19} \]

The solution is (Srednicki 1993, page 3):

\[ p_n = (1 - \xi)\,\xi^n, \qquad f_n(x) = H_n\big(\alpha^{1/2}x\big)\,\exp\Big[-\frac{1}{2}\alpha x^2\Big] \tag{3.20} \]

Where $H_n$ is a (physicist's) Hermite polynomial, $\alpha = (\gamma^2 - \beta^2)^{1/2} = (\omega_+\omega_-)^{1/2}$, and $\xi = \beta/(\gamma + \alpha)$. In accord with equation 2.33 the entropy is now:

\[ S(\xi) = -\sum_{n=0}^{\infty} p_n \ln(p_n) = -\sum_{n=0}^{\infty} (1-\xi)\,\xi^n \ln\big((1-\xi)\,\xi^n\big) = -\sum_{n=0}^{\infty} (1-\xi)\,\xi^n \big(\ln(1-\xi) + n\ln(\xi)\big) = -(1-\xi)\ln(1-\xi)\sum_{n=0}^{\infty}\xi^n - (1-\xi)\ln(\xi)\sum_{n=0}^{\infty} n\,\xi^n = -(1-\xi)\ln(1-\xi)\,\frac{1}{1-\xi} - (1-\xi)\ln(\xi)\,\frac{\xi}{(\xi-1)^2} = -\ln(1-\xi) - \frac{\xi}{1-\xi}\ln(\xi) \tag{3.21} \]

Which is easily evaluated when ξ is known.
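Since the whole two-oscillator calculation reduces to equations 3.18-3.21, it fits in a few lines of code; a minimal Python sketch (an added illustration, assuming $k_0, k_1 > 0$):

```python
import math

def two_oscillator_entropy(k0: float, k1: float) -> float:
    """Entanglement entropy of two coupled SHO, tracing over one of them
    (eqs. 3.18-3.21); requires k0 > 0 and k1 > 0."""
    w_p = math.sqrt(k0)                         # omega_+
    w_m = math.sqrt(k0 + 2 * k1)                # omega_-
    beta = 0.25 * (w_p - w_m) ** 2 / (w_p + w_m)
    gamma = 0.5 * (w_p + w_m) - beta
    alpha = math.sqrt(gamma ** 2 - beta ** 2)   # = sqrt(w_p * w_m)
    xi = beta / (gamma + alpha)
    return -math.log1p(-xi) - xi / (1 - xi) * math.log(xi)

print(two_oscillator_entropy(1.0, 1.0))         # ~0.0944
```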

3.2 N entangled oscillators

We may generalise this system to one of $N$ entangled oscillators with Hamiltonian:

\[ H = \frac{1}{2}\sum_{i=1}^{N} p_i^2 + \frac{1}{2}\sum_{i,j}^{N} x_i K_{ij} x_j \tag{3.22} \]

$K_{ij}$ is the coupling matrix, and therefore has to be real, since coupling is in principle a measurable quantity. Furthermore, it is symmetric, because the coupling between $i$ and $j$ equals the coupling between $j$ and $i$. Finally, it must have positive eigenvalues, since its eigenvectors form the basis in which $H$ is uncoupled, and are thus associated with the frequencies of these uncoupled oscillators (compare equation 3.8) (Lay 2012, chapter 7). The normalised ground state is again a product of the $N$ uncoupled wavefunctions. Define the matrix $\Omega = U^T K_D^{1/2} U$, where $K_D = U K U^T$ is diagonal and $U$ is orthogonal (Lay 2012, chapter 5). Then $\det(\Omega)$ is the product of all uncoupled angular frequencies (compare $\omega_+\omega_-$ from the two-oscillator case), which is readily understood by viewing $\Omega$ in its diagonal form. Thus, the generalisation of 3.9 is (Srednicki 1993, page 4):

\[ \psi_0(x_1, x_2, \ldots, x_N) = \pi^{-N/4}\det(\Omega)^{1/4}\exp\Big[-\frac{1}{2}\,x^T\cdot\Omega\cdot x\Big] \tag{3.23} \]

Where $x$ is the $N$-vector with components $x_1, x_2, \ldots, x_N$. Choosing the position basis, the density operator is (compare equation 3.11):

\[ \rho = \int \prod_{i=1}^{N} dx_i\,dx_i'\; \psi_0(x_1, x_2, \ldots, x_N)\,\psi_0^*(x_1', x_2', \ldots, x_N')\, |x_1\rangle\cdots|x_N\rangle\langle x_1'|\cdots\langle x_N'| \tag{3.24} \]

Tracing over the first $n$ 'inside' oscillators, the matrix elements of the reduced density matrix are (compare equation 3.13):

\[ \rho_{\mathrm{out}}(x_{n+1}, \ldots, x_N;\, x_{n+1}', \ldots, x_N') = \int \prod_{i=1}^{n} dx_i\; \psi_0(x_1, \ldots, x_n; x_{n+1}, \ldots, x_N)\, \psi_0^*(x_1, \ldots, x_n; x_{n+1}', \ldots, x_N') \tag{3.25} \]

Ω is symmetric, since K is symmetric. Splitting Ω into matrices describing the inside and outside oscillators separately, we have:

\[ \Omega = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix} \tag{3.26} \]

Where $A$ is $n \times n$, $B$ is $n \times (N-n)$, and $C$ is $(N-n) \times (N-n)$. Note that $A$ and $C$ are also symmetric. This allows us to evaluate the integrals in 3.25 explicitly. First, we rewrite the vectors $x$ and $x'$ as follows:

\[ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \\ x_{n+1} \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} u \\ v \end{pmatrix}, \qquad x' = \begin{pmatrix} x_1 \\ \vdots \\ x_n \\ x_{n+1}' \\ \vdots \\ x_N' \end{pmatrix} = \begin{pmatrix} u \\ v' \end{pmatrix} \tag{3.27} \]

Using this notation, we find that:

\[ x^T\cdot\Omega\cdot x = u^T A u + u^T B v + v^T B^T u + v^T C v, \qquad x'^T\cdot\Omega\cdot x' = u^T A u + u^T B v' + v'^T B^T u + v'^T C v' \tag{3.28} \]

Furthermore, we know that:

\[ x^T\cdot M^T\cdot y = \sum_{i,j} x_i M^T_{ij} y_j = \sum_{i,j} x_i M_{ji} y_j = \sum_{i,j} y_j M_{ji} x_i = y^T\cdot M\cdot x \tag{3.29} \]

So we can simplify 3.28 to:

\[ x^T\cdot\Omega\cdot x = u^T A u + v^T C v + 2\,u^T B v, \qquad x'^T\cdot\Omega\cdot x' = u^T A u + v'^T C v' + 2\,u^T B v' \tag{3.30} \]

Plugging 3.23 into 3.25 gives our integrand. The constant factors will be ignored for now, since normalisation requires that the eigenvalues sum to one.

\[ \rho_{\mathrm{out}}(v; v') = \int_{-\infty}^{+\infty} d^n u\, \exp\Big[-\frac{1}{2}\big(x^T\cdot\Omega\cdot x + x'^T\cdot\Omega\cdot x'\big)\Big] \tag{3.31} \]

The integrand can be written as:

\[ \exp\Big[-\Big(u^T A u + u^T B v + u^T B v' + \frac{1}{2}\big(v^T C v + v'^T C v'\big)\Big)\Big] \tag{3.32} \]

We'll use Wolfram Mathematica to evaluate the integral. The result is:

\[ \rho_{\mathrm{out}}(v; v') \sim \exp\Big[\frac{1}{2}\big(v\beta v + v'\beta v' + v\beta v' + v'\beta v - vCv - v'Cv'\big)\Big] \sim \exp\Big[v\cdot\beta\cdot v' - \frac{1}{2}\big(v\cdot\gamma\cdot v + v'\cdot\gamma\cdot v'\big)\Big] \tag{3.33} \]

Where $\beta = \frac{1}{2}B^TA^{-1}B$, $\gamma = C - \beta$, and $v$ is an $(N-n)$-vector. For convenience we rename $v$ to $x$. Note that this vector only consists of the last $N-n$ components of the original $x$ (see 3.27). The reduced density matrix is then:

\[ \rho_{\mathrm{out}}(x, x') \sim \exp\Big[-\frac{1}{2}\big(x\cdot\gamma\cdot x + x'\cdot\gamma\cdot x'\big) + x\cdot\beta\cdot x'\Big] \tag{3.34} \]

To find the associated entanglement entropy, we need to find the eigenvalues of the reduced density matrix. First, we need to generalise equation 3.19 to apply to vectors $x$ and $x'$. Note that $\det(G)\,\rho_{\mathrm{out}}(Gx, Gx')$ has the same eigenvalues as $\rho_{\mathrm{out}}(x, x')$. This can be shown explicitly by computing the Jacobian (Adams and Essex 2010, chapters 3-4):

\[ \frac{\partial\big((Gx)_1, (Gx)_2, \ldots, (Gx)_m\big)}{\partial(x_1, x_2, \ldots, x_m)} = \det\begin{pmatrix} \dfrac{\partial(Gx)_1}{\partial x_1} & \cdots & \dfrac{\partial(Gx)_1}{\partial x_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial(Gx)_m}{\partial x_1} & \cdots & \dfrac{\partial(Gx)_m}{\partial x_m} \end{pmatrix} = \det(G) \tag{3.35} \]

Which is true for any diagonalisable matrix $G$, since the determinant of a matrix is independent of the chosen basis. It now follows that:

\[ \int dx'\, \det(G)\,\rho_{\mathrm{out}}(Gx, Gx')\,f_n(Gx') = \int \frac{d(Gx')}{\det(G)}\, \det(G)\,\rho_{\mathrm{out}}(Gx, Gx')\,f_n(Gx') = \int d(Gx')\, \rho_{\mathrm{out}}(Gx, Gx')\,f_n(Gx') = p_n f_n(Gx) \tag{3.36} \]

Thus the eigenvalues are the same as before. Now diagonalise $\gamma$ using the linear transformation $\gamma_D = V\gamma V^T$, where $V$ is orthogonal. Let $x = V^T\gamma_D^{-1/2}y$, so that:

\[ x\cdot\gamma\cdot x = \big(V^T\gamma_D^{-1/2}y\big)^T\, V^T\gamma_D V\, \big(V^T\gamma_D^{-1/2}y\big) = y\,\gamma_D^{-1/2}V\,V^T\gamma_D V\,V^T\gamma_D^{-1/2}\,y = y\,\gamma_D^{-1/2}\gamma_D\,\gamma_D^{-1/2}\,y = y\cdot y \tag{3.37} \]

Defining $\beta' = \gamma_D^{-1/2}\,V\beta V^T\,\gamma_D^{-1/2}$, equation 3.34 may be written:

\[ \rho_{\mathrm{out}}(y, y') \sim \exp\Big[-\frac{1}{2}\big(y\cdot y + y'\cdot y'\big) + y\cdot\beta'\cdot y'\Big] \tag{3.38} \]

Now, find the orthogonal matrix $W$ such that $\beta'_D = W^T\beta' W$ is diagonal, and set $y = Wz$ to obtain:

\[ \rho_{\mathrm{out}}(z, z') \sim \exp\Big[-\frac{1}{2}\big(zW^TWz + z'W^TWz'\big) + zW^T\beta'Wz'\Big] \sim \exp\Big[-\frac{1}{2}\big(z\cdot z + z'\cdot z'\big) + z\cdot\beta'_D\cdot z'\Big] \sim \prod_{i=n+1}^{N} \exp\Big[-\frac{1}{2}\big(z_i^2 + z_i'^2\big) + \beta'_i z_i z_i'\Big] \tag{3.39} \]

Where $z_i$ is the $i$-th component of the vector $z$, and $\beta'_i$ is the corresponding $i$-th eigenvalue of $\beta'$. Note that this density operator is simply a product of density operators identical to equation 3.17. These systems have entropy $S(\xi)$, given by equation 3.21. Equation 2.44 is the recipe for calculating the entropy of an uncorrelated combined system from the entropies of its constituent parts. Thus, the total entropy is:

\[ S = \sum_{i=n+1}^{N} S(\xi_i) \tag{3.40} \]

With $\xi_i = \beta'_i/\big[\,1 + (1 - \beta_i'^2)^{1/2}\,\big]$, as obtained by comparison with $\xi = \beta/(\gamma + \alpha) = \beta/\big[\,\gamma + (\gamma^2 - \beta^2)^{1/2}\,\big]$ from the two-oscillator case. This is the general result for the entropy of $N$ entangled SHO, ignoring any information about the first $n$ (Srednicki 1993, page 5). It is important to note that this entropy depends entirely on the form of the coupling matrix $K$ in the Hamiltonian.
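The whole procedure condenses into a short numerical routine; a sketch using numpy and scipy (an added illustration - the thesis itself used Mathematica - exploiting the fact that $\beta'$ is similar to $\gamma^{-1}\beta$ and therefore shares its eigenvalues):

```python
import numpy as np
from scipy.linalg import sqrtm

def entanglement_entropy(K: np.ndarray, n: int) -> float:
    """Ground state entanglement entropy of N coupled oscillators,
    tracing over the first n of them (eqs. 3.22-3.40)."""
    omega = np.real(sqrtm(K))                     # Omega = K^(1/2)
    A, B, C = omega[:n, :n], omega[:n, n:], omega[n:, n:]
    beta = 0.5 * B.T @ np.linalg.inv(A) @ B       # beta = (1/2) B^T A^-1 B
    gamma = C - beta
    S = 0.0
    for b in np.linalg.eigvals(np.linalg.solve(gamma, beta)).real:
        if b > 1e-12:                             # S(xi = 0) = 0
            xi = b / (1 + np.sqrt(1 - b * b))
            S += -np.log1p(-xi) - xi / (1 - xi) * np.log(xi)
    return S

K = np.array([[2.0, -1.0], [-1.0, 2.0]])          # two oscillators, k0 = k1 = 1
print(entanglement_entropy(K, 1))                 # ~0.0944, as in section 3.1
```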


Chapter 4

From scalar fields to oscillators

In chapter 3 it was mentioned that a scalar field can be described as a collection of simple harmonic oscillators. In this chapter that concept will be discussed more quantitatively.

4.1 Equations of motion

In classical physics, the trajectory $q(t)$ of a system can be determined by extremising the action $S$ (Fowles and Cassiday 2005, chapter 10):

\[ S[q(t)] = \int_{t_1}^{t_2} L\big(t, q(t), \dot q(t), \ddot q(t), \ldots\big)\, dt \tag{4.1} \]

Where $L(t, q(t), \dot q(t), \ddot q(t), \ldots)$ is the Lagrangian of the system in question. Here $q$ is a generalised coordinate, and $t$ the time. From this the Euler-Lagrange equations are derived, which give the equations of motion for the system when solved:

\[ \frac{\partial L}{\partial q} = \frac{\partial}{\partial t}\frac{\partial L}{\partial \dot q} \tag{4.2} \]

Since a field is the assignment of a physical value to each point in spacetime, we want to use a relativistic theory: one that is Lorentz covariant. The obvious candidate is classical field theory, which gives the Lagrangian density of a real scalar field as follows (Ramond 1981; Landau and Lifshitz 1975):

\[ \mathcal{L} = \frac{1}{2}(\partial_\mu\phi)^2 - \frac{1}{2}m^2\phi^2 = \frac{1}{2}\Big[\dot\phi^2 - |\nabla\phi|^2 - m^2\phi^2\Big] \tag{4.3} \]

Where the Lagrangian is obtained by integration of the Lagrangian density over all space. The action is then given by:

\[ S[\phi] = \int d^4x\; \mathcal{L}(t, \phi, \dot\phi) \tag{4.4} \]

Where the integral is over spacetime. The covariant Euler-Lagrange equations are:

\[ \frac{\partial\mathcal{L}}{\partial\phi} = \partial_\mu\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} \tag{4.5} \]

For the left-hand side we find:

\[ \frac{\partial\mathcal{L}}{\partial\phi} = -m^2\phi \tag{4.6} \]

And for the right-hand side:

\[ \partial_\mu\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} = \frac{1}{2}\partial_\mu\frac{\partial\big(\partial_\lambda\phi\,\partial_\nu\phi\,\eta^{\lambda\nu}\big)}{\partial(\partial_\mu\phi)} = \partial_\mu\partial^\mu\phi = \Box\,\phi \tag{4.7} \]

This yields the equations of motion:

\[ \Box\,\phi + m^2\phi = 0 \tag{4.8} \]

In which we recognise the Klein-Gordon equation.

4.2 Modes of a free scalar field

We want to solve the Klein-Gordon equation for our field. Consider a free scalar field $\phi(\mathbf{x}, t)$. It is a physical quantity that has a value at each point in space and time. We may take the Fourier transform of this field, separating $\mathbf{x}$- and $t$-dependence (Mukhanov and Winitzki 2004, chapter 1):

\[ \phi(\mathbf{x}, t) = \int d^3k\; \exp[\,i\mathbf{k}\cdot\mathbf{x}\,]\,\phi_{\mathbf{k}}(t) \tag{4.9} \]

Where $\mathbf{k}$ and $\mathbf{x}$ are 3-vectors. Their components correspond to the three spatial directions. The $\phi_{\mathbf{k}}(t)$ are called the Fourier modes of the field. Expressing $\phi(\mathbf{x}, t)$ this way allows us to solve the Klein-Gordon equation. Plugging the field in gives:

\[ \int d^3k\; \exp[\,i\mathbf{k}\cdot\mathbf{x}\,]\,\Big(\ddot\phi_{\mathbf{k}}(t) + k^2\phi_{\mathbf{k}}(t) + m^2\phi_{\mathbf{k}}(t)\Big) = 0 \tag{4.10} \]

This can only be true if:

\[ \ddot\phi_{\mathbf{k}}(t) + (k^2 + m^2)\,\phi_{\mathbf{k}}(t) = 0 \tag{4.11} \]

For all $\mathbf{k} = k_x\hat{\mathbf{x}} + k_y\hat{\mathbf{y}} + k_z\hat{\mathbf{z}}$. We have obtained a system consisting of an infinite number of ordinary second-order differential equations, with plane waves as solutions. Thus, these differential equations describe classical harmonic oscillators of angular frequency $\omega_k \equiv \sqrt{k^2 + m^2}$. The interpretation then is that each value of $|\mathbf{k}|$ corresponds to a harmonic oscillator with momentum $|\mathbf{k}|$. This result reinforces the idea of a scalar field as a collection of SHOs.

4.3 The Hamiltonian and quantisation

So far all calculations have been done within a classical framework. To go from a classical field theory to a quantum field theory, the system needs to be quantised. We'll start by deriving the classical Hamiltonian from the field Lagrangian 4.3. Within the Lagrange formalism the Hamiltonian density is defined as the Legendre transform of the Lagrangian density (see Appendix A.5) with respect to $\dot\phi$. Note that the mass term $\frac{1}{2}m^2\phi^2$ is ignored, since the field is taken to be massless. The canonical field momentum $\pi$ is defined as:

\[ \pi = \frac{\partial\mathcal{L}}{\partial\dot\phi} = \frac{1}{2}\frac{\partial}{\partial\dot\phi}\Big[\dot\phi^2 - |\nabla\phi|^2\Big] = \dot\phi \tag{4.12} \]

Then the Hamiltonian density is:

\[ \mathcal{H} = \pi\dot\phi - \mathcal{L} = \pi^2 - \mathcal{L} = \frac{1}{2}\big(\pi^2 + |\nabla\phi|^2\big) \tag{4.13} \]

Thus, the Hamiltonian is given by:

\[ H = \int d^3x\; \mathcal{H} = \frac{1}{2}\int d^3x\; \big(\pi^2(\mathbf{x},t) + |\nabla\phi(\mathbf{x},t)|^2\big) \tag{4.14} \]

In order to quantise the Hamiltonian, we introduce operators $\hat\phi$ and $\hat\pi$. These replace the classical canonical coordinates $\phi$ and $\pi$. The operator algebra follows from postulating the canonical commutation relations:

\[ [\hat\phi(\mathbf{x},t), \hat\pi(\mathbf{y},t)] = i\delta(\mathbf{x}-\mathbf{y}), \qquad [\hat\phi(\mathbf{x},t), \hat\phi(\mathbf{y},t)] = [\hat\pi(\mathbf{x},t), \hat\pi(\mathbf{y},t)] = 0 \tag{4.15} \]

4.4 Regularisation of infinities in momentum space

If we were to calculate the field energy using equation 4.14, we would find that the result diverges to infinity. This can be readily understood by replacing φ by its Fourier transform, as given by equation 4.9. In this section we will derive this result in order to justify the introduction of cutoff frequencies. Afterwards, we will return to the view of a field in real space and apply these same cutoffs.

According to equation 4.11 the field modes $\phi_{\mathbf{k}}$ can be interpreted as representing harmonic oscillators of frequency $\omega_k \equiv \sqrt{k^2 + m^2}$. Then equation A.46 gives the Hamiltonian density of our field in terms of the Fourier modes:

\[ \mathcal{H}_{\mathbf{k}} = \frac{1}{2}\Big[\hat\pi_{\mathbf{k}}^2 + \omega_k^2\,\hat\phi_{\mathbf{k}}^2\Big] \tag{4.16} \]

Where $\hat\pi = \partial\hat\phi/\partial t$ is the canonical momentum operator (see equation A.45). Integrating over all momenta gives the full Hamiltonian, since the oscillators are decoupled:

\[ H = \int d^3k\; \mathcal{H}_{\mathbf{k}} = \frac{1}{2}\int d^3k\; \Big[\hat\pi_{\mathbf{k}}^2 + \omega_k^2\,\hat\phi_{\mathbf{k}}^2\Big] \tag{4.17} \]

Each of these oscillators has the familiar SHO ground state energy (Griffiths 2005, chapter 2):

\[ E_k = \frac{1}{2}\,\omega_k \tag{4.18} \]

Which means the total ground state energy is:

\[ E = \frac{1}{2}\int d^3k\; \omega_k \tag{4.19} \]

This expression diverges for two reasons. As it is now, $|\mathbf{k}|$ can become arbitrarily large. The value of $|\mathbf{k}|$ corresponds to the momentum of the oscillator, and thus to the energy. Therefore, increasing $|\mathbf{k}|$ means an increase in energy, and thus $E$ diverges. A solution is to introduce a cutoff frequency above which all oscillators are ignored. This essentially comes down to introducing a minimal grid size $a$: oscillators with wavelengths smaller than this are ignored. This is called the ultraviolet cutoff, after ultraviolet light having a relatively short wavelength. Our integral now becomes a sum:

\[ E = \frac{1}{2}\sum_{|\mathbf{k}| \le 2\pi/a} \omega_k \tag{4.20} \]

Where the sum is over all continuous combinations of $k_x$, $k_y$, $k_z$ with magnitude at most $2\pi/a$. Here we have used that $k = 2\pi/\lambda$, where $\lambda$ is the wavelength. Herein lies the second cause of divergence: $\mathbf{k}$ is continuous. Even if we stop counting above a certain frequency, we will still be adding up an infinite number of oscillators with positive energies. Clearly, this causes the energy to diverge. A solution is to discretise the possible values of $\mathbf{k}$. A way to do this is to introduce a cube with sides $L = Na$, with $N$ some large integer, in which the oscillators must exist (we demand that $\phi_{\mathbf{k}}$ vanishes outside). This is called the infrared cutoff, since larger wavelengths correspond to lower energies. Just like standing waves in classical harmonics, quantum oscillators must obey the following periodic boundary condition:

\[ \phi(0, y, z, t) = \phi(L, y, z, t) \tag{4.21} \]

And equivalently for the $y$ and $z$ directions. From equation 4.9 we now see that:

\[ 1 = \exp[\,0\,] = \exp[\,ik_{x,y,z}L\,] \tag{4.22} \]

Thus $k_{x,y,z} = 2\pi n_{x,y,z}/L$, with $n_{x,y,z} \in \mathbb{Z}$. Combining these two cutoffs gives a finite zero-point energy. This method of introducing cutoffs is called regularisation (Srednicki 1993, page 4). Its application is necessary to obtain physically sensible answers, and is justified by the fact that quantum field theories tend to fail at extreme energies (e.g. the Planck scale). Writing our field in its Fourier modes was a way to provide an intuitive explanation of why it can be modelled by an infinite set of harmonic oscillators; a view that was used in chapter 3 to derive a relatively simple formula for the entropy. However, since the oscillators from section 4.4 live in momentum space, we cannot easily use this view for our entropy calculation. For that reason, we will return to a view of the field in real space, as opposed to momentum space, in the next chapter and redo the regularisation.
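How the two regulators render the zero-point energy finite can be made concrete with a small numerical experiment; a Python sketch (an added illustration; the values of $a$ and $N$ are arbitrary):

```python
import numpy as np
from itertools import product

a, N, m = 1.0, 8, 0.0            # lattice spacing, box size L = N*a, mass
L = N * a
k_max = 2 * np.pi / a            # ultraviolet cutoff
n = np.arange(-N, N + 1)         # infrared cutoff: k_i = 2*pi*n_i/L (eq. 4.22)

E = 0.0
for nx, ny, nz in product(n, repeat=3):
    k = (2 * np.pi / L) * np.sqrt(nx**2 + ny**2 + nz**2)
    if k <= k_max:
        E += 0.5 * np.sqrt(k**2 + m**2)   # E_k = omega_k/2 (eq. 4.18)

print(E)   # finite, but grows without bound as a -> 0 or N -> infinity
```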

Chapter 5

Regularisation of the field in real space

Regularisation of a field in real space is done by the same methods as in momentum space. Both an ultraviolet and infrared cutoff are introduced in order to discretise the Hamiltonian of the system. As long as the field Hamiltonian is of the form of equation 3.22, its reduced density matrix will be of the form of equation 3.25. Then the entropy will be given by equation 3.40, and we will have achieved our goal. This chapter will be dedicated to discretising the field Hamiltonian in real space, and showing its equivalence to 3.22.

5.1 Regularisation in (1 + 1)-spacetime

We will start with a simplified model in (1+1)-spacetime. Remember that the Hamiltonian of a real free massless scalar field is:

\[ H = \frac{1}{2}\int d^3x\; \Big[\dot\phi^2(\mathbf{x},t) + |\nabla\phi(\mathbf{x},t)|^2\Big] \tag{5.1} \]

If there exists only one spatial dimension this reduces to:

\[ H = \frac{1}{2}\int dx\; \bigg[\dot\phi^2(x,t) + \Big(\frac{d}{dx}\phi(x,t)\Big)^2\bigg] \tag{5.2} \]

As the ultraviolet cutoff, we introduce a lattice with minimum spacing a. We label each lattice site by a natural number i ∈ N. Now we take a large integer N and define a length L = (N + 1)a. This length defines the infrared cutoff: we demand φ(x, t) be zero for x > L. Applying these regulators to 5.2 means discretising φ(x, t) so that it only takes values at the lattice sites, and only for x ≤ L. The first term in the Hamiltonian simply becomes:

\[ \dot\phi^2(x) \;\to\; \dot\phi^2(ia) \equiv \dot\phi_i^2 \equiv \pi_i^2 \tag{5.3} \]

Where the continuous coordinate $x$ is replaced by a lattice site index $i$. The notation $\phi_i$ implicitly uses the fact that the spatial separation between sites $i$ and $i+1$ is $a$. Keep in mind that discretising the spatial derivative is more complicated. Remember the definition of the derivative:

\[ \frac{d}{dx}\phi(x) \equiv \lim_{\epsilon\to 0}\frac{\phi(x+\epsilon) - \phi(x)}{(x+\epsilon) - x} = \lim_{\epsilon\to 0}\frac{\phi(x+\epsilon) - \phi(x)}{\epsilon} \tag{5.4} \]

If $\phi(x)$ is replaced by $\phi(ia) \equiv \phi_i$, then $\phi(x + a)$ is replaced by $\phi((i+1)a) \equiv \phi_{i+1}$. Thus the derivative is replaced by:

\[ \frac{d}{dx}\phi(x) \;\to\; \frac{\phi_{i+1} - \phi_i}{(i+1)a - ia} = \frac{\phi_{i+1} - \phi_i}{a} \tag{5.5} \]

Finally, the integral over $x$ becomes a sum over $i$ with spacing $a$, that is: $dx \to a$. The discrete Hamiltonian is therefore:

\[ H = \frac{a}{2}\sum_{i=1}^{N}\bigg[\pi_i^2 + \Big(\frac{\phi_{i+1}-\phi_i}{a}\Big)^2\bigg] = \frac{a}{2}\sum_{i=1}^{N}\Big[\pi_i^2 + \frac{1}{a^2}\big(\phi_{i+1}^2 + \phi_i^2 - 2\phi_{i+1}\phi_i\big)\Big] \tag{5.6} \]

5.2 Regularisation in (2 + 1)-spacetime

In two dimensions equation 5.1 reduces to:

\[ H = \frac{1}{2}\int dx\,dy\; \Big[\pi^2(x,y,t) + |\nabla\phi(x,y,t)|^2\Big] = \frac{1}{2}\int dx\,dy\; \bigg[\pi^2(x,y,t) + \Big|\Big(\hat{\mathbf{x}}\frac{d}{dx} + \hat{\mathbf{y}}\frac{d}{dy}\Big)\phi(x,y,t)\Big|^2\bigg] \tag{5.7} \]

Where $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ are the unit vectors in the $x$- and $y$-directions respectively. To discretise, we introduce the same lattice as before. This time, however, it stretches in both directions, and thus forms a square of side $L$. The lattice sites are labelled by a coordinate $(i, j)$, where $i$ represents the $x$-direction as before, and $j$ the $y$-direction comparably. Thus, the main substitution is:

\[ \phi(x, y) \;\to\; \phi(ia, ja) \equiv \phi_{ij} \tag{5.8} \]

Dealing with the time derivative is again simple:

\[ \pi^2(x, y) \;\to\; \pi^2_{ij} \tag{5.9} \]

As for the gradient term:

\[ \hat{\mathbf{x}}\frac{d}{dx}\phi(x, y) \equiv \hat{\mathbf{x}}\lim_{\epsilon\to 0}\frac{\phi(x+\epsilon, y) - \phi(x, y)}{\epsilon} \;\to\; \hat{\mathbf{x}}\;\frac{\phi_{i+1,j} - \phi_{i,j}}{a} \tag{5.10} \]

And similarly for the $y$-derivative. $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ are orthogonal, so the square of the gradient reduces to:

\[ |\nabla\phi(x, y)|^2 \;\to\; \Big(\frac{\phi_{i+1,j}-\phi_{i,j}}{a}\Big)^2 + \Big(\frac{\phi_{i,j+1}-\phi_{i,j}}{a}\Big)^2 = \frac{1}{a^2}\big(\phi_{i+1,j}^2 + \phi_{i,j+1}^2 + 2\phi_{i,j}^2 - 2\phi_{i+1,j}\phi_{i,j} - 2\phi_{i,j+1}\phi_{i,j}\big) \tag{5.11} \]

Finally, our 2D Hamiltonian is:

\[ H = \frac{a}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\Big[\pi_{ij}^2 + \frac{1}{a^2}\big(\phi_{i+1,j}^2 + \phi_{i,j+1}^2 + 2\phi_{i,j}^2 - 2\phi_{i+1,j}\phi_{i,j} - 2\phi_{i,j+1}\phi_{i,j}\big)\Big] \tag{5.12} \]

5.3 Regularisation in (3 + 1)-spacetime

Lastly, in the (3+1)-spacetime we're familiar with, the continuous Hamiltonian is of the form:

\[ H = \frac{1}{2}\int dx\,dy\,dz\; \Big[\pi^2(x,y,z,t) + |\nabla\phi(x,y,z,t)|^2\Big] = \frac{1}{2}\int dx\,dy\,dz\; \bigg[\pi^2(x,y,z,t) + \Big|\Big(\hat{\mathbf{x}}\frac{d}{dx} + \hat{\mathbf{y}}\frac{d}{dy} + \hat{\mathbf{z}}\frac{d}{dz}\Big)\phi(x,y,z,t)\Big|^2\bigg] \tag{5.13} \]

Continuing our original approach, we introduce a third direction in the lattice. We then have a cube of side $L$, with spacing $a$ in the $x$-, $y$- and $z$-directions. A lattice site is now labelled by a coordinate $(i, j, k)$. As such, we substitute:

\[ \phi(x, y, z) \;\to\; \phi(ia, ja, ka) \equiv \phi_{ijk} \tag{5.14} \]

The first term is again obvious:

\[ \pi^2(x, y, z) \;\to\; \pi^2_{ijk} \tag{5.15} \]

Since $\hat{\mathbf{x}}$, $\hat{\mathbf{y}}$ and $\hat{\mathbf{z}}$ are mutually orthogonal, the gradient is treated the same as before:

\[ |\nabla\phi(x, y, z)|^2 \;\to\; \Big(\frac{\phi_{i+1,j,k}-\phi_{i,j,k}}{a}\Big)^2 + \Big(\frac{\phi_{i,j+1,k}-\phi_{i,j,k}}{a}\Big)^2 + \Big(\frac{\phi_{i,j,k+1}-\phi_{i,j,k}}{a}\Big)^2 \tag{5.16} \]

Writing out the brackets and combining terms yields the following Hamiltonian:

\[ H = \frac{a}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{k=1}^{N}\Big[\pi_{ijk}^2 + \frac{1}{a^2}\big(\phi_{i+1,j,k}^2 + \phi_{i,j+1,k}^2 + \phi_{i,j,k+1}^2 + 3\phi_{i,j,k}^2 - 2\phi_{i+1,j,k}\phi_{i,j,k} - 2\phi_{i,j+1,k}\phi_{i,j,k} - 2\phi_{i,j,k+1}\phi_{i,j,k}\big)\Big] \tag{5.17} \]

Chapter 6

Entropy of a scalar field

Having reduced all our field Hamiltonians to the form required by the methods of chapter 3, what remains to be done is to calculate the entropy of our configurations. This chapter focuses on this task and its many unforeseen difficulties.

6.1 The (1 + 1)-spacetime K matrix

Remember the 1D field Hamiltonian from equation 5.6:

\[ H = \frac{a}{2}\sum_{i=1}^{N}\Big[\dot\phi_i^2 + \frac{1}{a^2}\big(\phi_{i+1}^2 + \phi_i^2 - 2\phi_{i+1}\phi_i\big)\Big] \tag{6.1} \]

The form required by equation 3.22 is:

\[ H = \frac{1}{2}\sum_i^N \pi_i^2 + \frac{1}{2}\sum_i^N\sum_l^N \phi_i K_{\{i,l\}} \phi_l \tag{6.2} \]

Where $K$ is a real, symmetric matrix with positive eigenvalues. Now we imagine an inaccessible region of space (a line) covering the first $n$ sites, so that we have no knowledge about those. Our goal is to compute the entanglement entropy, using the methods derived in chapter 3. First then, we need to find $K$ explicitly. From equation 6.1 we see that only nearest-neighbour coupling is accounted for. Mathematically, the off-diagonal elements satisfy:

\[ K_{\{i,l\}} = 0 \;\leftrightarrow\; i \neq l \pm 1 \tag{6.3} \]

Furthermore, $K$ has to be symmetric, so $K_{\{i,l\}} = K_{\{l,i\}} = -1$ for all $i = l \pm 1$ (in units of $1/a^2$). The diagonal terms are all 2, except for the ones at the edges, which are 1. As such, we find that:

\[ K = \frac{1}{a^2}\begin{pmatrix}
1 & -1 & 0 & \cdots & 0 & 0 & 0 \\
-1 & 2 & -1 & \cdots & 0 & 0 & 0 \\
0 & -1 & 2 & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & \ddots & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 2 & -1 & 0 \\
0 & 0 & 0 & \cdots & -1 & 2 & -1 \\
0 & 0 & 0 & \cdots & 0 & -1 & 1
\end{pmatrix} \tag{6.4} \]
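A quick numerical check is instructive here; a numpy sketch (an added illustration) showing that the constant vector is a zero mode of this $K$ - the consequence of the missing mass term that section 6.4 discusses:

```python
import numpy as np

def K_matrix(m: int, a: float = 1.0) -> np.ndarray:
    """Nearest-neighbour coupling matrix of eq. 6.4 for m lattice sites."""
    K = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    K[0, 0] = K[-1, -1] = 1          # edge diagonals are 1
    return K / a**2

K4 = K_matrix(4)
print(np.linalg.eigvalsh(K4))        # smallest eigenvalue is 0
print(K4 @ np.ones(4))               # the constant vector is a zero mode
```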

From now on, we will write $K_m$ for the $(m \times m)$ matrix $K$.

6.2 Eigenvalues of $K_m$ for general $m$

To find the eigenvalues of this matrix, we first search for the eigenvalues $\lambda$ of $K_m - I_m$: the eigenvalues of $K_m$ are then simply $\lambda + 1$. Since $K$ is tridiagonal, its determinant can be computed from the three-term recurrence relation (El-Mikkawy 2004). That is, the determinant of $J_m = (K_m - I_m) - I_m\lambda$ - the characteristic polynomial of $K_m - I_m$ - is given by $P_m = -\lambda\, Q_{m-1} - Q_{m-2}$, where $Q_m = (1 - \lambda)Q_{m-1} - Q_{m-2}$ is the characteristic polynomial of the same $K_m - I_m$, only with its $mm$-th entry of zero replaced by a one. The relation is terminated by the values $Q_1 = -\lambda$ and $Q_2 = \lambda^2 - \lambda - 1$. An illustrative example is the $(4 \times 4)$ case:

\[ P_4 = |J_4| = \begin{vmatrix} -\lambda & -1 & 0 & 0 \\ -1 & 1-\lambda & -1 & 0 \\ 0 & -1 & 1-\lambda & -1 \\ 0 & 0 & -1 & -\lambda \end{vmatrix} = -\lambda\, Q_3 - Q_2 \tag{6.5} \]

Furthermore:

\[ Q_3 = \begin{vmatrix} -\lambda & -1 & 0 \\ -1 & 1-\lambda & -1 \\ 0 & -1 & 1-\lambda \end{vmatrix} = (1-\lambda)\,Q_2 - Q_1 \tag{6.6} \]

Which illustrates the reason for defining $Q_m$: $P_m$ cannot be written in terms of polynomials $P_{<m}$, because the matrix $K_m - I_m$ is not formed from $K_{m-1} - I_{m-1}$ by simply adding an $m$-th row and column that follows the tridiagonal pattern, since this would leave $K_m - I_m$ with a zero in a diagonal cell not at the edge. Instead, it is formed from the matrix of which $Q_{m-1}$ is the characteristic polynomial. Hence the recurrence relation gives $P$ in terms of $Q$.

Finding the eigenvalues of $K_m - I_m$ has now been reduced to finding the roots of $P_m$. Ideally, we want to find a polynomial of $\lambda$ that gives the eigenvalues. Indeed, this problem has an analytical solution, since symmetric tridiagonal matrices can be analytically inverted (Hu and O'Connell 1996), but the involved methods are beyond the scope of this thesis.
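For small $m$ the recurrence itself is easy to evaluate symbolically; a sympy sketch (an added illustration, valid for $m \ge 3$):

```python
import sympy as sp

lam = sp.symbols('lambda')

def P(m: int):
    """Characteristic polynomial of K_m - I_m (with a = 1), built from the
    recurrence P_m = -lambda*Q_{m-1} - Q_{m-2}; valid for m >= 3."""
    Q = [None, -lam, lam**2 - lam - 1]           # Q_1 and Q_2
    for k in range(3, m):
        Q.append(sp.expand((1 - lam) * Q[k - 1] - Q[k - 2]))
    return sp.expand(-lam * Q[m - 1] - Q[m - 2])

# Roots are the eigenvalues of K_m - I_m; those of K_m are lambda + 1.
print(sp.solve(P(4), lam))    # roots: -1, 1, 1 - sqrt(2), 1 + sqrt(2)
```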

6.3 Eigenvalues of $K_m$ for specific small $m$

Instead it may be possible to calculate the eigenvalues of $K_m$ for certain small values of $m$. As a reminder, the expression for the entropy was given by equation 3.40:

\[ S = \sum_i S(\xi_i) \tag{6.7} \]

Equation 3.21 gives the simplest form of $S(\xi)$:

\[ S(\xi) = -\ln(1-\xi) - \frac{\xi}{1-\xi}\ln(\xi) \tag{6.8} \]

We derived that $\xi_i = \beta'_i/\big[\,1 + (1 - \beta_i'^2)^{1/2}\,\big]$, where $\beta'_i$ is the $i$-th eigenvalue of the matrix:

\[ \beta' = \gamma_D^{-1/2}\, V\beta V^T\, \gamma_D^{-1/2} \tag{6.9} \]

Where $\beta = \frac{1}{2}B^TA^{-1}B$ and $\gamma_D$ is the diagonal form of $\gamma = C - \beta$. $V$ is the orthogonal matrix that diagonalises $\gamma$ according to $\gamma_D = V\gamma V^T$. Note that this means that $V^T$ has the eigenvectors of $\gamma$ as its columns. The expressions for $A$, $B$, and $C$ follow from equation 3.26:

\[ \Omega_m = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix} \tag{6.10} \]

Here $\Omega_m$ is the square root of $K_m$, defined as $\Omega = U^TK_D^{1/2}U$, where $K_D$ is the diagonal form of $K$, and $U$ is defined by $K_D = UKU^T$. $U^T$ has the eigenvectors of $K$ as its columns. $K_1$ is the trivial case: with only one lattice site/harmonic oscillator, there can be no interactions, and therefore no entropy.

For $K_2$ we have two oscillators. We may take our inaccessible region to envelop either (if we choose neither or both, the entropy will be zero). The entropy contains one term, since $\beta'$ is a $(1 \times 1)$ matrix, and thus has one eigenvalue. Explicitly, we have:

\[ K = \frac{1}{a^2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \tag{6.11} \]

Its diagonal form is:

\[ K_D = \frac{1}{a^2}\begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} \tag{6.12} \]

And we have:

\[ U^T = \begin{pmatrix} -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{pmatrix} \tag{6.13} \]

Which yields:

\[ \Omega = \frac{1}{a}\begin{pmatrix} \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{pmatrix} \tag{6.14} \]

We trace over one of the oscillators/lattice sites, so $A = \big(\tfrac{1}{\sqrt{2}\,a}\big)$, $B = \big(-\tfrac{1}{\sqrt{2}\,a}\big)$, and $C = \big(\tfrac{1}{\sqrt{2}\,a}\big)$. It follows that $\beta = \big(\tfrac{1}{2\sqrt{2}\,a}\big)$ and $\gamma = \big(\tfrac{1}{2\sqrt{2}\,a}\big)$. The normalised $V^T$ is then simply $I_1$. Note that normalisation is required to satisfy the orthogonality constraint on $V$. Plugging this into the equation for $\beta'$ gives:

\[ \beta' = \Big(\tfrac{1}{2\sqrt{2}\,a}\Big)^{-1/2}\cdot I_1\cdot \Big(\tfrac{1}{2\sqrt{2}\,a}\Big)\cdot I_1\cdot \Big(\tfrac{1}{2\sqrt{2}\,a}\Big)^{-1/2} = I_1 \tag{6.15} \]

Its eigenvalue is 1, so $\xi = 1/\big[\,1 + (1 - 1)^{1/2}\,\big] = 1$. Plugging this into 6.8:

\[ \lim_{\xi\to 1}\Big[-\ln(1-\xi) - \frac{\xi}{1-\xi}\ln(\xi)\Big] = \infty \tag{6.16} \]

This result is extremely curious, and requires more exploration. We will attempt solutions of larger $K$ matrices using Mathematica. Appendix A.6 contains an example of the code used, and the observed numerical behaviour of the entropy of $K_4$. We will conclude this section by interpreting the cases for which Mathematica produced analytical results: $K_2$ and, to a degree, $K_3$. These are given by the following matrices:

\[ K_2 = \frac{1}{a^2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}, \qquad K_3 = \frac{1}{a^2}\begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix} \tag{6.17} \]

Of course, we expect Mathematica to give the same result for $K_2$ as we found above, namely that the entropy goes to $\infty$. Interestingly, this is not only the case for $K_2$, but also for $K_3$. It does not matter whether we enclose one or two lattice sites in the imaginary region. The mathematical reason for this is that $\beta'$ has only one nonzero eigenvalue in all three cases, and that eigenvalue is one. The next section is dedicated to this phenomenon.

6.4 The need for a mass term

At this point it may be prudent to step back to the Hamiltonian of two entangled SHO, given by equation 3.1:

\[ H = \tfrac{1}{2}\big[\,p_1^2 + p_2^2 + k_0(x_1^2 + x_2^2) + k_1(x_1 - x_2)^2\,\big] \tag{6.18} \]

Interpreting the last two terms in this Hamiltonian as the coupling matrix $K$, we write in matrix form:

\[ K = \begin{pmatrix} k_0 + k_1 & -k_1 \\ -k_1 & k_0 + k_1 \end{pmatrix} \tag{6.19} \]

Our original plan was to take a massless scalar field, and indeed comparing our earlier expression for $K_2$ with the above $K$ leads us to conclude that $K_2$ lacks the terms associated with mass. The same can be said for $K_3$, since the nearest-neighbour-coupling Hamiltonian for three oscillators has the form:

\[ H = \tfrac{1}{2}\big[\,p_1^2 + p_2^2 + p_3^2 + k_0(x_1^2 + x_2^2 + x_3^2) + k_1(x_1 - x_2)^2 + k_1(x_2 - x_3)^2\,\big] \tag{6.20} \]

The coupling matrix is then:

\[ K = \begin{pmatrix} k_0 + k_1 & -k_1 & 0 \\ -k_1 & k_0 + 2k_1 & -k_1 \\ 0 & -k_1 & k_0 + k_1 \end{pmatrix} \tag{6.21} \]

Yet again, comparing our earlier expression for $K_3$ leads to the conclusion that it lacks mass terms. This would be no problem, were it not for the fact that the absence of mass seems to be the reason for the divergence of the entropy. To show this, consider matrices of the form:

\[ K = \begin{pmatrix}
m + k & -k & 0 & \cdots & 0 & 0 & 0 \\
-k & m + 2k & -k & \cdots & 0 & 0 & 0 \\
0 & -k & m + 2k & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & \ddots & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & m + 2k & -k & 0 \\
0 & 0 & 0 & \cdots & -k & m + 2k & -k \\
0 & 0 & 0 & \cdots & 0 & -k & m + k
\end{pmatrix} \tag{6.22} \]

The eigenvalues are generally non-trivial, but one of the eigenvalues/modes is $m$. Physically, this is the uniform mode in which all oscillators move together, so that the couplings drop out and the mode behaves as that of an uncoupled oscillator of mass $m$. Since our $K_2$ and $K_3$ contain no mass terms, this eigenvalue will be zero, which results in the nonzero eigenvalue of $\beta'$ being one.
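This is easy to check: the uniform vector $(1, 1, \ldots, 1)^T$ is an eigenvector of equation 6.22 with eigenvalue $m$, since all coupling terms cancel along it. A small numerical sketch (the helper `tridiagonal_K` is our own):

```python
import numpy as np

def tridiagonal_K(N, m, k):
    """The N x N coupling matrix of equation 6.22."""
    K = (np.diag(np.full(N, m + 2.0 * k))
         + np.diag(np.full(N - 1, -k), 1)
         + np.diag(np.full(N - 1, -k), -1))
    K[0, 0] = K[-1, -1] = m + k       # the end points couple to one neighbour only
    return K

K = tridiagonal_K(6, m=1.0, k=0.7)
ones = np.ones(6)
print(np.allclose(K @ ones, 1.0 * ones))   # True: eigenvalue m = 1 for the uniform mode
```

Indeed, one can analytically calculate the form of $\beta'$ for the matrix: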

\[
K = \begin{pmatrix} m + k & -k \\ -k & m + k \end{pmatrix} \tag{6.23}
\]

after tracing over one oscillator. The result is:

\[
\beta' = \frac{\left(\sqrt{m + 2k} - \sqrt{m}\right)^2}{2\left(m + k + 3\sqrt{m}\sqrt{m + 2k}\right)} \tag{6.24}
\]
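Equation 6.24 can be cross-checked symbolically. For the matrix of equation 6.23 the eigenvalues are $m$ and $m + 2k$, with fixed eigenvectors $(1, \pm 1)/\sqrt{2}$, so the blocks of $\Omega$ follow directly. A sketch using SymPy (our own check, separate from the Mathematica code of Appendix A.6):

```python
import sympy as sp

m, k = sp.symbols('m k', positive=True)
s, t = sp.sqrt(m), sp.sqrt(m + 2 * k)       # square roots of the eigenvalues of K
A = (s + t) / 2                              # Omega_11 (traced block)
B = (s - t) / 2                              # Omega_12
C = (s + t) / 2                              # Omega_22 (kept block)
beta = B**2 / (2 * A)                        # (1/2) B^T A^{-1} B, all scalars here
gamma = C - beta
beta_prime = sp.cancel(beta / gamma)         # 1x1, so beta' = beta / gamma
target = (t - s)**2 / (2 * (m + k + 3 * s * t))   # equation 6.24
print(sp.simplify(beta_prime - target))      # 0
print(sp.simplify(beta_prime.subs(m, 0)))    # 1, so the entropy diverges (eq. 6.16)
```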

Setting $m = 0$ reduces this expression to $\beta' = (1)$, hence the entropy will diverge. Let us now include a mass term and see what changes. Starting from equation 6.23 and tracing over one oscillator, the entropy is given by the expression in figure 6.1.


Figure 6.1: Entropy in one-dimensional discrete space of two coupled oscillators with mass $m$ and coupling $k = r$. Here the parameter $g$ represents the square of the lattice spacing, $a^2$.

Setting both $m = 1$ and $r = 1$, we obtain the simplest possible analytical expression:

\[
S_{2,1} = \frac{1}{2}\left(\sqrt{3} - 2\right)\left[
\frac{\ln\!\left(-2\sqrt{90 + 52\sqrt{3}} + 8\sqrt{3} + 13\right)}{\sqrt{6 + 4\sqrt{3}} + 2\sqrt{3}}
+ \frac{4\tanh^{-1}\!\left(\dfrac{\sqrt{3} - 2}{4\sqrt{6 + 4\sqrt{3}} + 7\sqrt{3} + 2}\right)}{2\sqrt{6 + 4\sqrt{3}} + 3\sqrt{3} + 2}
\right] \tag{6.25}
\]
Here the first subscript denotes the dimensionality of $K$, and the second the number of sites traced over. Note that this result seems to be independent of the lattice spacing $a$. Numerically, it evaluates to $S_{2,1} \approx 0.075943$. Thus, including a mass term seems to solve the problem of divergent entropies. The general expression for the entropy of the mass-inclusive $K_3$ is far too large to show here; if necessary, one may obtain it by running the code given in Appendix A.6 on the appropriate hardware. We will discuss the specific case where both the mass and coupling terms are one. The associated coupling matrix is:
\[
K = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 3 & -1 \\ 0 & -1 & 2 \end{pmatrix} \tag{6.26}
\]

Tracing over the first two oscillator sites gives entropy:

\[
S_{3,2} = \frac{1}{12}\left(17\sqrt{2} - 27\right)\left[
\frac{\ln\!\left(\dfrac{27 - 17\sqrt{2}}{12\sqrt{38 + 27\sqrt{2}} + 55\sqrt{2} + 27}\right)}{\sqrt{38 + 27\sqrt{2}} + 6\sqrt{2}}
+ \frac{12\,\ln\!\left(\dfrac{12\left(\sqrt{38 + 27\sqrt{2}} + 6\sqrt{2}\right)}{12\sqrt{38 + 27\sqrt{2}} + 55\sqrt{2} + 27}\right)}{12\sqrt{38 + 27\sqrt{2}} + 55\sqrt{2} + 27}
\right] \tag{6.27}
\]
Numerically, $S_{3,2} \approx 0.0612146$. An analytical expression for $S_{3,1}$ is too complex to compute. However, its numerical value is also $0.0612146$, which is an interesting symmetry. Already in the introduction of chapter 3 the prediction was made that the entropy should be independent of whether the trace is over the degrees of freedom of the oscillators located inside or outside the chosen region. The oscillators all have the same masses and coupling constants, thus tracing over the degrees of freedom of the first or last $n$ oscillators is equivalent. $S_{3,1}$ can then be interpreted as the entropy arising from covering the last of the three sites (inside), while $S_{3,2}$ can be interpreted as the entropy arising from covering the first two of the three sites (outside). These entropies should be equal, which is what we found here.
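Both values, and the inside/outside symmetry itself, are straightforward to confirm numerically by reusing the `entanglement_entropy` sketch from section 6.3 (the expected outputs in the comments are the values quoted above):

```python
import numpy as np
# reuses entanglement_entropy() from the sketch in section 6.3

K2m = np.array([[2., -1.], [-1., 2.]])                          # equation 6.23, m = k = 1
K3m = np.array([[2., -1., 0.], [-1., 3., -1.], [0., -1., 2.]])  # equation 6.26
print(entanglement_entropy(K2m, 1))   # ~0.075943  (S_{2,1})
print(entanglement_entropy(K3m, 1))   # ~0.0612146 (S_{3,1}: trace over one site)
print(entanglement_entropy(K3m, 2))   # ~0.0612146 (S_{3,2}: trace over two sites)
```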

6.5 The (2 + 1)- and (3 + 1)-spacetime configurations

In light of the complexities encountered in calculating the entropy of relatively simple configurations in (1 + 1)-spacetime, it seems that the generalisation to higher-dimensional spaces is beyond the scope of this thesis. In order to illustrate this fact, the general form of a coupling matrix K for two-space will be derived in this section.

The 2D field Hamiltonian is given by equation 5.12:

\[
H = \frac{a}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left[ \pi_{ij}^2 + \frac{1}{a^2}\left( \phi_{i+1,j}^2 + \phi_{i,j+1}^2 + 2\phi_{i,j}^2 - 2\phi_{i+1,j}\phi_{i,j} - 2\phi_{i,j+1}\phi_{i,j} \right) \right] \tag{6.28}
\]

The form required by equation 3.22 is:

\[
H = \frac{1}{2} \sum_{ij}^{N} \pi_{ij}^2 + \frac{1}{2} \sum_{ij}^{N} \sum_{lm}^{N} \phi_{ij} K_{\{ij,lm\}} \phi_{lm} \tag{6.29}
\]

Where $K$ is a real, symmetric matrix with positive eigenvalues. The vector $\phi$ in equation 6.29 lives in the tensor product of two spaces of dimension $N + 1 \equiv M$; the extra site appears because equation 6.28 contains index+1 terms:

\[
\phi = \begin{pmatrix} \phi_{11} \\ \vdots \\ \phi_{1M} \\ \phi_{21} \\ \vdots \\ \phi_{2M} \\ \vdots \\ \phi_{M1} \\ \vdots \\ \phi_{MM} \end{pmatrix} \tag{6.30}
\]

Indeed, since $K$ is written with four indices, we will need to split it up into $M^2$ square $(M \times M)$ submatrices.


\[
a^2 K = \begin{pmatrix}
K_{11,11} & \cdots & K_{11,1M} & \cdots & K_{11,M1} & \cdots & K_{11,MM} \\
\vdots & \ddots & \vdots & \cdots & \vdots & \ddots & \vdots \\
K_{1M,11} & \cdots & K_{1M,1M} & \cdots & K_{1M,M1} & \cdots & K_{1M,MM} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
K_{M1,11} & \cdots & K_{M1,1M} & \cdots & K_{M1,M1} & \cdots & K_{M1,MM} \\
\vdots & \ddots & \vdots & \cdots & \vdots & \ddots & \vdots \\
K_{MM,11} & \cdots & K_{MM,1M} & \cdots & K_{MM,M1} & \cdots & K_{MM,MM}
\end{pmatrix} \tag{6.31}
\]

Thus, even the case where we consider two oscillators ($M = 2$) results in a $(4 \times 4)$ matrix, which we already could not solve analytically with the available software and hardware. Furthermore, upon closer inspection we will see that $K$ is not necessarily of a form that allows for general analytical solutions (e.g. tridiagonal). We will explicitly find $K$ for the $M = 3$ case to show this. $\phi$ is now a 9-vector and $K$ a $(9 \times 9)$ matrix. The Hamiltonian for this system is:

\[
\begin{aligned}
H = \frac{a}{2}\Big[\, &\pi_{11}^2 + \pi_{12}^2 + \pi_{21}^2 + \pi_{22}^2 + \frac{1}{a^2}\big( \phi_{21}^2 + \phi_{12}^2 + 2\phi_{11}^2 - 2\phi_{21}\phi_{11} - 2\phi_{12}\phi_{11} \\
&+ \phi_{22}^2 + \phi_{13}^2 + 2\phi_{12}^2 - 2\phi_{22}\phi_{12} - 2\phi_{13}\phi_{12} \\
&+ \phi_{31}^2 + \phi_{22}^2 + 2\phi_{21}^2 - 2\phi_{31}\phi_{21} - 2\phi_{22}\phi_{21} \\
&+ \phi_{32}^2 + \phi_{23}^2 + 2\phi_{22}^2 - 2\phi_{32}\phi_{22} - 2\phi_{23}\phi_{22} \big) \Big]
\end{aligned} \tag{6.32}
\]

Thus we find the following for K:

\[
K = \begin{pmatrix}
2 & -1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
-1 & 3 & -1 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 3 & -1 & 0 & -1 & 0 & 0 \\
0 & -1 & 0 & -1 & 4 & -1 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix} \tag{6.33}
\]

Which is not of any simple form known to the author. Expressions for $K$ in three-space are more complicated still. In general, a matrix describing a system of dimensionality $M$ in one-space will be of dimensionality $M^n$ in $n$-space.
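Although an analytical treatment is out of reach, constructing $K$ itself is mechanical. The sketch below (the function `coupling_matrix_2d` is our own illustration) accumulates the couplings of equation 6.28 into $a^2 K$ and, for $N = 2$, reproduces the $(9 \times 9)$ matrix of equation 6.33:

```python
import numpy as np

def coupling_matrix_2d(N):
    """Build a^2 K for the 2D lattice Hamiltonian of equation 6.28;
    the field lives on M x M sites with M = N + 1."""
    M = N + 1
    K = np.zeros((M * M, M * M))
    idx = lambda i, j: i * M + j                # row-major flattening of (i, j)
    for i in range(N):
        for j in range(N):
            # phi_{i+1,j}^2 + phi_{i,j+1}^2 + 2 phi_{i,j}^2
            #   - 2 phi_{i+1,j} phi_{i,j} - 2 phi_{i,j+1} phi_{i,j}
            K[idx(i + 1, j), idx(i + 1, j)] += 1.0
            K[idx(i, j + 1), idx(i, j + 1)] += 1.0
            K[idx(i, j), idx(i, j)] += 2.0
            K[idx(i + 1, j), idx(i, j)] -= 1.0  # cross terms, split symmetrically
            K[idx(i, j), idx(i + 1, j)] -= 1.0
            K[idx(i, j + 1), idx(i, j)] -= 1.0
            K[idx(i, j), idx(i, j + 1)] -= 1.0
    return K

print(coupling_matrix_2d(2).astype(int))        # the (9 x 9) matrix of equation 6.33
```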


Chapter 7

Discussion

Further research might focus on analytically solving for the eigenvalues of a general coupling matrix in (1 + 1)-spacetime. Finding these eigenvalues would allow for an exact calculation of the one-dimensional entropy in terms of the number of oscillators in the system and the number of oscillators in the inaccessible region. This, in turn, might lead to further insight into which parameters the entropy depends on in one-space.

Furthermore, it may be possible to find exact solutions in higher-dimensional spaces as well. Such a course might warrant investigation if it allows one to deduce how nearest-neighbour interaction terms contribute to the entropy.

Numerically, one might use a more optimised program or more powerful hardware to evaluate the entropy of larger systems, should analytical solutions not exist in higher-dimensional spaces.

More specifically, a further exploration of the (as yet unexplained) low entropy $S_{4,2}$ found in the one-space four-oscillator case, described in Appendix A.6, may be prudent. We bear in mind, though, that the numerical calculations performed may not be completely accurate.

Most importantly, however, a deeper look into the divergence of the entropy for massless oscillators seems necessary, as our results in this area directly contradict those found by Srednicki in his 1993 paper.


Chapter 8

Conclusions

We did not find the area-dependent entropy we have been looking for. In fact, the entropy in one-space seems to be independent of both the UV- and IR-cutoffs. This may be an artifact of (1 + 1)-spacetime, since the area separating the one-dimensional regions is a point with no well-defined size. However, earlier numerical results for similar situations indicate that one-dimensional configurations are, in fact, the only cases where a dependency on both the UV- and IR-cutoffs is observed (Srednicki 1993, pp. 5-6).

Interestingly, we did reproduce the independence of the entropy with regard to tracing over the degrees of freedom of oscillators located on the inside or the outside of a chosen region, which is one of two reasons to expect the arising entropy to be area-dependent. The other, more physical, reason is that it has been shown that the entropy of scalar fields in the ground state is generated by the entangled degrees of freedom located just inside and just outside of the boundary between the chosen regions. These degrees of freedom scale linearly with the area of the boundary, hence the resulting area law. Power-law corrections to this result exist for excited states (Das, Shankaranarayanan, and Sur 2008).

In summary, we have not been able to find the desired results. However, we have uncovered certain features of relatively small systems which give an indication of being on the right track. Future research will have to demonstrate whether this notion is correct.


and to Theo Nieuwenhuizen for hosting a surprisingly well-planned ITFA conference. Bas Barendse, David Gardenier, Janna Goldstein, Monica van Santbrink, and René van Westen have my gratitude for putting up with me during my weekly API visits. Finally, my stomach would like to thank the staff at API for keeping it filled with all kinds of delicious cake.
