
University of Amsterdam

MSc Physics

Theoretical Physics

Master Thesis

Entanglement in the vacuum and the firewall

paradox

by

Joris Kattemölle

10624821

March 2016

60 ECTS

Research carried out between September 2014 and March 2016

Supervisor:

Dr. Ben Freivogel

Examiner:

prof. dr. Kareljan Schoutens


Abstract

After a succinct introduction to entanglement entropy, continuous variable quantum information and the class of Gaussian states, we show how this framework can be utilized to calculate the entanglement entropy of any set of modes of the vacuum. The entanglement entropy is presented as a functional which depends on the initial conditions of the modes only.

This functional is used to investigate the relation between localization and entropy in two concrete cases. Firstly, the entanglement entropy of a 1+d dimensional plane wave with a Gaussian envelope (i.e. a highly localized wavepacket) is calculated. We find an asymmetry between the longitudinal and transverse directions: ‘spaghetti-like’ wavepackets have relatively more localization entropy than ‘pancake-like’ wavepackets. Secondly, we obtain the mutual information between a 1+1 dimensional wavepacket in Rindler space, and its mirror mode on the other side of the Rindler horizon. Additionally, we discuss the mutual information of two 1+d dimensional Rindler wavepackets.

We follow with a brief introduction to black holes and the firewall paradox. In the formulation of this paradox, it is assumed that a near-horizon Hawking mode, which is a localized mode, can be purified by some other mode at the other side of the black hole horizon. We conclude by explicitly constructing such a pair of modes.


Contents

0 Notation
1 Introduction
2 Entanglement
  2.1 States and the von Neumann entropy
  2.2 The entropy of a simple harmonic oscillator in a thermal state
  2.3 Entanglement entropy and mutual information
  2.4 Page's theorem
  2.5 The Schmidt decomposition
3 Free field theory
  3.1 The Klein-Gordon field
  3.2 The entanglement entropy of a set of modes
4 Bogolyubov transformations
  4.1 Definition
  4.2 Properties
  4.3 Particles in the vacuum
5 Rindler space
  5.1 1+1 dimensions
    5.1.1 Geometry
    5.1.2 Field theory
  5.2 1+d dimensions
    5.2.1 Geometry
    5.2.2 Field theory
6 Continuous variable quantum information
  6.1 The Wigner function
  6.2 Gaussian states
  6.3 Thermal states revised
  6.4 Symplectic transformations and the entropy of general Gaussian states
  6.5 One and two mode Gaussian states
7 The entropy of a set of modes in the vacuum
8 The entropy of a wavepacket
  8.1 1+1 dimensions
  8.2 Asymptotic expansions
9 The entropy of two Rindler wavepackets
  9.1 Rindler plane wave modes revised
  9.2 1+1 dimensions
  9.3 1+d dimensions
10 Black holes and the firewall paradox
  10.1 Schwarzschild black holes
  10.2 Zooming in near the horizon
  10.3 Scalar field on the Schwarzschild background
  10.4 The Hawking effect
  10.5 The firewall paradox
11 The situation of the wavepacket
12 Conclusion
13 Discussion and outlook
A De firewall-paradox (in Dutch)
  A.1 De firewall-paradox
    A.1.1 De ingrediënten van de firewall-paradox
    A.1.2 De paradox
    A.1.3 Wat nu?


0 Notation

We use natural units, where c = ℏ = k_B = 1.

N Natural numbers.

N+ Positive natural numbers.

R Real numbers.

R+ Positive real numbers.

C Complex numbers.

a := b Define the new symbol a to represent b.

a =! b Demand a to be equal to b.

xµ A vector with one time component (x0) and d spatial components.

η_µν The Minkowski metric with 'space-like signature', η_µν = diag(−, +, +, +, . . .).

x A d-dimensional vector with only spatial components.

x Euclidean length of the vector x. Commonly also just some parameter x ∈ R.

A A d × d matrix.

H Hilbert space (possibly infinite-dimensional).

|H| Dimension of the Hilbert space.

|ψ⟩ Vector in H.

â_k Operator on H with the label k (usually momentum).

Mode | Mode operator | Mode name
f_p | â_p | Minkowski plane waves
g_k | b̂_k | Minkowski wavepacket
g_p^R, g_p^L | b̂_p^R, b̂_p^L | Rindler modes
h_p | ĉ_p^I, ĉ_p^II | Unruh modes (Rindler modes that annihilate the vacuum)
I_k^R, I_k^L | d̂_k^R, d̂_k^L | Rindler wavepacket

Some useful formulæ

The closed form of the geometric series x^0 + x^1 + . . . + x^N is

\sum_{n=0}^{N} x^n = \frac{1 - x^{N+1}}{1 - x}. \quad (x \neq 1) \qquad (0.1)

In the limit N → ∞ (for |x| < 1), this reads

\sum_{n=0}^{\infty} x^n = \frac{1}{1 - x}. \qquad (0.2)


The hyperbolic cosecant is related to the more familiar hyperbolic sine by

csch(z) = 1/ sinh(z). (0.3)

Similarly, the hyperbolic cotangent and the hyperbolic tangent are related by

coth(z) = 1/ tanh(z). (0.4)

The Gamma function, which can be defined by

\Gamma(z+1) = \int_0^\infty dt\; t^z e^{-t}, \quad (z \in \mathbb{C} \setminus \{-1, -2, \ldots\}) \qquad (0.5)

can be seen as an analytic extension of the factorial, since

\Gamma(n+1) = n! \quad (n \in \mathbb{N}). \qquad (0.6)

A useful property is the recurrence relation

Γ(z + 1) = zΓ(z). (0.7)

The Pochhammer symbol

(n)_m = n(n+1)(n+2)\cdots(n+m-1) \qquad (0.8)

can also be extended to non-integer m, and can be seen as shorthand notation for a ratio of Gamma functions,

(n)_m = \frac{\Gamma(n+m)}{\Gamma(n)}. \qquad (0.9)

An (n−1)-sphere with radius r is a sphere with an (n−1)-dimensional surface area,

S(n-1) = \frac{n\,\pi^{n/2}}{\Gamma(1+\frac{n}{2})}\, r^{n-1}, \qquad (0.10)

and it can be embedded in R^n. The error function is defined as

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x dt\; e^{-t^2}, \qquad (0.11)

and is related to the complementary error function by

\mathrm{erfc}(x) = 1 - \mathrm{erf}(x). \qquad (0.12)

The commonly encountered Gaussian integral

\int_{-\infty}^{\infty} dx\; e^{-a x^2 + b x} = \sqrt{\frac{\pi}{a}}\, e^{\frac{b^2}{4a}} \quad (\mathrm{Re}(a) > 0) \qquad (0.13)

can be generalized to n dimensions,

\int d^n x\; e^{-\frac{1}{2} x \cdot A \cdot x + j \cdot x} = \sqrt{\frac{(2\pi)^n}{\det A}}\, e^{\frac{1}{2} j \cdot A^{-1} \cdot j}. \quad (A \text{ real and symmetric}) \qquad (0.14)

By definition, the modified Bessel function of the second kind K_n(z) is a solution of the second-order differential equation

\left[ z^2 \frac{d^2}{dz^2} + z \frac{d}{dz} - (z^2 + n^2) \right] K_n(z) = 0. \qquad (0.15)
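As a quick numerical sanity check of two of these formulas, the sketch below verifies the sphere-area formula (0.10) against the familiar circle and sphere, and the factorial property (0.6), using only Python's standard library:

```python
import math

def sphere_area(n, r=1.0):
    """Surface area of the (n-1)-sphere of radius r embedded in R^n, eq. (0.10)."""
    return n * math.pi ** (n / 2) / math.gamma(1 + n / 2) * r ** (n - 1)

# n = 2: circumference of a circle, 2*pi*r.
assert abs(sphere_area(2, r=3.0) - 2 * math.pi * 3.0) < 1e-12
# n = 3: area of the ordinary sphere, 4*pi*r^2.
assert abs(sphere_area(3, r=2.0) - 4 * math.pi * 2.0 ** 2) < 1e-12

# The Gamma function extends the factorial, eq. (0.6): Gamma(n+1) = n!
assert all(abs(math.gamma(n + 1) - math.factorial(n)) < 1e-6 for n in range(10))
```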


1 Introduction

Ever since the seminal papers by Hawking [1,2], black hole evaporation has proven to be fertile ground for research in theoretical physics. In 2012, a new impulse was given to this area of study by the so-called firewall paradox [3].

In short, the paradox is as follows. From unitarity, it can be shown that there should be high-energy quanta near the black hole horizon, which form the firewall. This firewall would burn up an observer who jumps into the black hole. However, by the equivalence principle, there should be nothing special about the black hole horizon from a local point of view.

Thus there seems to be a conflict between the principles of unitarity and equivalence, which are fundamental to the theories of Quantum Fields and General Relativity respectively. It is likely that the paradox will only be solved by a unified theory, so a better understanding of the paradox could reveal important clues about quantum gravity.

There are both papers that claim there is no paradox [4–6] and papers that claim there is [7–9], so in any case, no consensus has been reached. The formulation of the paradox is full of approximations, simplifications and assumptions, both explicit and implicit, and this is indeed where much of the criticism is directed. This shows there is a need for a more explicit description.

In this thesis, we zoom in on an essential but formerly implicit aspect of the paradox, and make it explicit. The aspect in question is the (von Neumann) entanglement entropy of a Hawking wavepacket that is localized near the horizon of a black hole. Using continuous variable quantum information theory, we quantify this entanglement entropy and show how it is influenced by localization. This allows us to give a precise description of the Hawking wavepacket.

Although black holes and the firewall paradox formed the initial motivation for our research, there is no mention of them in most chapters. This is because the relation between localization and entanglement entropy is interesting in its own right. By making several chapters independent of the paradox, the results that are obtained in these chapters can easily be applied elsewhere. Black holes and the firewall paradox make their entrance as late as chapter 10. Hence the body of this thesis can be divided into two parts: the ‘no-black holes’ part and the ‘black holes’ part.

In the first part, we commence by introducing several concepts and techniques from quantum information theory and special relativity (chapters 2–6). So, although the current introductory chapter might seem rather short, the true introduction is quite extensive. These chapters are by no means intended to give a complete account of the matter and are heavily inclined towards whatever is useful later in this thesis. After the introductory chapters there is a short but essential chapter in which we show how all the techniques laid out in the preceding chapters can be combined (chapter 7). As a result of this combination, we are able to present the entanglement entropy of any orthogonal set of modes of the vacuum as a functional of the initial value conditions of these modes only.

We then apply this result to calculate the entanglement entropy in two cases. The first case is a localized plane wave in ordinary Minkowski space (chapter 8), and the second case (chapter 9) is a localized plane wave in the space of a uniformly accelerated observer (i.e. a Rindler observer). In both cases, the calculation is initially done in 1+1 dimensions, after which it is generalized to 1+d dimensions. We chose to present the matter in this way because the technical difficulty of the 1+d dimensional case can easily obscure the conceptual steps that are already present in the 1+1 dimensional case.

In part two we first give an extremely short introduction to black holes and the firewall paradox (chapter 10). In the chapter thereafter, which is the final chapter of the body (chapter 11), we combine parts one and two by showing how the results from part one fit into the discussion of the firewall paradox. The connection is made by explicit construction of the wavepacket that plays a quintessential role in the paradox, thereby removing the assumption that such a wavepacket exists.

A popular science article on the firewall paradox by the author of this thesis appeared on www.quantumuniverse.nl, a website which is part of the outreach project of Erik Verlinde. A slightly adapted version of the article can also be found in appendix A.


2 Entanglement

We will be concerned with computing the entanglement entropy of various systems. Therefore, a synoptic review of states, entropy and entanglement entropy is in order. More details can be found in your favorite book on quantum mechanics or quantum information theory. (See, for example, [10].)

We elaborate a bit more on the von Neumann entropy of a simple harmonic oscillator in a thermal state, because of its importance later on. Namely, we show how the entropy can be written in terms of the expectation value of the simple harmonic oscillator's number operator only. This will prove to be crucial in the calculation of the entropy of general Gaussian states in section 6.4.

2.1 States and the von Neumann entropy

Consider a quantum mechanical system with Hilbert space H.

States Ultimately, states can only be discerned because they produce different measurement outcomes. Mathematically, the measurements one can make are represented by Hermitian operators on H (the observables), and the outcomes are real numbers. So, a state can be seen as a map from the observables to the real numbers. A natural choice for this map is the one that assigns to any observable Ô its expectation value ⟨Ô⟩. This map can be uniquely implemented by

h ˆOi = tr( ˆO ˆρ), (2.1)

where ρ̂ is a positive semi-definite Hermitian operator of trace one [11], known as the density operator.

The probability p of finding the system in the subspace P ⊆ H after some measurement is given by

p = \mathrm{tr}(P \hat\rho),

where P is the projection operator that projects onto P.

Pure and mixed states A state can be pure, in which case there is a vector |i⟩ ∈ H such that

\hat\rho = |i\rangle\langle i|.

A state can also be mixed, in which case the density matrix can only be written as a non-trivial combination of pure states. So for a mixed state,²

\hat\rho = \sum_i p_i |i\rangle\langle i|. \qquad (2.2)

²The set over which the summation index i runs is deliberately left implicit; in this way, our expressions remain general.


It can quite easily be shown that p_i is the probability that the system is in the state |i⟩. So p is a probability distribution, and obeys

p_i > 0, \qquad \sum_i p_i = 1. \qquad (2.3)

In other words, a mixed state is a non-trivial convex combination of pure states. In general, this combination is not unique.

Now, by the spectral theorem and the fact that ρ̂ is Hermitian, we can always write ρ̂ in such a way that the p_i in the decomposition (2.2) are the eigenvalues, and the |i⟩ the eigenvectors, of ρ̂. Therefore, we will denote the eigenvalues of ρ̂ by p_i, and its eigenvectors by |i⟩, from here on.

Von Neumann entropy We start with a kind of entropy that is more common in information theory than in physics: the Shannon entropy.³ Let λ be a discrete probability distribution with ℓ possible outcomes, where the probability of an event i is given by λ_i. The Shannon entropy S of λ is then defined as

S(\lambda) = -\sum_i \lambda_i \log \lambda_i.

Heuristically, S is a measure of the average 'surprisedness' to see a certain outcome. For example, if there is only one i such that λ_i is non-zero, we are the least surprised to observe the only event i for which λ_i ≠ 0. Accordingly, S vanishes. If, on the contrary, λ is uniformly distributed (i.e. λ_i = 1/ℓ for all i ∈ {1, . . . , ℓ}), we are maximally surprised to observe any specific event i. In this case the entropy equals log ℓ, which is the maximum value.

The eigenvalues of the density operator form a discrete probability distribution. This relates the Shannon entropy to the von Neumann entropy: the von Neumann entropy S of a system is the Shannon entropy of the eigenvalues p_i of the density operator ρ̂,

S(\hat\rho) = S(p) = -\sum_i p_i \log p_i = -\langle \log \hat\rho \rangle.

In the nomenclature of quantum mechanics, the von Neumann entropy S is a measure of the 'mixedness' of a state, analogous to the 'surprisedness' of the previous paragraph. Namely, if and only if a state is pure, there is only one i such that p_i ≠ 0, and therefore S = 0. If, on the contrary, the p_i are uniformly distributed, then S = log |H|, the maximum value it can take. In this case, the state is not only mixed, it is said to be maximally mixed.

We will only be concerned with the von Neumann entropy, which we will henceforth commonly refer to as ‘the entropy’.

³The order in which the two kinds of entropy are presented here is contrary to the historical development. First there was the von Neumann entropy, and then there was the Shannon entropy, as is clear from the following quote by Claude Shannon: "My greatest concern was what to call it [the Shannon entropy]. I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.' " [12]
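The definitions above are easy to check numerically. The following sketch computes the von Neumann entropy from the eigenvalues of a density matrix, and verifies that a pure state has S = 0 while the maximally mixed state on a d-dimensional Hilbert space has S = log d (natural logarithm throughout):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i p_i log p_i over the eigenvalues p_i of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]              # convention: 0 log 0 = 0
    return float(-np.sum(p * np.log(p)))

# A pure state |0><0| has zero entropy.
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
assert abs(von_neumann_entropy(pure)) < 1e-10

# The maximally mixed state has the maximum entropy, S = log d.
d = 4
mixed = np.eye(d) / d
assert abs(von_neumann_entropy(mixed) - np.log(d)) < 1e-10
```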


2.2 The entropy of a simple harmonic oscillator in a thermal state

We will now show how one can compute the entropy of a simple harmonic oscillator that is in a thermal state. In general, a thermal state of a system with Hamiltonian ˆH is the state ˆρth such that the entropy S obtains its maximal value, under the restriction that the

system has some fixed energy h ˆHi. This is a standard optimization problem, and it can be solved by using the method of Lagrange multipliers. One obtains

ˆ ρth(β) = e −β ˆH Z , Z = tr  e−β ˆH, (2.4)

where β is the Lagrange multiplier. If the system is in thermal equilibrium with a heat-bath, β is just the inverse temperature, β = 1/(kBT ). The factor Z makes sure the trace

of ˆρth is unity, and is known as the partition function.

We now turn to a well-known, specific system: the simple harmonic oscillator. The Hamiltonian is

\hat H = \omega\, \hat a^\dagger \hat a, \qquad (2.5)

with ω the frequency and (â, â†) the ladder operators.

In terms of the eigenvalues and eigenvectors of the number operator N̂ = â†â, which are defined by

\hat N |n\rangle = n |n\rangle,

the density operator reads

\hat\rho_{\mathrm{th}} = \sum_n \frac{e^{-\beta\omega n}}{Z} |n\rangle\langle n|. \qquad (2.6)

The expectation value of the number operator can now easily be computed,

\langle \hat N \rangle = \mathrm{tr}(\hat\rho\, \hat a^\dagger \hat a) = \frac{1}{Z} \sum_n n\, e^{-\beta\omega n}, \qquad (2.7)

where, using the closed form of the infinite geometric series (0.2),

Z = \sum_n e^{-\beta\omega n} = \frac{1}{1 - e^{-\beta\omega}}. \qquad (2.8)

We can now write ⟨N̂⟩ as

\langle \hat N \rangle = \frac{1}{Z} \partial_{(-\beta\omega)} Z = \frac{1}{e^{\beta\omega} - 1}. \qquad (2.9)

So, if we have a black body consisting of many harmonic oscillators from which some energy is allowed to escape, the energy distribution of this radiation would be as in (2.9). This is the famous Planckian spectrum which sparked the development of quantum mechanics.


We are now set to compute the entropy of the thermal state. First of all, note that the density matrix (2.6) is diagonal in the number basis, so that we can immediately identify the eigenvalues,

p_n = \frac{e^{-\beta\omega n}}{Z}. \qquad (2.10)

Thus, the entropy of a simple harmonic oscillator in a thermal state equals

S = -\sum_n p_n \log p_n = \frac{1}{Z} \sum_n e^{-\beta\omega n} \log\left( Z e^{\beta\omega n} \right) = \frac{1}{Z} \sum_n e^{-\beta\omega n} \left( \log Z + \beta\omega n \right).

By formulas (2.8) and (2.7), this is

S = \log Z + \beta\omega \langle \hat N \rangle.

Now from (2.8) and (2.9), we have Z = e^{\beta\omega} \langle \hat N \rangle, so S = \beta\omega + \log\langle \hat N \rangle + \beta\omega \langle \hat N \rangle. Furthermore, from (2.9), we have \beta\omega = \log(\langle \hat N \rangle + 1) - \log\langle \hat N \rangle. This enables us to write S in terms of ⟨N̂⟩ only,

S = \left( \langle \hat N \rangle + 1 \right) \log\left( \langle \hat N \rangle + 1 \right) - \langle \hat N \rangle \log \langle \hat N \rangle. \qquad (2.11)

At this point, it might not seem to be of much use to have S in this particular form. It will prove its usefulness, however, in section 6.4.

2.3 Entanglement entropy and mutual information

Bipartite systems We now consider a quantum system that can be divided into two subsystems, A and B,

H_{AB} = H_A \otimes H_B.

An orthonormal basis of this space is formed by {|i⟩ ⊗ |j⟩}, where {|i⟩} and {|j⟩} are orthonormal bases for H_A and H_B respectively. We will, as is customary, use the notation |i⟩ ⊗ |j⟩ = |i, j⟩. The inner product of this space is related to the separate inner products of the spaces H_A and H_B by

\langle \psi, \varphi | \psi', \varphi' \rangle = \langle \psi | \psi' \rangle \langle \varphi | \varphi' \rangle.

Linear operators on this space need not be of the form Ô_1 ⊗ Ô_2, but could in general be any sum of such products.

The subsystems A and B each have their own density operator, written as ρ̂_A and ρ̂_B respectively. These density operators define the states of the subsystems, and describe the outcomes of measurements done exclusively on A or B. The reduced density operators can be obtained from the overall density operator ρ̂_AB by 'tracing out the other system'. Stated more precisely, ρ̂_A is obtained by taking the partial trace of ρ̂_AB over the subsystem B,

\langle i | \hat\rho_A | j \rangle = \sum_k \langle i, k | \hat\rho_{AB} | j, k \rangle,

or, as it is more commonly written,

\hat\rho_A = \mathrm{tr}_B(\hat\rho_{AB}).

Vice versa, ρ̂_B can be obtained by tracing out the subsystem A.

Von Neumann entanglement entropy A particular case is when ρ̂_AB is pure whilst ρ̂_A and ρ̂_B are mixed. In this case we cannot know the behavior of AB by knowing the separate behavior of A and B. This means that A and B must be correlated. Correlation between quantum mechanical systems is called entanglement, and in general, quantum mechanical systems can be correlated in ways that are classically impossible.

As a measure for entanglement we take the amount of uncertainty about the state of a subsystem (i.e. the 'surprisedness' to find the system in a certain state) that is induced by the inaccessibility of the other system. In other words: if ρ̂_AB is pure, the entanglement entropy of subsystem A is the von Neumann entropy of the reduced density operator ρ̂_A. As noted before, we will commonly refer to the von Neumann entanglement entropy as 'the entropy'. If system A is maximally mixed as a result of tracing out system B, then A and B are said to be maximally entangled.

Mutual information More generally, ρ̂_AB can be in a mixed state, which means AB must be entangled with some third system C if one assumes the universe is in a pure state. Surely, it is still possible that A and B are correlated and thus share some entanglement, but how can we quantify it? The entropy S(ρ̂_A) is not a good measure, because some of this entropy could be due to entanglement with C. (In the case that ρ̂_AB is mixed and the entire universe is in a pure state, this actually must be so.) The resolution is given by the mutual information

I(\hat\rho_{AB}) = S(\hat\rho_A) + S(\hat\rho_B) - S(\hat\rho_{AB}). \qquad (2.12)

The mutual information of systems A and B measures the total amount of 'shared entanglement'. That is, it measures the amount of entanglement that is solely between the systems A and B. It does so by removing the entanglement that AB has with the third system C (i.e. by subtracting S(ρ̂_AB)).

To understand the mutual information a bit more, let us look at the extremes. The minimum is I(ρ̂_AB) = 0, which occurs if and only if S(ρ̂_A) + S(ρ̂_B) = S(ρ̂_AB). In turn, this happens if and only if A and B are completely unentangled. In this case, A could have some entropy, but none of it is due to entanglement with B.

The other extreme is I(ρ̂_AB) = S(ρ̂_A) + S(ρ̂_B), which occurs if and only if S(ρ̂_AB) = 0. This is exactly the less general case we discussed before: A and B are entangled whilst AB is pure. In this case, A could have some entropy, and all of it is due to entanglement with B.
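The second extreme can be illustrated with a small numerical example: for two qubits in a Bell state, AB is pure while A and B are each maximally mixed, so the mutual information attains its maximum 2 log 2. The partial-trace implementation below is a minimal sketch:

```python
import numpy as np

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Two qubits in the Bell state (|00> + |11>)/sqrt(2): AB is pure, A and B are not.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi.conj())

# Partial traces: reshape to indices (i_A, i_B, j_A, j_B) and contract one pair.
rho = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.einsum('ikjk->ij', rho)   # trace over B
rho_B = np.einsum('kikj->ij', rho)   # trace over A

# I = S_A + S_B - S_AB, eq. (2.12): maximal, since AB is pure.
I = entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)
assert abs(entropy(rho_AB)) < 1e-10
assert abs(I - 2 * np.log(2)) < 1e-10
```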


2.4 Page's theorem

Consider the system AB in some random pure state. In what kind of state is the system A typically, if |H_A| ≪ |H_B|? The answer is given by Page's theorem [13], which says that ρ̂_A will be very close to 1_A/|H_A|. That is, the system A is very close to being maximally mixed when A is a small subsystem of AB.

We will now make this statement more precise, closely following Harlow [14], but without proof; for the proof, we refer the reader to Harlow [14].

First of all, we need a notion of 'closeness of states'. A good metric on the space of states is the operator trace norm

\| \hat O \| = \mathrm{tr} \sqrt{\hat O^\dagger \hat O}.

With this norm, the distance between two states ρ̂ and σ̂ is simply ‖ρ̂ − σ̂‖. It is a sensible norm, since if ‖ρ̂ − σ̂‖ < ε, then the difference in probability of finding the system in any subspace P is at most ε. That is, if ‖ρ̂ − σ̂‖ < ε, then

\mathrm{tr}\left[ P (\hat\rho - \hat\sigma) \right] < \varepsilon

for any projection operator P.

Secondly, we need to describe what a 'random state' is. A random state can be obtained by taking some fixed state |ψ_0⟩ and acting on it with a random unitary matrix,

|\psi(U)\rangle = U |\psi_0\rangle.

To find an average over all states, one has to integrate over the group of all unitary operators U(N) = U(|H_A||H_B|). To do so, one uses the so-called group-invariant Haar measure to define the 'volume element' in U(N). We will not go into the details, but it has the properties that

\int dU = 1, \qquad \int dU\; U_{ij} U^\dagger_{kl} = \frac{1}{N} \delta_{il} \delta_{jk}.

Page's theorem in the version of Harlow [14] then states that

\int dU\, \left\| \hat\rho_A(U) - \frac{\mathbb{1}_A}{|H_A|} \right\| \le \sqrt{\frac{|H_A|^2 - 1}{|H_A||H_B| + 1}} \le \sqrt{\frac{|H_A|}{|H_B|}}.

So, if |H_A| ≪ |H_B|, we expect ρ̂_A ≈ 1_A/|H_A| to a good approximation.

To appreciate the quality of this approximation, let H_AB consist of M = m_A + m_B subsystems, each with a Hilbert space of dimension two (i.e. a collection of M qubits, where m_A qubits belong to system A and m_B qubits belong to B). If the system is in a random pure state, one then expects the distance between ρ̂_A and the maximally mixed state to be at most √(2^{m_A}/2^{m_B}) = 2^{(m_A − m_B)/2}, which is exponentially small when m_A ≪ m_B.

In other words, if you have a large collection of qubits, take a few of them and measure them, you learn virtually nothing about the system as a whole: the outcome of the measurements is random, because the density matrix of the small collection is proportional to the identity. This is in sharp contrast with classical bits. If you take n classical bits out of a large collection of bits, you obtain exactly n bits of information about the system.

So it seems that getting information out of a 'quantum memory' is always much harder than getting information out of a classical memory, but this is not true. Interestingly, it is possible to show that one can, in principle, get almost all of the information out of a quantum memory by asking the memory only half of the questions (queries) you would need to ask classically [15].
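Page's bound is easy to probe numerically. The sketch below draws Haar-random pure states (a Gaussian vector, once normalized, is Haar-distributed), traces out B, and checks that the average trace distance to the maximally mixed state stays below √(|H_A|/|H_B|):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rho_A(dim_A, dim_B):
    """Reduced state of A for a Haar-random pure state on H_A (x) H_B."""
    psi = rng.normal(size=(dim_A, dim_B)) + 1j * rng.normal(size=(dim_A, dim_B))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T          # tracing out B

def trace_distance(rho, sigma):
    """Trace norm of rho - sigma: sum of the absolute eigenvalues."""
    return float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

dim_A, dim_B = 2, 2 ** 10              # 1 qubit in A, 10 qubits in B
dist = np.mean([trace_distance(random_rho_A(dim_A, dim_B), np.eye(dim_A) / dim_A)
                for _ in range(100)])

# Page/Harlow bound: the Haar-average distance is at most sqrt(|H_A| / |H_B|).
assert dist < np.sqrt(dim_A / dim_B)
```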

2.5 The Schmidt decomposition

So far, we know by how much two systems can be entangled. As we have seen, the ‘amount of entanglement’ is measured by the entanglement entropy. But, as of yet, we did not discuss how two systems can be entangled. To investigate how two systems are entangled, we use the so-called Schmidt decomposition.

In this thesis, this decomposition is only used to prove some side-track of section 9.1, so in that sense, it could be skipped by the reader.

Theorem 2.1 (E. Schmidt). Any state vector of a bipartite quantum system, |ψ⟩ ∈ H_A ⊗ H_B, can be written, essentially uniquely, as

|\psi\rangle = \sum_i \sqrt{p_i}\, |i\rangle_A |i'\rangle_B,

where the unit vectors {|i⟩_A}, as well as {|i'⟩_B}, are mutually orthonormal, and the {p_i} are equal to the nonzero eigenvalues of the reduced density matrix ρ̂_A (or, equivalently, ρ̂_B). This decomposition of the state is known as the Schmidt decomposition.

Proof. Any state |ψ⟩ ∈ H_A ⊗ H_B can be written as

|\psi\rangle = \sum_{ij} c_{ij} |i\rangle_A |j\rangle_B = \sum_i |i\rangle_A \underbrace{\Big( \sum_j c_{ij} |j\rangle_B \Big)}_{:= |\varphi_i\rangle_B} = \sum_i |i\rangle_A |\varphi_i\rangle_B, \qquad (2.13)

with c_{ij} ∈ C, \sum_{ij} |c_{ij}|^2 = 1, and {|j⟩} some orthonormal basis of H_B. As the {|i⟩_A} we can take the eigenvectors of the reduced density operator of subsystem A. So, on the one hand, we have

\hat\rho_A = \sum_i p_i |i\rangle_A \langle i|_A, \qquad (2.14)

where the p_i are the eigenvalues of the density operator. On the other hand, we can compute the reduced density operator by taking the partial trace of the total density matrix ρ̂_AB = |ψ⟩⟨ψ|,

\hat\rho_A = \mathrm{tr}_B(\hat\rho_{AB}) = \sum_j \langle j|_B \big( |\psi\rangle \langle\psi| \big) |j\rangle_B.

By (2.13), this equals

\hat\rho_A = \sum_{i,j,k} \langle j | \varphi_i \rangle_B \langle \varphi_k | j \rangle_B\; |i\rangle_A \langle k|_A = \sum_{i,k} \langle \varphi_k | \varphi_i \rangle\; |i\rangle_A \langle k|_A. \qquad (2.15)

Comparing (2.14) and (2.15), we find

\langle \varphi_k | \varphi_i \rangle = p_i \delta_{ik}.

Therefore, the {|ϕ_i⟩} are orthogonal. To make them orthonormal, define

|i'\rangle := \frac{|\varphi_i\rangle}{\sqrt{p_i}}.

Equation (2.13) now reads

|\psi\rangle = \sum_i \sqrt{p_i}\, |i\rangle_A |i'\rangle_B, \qquad (2.16)

as desired.

With the Schmidt decomposition at hand, we now know exactly how the two systems A and B are entangled: the state |i⟩_A is 'linked' to the state |i'⟩_B. That is, if a measurement is performed on subsystem A and it collapses to |i⟩_A, we instantly know system B is in the state |i'⟩_B.
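Numerically, the Schmidt decomposition is nothing but the singular value decomposition of the coefficient matrix c_ij in (2.13): the singular values squared are the eigenvalues p_i of ρ̂_A. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random pure state on H_A (x) H_B, written as a coefficient matrix c_ij, eq. (2.13).
dA, dB = 3, 5
c = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
c /= np.linalg.norm(c)

# The SVD c = U diag(s) Vh gives the Schmidt form |psi> = sum_i s_i |i>_A |i'>_B,
# with p_i = s_i^2 the eigenvalues of rho_A, cf. eq. (2.16).
U, s, Vh = np.linalg.svd(c, full_matrices=False)

rho_A = c @ c.conj().T
p = np.sort(np.linalg.eigvalsh(rho_A))[::-1]
assert np.allclose(np.sort(s ** 2)[::-1], p[:len(s)])

# Reconstruction: the Schmidt vectors reproduce the original state.
psi_rebuilt = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(len(s)))
assert np.allclose(psi_rebuilt, c)
```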

From the Schmidt decomposition, the reduced density matrices can be obtained by tracing out the other system. It is not clear a priori, however, that this also works the other way around. So the question is: can we obtain the state of a bipartite quantum system from only the two reduced density matrices, and the fact that the combined system is pure? It turns out we can, provided the nonzero eigenvalues of the reduced density matrix are non-degenerate. This is shown in the following corollary.

Corollary 2.2. If the two reduced density matrices ρ̂_A and ρ̂_B of a bipartite quantum system in the pure state ρ̂_AB = |ψ⟩⟨ψ| are diagonal in the bases {|i⟩_A} and {|i⟩_B} respectively, and have non-degenerate, non-zero eigenvalues p_i, then

|\psi\rangle = \sum_i \sqrt{p_i}\, |i\rangle_A |i\rangle_B.

Proof. Assume, on the contrary, that \hat\rho_A = \sum_i p_i |i\rangle_A \langle i|_A and \hat\rho_B = \sum_i p_i |i\rangle_B \langle i|_B, but |\psi\rangle = \sum_j \sqrt{\tilde p_j}\, |j'\rangle_A |j''\rangle_B with \{\tilde p_j\} \neq \{p_j\}. Then

\hat\rho_A = \sum_j \langle j''|_B\, |\psi\rangle\langle\psi|\, |j''\rangle_B = \sum_{i,j,k} \sqrt{\tilde p_i \tilde p_k}\, \langle j''|i''\rangle \langle k''|j''\rangle\; |i'\rangle_A \langle k'|_A = \sum_k \tilde p_k\, |k'\rangle_A \langle k'|_A,

and likewise \hat\rho_B = \sum_k \tilde p_k |k''\rangle_B \langle k''|_B. Since matrices cannot be diagonal in more than one basis (as a set), {|k'⟩_A} = {|i⟩_A} and {|k''⟩_B} = {|i⟩_B}. It follows that {p̃_k} = {p_k}.

This corollary is used in section 9.1 to show how the flat-spacetime vacuum of the free field is entangled.


3 Free field theory

We now go from quantum mechanics to the simplest of quantum field theories: the free massless real scalar field, also known as the massless Klein-Gordon field. Despite its simplicity, it provides enough of a basis for interesting physics, as we will see in the next sections of this thesis. After a succinct introduction to the free massless scalar field, this chapter and the previous one are brought together. Namely, we explain how modes of the free field can have an entanglement entropy.

Again, it should be noted we do not intend to give a complete account of the subject matter. More details can be found in any book on quantum field theory. (See, for example, Peskin and Schroeder [16].) Unless stated otherwise, we work in D = 1 + d dimensions.

3.1 The Klein-Gordon field

First we treat the classical field, after which we show how this is quantized.

The classical field The classical massless Klein-Gordon field φ(t, x) is a function from spacetime to the real numbers. The action is given by

S = \int d^D x\; \mathcal{L}[\phi],

where the Lagrangian density reads

\mathcal{L}[\phi] = -\frac{1}{2} g^{\mu\nu} \partial_\mu \phi\, \partial_\nu \phi. \qquad (3.1)

Here, g^{µν} could be any metric. For now, we take it to be the Minkowski metric g_{µν} = diag(−1, 1, . . . , 1). From the Euler-Lagrange equations, it follows that the action S is minimized when

\partial^2 \phi = 0, \qquad (3.2)

which is therefore the equation of motion, or in this context, the massless Klein-Gordon equation. Here, \partial^2 = \Box = g^{\mu\nu} \partial_\mu \partial_\nu is the d'Alembert operator. Any solution of the equation of motion is known as a mode of the field. The massless Klein-Gordon equation is simply a wave equation, which is, for example, also obeyed by the pressure in water or the displacement of a string of a piano.

The set of all solutions of the Klein-Gordon equation has the structure of an inner product space (i.e. a vector space endowed with an inner product). Like any inner product space, it has an inner product and a basis.

Inner product The inner product of solutions g(t, x) and f(t, x) to the Klein-Gordon equation is the Klein-Gordon inner product

(g, f) = -i \int_\Sigma d^d x\, \left[\, g\, \partial_t f^* - (\partial_t g) f^* \,\right], \qquad (3.3)

where Σ is a hypersurface of constant time. Note that one only needs to know the functions and their time derivatives at some specific time to compute their inner product. This can be a great advantage. For example, it allows us to pick g(0, x), (∂_t g)(0, x), f(0, x) and (∂_t f)(0, x) at will, and compute the inner product between g(t, x) and f(t, x) without even having to know their time evolution. This feat will be of great value in section 9.3, where we compute the inner product between a 1+d dimensional Minkowski wavepacket and any Minkowski plane wave, notwithstanding the absence of a full formula for the 1+d dimensional wavepacket.

Basis Usually, the normalized Minkowski plane waves

f_p(t, x) = \frac{1}{\sqrt{2 (2\pi)^d \omega}}\, e^{i p \cdot x - i \omega t} \quad (\omega = |p|) \qquad (3.4)

are chosen as the basis for solutions to the Klein-Gordon equation. Here, the momentum p, itself a vector, should be seen as a label for the basis vectors of the space of solutions. Such a basis vector is called a basis mode. By plugging (3.4) into the equation of motion (3.2), it can be verified that the Minkowski plane waves are indeed solutions. Furthermore, by computing the Klein-Gordon inner product of f_p and f_{p'}, it can be verified that they are (Dirac) orthonormal, that is,

(f_p, f_{p'}) = \delta^d(p - p'). \qquad (3.5)

Actually, the space spanned by this basis is too large, because it also includes complex-valued functions. Nevertheless, every real-valued solution may be written as a linear combination of basis vectors,

\phi = \int dp\, (a_p f_p + \mathrm{c.c.}), \qquad (3.6)

with (possibly complex) coefficients a_p. The complex conjugate (c.c.) is added to make sure φ is real. Note that the modes f_p are positive frequency modes, in the sense that

\partial_t f_p = -i \omega_p f_p, \qquad (3.7)

whereas the f_p^* (inside the c.c.) are of negative frequency,

\partial_t f_p^* = +i \omega_p f_p^*. \qquad (3.8)

We could, of course, have chosen another basis for solutions to the equation of motion. This other basis could be related to the old one by the transformation

$$g_k = \int d\mathbf p\, \left( \alpha_{k\mathbf p} f_{\mathbf p} + \beta_{k\mathbf p} f^*_{\mathbf p} \right), \qquad (3.9)$$

with α_{kp} and β_{kp} complex coefficients. We will see in chapter 4 that such a change of basis can have non-trivial physical consequences.

Canonical quantization To quantize the field, we promote the field φ and its conjugate momentum

$$\pi = \frac{\partial \mathcal L}{\partial(\partial_0 \phi)}$$

to the operators φ̂ and π̂, and impose the (equal time) commutation relations

$$[\hat\phi(t, \mathbf x), \hat\phi(t, \mathbf x')] = 0, \qquad [\hat\pi(t, \mathbf x), \hat\pi(t, \mathbf x')] = 0, \qquad [\hat\phi(t, \mathbf x), \hat\pi(t, \mathbf x')] = i\delta^d(\mathbf x - \mathbf x'). \qquad (3.10)$$

The operator version of the expansion of the field over the basis modes reads (cf. equation 3.6)

$$\hat\phi = \int d\mathbf p\, (f_{\mathbf p}\, \hat a_{\mathbf p} + \mathrm{h.c.}), \qquad (3.11)$$

where h.c. always stands for the Hermitian conjugate of the preceding term. Plugging this expansion into the commutation relations (3.10) yields the Bosonic commutation relations

$$[\hat a_{\mathbf p}, \hat a_{\mathbf p'}] = 0, \qquad [\hat a^\dagger_{\mathbf p}, \hat a^\dagger_{\mathbf p'}] = 0, \qquad [\hat a_{\mathbf p}, \hat a^\dagger_{\mathbf p'}] = \delta^d(\mathbf p - \mathbf p'). \qquad (3.12)$$

The Hamiltonian can be derived from the operator version of the Lagrangian density (cf. equation 3.1). When written in terms of the number operator

$$\hat N_{\mathbf p} = \hat a^\dagger_{\mathbf p} \hat a_{\mathbf p}, \qquad (3.13)$$

the Hamiltonian reads

$$\hat H = \int d\mathbf p\, \omega_{\mathbf p}\, \hat N_{\mathbf p}.$$

Because of the similarity to the Hamiltonian of the simple harmonic oscillator (2.5), we can basically see the free field as a collection of uncoupled harmonic oscillators, one for every p. It can easily be shown that the operator â_p maps an energy eigenstate with energy E to an energy eigenstate with energy E − ω. Similarly, the operator â_p† maps an eigenstate with energy E to an energy eigenstate with energy E + ω. That is,

$$\hat H\, \hat a_{\mathbf p} |E\rangle = (E - \omega)\, \hat a_{\mathbf p} |E\rangle, \qquad \hat H\, \hat a^\dagger_{\mathbf p} |E\rangle = (E + \omega)\, \hat a^\dagger_{\mathbf p} |E\rangle,$$

where, for a single mode, |E⟩ = |n⟩ with E = nω_p, and |n⟩ an eigenket of the number operator (3.13) with eigenvalue n. So, it can be said that the operators â_p (â_p†) annihilate (create) a particle with momentum p and energy ω_p. The ground state |0⟩_a, then, is the state such that

$$\hat a_{\mathbf p} |0\rangle_a = 0 \quad \text{for all } \mathbf p. \qquad (3.14)$$

That is, it is the state from which no particles can be removed anymore. This is why it is known as the vacuum. By acting on the vacuum with the creation operators â_p†, multi-particle states can be created,

$$\frac{1}{\sqrt{n_1!\, n_2! \cdots n_N!}} \left(\hat a^\dagger_{\mathbf p_1}\right)^{n_1} \left(\hat a^\dagger_{\mathbf p_2}\right)^{n_2} \cdots \left(\hat a^\dagger_{\mathbf p_N}\right)^{n_N} |0\rangle := |n_1\rangle \otimes |n_2\rangle \otimes \ldots \otimes |n_N\rangle := |n_{\mathbf p_1}, n_{\mathbf p_2}, \ldots, n_{\mathbf p_N}\rangle. \qquad (3.15)$$

In this way all eigenstates of the Hamiltonian can be constructed, which, as always, form an orthogonal basis of the entire Hilbert space. This 'occupation number basis' (3.15) is known as the Fock basis. So, the Hilbert space of the free scalar field has the structure⁴

$$\mathcal H = \mathcal H_1 \otimes \mathcal H_2 \otimes \ldots \otimes \mathcal H_N. \qquad (3.16)$$
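The ladder-operator algebra above can be checked numerically by truncating a single mode's Fock space to a finite dimension. A minimal sketch (the truncation dimension D and the frequency ω below are arbitrary illustrative choices):

```python
import numpy as np

# Single-mode ladder operators on a truncated Fock space.
D = 8
a = np.diag(np.sqrt(np.arange(1, D)), k=1)    # annihilation operator
adag = a.conj().T                              # creation operator
N = adag @ a                                   # number operator
print(np.diag(N).real)                         # occupation numbers 0, 1, ..., D-1

# [a, a†] = 1 holds on the diagonal, except in the last entry (truncation artifact).
comm = a @ adag - adag @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))    # True

# a† raises the energy of an eigenstate by ω: H (a†|n>) = (E + ω)(a†|n>).
w = 2.0
H = w * N
n = 3
ket = np.zeros(D); ket[n] = 1.0                # |n>, with energy E = nω
raised = adag @ ket
print(np.allclose(H @ raised, (w * n + w) * raised))   # True
```

The last check is the finite-dimensional analogue of the statement below (3.13): acting with â† maps an energy eigenstate to an eigenstate with energy raised by ω.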

3.2 The entanglement entropy of a set of modes

In this thesis, we will mainly be calculating the entropy of modes of the free field. But what exactly do we mean by 'the entropy of a mode'? This small section is here to answer that question, and it does so by making the connection between this chapter and the previous one (chapter 2).

Consider the free (scalar) field in the pure state |φ⟩. We can divide the Hilbert space (3.16) into two parts, A and B. Let A have m subfactors, and B the rest. For example, we could then have

$$\mathcal H = \underbrace{\mathcal H_1 \otimes \mathcal H_2 \otimes \ldots \otimes \mathcal H_m}_{\mathcal H_A} \otimes \underbrace{\mathcal H_{m+1} \otimes \mathcal H_{m+2} \otimes \ldots}_{\mathcal H_B} = \mathcal H_A \otimes \mathcal H_B.$$

(In general, the subfactors of A need not have adjacent indices, of course.) Remember that, although the overall state |φ⟩ is pure, the reduced density operator ρ̂_A need not be pure. We can now simply define the entanglement entropy of a set of modes A as the von Neumann entropy of ρ̂_A.

⁴ Actually, the Hilbert space is 'smaller', since the field is Bosonic, and consequently all states must be symmetric under the exchange of particles. Also, note that the notation of (3.16) suggests a finite set of basis modes, whereas the basis (3.4) is continuous. We will, as is quite common, switch to a countable (maybe even finite) basis whenever this is more convenient. A countable basis can always be obtained by introducing an IR-cutoff, for example by imposing periodic boundary conditions on the equation of motion (3.2). The basis can then be made finite by introducing a UV-cutoff, for example by putting the system on a grid.

4 Bogolyubov transformations

To calculate the entanglement entropy of a general set of modes in the Minkowski vacuum, we need to relate these modes to the Minkowski plane waves, as we shall see in chapter 7. This relation is via a Bogolyubov transformation. For more information, one can consult the references [17–19].

4.1 Definition

In short, a Bogolyubov transformation is a basis transformation that takes us from one complete orthonormal quantization basis to another. Equivalently, a Bogolyubov transformation can be seen as a symplectic transformation on the operators that are the quantum analogues of the phase space coordinates (the quadrature operators).

To be a bit less short, say we have some set of modes {f_p(t, x)}, labeled by p, that form an orthonormal basis for the inner product space of classical solutions to the equation of motion of the field φ(t, x). This basis could consist of the Minkowski plane waves as in (3.4, 3.6), but in general it could be any complete orthonormal basis. As we have seen in equation 3.11, we may then expand our field operator as⁵

$$\hat\phi(t, \mathbf x) = \sum_{\mathbf p} \left[\, f_{\mathbf p}(t, \mathbf x)\, \hat a_{\mathbf p} + \mathrm{h.c.} \,\right]. \qquad (4.1)$$

The basis {f_p(t, x)} is in general not unique, so we might as well have picked a basis {g_k(t, x)} to expand our field over,

$$\hat\phi(t, \mathbf x) = \sum_k \left[\, g_k(t, \mathbf x)\, \hat b_k + \mathrm{h.c.} \,\right]. \qquad (4.2)$$

The transformation from {f_p(t, x)} to {g_k(t, x)} is a Bogolyubov transformation. Since the old basis is complete, there are coefficients (α_{kp}, β_{kp}) such that

$$g_k(t, \mathbf x) = \sum_{\mathbf p} \left[\, \alpha_{k\mathbf p} f_{\mathbf p}(t, \mathbf x) + \beta_{k\mathbf p} f^*_{\mathbf p}(t, \mathbf x) \,\right]. \qquad (4.3)$$

These coefficients are the Bogolyubov coefficients. They can be found by simply taking the Klein-Gordon inner product between the old and new modes,

$$\alpha_{k\mathbf p} = (g_k, f_{\mathbf p}), \qquad \beta_{k\mathbf p} = -(g_k, f^*_{\mathbf p}). \qquad (4.4)$$

4.2 Properties

Equating (4.1) and (4.2), substituting g_k(t, x) using (4.3) and using the orthonormality of the modes, one finds

$$\hat a_{\mathbf p} = \sum_k \left( \alpha_{k\mathbf p}\, \hat b_k + \beta^*_{k\mathbf p}\, \hat b^\dagger_k \right), \qquad (4.5)$$

and

$$\hat b_k = \sum_{\mathbf p} \left( \alpha^*_{k\mathbf p}\, \hat a_{\mathbf p} - \beta^*_{k\mathbf p}\, \hat a^\dagger_{\mathbf p} \right). \qquad (4.6)$$

⁵ We previously used a continuous basis. In the context of Bogolyubov transformations, however, the discrete notation is more convenient.

Thus a Bogolyubov transformation can also be seen as a transformation of the creation and annihilation operators.

For what follows, matrix notation is best suited. By a we will denote the column vector containing all the annihilation operators, a = (â_{p₁}, â_{p₂}, …)^T. The Bogolyubov transformations are then matrices that act on these vectors. For example, α_{kp} is an element of the matrix α. Similarly, by a† we denote the column vector containing the creation operators, a† = (â†_{p₁}, â†_{p₂}, …)^T. Taking the matrix notation even one step further, the relations (4.5) and (4.6) can be summarized by

$$\begin{pmatrix} b \\ b^\dagger \end{pmatrix} = \begin{pmatrix} \alpha^* & -\beta^* \\ -\beta & \alpha \end{pmatrix} \begin{pmatrix} a \\ a^\dagger \end{pmatrix}. \qquad (4.7)$$

Since a Bosonic field stays Bosonic if we decide to quantize it using another basis, a Bogolyubov transformation keeps the Bosonic commutation relations (3.12) invariant. With a bit of work, it can be shown [19] that this happens if and only if

$$\alpha\alpha^\dagger - \beta\beta^\dagger = \mathbb 1, \qquad \alpha^\dagger\alpha - \beta^T\beta^* = \mathbb 1, \qquad \alpha\beta^T - \beta\alpha^T = 0, \qquad \alpha^T\beta^* - \beta^\dagger\alpha = 0. \qquad (4.8)$$

This summarizes all the properties of the Bogolyubov transformations.

There is yet another, and most beautiful, point of view. It utilizes the concept of symplectic transformations, of which we will see more in chapter 6. By definition, a real matrix S is symplectic if and only if it satisfies

$$S\Omega S^T = \Omega, \qquad (4.9)$$

where Ω is some fixed invertible skew-symmetric matrix (i.e. Ω^T = −Ω) [20]. Now define the quadrature operators

$$q = \frac{1}{\sqrt 2}\left(a + a^\dagger\right), \qquad p = \frac{1}{i\sqrt 2}\left(a - a^\dagger\right). \qquad (4.10)$$

Here it should be kept in mind that q, p, a and a† are column vectors of operators. The operators q̂_p and p̂_p satisfy the canonical commutation relations [q̂_p, p̂_{p′}] = iδ_{pp′} by construction. The

transformed quadrature operators

$$q' = \frac{1}{\sqrt 2}\left(b + b^\dagger\right), \qquad p' = \frac{1}{i\sqrt 2}\left(b - b^\dagger\right) \qquad (4.11)$$

can be obtained by the action of a Bogolyubov transformation. By sequential use of (4.11), (4.6) and (4.10), this transformation can be compactly written as

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \underbrace{\begin{pmatrix} \mathrm{Re}(\alpha - \beta) & \mathrm{Im}(\alpha + \beta) \\ -\mathrm{Im}(\alpha - \beta) & \mathrm{Re}(\alpha + \beta) \end{pmatrix}}_{:= B} \begin{pmatrix} q \\ p \end{pmatrix}. \qquad (4.12)$$

Written in this form, the Bogolyubov transformation B is manifestly real. By using the properties (4.8), one can show that B is also symplectic and invertible. By changing α and β we can in fact make any real symplectic matrix. Therefore, the group of Bogolyubov transformations on an N-mode system is isomorphic to the real symplectic group Sp(R, 2N). This will be used in chapter 6 to show that a Bogolyubov transformation maps a Gaussian state to a Gaussian state.

4.3 Particles in the vacuum

As promised in chapter 3, we will now show one of the non-trivial and more physical consequences a Bogolyubov transformation can have: due to the possibility of making Bogolyubov transformations, the vacuum state is ambiguous.

The vacuum is the state from which we cannot remove any particles, as we have seen in chapter 3. However, someone using an alternative basis g, with associated annihilation operators b̂_k, does not agree with this vacuum state, since b̂_k|0⟩_a is not necessarily zero. This can easily be seen from the fact that expansion (4.6) contains creation operators â_p†. His or her vacuum is the state |0⟩_b such that b̂_k|0⟩_b = 0 for all k. So, the definition of the vacuum state depends on the basis that is used for quantization.

The Minkowski vacuum |0⟩_M, then, is the vacuum with respect to the basis of the Minkowski plane waves (3.4) with associated annihilation operators â_p,

$$\hat a_{\mathbf p} |0\rangle_M \equiv 0 \quad \text{for all } \mathbf p.$$

This is the ‘right vacuum’ in the sense that all inertial observers in flat space that use this basis agree on the vacuum. In curved spacetime however, there is usually no such preferred basis, so that there is no preferred definition of the vacuum.

Equation (4.6) allows us to quantify by how much two vacua differ. Namely, we can calculate the expectation value of the number operator N̂_k^{(b)} = b̂_k† b̂_k in the state |0⟩_a. This yields

$$\langle 0|_a\, \hat N^{(b)}_k\, |0\rangle_a \equiv \langle 0|_a\, \hat b^\dagger_k \hat b_k\, |0\rangle_a = \sum_{\mathbf p} |\beta_{k\mathbf p}|^2. \qquad (4.13)$$

Looking back to (4.3) and (3.8), we see that the more negative frequencies are present in the decomposition of an alternative basis mode g_k, the more particles we expect to see when the state is the vacuum state of the original basis {f_p}. If and only if no negative frequency modes are present in the alternative basis, then β_{kp} = 0 for all (k, p), and consequently the two vacua coincide.
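For the single-mode squeezing example (α = cosh r, β = sinh r), equation (4.13) predicts ⟨N̂^{(b)}⟩ = sinh²r particles in the 'wrong' vacuum. A quick numerical sketch on a truncated Fock space (the truncation dimension D is an arbitrary cutoff):

```python
import numpy as np

# Expected particle number in the a-vacuum, as seen by the b-modes (eq. 4.13).
D, r = 12, 0.5
a = np.diag(np.sqrt(np.arange(1, D)), k=1)     # annihilation operator
adag = a.conj().T
# One-mode version of (4.6) with α = cosh r, β = sinh r (both real):
b = np.cosh(r) * a - np.sinh(r) * adag
vac = np.zeros(D); vac[0] = 1.0                # |0>_a
Nb = vac @ (b.conj().T @ b) @ vac              # <0|_a b†b |0>_a
print(np.isclose(Nb, np.sinh(r)**2))           # Σ_p |β_kp|² = sinh²r: True
```

Since b|0⟩_a = −sinh r |1⟩ here, the single negative-frequency admixture β is directly responsible for the nonzero particle content, in line with the discussion above.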


5 Rindler Space

One particular change of quantization basis occurs when instead of the Minkowski coordi-nates, coordinates are used that are more natural to an observer who is constantly being accelerated. These coordinates are known as Rindler coordinates. Aided by the previous chapter, we will show how to obtain the expectation value of the number operator of these Rindler modes.

5.1 1+1 dimensions

We will start in 1+1 dimensions, along the lines of Carroll [18], Birrell & Davies [17] and Susskind [21]. In the next subsection, the less commonly encountered generalization to 1 + d dimensions is made.

5.1.1 Geometry

The term Rindler space is slightly deceptive, since the spacetime under consideration is ordinary 1+1 dimensional Minkowski space. The difference between the two is a mere coordinate transformation. This transformation comes in four patches (see figure 2).

The region where x > |t| is called the right Rindler wedge R. In this patch, the transformation from the Minkowski coordinates (t, x) to the Rindler coordinates (η_R, ξ_R) is defined as

$$t = \frac{1}{a} e^{a\xi_R} \sinh(a\eta_R), \qquad x = \frac{1}{a} e^{a\xi_R} \cosh(a\eta_R), \qquad (5.1)$$

with 1/a some length scale we can choose at will. The next region, where x < −|t|, is called the left Rindler wedge L. Here, the transformation is

$$t = -\frac{1}{a} e^{a\xi_L} \sinh(a\eta_L), \qquad x = -\frac{1}{a} e^{a\xi_L} \cosh(a\eta_L). \qquad (5.2)$$

Similarly, coordinates that cover the Rindler past wedge P, where t < −|x|, and the Rindler future wedge F, where t > |x|, can be defined, but we will not do so here.

In terms of the Rindler coordinates, the Minkowski metric

ds2 = −dt2+ dx2

reads

ds2 = e2aξ(−dη2+ dξ2), (5.3)

be it for the coordinates in R or L.

In the literature, an alternative way of writing this metric is commonly encountered,6

ds2 = −ρ2d˜η2+ dρ2. (5.4)

⁶ The coordinate ρ should not be confused with the density operator ρ̂. The latter has a hat, so the two can be discerned easily. Furthermore, the meaning of ρ (with or without hat) should always be clear from the context.

The two ways of writing the Rindler metric are related by

$$\tilde\eta = a\eta, \qquad \rho = \frac{1}{a} e^{a\xi}. \qquad (5.5)$$

So at η = 0, the coordinate ρ is equal to the proper distance (the Minkowski coordinate x).

Now, what is the use of the coordinate transformation (5.1)? It is as follows: the coordinates in R (or L) are the coordinates that are most natural for an observer who is being uniformly accelerated. First of all, this naturalness is because an object that stays at 'rest' in the Rindler frame follows the world line ξ_R = ξ_0^R = const., which in Minkowski coordinates reads

$$t(\eta_R) = \frac{1}{a} e^{a\xi^R_0} \sinh(a\eta_R), \qquad (5.6)$$
$$x(\eta_R) = \frac{1}{a} e^{a\xi^R_0} \cosh(a\eta_R), \qquad (5.7)$$

where η_R is proportional to the proper time of the object that is being accelerated. Equations (5.6) and (5.7) describe the world line of an object with proper acceleration a e^{−aξ_0^R}, as can straightforwardly be shown (for example, see [18]). Thus, objects standing still in the Rindler frame experience uniform acceleration. At ξ = 0, the proper acceleration is a, or in the alternative notation (5.5), 1/ρ.
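Both claims — that the worldline (5.6, 5.7) is a hyperbola x² − t² = const, and that the proper acceleration equals a e^{−aξ₀} — are easy to verify numerically; the values of a and ξ₀ below are arbitrary choices:

```python
import numpy as np

# Worldline of an object at fixed Rindler position ξ0, eqs. (5.6)-(5.7).
acc, xi0 = 1.3, 0.4
eta = np.linspace(-1.0, 1.0, 200001)
t = np.exp(acc * xi0) / acc * np.sinh(acc * eta)
x = np.exp(acc * xi0) / acc * np.cosh(acc * eta)

# The trajectory is a hyperbola: x² - t² = (e^{aξ0}/a)² for all η.
print(np.allclose(x**2 - t**2, (np.exp(acc * xi0) / acc)**2))   # True

# Proper acceleration = magnitude of the 4-acceleration, via finite differences.
tau = eta * np.exp(acc * xi0)            # proper time ∝ η along this worldline
d2t = np.gradient(np.gradient(t, tau), tau)
d2x = np.gradient(np.gradient(x, tau), tau)
proper_acc = np.sqrt(d2x**2 - d2t**2)
print(np.isclose(proper_acc[len(eta) // 2],
                 acc * np.exp(-acc * xi0), rtol=1e-3))           # a·e^{-aξ0}: True
```

The finite-difference second derivatives are evaluated at the midpoint η = 0, away from the boundary of the sampled interval where `np.gradient` is less accurate.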

Secondly, as is clear from the spacetime diagram in figure 2, the Rindler observer has a horizon. The part of spacetime which a Rindler observer in R can observe is exactly the patch covered by (η_R, ξ_R).

5.1.2 Field theory

We now quantize the field as in chapter 3, but this time we will be using Rindler coordinates. In these coordinates, the equation of motion (3.2) is simply

$$\left( \partial^2_\xi - \partial^2_\eta \right) \phi = 0, \qquad (5.8)$$

in both R and L. Consequently, it is solved by the Rindler plane waves

$$g^R_p(\eta_R, \xi_R) = \begin{cases} \frac{1}{\sqrt{4\pi\omega_p}}\, e^{ip\xi_R - i\omega_p \eta_R} & \text{in } R \\ 0 & \text{in } L \end{cases}, \qquad g^L_p(\eta_L, \xi_L) = \begin{cases} 0 & \text{in } R \\ \frac{1}{\sqrt{4\pi\omega_p}}\, e^{ip\xi_L + i\omega_p \eta_L} & \text{in } L \end{cases}. \qquad (5.9)$$

For a plot of these modes, see figure 1. The Rindler plane wave modes form an orthonormal basis. So, the field φ̂ may be expanded over these modes as

$$\hat\phi = \int dp \left[\, g^R_p\, \hat b^R_p + g^L_p\, \hat b^L_p + \mathrm{h.c.} \,\right]. \qquad (5.10)$$

Note that the modes are now labeled by the Rindler momentum p, which is not the same as the Minkowski momentum. The Rindler vacuum |0⟩_R is defined as the state for which b̂_p|0⟩_R = 0 for all p. As we have seen in section 4.3, it need not coincide with the Minkowski vacuum.


Figure 1: The real part of a pair of Rindler modes at η = t = 0 as a function of the Minkowski spatial coordinate x. For this plot, we have chosen p/a = 2π for both of the modes.

A special combination of modes From the Rindler point of view, how many particles are present in the Minkowski vacuum? In principle, we could answer this question by calculating the Bogolyubov coefficients and then using formula (4.13) to calculate the expectation value of the Rindler number operator. However, there is a more elegant way of calculating the expected number of particles, which was first shown by Unruh [22].

This goes as follows. Define the null coordinates

$$u = x - t, \qquad v = x + t, \qquad (5.11)$$

and the 'Unruh modes'

$$h^I_p = \sqrt{\tfrac{1}{2}\,\mathrm{csch}(\pi\omega_p/a)} \left[ e^{\pi\omega_p/(2a)}\, g^R_p + e^{-\pi\omega_p/(2a)}\, (g^L_{-p})^* \right], \qquad (5.12)$$
$$h^{II}_p = \sqrt{\tfrac{1}{2}\,\mathrm{csch}(\pi\omega_p/a)} \left[ e^{-\pi\omega_p/(2a)}\, (g^R_{-p})^* + e^{\pi\omega_p/(2a)}\, g^L_p \right]. \qquad (5.13)$$

The field may now be expanded as

$$\hat\phi = \int dp \left[\, h^I_p\, \hat c^I_p + h^{II}_p\, \hat c^{II}_p + \mathrm{h.c.} \,\right]. \qquad (5.14)$$

So, the mode operators ĉ_p^I and ĉ_p^II are associated with the Unruh modes h_p^I and h_p^II. These operators also obey the canonical commutation relations (3.12), with the addition that a commutator can only be non-zero if it is between two operators of the same kind (i.e. both of type I or both of type II). The Unruh modes can be written in terms of (u, v) using (5.1) and (5.11). We find they are proportional to

$$h^I_p \propto u^{i\omega_p/a} \quad (p > 0), \qquad (5.15)$$
$$h^I_p \propto v^{-i\omega_p/a} \quad (p < 0), \qquad (5.16)$$
$$h^{II}_p \propto v^{i\omega_p/a} \quad (p > 0), \qquad (5.17)$$
$$h^{II}_p \propto u^{-i\omega_p/a} \quad (p < 0). \qquad (5.18)$$

It is clear that (5.15) and (5.18) are analytic and bounded on the lower complex u plane if we put the branch cut in the upper complex plane. Similarly, (5.16) and (5.17) are analytic and bounded on the lower complex v plane if we put the branch cut in the upper complex plane.

Any function h(z) that is bounded and analytic on the lower complex z plane contains only positive frequency (Minkowski) modes. For if its Fourier expansion

$$h(z) = \int_{-\infty}^{\infty} d\omega\, e^{-i\omega z}\, \tilde h(\omega), \qquad (5.19)$$

where ω is the (Minkowski) frequency and h̃(ω) the Fourier transform of h(z), contained negative frequencies, the factor e^{−iωz} would diverge as Im(z) → −∞, contradicting boundedness.

As we have just seen, h_p^I and h_p^II are always of the form h(z). Therefore, they contain only positive frequency (Minkowski) modes. Consequently, β_{kp} = 0 (see the discussion in section 4.3). Then, by (4.6),

$$\hat c_{p'} |0\rangle_M = \hat a_p |0\rangle_M = 0 \qquad (5.20)$$

for all (p′, p). That is, the vacua of the Unruh modes (5.12, 5.13) and the Minkowski modes (3.4) coincide. Or, in other words: the (annihilation operators belonging to the) Unruh modes annihilate the Minkowski vacuum state.

By equating the expansion of the field over the Unruh modes (5.14) to the expansion of the field over the Rindler modes (5.10), we can express the mode operators of the Rindler modes b̂_p in terms of the mode operators of the Unruh modes ĉ_{p′}. This gives

$$\hat b^R_p = \sqrt{\tfrac{1}{2}\,\mathrm{csch}(\pi\omega_p/a)} \left[ e^{\pi\omega_p/(2a)}\, \hat c^I_p + e^{-\pi\omega_p/(2a)}\, \hat c^{II\dagger}_{-p} \right], \qquad (5.21)$$
$$\hat b^L_p = \sqrt{\tfrac{1}{2}\,\mathrm{csch}(\pi\omega_p/a)} \left[ e^{\pi\omega_p/(2a)}\, \hat c^{II}_p + e^{-\pi\omega_p/(2a)}\, \hat c^{I\dagger}_{-p} \right]. \qquad (5.22)$$

These expressions are very useful to compute expectation values involving the operators (b̂_p, b̂_p†) when the state is the Minkowski vacuum.

The Unruh effect We are now well prepared to calculate the expectation value of the Rindler mode number operator in the Minkowski vacuum,

$$\langle \hat N_p \rangle = \langle 0|_M\, \hat b^{R\dagger}_p \hat b^R_p\, |0\rangle_M.$$

Substituting b̂_p^R by (5.21), using (5.20) and the Bosonic commutation relations of the operators ĉ_p, this is

$$\langle \hat N_p \rangle = \frac{1}{e^{2\pi\omega_p/a} - 1}\, \delta(0). \qquad (5.23)$$

The factor δ(0) is only there because Rindler modes are delta function normalized, as in (3.5), which causes the commutation relations to be in terms of delta functions, like in (3.12). If we had used wavepacket modes, which do have a finite norm, the factor would be unity. Additionally, the act of localizing the modes induces small corrections to (5.23), as we will show in chapter 9 (cf. equation 9.23).⁷

⁷ So, the spectrum after constructing normalized wavepackets is not 'identical' to (5.23) without the factor δ(0); there are small corrections.

The spectrum (5.23) is exactly the Planckian spectrum we found before in (2.9), where the temperature equals

$$T = \frac{a}{2\pi}. \qquad (5.24)$$

Thus, a Rindler observer traveling through a space that is empty to a Minkowski observer will see thermal radiation with a temperature proportional to the Rindler observer's acceleration a. This is known as the Unruh effect.

Temporarily restoring units, we have

$$T = \frac{a}{2\pi} \cdot \frac{\hbar}{c\, k_B} \approx a \times 4 \times 10^{-21}\ \mathrm{K\, s^2\, m^{-1}},$$

with ħ ≈ 10⁻³⁴ J s the reduced Planck constant, c ≈ 3 × 10⁸ m s⁻¹ the speed of light in vacuum, and k_B ≈ 1.4 × 10⁻²³ J/K Boltzmann's constant. We can now see that both in the limit of 'no quantum mechanics' (i.e. ħ → 0) and 'no relativity' (i.e. c → ∞) the temperature goes to zero. So, the Unruh effect is manifestly a relativistic, quantum mechanical effect.
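The numerical prefactor quoted above is easy to verify with the CODATA values of the constants:

```python
import numpy as np

# Unruh temperature in SI units: T = (a / 2π) · ħ / (c k_B).
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
kB = 1.380649e-23        # J / K
coeff = hbar / (2 * np.pi * c * kB)
print(coeff)             # ≈ 4e-21 K per unit of acceleration (m/s²)
print(coeff * 9.81)      # Unruh temperature at Earth-surface acceleration g
```

Even at a = g, the Unruh temperature is of order 10⁻²⁰ K, which makes concrete why enormous accelerations are needed for a measurable effect.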

The acceleration needs to be tremendous for the temperature to be of the order of one Kelvin. For macroscopic objects, such accelerations are far beyond current technological reach. However, an observer 'hovering' just outside the horizon of a black hole feels a proper acceleration that depends on his distance to the horizon. This acceleration goes to infinity as the distance of the observer to the horizon goes to zero. So in the vicinity of a black hole, we do encounter such tremendous accelerations. This will give rise to the so-called Hawking radiation, which causes black holes to evaporate. More about this in part two of this thesis.

It is, with the derivation given in this section, still questionable whether a Rindler observer will actually detect any particles. Quite surely, the observations of such an observer do not depend on the way he or she decides to quantize the field by writing some equations on a piece of paper. Maybe a Rindler observer just picks the 'wrong' basis. It is possible, however, to mathematically model a particle detector, known as an Unruh–DeWitt detector, that couples to the free field. It can be shown that if such a detector is uniformly accelerated, it detects particles with a thermal spectrum with temperature T = a/(2π) [17].

5.2 1 + d dimensions

In this subsection, we treat Rindler space in an arbitrary number of spatial dimensions. For a comprehensive review of scalar field theory in Rindler coordinates of arbitrary dimension, see Takagi [23] or Crispino et al. [24].

5.2.1 Geometry

The general Rindler metric is obtained by adding a flat, n-dimensional hyperplane perpendicular to dρ and dη̃ to the 1+1 dimensional Rindler metric (cf. 5.4),

$$ds^2 = -\rho^2 d\tilde\eta^2 + d\rho^2 + d\mathbf x_\perp^2. \qquad (5.25)$$


Figure 2: Spacetime diagram of Rindler space, with the four patches R, L, F and P. Lines of constant ξ are hyperbolas, whereas lines of constant η are straight lines through the Minkowski origin. From the diagram, it is clear that a Rindler observer in R cannot perform measurements on (i.e. both send signals to and receive signals from) any of the other wedges. The border of the region R is therefore a horizon, known as the Rindler horizon. Note that in L, the 'future' is towards negative η, and the Minkowski origin is, like in R, towards negative ξ.

So we have D = 1 + d = 1 + 1 + n dimensions in total: one time direction, one 'parallel' direction, and n 'perpendicular' directions. The Rindler horizon is now a (hyper)plane located at ρ = 0. This metric is, to good approximation, the metric people experience on earth if we put n = 2 and the proper acceleration a = g ≈ 9.8 m s⁻², since people are small compared to the radius of the earth and experience an effectively constant acceleration.


5.2.2 Field theory

Writing the Klein-Gordon equation (3.2) using the general Rindler metric (5.25), one finds the equation of motion

$$\left[ -\frac{1}{\rho^2}\partial^2_{\tilde\eta} + \partial^2_\rho + \frac{1}{\rho}\partial_\rho + \partial^2_{\mathbf x_\perp} \right] \phi = 0. \qquad (5.26)$$

As an ansatz, we separate the (spatial) perpendicular directions x_⊥ from the parallel direction ρ, and assume the solutions in the perpendicular direction to be like plane waves,

$$\tilde f^{(\gamma)}_p(\tilde\eta, \mathbf x) = \Theta(\gamma\rho)\, h^{(\gamma)}_p(\rho)\, \exp\left[\, i\, \mathbf p_\perp \cdot \mathbf x_\perp - i\gamma\Omega\tilde\eta \,\right], \qquad (5.27)$$

with Θ the Heaviside step function. We switched to a more compact notation, where γ = 1 in the right wedge and γ = −1 in the left. This mode is, as of yet, unnormalized, which we remind ourselves of by writing a tilde over the function. Normalization will be postponed until section 9.3.

Plugging the ansatz (5.27) into the equation of motion, we find that f̃_p solves the equation of motion (5.26) if

$$\left[ \rho^2 \frac{d^2}{d\rho^2} + \rho \frac{d}{d\rho} - \left( \rho^2 |\mathbf p_\perp|^2 - \Omega^2 \right) \right] h^{(\gamma)}_p(\rho) = 0. \qquad (5.28)$$

This is, by definition, solved by the modified Bessel function of the second kind (cf. equation 0.15) when |p_⊥| ≠ 0,

$$h^{(\gamma)}_p(\rho) = K_{i\Omega}(|\mathbf p_\perp|\rho).$$

If |p_⊥| = 0, the solution is simply

$$h^{(\gamma)}_p = e^{ia\Omega\xi},$$

with ξ = log(aρ)/a (cf. equation 5.4). So, a basis for solutions to the massless⁸ Klein-Gordon equation in 1 + d dimensional Rindler space is given by

$$\tilde f^{(\gamma)}_p = \begin{cases} \Theta(\gamma\rho)\, K_{i\Omega}(|\mathbf p_\perp|\rho)\, \exp\left[\, i\, \mathbf p_\perp \cdot \mathbf x_\perp - i\gamma\Omega\tilde\eta \,\right] & |\mathbf p_\perp| \neq 0 \\ \exp\left[\, i\, \mathbf p_\perp \cdot \mathbf x_\perp + ia\Omega\xi - i\gamma\Omega\tilde\eta \,\right] & |\mathbf p_\perp| = 0 \end{cases} \qquad (5.29)$$

Effective potential Let us momentarily return to the equation that governs h_p^{(γ)}(ρ). If we use the Rindler coordinates (ξ, η, x_⊥) rather than the Rindler coordinates (ρ, η̃, x_⊥) (see equation 5.4), equation (5.28) reads

$$\left[ -\frac{d^2}{d\xi^2} + V(|\mathbf p_\perp|, \xi) \right] h^{(\gamma)}_p = (a\Omega)^2 h^{(\gamma)}_p, \qquad (5.30)$$

with

$$V(|\mathbf p_\perp|, \xi) = |\mathbf p_\perp|^2\, e^{2a\xi}. \qquad (5.31)$$

⁸ When we consider the Klein-Gordon equation with a mass term, we have |p_⊥|² + m² instead of |p_⊥|² in the equations above.

So, there is an effective Schrödinger equation for the part of the field that depends on the distance from the horizon, with a potential V that vanishes for |p_⊥| = 0, but confines modes to the region close to the horizon for |p_⊥| > 0.

Note that this potential is absent in the 1+1 dimensional case. This is perfectly sound, since as far as a general Rindler plane wave with |p_⊥| = 0 is concerned, space is essentially 1+1 dimensional.

The special combination of modes In the previous section, a special combination of 1+1 dimensional Rindler modes, called the Unruh modes, was shown to annihilate the Minkowski vacuum. We saw how the mode operators b̂_p of the Rindler modes can be written in terms of the mode operators ĉ_p that annihilate the Minkowski vacuum. In a similar way, it can be shown that in the general case, the Rindler mode operators can be rewritten as

$$\hat b^R_p = \hat b^R_{(\Omega, \mathbf p_\perp)} = \sqrt{\tfrac{1}{2}\,\mathrm{csch}(\pi\Omega)} \left[ e^{\pi\Omega/2}\, \hat c_{(-\Omega, \mathbf p_\perp)} + e^{-\pi\Omega/2}\, \hat c^\dagger_{(\Omega, -\mathbf p_\perp)} \right], \qquad (5.32)$$
$$\hat b^L_p = \hat b^L_{(\Omega, \mathbf p_\perp)} = \sqrt{\tfrac{1}{2}\,\mathrm{csch}(\pi\Omega)} \left[ e^{\pi\Omega/2}\, \hat c_{(\Omega, \mathbf p_\perp)} + e^{-\pi\Omega/2}\, \hat c^\dagger_{(-\Omega, -\mathbf p_\perp)} \right], \qquad (5.33)$$

where the operators ĉ satisfy the commutation relations

$$\left[ \hat c_{(\pm\Omega, \mathbf p_\perp)},\, \hat c^\dagger_{(\pm'\Omega', \mathbf p'_\perp)} \right] = \delta(\Omega - \Omega')\, \delta(\mathbf p_\perp - \mathbf p'_\perp),$$

and annihilate the Minkowski vacuum [24]. Because of the minus signs in the frequency labels, there is no need for the superscripts I and II (cf. 5.21).


6 Continuous variable quantum information

Calculations of the entropy of continuous variable systems, like modes of the vacuum, are in general too hard to be tractable. In many physical discussions, therefore, the system is reduced to a 'cartoon version' with low dimensionality, like the ubiquitous system of particles with spin 1/2.

There is, however, a class of states, called the Gaussian states, for which these calculations are in fact tractable. This is because a Gaussian state can be characterized by only a few expectation values, or moments, all of which are contained in the so-called covariance matrix. In this chapter, we will see how the entropy of a Gaussian state can be expressed in terms of these expectation values.

Gaussian states are not only mathematically practical, they are also very realistic. Almost all of the continuous variable states produced in the lab are in fact Gaussian [25]. The class contains all coherent states, squeezed states, and, most importantly for us, the Minkowski vacuum.

We only treat the concepts that are essential for the further development of this thesis. A more thorough account can be found in Demarie [26] or Paris [20].

6.1 The Wigner function

In classical (statistical) mechanics we can describe the state of a particle as a point in phase space: it has a certain momentum and a certain position. More generally, if we lack some information about the particle, the state is described by the Liouville density function, which is a probability density function on phase space. The time evolution of this function is governed by the Poisson brackets and the Hamiltonian.

The situation in the standard formulation of quantum mechanics is quite different. There, we have a wave function that is a function of either position or momentum. There is, however, a formulation of quantum mechanics that is closer to the classical picture. This is known as the phase space formulation of quantum mechanics.

The central object in this formulation is the Wigner function, which is the direct quantum mechanical analogue of the Liouville probability density function. It is defined on the phase space with variables p, q as [26]

$$W(q, p) = \frac{1}{\pi} \int dq'\, \langle q - q'|\, \hat\rho\, |q + q'\rangle\, e^{2iq'p}. \qquad (6.1)$$

This relation can be inverted, thus yielding the matrix elements of the density matrix,

$$\langle x|\, \hat\rho\, |y\rangle = \int dp\, W\!\left( \frac{x + y}{2},\, p \right) e^{ip(x - y)}. \qquad (6.2)$$

Since the relation between the density matrix and the Wigner function is essentially one-to-one, the Wigner function contains all the information about the state.

The conceptual advantage of working with the Wigner function is its resemblance to the Liouville probability density function. Namely, the marginal distributions of the Wigner function are the probability densities of finding the particle at position q, or with momentum p, respectively. That is, from (6.1),

$$\int_{-\infty}^{\infty} dp\, W(q, p) = \langle q|\, \hat\rho\, |q\rangle = |\psi(q)|^2, \qquad (6.3)$$
$$\int_{-\infty}^{\infty} dq\, W(q, p) = \langle p|\, \hat\rho\, |p\rangle = |\tilde\psi(p)|^2, \qquad (6.4)$$

where the last equality in (6.3) and in (6.4) holds in case the system is in a pure state |ψ⟩.
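These marginal relations can be checked numerically for the simplest case, the vacuum of a single mode. Its Wigner function is the Gaussian W(q, p) = e^{−q²−p²}/π — this closed form is an assumption of the example, consistent with ⟨q̂²⟩ = ⟨p̂²⟩ = 1/2 in the quadrature convention used in this thesis; the grid is an arbitrary numerical choice:

```python
import numpy as np

# Vacuum Wigner function on a phase-space grid.
q = np.linspace(-8, 8, 801)
p = np.linspace(-8, 8, 801)
dq = q[1] - q[0]                               # same spacing in p
Q, P = np.meshgrid(q, p, indexing="ij")
W = np.exp(-Q**2 - P**2) / np.pi

# Marginal over p reproduces the position density |ψ(q)|², eq. (6.3).
marginal_q = W.sum(axis=1) * dq                # ∫ dp W(q, p)
psi_sq = np.exp(-q**2) / np.sqrt(np.pi)        # |ψ(q)|² for ψ(q) = π^{-1/4} e^{-q²/2}
print(np.allclose(marginal_q, psi_sq))         # True

# The Wigner function integrates to one over phase space.
print(np.isclose(W.sum() * dq * dq, 1.0))      # True
```

For the vacuum the Wigner function is everywhere positive; states whose Wigner function dips below zero (e.g. Fock states) show the 'quasi probability' character discussed next.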

So far, everything is in accord with classical intuition, but 'quantum weirdness' is bound to make its entrance somewhere, and it does so here. Namely, out of the axioms of a probability distribution, the Wigner function only satisfies normalization. The Wigner function can be smaller than zero, and it does not satisfy σ-additivity. Hence it is called a 'quasi probability distribution'.

6.2 Gaussian states

Consider an N-mode system (cf. equation 3.16)

$$\mathcal H \subseteq \mathcal H_1 \otimes \mathcal H_2 \otimes \ldots \otimes \mathcal H_N, \qquad (6.5)$$

with mode operators â_m that satisfy the Bosonic commutation relations (cf. equation 3.12)

$$[\hat a_m, \hat a_{m'}] = 0, \qquad [\hat a^\dagger_m, \hat a^\dagger_{m'}] = 0, \qquad [\hat a_m, \hat a^\dagger_{m'}] = \delta_{mm'}. \qquad (6.6)$$

Define X = (q_1, p_1, \ldots, q_N, p_N)^T to be a vector in the 2N-dimensional phase space.

A Gaussian state, then, is a state with a Gaussian Wigner function,

$$W(X) \propto \exp\left\{ -\tfrac{1}{2} (X - \bar X)^T \sigma^{-1} (X - \bar X) \right\},$$

where σ⁻¹ is some 2N × 2N dimensional matrix and X̄ some linear term. The proportionality factor is fixed by normalization. As it will turn out, the linear term does not affect the entanglement entropy of the state, so as far as we are concerned, a Gaussian state is just

$$W(X) \propto \exp\left\{ -\tfrac{1}{2} X^T \sigma^{-1} X \right\}.$$

Now define the quadrature operators (cf. equation 4.10)

$$\hat q_m = \frac{1}{\sqrt 2}\left(\hat a_m + \hat a^\dagger_m\right), \qquad \hat p_m = \frac{1}{i\sqrt 2}\left(\hat a_m - \hat a^\dagger_m\right), \qquad (6.7)$$

and the vector that contains them,

$$\hat R := (\hat q_1, \hat p_1, \ldots, \hat q_N, \hat p_N)^T. \qquad (6.8)$$

Using the (straightforward) higher dimensional generalization of (6.2), one can calculate the expectation values of the quadrature operators and their second moments (i.e. ⟨q̂_m²⟩ and ⟨p̂_m²⟩). After some algebra, one finds that the matrix elements of σ can be expressed directly in terms of these expectation values,

$$[\sigma]_{kp} = \tfrac{1}{2} \langle \{\hat R_k, \hat R_p\} \rangle - \langle \hat R_k \rangle \langle \hat R_p \rangle, \qquad (6.9)$$

where {·, ·} is the anti-commutator. This matrix is known as the covariance matrix.

6.3 Thermal states revised

We have already discussed thermal states in section 2.2. In this section, we will see thermal states in the context of Gaussian states.

Consider a collection of N unentangled simple harmonic oscillators, labeled by k, all of which are in a thermal state (cf. equation 2.4),

$$\hat\rho^{\mathrm{th}} = \bigotimes_{k=1}^N \hat\rho^{\mathrm{th}}_k, \qquad \hat\rho^{\mathrm{th}}_k = \frac{e^{-\beta_k \omega_k \hat a^\dagger_k \hat a_k}}{\mathrm{tr}\!\left( e^{-\beta_k \omega_k \hat a^\dagger_k \hat a_k} \right)}. \qquad (6.10)$$

We will not show it here, but such a thermal state not only has a Gaussian density operator, it also has a Gaussian Wigner function [20]. Using the expression for the matrix elements of the covariance matrix (6.9) and the expression for the quadrature operators in terms of the mode operators (6.7), one finds

$$\sigma^{\mathrm{th}} = \mathrm{diag}\!\left( \langle\hat N_1\rangle + \tfrac{1}{2},\ \langle\hat N_1\rangle + \tfrac{1}{2},\ \ldots,\ \langle\hat N_N\rangle + \tfrac{1}{2},\ \langle\hat N_N\rangle + \tfrac{1}{2} \right), \qquad (6.11)$$

where N̂_k = â_k† â_k. Therefore, an N-mode thermal state has a diagonal covariance matrix.

The converse is also true. Namely, we can view the elements of (6.11) as some given positive real numbers. There exist inverse temperatures β_k such that the elements of (6.11) coincide with these numbers (see equation 2.9). Thus, any Gaussian state with a diagonal covariance matrix is thermal in some basis.

Of thermal states, we already know the entropy. Generalizing equation 2.11 to the entropy of an N-mode, unentangled system, we get

$$S(\hat\rho^{\mathrm{th}}) = \sum_k \left[ \left( 1 + \langle\hat N_k\rangle \right) \log\left( 1 + \langle\hat N_k\rangle \right) - \langle\hat N_k\rangle \log \langle\hat N_k\rangle \right]. \qquad (6.12)$$

We did not show explicitly how to find these temperatures; we just mentioned that it is possible to find them. This is because for what follows, the explicit temperatures are irrelevant, whereas their existence is essential.


6.4 Symplectic transformations and the entropy of general Gaussian states

In this section, we will show that the transformation to the basis in which the covariance matrix is diagonal leaves the so-called symplectic eigenvalues invariant. As we will see, we can express the entropy of a thermal state in terms of its symplectic eigenvalues. We can then calculate the entropy of any Gaussian state: we just calculate the symplectic eigenvalues of the covariance matrix, and plug them into the equation for the entropy of a thermal state.

Symplectic eigenvalues To introduce the symplectic eigenvalues, we start with symplectic matrices. As was already mentioned in section 4.2, a real matrix S is symplectic if and only if it satisfies

$$S\Omega S^T = \Omega, \qquad (6.13)$$

where Ω is some fixed invertible skew-symmetric matrix. There are multiple choices for this matrix. We will take

$$\Omega = \bigoplus_{k=1}^{N} \omega, \qquad \omega = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \qquad (6.14)$$

It is straightforward to show that symplectic matrices have unit determinant. The set of all 2N × 2N symplectic matrices forms the group Sp(R, 2N) under matrix multiplication. We proceed with a theorem that applies to the covariance matrix, without stating its proof. For the original theorem and a proof, see Williamson [27].
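As a quick numerical sanity check (our illustration, not from the thesis), one can verify (6.13) and the unit determinant for a simple symplectic matrix; the single-mode squeezing matrix $S = \operatorname{diag}(e^r, e^{-r})$ is an assumed illustrative choice:

```python
import numpy as np

omega = np.array([[0.0, 1.0], [-1.0, 0.0]])  # the 2x2 block of eq. (6.14)

# A one-mode squeezing transformation; illustrative choice, r is arbitrary.
r = 0.7
S = np.diag([np.exp(r), np.exp(-r)])

assert np.allclose(S @ omega @ S.T, omega)   # S Omega S^T = Omega, eq. (6.13)
assert np.isclose(np.linalg.det(S), 1.0)     # symplectic implies unit determinant
```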

Theorem 6.1. (Williamson) For any real 2N × 2N symmetric positive definite matrix σ there exists a real symplectic matrix S such that

$$\sigma = S^T W S, \qquad W = \bigoplus_{k=1}^{N} \sigma_k \mathbb{1}_{2\times 2}, \qquad \forall k \in \{1, \ldots, N\}: \sigma_k > 0.$$

The matrices S and W are unique up to a permutation of the $\sigma_k$.

By Hermiticity of the quadrature operators (6.7), the covariance matrix σ (6.9) is real, symmetric and positive, so that we can apply this theorem to the covariance matrix. The $\sigma_k$ are called the symplectic eigenvalues of σ and are, in general, not equal to the (regular) eigenvalues of σ.

Finding the symplectic eigenvalues In general, it can be hard to explicitly find the symplectic transformation that diagonalizes the covariance matrix. Fortunately, this is not necessary to find the symplectic eigenvalues: the symplectic eigenvalues can be found by computing the (regular) positive eigenvalues of iΩσ. Switching to an abbreviated notation where SE stands for "symplectic eigenvalues" and PE for "positive eigenvalues", this statement reads

$$\mathrm{SE}(\sigma) = \mathrm{PE}(i\Omega\sigma). \qquad (6.15)$$

This equivalence can be shown by using that PE(W) = PE(iΩW), that S is symplectic, and the fact that similar matrices have the same spectrum,

$$\mathrm{SE}(\sigma) \equiv \mathrm{PE}(W) = \mathrm{PE}(i\Omega W) = \mathrm{PE}(iS\Omega S^T W) = \mathrm{PE}(i\Omega S^T W S) = \mathrm{PE}(i\Omega\sigma). \qquad (6.16)$$

The entropy in terms of the symplectic eigenvalues By Williamson's theorem (theorem 6.1), any covariance matrix can be diagonalized by a symplectic transformation, thereby casting it into the form of the covariance matrix of a thermal state (6.11). Comparing the matrix W from Williamson's theorem to the covariance matrix of a thermal state (6.11), we see that the relation between the symplectic eigenvalues and the number operator in the thermal basis reads

$$\langle\hat N_k\rangle + \tfrac{1}{2} = \sigma_k.$$

With (6.12) this directly gives us the entropy in terms of the symplectic eigenvalues of a general Gaussian state,

$$S(\hat\rho) = \sum_k \left[\left(\tfrac{1}{2} + \sigma_k\right)\log\left(\tfrac{1}{2} + \sigma_k\right) - \left(\sigma_k - \tfrac{1}{2}\right)\log\left(\sigma_k - \tfrac{1}{2}\right)\right]. \qquad (6.17)$$

So, to summarize: in order to compute the entropy of a Gaussian state, one computes the positive eigenvalues of iΩσ and plugs these into equation 6.17.
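The recipe just summarized can be sketched numerically (our illustration under the conventions of (6.14); the function names are ours, not the thesis'):

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    """Positive eigenvalues of i*Omega*sigma (cf. eq. 6.16)."""
    n = sigma.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # eq. (6.14)
    ev = np.linalg.eigvals(1j * omega @ sigma).real
    return np.sort(ev[ev > 0.0])

def gaussian_entropy(sigma):
    """Von Neumann entropy of a Gaussian state via eq. (6.17)."""
    s = symplectic_eigenvalues(sigma)
    total = np.sum((s + 0.5) * np.log(s + 0.5))
    # The second term of (6.17) vanishes for pure modes (sigma_k = 1/2),
    # so guard against log(0).
    second = np.where(s > 0.5, (s - 0.5) * np.log(np.where(s > 0.5, s - 0.5, 1.0)), 0.0)
    return total - np.sum(second)

# The vacuum, sigma = (1/2) * identity, is pure and has zero entropy.
assert np.isclose(gaussian_entropy(0.5 * np.eye(2)), 0.0)
```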

6.5 One and two mode Gaussian states

To make things a bit more explicit, we will now compute the entropy of a single-mode and a two-mode Gaussian state, where $\langle\hat q\rangle$ and $\langle\hat p\rangle$ are already zero, or are put to zero by a local transformation (which does not affect the entropy).

Single mode From (6.9), one finds the covariance matrix of a single mode Gaussian state,

$$\sigma = \begin{pmatrix} \langle\hat q^2\rangle & \tfrac{1}{2}\langle\{\hat q, \hat p\}\rangle \\ \tfrac{1}{2}\langle\{\hat q, \hat p\}\rangle & \langle\hat p^2\rangle \end{pmatrix}. \qquad (6.18)$$

Its symplectic eigenvalue equals

$$\mathrm{SE}(\sigma) = \mathrm{PE}(i\Omega\sigma) = \sqrt{\langle\hat p^2\rangle\langle\hat q^2\rangle - \tfrac{1}{4}\langle\{\hat q, \hat p\}\rangle^2} =: \sigma_1. \qquad (6.19)$$
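A numerical spot check of (6.19) (our illustration; the second moments below are assumed example values): for a 2×2 covariance matrix, the positive eigenvalue of iΩσ equals the square root of det σ.

```python
import numpy as np

# Illustrative second moments for a mixed, correlated single mode.
q2, p2, qp = 1.2, 0.9, 0.4          # <q^2>, <p^2>, (1/2)<{q,p}>  (assumed)
sigma = np.array([[q2, qp], [qp, p2]])

omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
ev = np.linalg.eigvals(1j * omega @ sigma).real
sympl = ev[ev > 0.0][0]             # positive eigenvalue of i*Omega*sigma

# sqrt(det sigma) = sqrt(<q^2><p^2> - (1/4)<{q,p}>^2), cf. eq. (6.19)
assert np.isclose(sympl, np.sqrt(np.linalg.det(sigma)))
```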

It is sometimes computationally more convenient to have $\langle\hat q_k^2\rangle$, $\langle\hat p_k^2\rangle$ and $\langle\{\hat q, \hat p\}\rangle$ in terms of the mode operators rather than the quadrature operators. Using the expression for the quadrature operators in terms of the mode operators (6.7) and the Bosonic commutation
