
Master Physics and Astronomy

Track: Theoretical Physics Master Thesis

Noise-resilient simulation of (exact) MPS states

on a quantum computer

Claartje Ottens

10534911

September 2019 – December 2020
December 21, 2020
60 EC

Supervisor:

dhr. prof. dr. C.J.M. (Kareljan) Schoutens

Examiner:


Abstract

We investigate a noise-resilient approach to studying many-body states on NISQ devices. This approach reduces the circuit needed to compute observables on these many-body states, so that the circuit size and the number of qubits can be bounded. When the mixing rate is small, this leads to a bound on the error rate that is smaller than the usual error bound. We test this approach on the PXP states: eigenstates of the Rydberg atom chain that show non-thermal behaviour and have exact MPS expressions.

To turn the PXP state into a quantum circuit we use a method that turns the MPS into a sequentially generated state; we also apply this successfully to the related AKLT state. Using these quantum circuits we simulated both states with the quantum computing package Cirq. For the PXP state we simulated the local energy and the spin correlator. For the latter we found a correlation length of ξ = 0.42, close to the theoretical correlation length of ξ = 1/(2 ln(3)) ≈ 0.455.

We found that the tested approach improves the efficiency of the computations, but improves the noise-resilience only in some very specific situations. Because reducing the circuit does not worsen the error rate, this leads to a more efficient use of resources. It also shows that for computing a specific observable only a small part of the circuit is relevant.

From the theoretical error bounds and our simulations we can bound the mixing rate. For the spin correlator this is δ(1, 2) > 0.16 and for the local energy 0.3 < δ(2, 1) < 1.8.

Title: Noise-resilient simulation of (exact) MPS states on a quantum computer
Author: Claartje Ottens, claartjeottens@gmail.com, 10534911

Supervisor: dhr. prof. dr. C.J.M. (Kareljan) Schoutens
Second grader: Dr. M. (Māris) Ozols

Date: December 21, 2020

QuSoft, Research Center for Quantum Software Science Park 123, 1098 XG Amsterdam


Contents

1. Introduction
2. Spin chain models
   2.1. Quantum lattice models
        2.1.1. Entanglement
   2.2. Tensor networks
   2.3. Matrix product state
        2.3.1. Sequential generation of MPS
   2.4. AKLT
   2.5. PXP
   2.6. Connection between the PXP and AKLT model
3. Quantum simulation
   3.1. Preliminary concepts
   3.2. Quantum circuit for PXP state
        3.2.1. The matrices
        3.2.2. The quantum circuit
   3.3. Quantum circuit for AKLT state
   3.4. Noise
   3.5. Concepts to improve noise-resilience
        3.5.1. Explanation and practical use
        3.5.2. Framework
        3.5.3. Past causal cone
        3.5.4. Mixing rate
        3.5.5. Bounding the error rate
   3.6. Fitting PXP in the noise-resilient framework
4. Results
   4.1. Technical aspects
   4.2. Correlation length PXP
        4.2.1. Theory
        4.2.2. Simulations
   4.3. Energy density PXP
        4.3.1. Theory
        4.3.2. Simulations
   4.4. Noise-resilience
        4.4.1. Noise on part of the circuit
        4.4.2. Past causal cone
        4.4.3. Leaving out part of the circuit
5. Concluding
   5.1. Summary
   5.2. Open questions
A. Decomposing rotation gates
B. PXP
C. Code
   C.1. PXP.py
   C.2. results.py
   C.3. AKLT.py
D. Some terminology


Chapter 1

Introduction

A promising technological development of our time is the quantum computer. The concept of the quantum computer was first proposed by Richard Feynman in 1981. Since then great progress has been made in the world of quantum computing. First a whole new branch of information science was developed, as not all concepts of Shannon's information theory carry over one-to-one to quantum computing. In the 1990s Peter Shor discovered his factoring algorithm, an algorithm that can quickly factorize large numbers and can thus break many of our cryptosystems. On the hardware side the first successes were also achieved. The first qubits were stabilized in 1998, when Jonathan Jones and Michele Mosca used a two-qubit NMR quantum computer to solve Deutsch's problem. In 2019 IBM revealed a quantum computer containing more than 50 qubits, and Google claimed to have reached quantum supremacy. But although the field evolves rapidly, quantum computers still seem a long way from being socially relevant.

In 2018 John Preskill wrote an article about the current stage of quantum computing [23] and called it the Noisy Intermediate Scale Quantum (NISQ) era. This is the first stage in which there are enough qubits available to tackle relevant problems, but those problems have to remain small because the number of available qubits is limited and noise is a major problem.

Finding suitable problems is hard because a lot can still be simulated better, faster and more precisely on a classical computer. The problems that quickly become too hard to simulate classically are problems with quantum effects. One example is quantum chemistry: the behaviour of molecules is governed by quantum physics and becomes too expensive to simulate on a classical computer even for relatively small molecules. Simulating many-body states is another field that is promising for these near-term quantum computers.

But the noisiness of the available quantum computers is a great problem. It is still hard to keep qubits stable for a longer time and to perform operations on them, and there are not yet enough qubits available to do error correction. Therefore it is important to develop algorithms that can deal with this noise. Some algorithms and procedures are more noise-resilient than others. The variational quantum eigensolver, for example, is a good candidate for near-term use because it has a natural resilience to noise: instead of one long computation (needing a long coherence time) it performs many short computations, after each of which the qubits are initialized again (requiring only a short coherence time). For the same reason circuits with small depth are good candidates: the shorter the circuit, the shorter the needed coherence time and the easier it is to keep the qubits protected against noise for the duration of the computation. There are many papers describing research in this area; in this thesis we test the concepts of one of them.


In September 2019 Borregaard et al. [6] published a paper about the noise-robust exploration of quantum matter on NISQ devices. They claim that knowing what information you want to extract from a simulated quantum state helps to design a noise-resilient quantum circuit. They arrive at the counterintuitive conclusion that excluding part of the circuit from the computation can give more accurate results when measuring the value of an observable. We test this on the PXP state.

The PXP state is a very specific eigenstate of the Rydberg atom chain. In experiments this chain showed unusual behaviour: a non-thermalizing oscillation. This non-thermal behaviour is the result of scarred eigenstates, which are excitations of the PXP state. A remarkable property of the PXP state is that it has an exact MPS representation. To make this state usable for testing the concepts of Borregaard et al., it needs to be put in a sequential form. This sequential generation was developed by Schön et al. [26, 25] and uses the singular value decomposition.

From this sequential generation we have designed a quantum circuit in elementary gates that simulates the state. It is possible to find properties of the PXP state from these simulations, like the correlation length. First we test the locality of noise in the PXP state, then we test the theory of Borregaard et al. with two observables, the local energy and the spin correlator between two sites.

This thesis is structured as follows. We start with an introduction to spin chain models and matrix product states in chapter 2. Then we introduce two examples of states with an exact MPS representation, the AKLT and the PXP state, in sections 2.4 and 2.5. In chapter 3 we treat the subject of quantum simulation and introduce the theory of Borregaard et al.; we work towards the quantum circuits for the AKLT and the PXP state. In chapter 4 we use the circuit of the PXP state to run simulations, find the values of some observables and test the theory of Borregaard et al. Finally, in section 4.4.4 we compute the bounds from the paper and connect the experimental results to the theory.


Chapter 2

Spin chain models

This chapter is a short introduction to spin chain models. A useful description of spin chains is given by matrix product states. We will use this format to describe two spin chain models: the PXP model and the AKLT model.

2.1. Quantum lattice models

A spin chain is the one-dimensional version of a quantum lattice model. When talking about quantum lattice models, we think about a spin system having an underlying lattice. This lattice is described by a graph G = (V, E) with V the vertices and E the set of edges. Interactions in the physical system are local, meaning that the vertices only interact with finitely many neighbours. An example is a model where all interactions except for direct neighbour interactions can be neglected. These models capture strongly correlated quantum systems very well.

Definition 1 (System size). The system size of a quantum lattice (or number of sites) is given by the number of vertices, N = |V |.

Every vertex hosts a spin system with d degrees of freedom and Hilbert space C^d. The Hilbert space of the total spin chain is given by H = (C^d)^⊗N. The Hamiltonians we will consider are local and of the form

H = Σ_{j∈V} h_j.

A Hamiltonian is called k-local if each term h_j is supported on at most k sites. A k-local observable O is supported on only k sites; its support on the other sites is trivial.

2.1.1. Entanglement

For a Hamiltonian with only nearest-neighbour interactions, all interactions between more distant sites have to be passed on through nearest neighbours. Since the direct interaction between different segments of a spin chain is then minimal, you would expect correlations between different segments to decay with distance. This is called clustering of correlations. Nearby lattice sites will be correlated to some extent, but this correlation decays quickly when the sites are further apart. In gapped models this decay is exponential,

C(r) ∼ e^{−r/ξ}, (2.1)

with C(r) the correlation as a function of the distance r = |x − y| between two sites x, y and ξ the correlation length. This expression will be derived in section 4.2.1. For gapless models this is not true: there we find an algebraic decay instead of an exponential one. The 'correlation length' then loses its meaning and is effectively infinite.

Another way to quantify entanglement is the von Neumann entropy

S(ρ) = −Tr(ρ log ρ). (2.2)

Let A and B be two subsystems of the quantum system S in state ρ. To get the state ρ_A of subsystem A we have to trace out system B. The more entangled A and B are, the more mixed ρ_A becomes. The entropy of ρ_A tells us how entangled A is with B. When A and B are in a tensor product state, the entropy S(ρ_A) is zero.

So how does this entropy scale with the size of the region A, |A|? A first thought would be that the entropy scales extensively with the region, S(ρ_A) ∼ O(|A|), and for random quantum states this is quite accurate. But for gapped many-body ground states this is not the case: there the entropy scales with the boundary of A. If we define the boundary as

∂A = {j ∈ A : ∃k ∈ B with dist(j, k) = 1}, (2.3)

then the entropy of A satisfies the area law

S(ρA) = O(|∂A|). (2.4)

For spin chains, this boundary consists of two sites only.

This shows that the entanglement of ground states of gapped many-body systems is small and mostly local. Correlations decay fast, and due to the area law the entanglement of the many-body state is much smaller than it could have been. This observation tells us that ground states only live in a small corner of the entire Hilbert space; only a small part of the Hilbert space is physical.
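As a concrete illustration of equation (2.2), the following minimal numpy sketch (my own example, not part of the thesis code) computes S(ρ_A) for a block of qubits by building the reduced density matrix of the block and diagonalizing it. For a product state the entropy vanishes, while for a GHZ state any nontrivial cut gives log(2).

```python
# Minimal sketch (not thesis code): von Neumann entropy of a block of qubits.
import numpy as np

def block_entropy(psi, n_sites, block_size):
    """Entropy of the first `block_size` sites of an n-site qubit chain."""
    psi = psi.reshape(2**block_size, 2**(n_sites - block_size))
    rho_A = psi @ psi.conj().T            # reduced density matrix Tr_B |psi><psi|
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

n = 6
product = np.zeros(2**n); product[0] = 1.0            # |000000>
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1/np.sqrt(2)  # |0...0> + |1...1>

print(block_entropy(product, n, 3))   # ~0.0
print(block_entropy(ghz, n, 3))       # ~log(2) = 0.693...
```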

2.2. Tensor networks

Because the entanglement in ground states of quantum spin chains is limited, they can be modeled by tensor networks. A tensor is a multidimensional array. The order of the tensor is the dimension of the array. A scalar is a tensor of order 0, a vector a tensor of order 1 and a matrix a tensor of order 2. Penrose introduced a graphical notation that will be used in this thesis [21]. In this notation a box represents a tensor and an edge an index. A closed edge represents a contracted index and an open edge an uncontracted index (see figure 2.1).

Figure 2.1.: Some base constituents of the Penrose graphical notation for tensor networks. From left to right: a scalar, a vector, a matrix and the trace of a tensor.

The notation can also visualize matrix multiplication. Let A, B, C ∈ C^{n×n} be three matrices. Then

C_{αβ} = (AB)_{αβ} = A_{αγ} B_{γβ},

i.e. two boxes A and B joined by the contracted edge γ are equal to a single box C. This simple yet powerful representation is used a lot in tensor networks.
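In code, a contracted edge is simply a summed index. The short numpy sketch below (an illustration of the notation, not thesis code) shows that the graphical contraction of two boxes A and B is ordinary matrix multiplication, and that closing both edges of a matrix onto each other gives its trace.

```python
# Illustration (not thesis code): Penrose contraction as an einsum.
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)

# Contracting the shared edge g of boxes A and B gives the box C = AB.
C_matmul = A @ B
C_einsum = np.einsum('ag,gb->ab', A, B)
print(np.allclose(C_matmul, C_einsum))                 # True

# Closing both edges of a single box onto each other is the trace.
print(np.isclose(np.trace(A), np.einsum('aa->', A)))   # True
```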

An arbitrary state of a quantum many-body spin system, with each spin living in C^d, is given by

|ψ⟩ = Σ_{j1,j2,...,jn=1}^{d} c_{j1,j2,...,jn} |j1, j2, . . . , jn⟩. (2.5)

This is a tensor c_{j1,j2,...,jn} with indices j1, j2, . . . , jn. These indices are also called the physical edges. The graphical notation is given in figure 2.2. [9]

2.3. Matrix product state

Matrix product states were first introduced by Fannes et al. in [19] as finitely correlated states. Matrix product states project highly entangled states onto local correlations, which makes them a good candidate to parametrize highly entangled states. There are several ways of thinking about a matrix product state. An intuitive one is the valence bond picture. Take a look at a spin chain and assume that every site is not a single d-dimensional system, but two virtual systems of dimension D. Let every pair of virtual spins be in a maximally entangled state |I⟩ = Σ_{α=1}^{D} |α, α⟩. This pair is called an entangled bond. Then apply a linear map A : C^D ⊗ C^D → C^d to all N sites:

A = Σ_{j=1}^{d} Σ_{α,β=1}^{D} A_{j,α,β} |j⟩⟨α, β|. (2.6)

In a more general setting the bond |I⟩ and the maps A can have different dimensions for each site; A^{(k)}_j then refers to a D_k × D_{k+1} matrix on site k. Applying maps to all N sites gives a state representation in the form of a matrix product and leads to the following definition. [22]

Definition 2 (Matrix product state). A matrix product state (MPS) of bond dimension D on N sites with open boundary conditions is a pure state with a state vector of the form

|ψ⟩ = Σ_{j1,j2,...,jN=0}^{d−1} ⟨ψ_I| A^{(1)}_{j1} A^{(2)}_{j2} · · · A^{(N)}_{jN} |ψ_F⟩ |j1, j2, . . . , jN⟩, (2.7)

Figure 2.2.: An arbitrary quantum many-body spin state. Here c_{j1,j2,...,jn} is a tensor and j1, j2, . . . , jn are the spin indices.

with d the spin dimension, the A^{(i)}_{j_i} linear maps and |ψ_I⟩, |ψ_F⟩ vectors that specify the open boundary conditions.

In this definition the bond dimension is site-dependent and the maps A^{(k)}_{j} are of size D_k × D_{k+1}. When the bond dimension differs between sites, D is the maximal bond dimension, D = max{D_1, D_2, . . . , D_N}. For an MPS with periodic boundary conditions site N + 1 is identified with site 1 and the sum contains a trace over the matrix product. The state is then translation invariant when the same matrices are used on every site.

Example 3 (GHZ state). A GHZ state is of the form |ψ⟩ = |000 . . .⟩ + |111 . . .⟩. Applying a Hadamard to every qubit turns this into |+ + + . . .⟩ + |− − − . . .⟩, where |+⟩ = (|0⟩ + |1⟩)/√2 and |−⟩ = (|0⟩ − |1⟩)/√2. The GHZ state has an MPS representation given by the matrices

A_+ = (1 + σ^z)/2 = [[1, 0], [0, 0]],   A_− = (1 − σ^z)/2 = [[0, 0], [0, 1]]. (2.8)
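A quick numerical sanity check of this example is sketched below (my own code, assuming the periodic-boundary, trace form of the MPS): taking the trace of products of A_+ = diag(1, 0) and A_− = diag(0, 1) leaves nonzero amplitude only on the two basis states in which every site carries the same label, which is exactly the GHZ structure.

```python
# Minimal check (trace/periodic form assumed, labels written as 0 and 1).
import numpy as np
from itertools import product

A = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # A_plus, A_minus of eq. (2.8)

N = 4
psi = np.zeros(2**N)
for idx, s in enumerate(product(range(2), repeat=N)):
    M = np.eye(2)
    for j in s:
        M = M @ A[j]
    psi[idx] = np.trace(M)              # Tr(A_{s1} ... A_{sN})

psi /= np.linalg.norm(psi)
print(np.nonzero(psi)[0])               # [0, 15]: only |0000> and |1111>
print(psi[[0, -1]])                     # [0.707..., 0.707...]
```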

Every pure state can be approximated arbitrarily well by an MPS representation when the bond dimension becomes large enough. But are there natural states with an exact MPS representation? Actually, all matrix product states are ground states of gapped local Hamiltonians [9]. When an MPS is the exact ground state of a Hamiltonian, this Hamiltonian is called the parent Hamiltonian. That every (infinite) MPS has a gapped parent Hamiltonian was proven in [19]. But as we will only be dealing with finite chains, the proof is slightly different; it can be found in [22, 31], here we only show the result.

Figure 2.3.: A MPS state in the valence bond picture. The blue circles are a pair of two virtual spins, which together form one physical spin (or site). [30]


Figure 2.4.: Two matrix product states in the Penrose graphical notation: (a) open boundary conditions, (b) periodic boundary conditions.

Theorem 4 (Completeness). Any state |ψ⟩ ∈ (C^d)^⊗N has an open-boundary MPS representation of the form of equation (2.7) with bond dimension D ≤ d^⌊N/2⌋. However, the choice of matrices is not unique.

2.3.1. Sequential generation of MPS

Until now we spoke about MPS in the valence bond picture and the theoretical description (equation 2.7). But the MPS formalism is also well suited for the description of sequential schemes. These arise naturally in the generation of many-body states; examples are time-bin photons leaking out of an atom–cavity system or laser pulses propagating through atomic ensembles. But it also naturally adapts to the generation of multi-qubit states on a quantum computer, which is what we use here. Garcia et al. [22] identify two ways to sequentially prepare a state of a spin chain: with ancilla and without ancilla. In the second scheme the particles interact sequentially with one another; in the first scenario the particles all interact one by one with an ancilla. The ancillary system is denoted by A, with Hilbert space H_A ≃ C^D. The ancilla couples sequentially to initially uncorrelated particles of dimension d. The system particles are denoted by B, with Hilbert space H_B ≃ C^d. At every step there is a unitary time evolution of the total system H_A ⊗ H_B, which can be described by an isometry V : H_A → H_A ⊗ H_B. After N iterations of this process the ancilla decouples, and thus the final state of the system is

|Ψ⟩ = |φ_F⟩⟨φ_F| V^[n] · · · V^[1] |φ_I⟩ = |φ_F⟩ |ψ⟩,

with |φ_I⟩ the initial state of the ancilla, |φ_F⟩ the final state of the ancilla and |ψ⟩ the N-particle spin state.

This leads to the following theorem, which was proven in [26, 25]. The proof immediately provides a recipe to sequentially generate a particular state. We will use this to generate a PXP state and an AKLT state on a quantum computer.

Theorem 5 (Sequential generation with ancilla). The following two sets are equivalent sets of N -qubit states:

1. States with MPS representation with open boundary conditions and maximal bond dimension D.


2. States which are generated isometrically and sequentially by interaction with a D-dimensional ancillary system which decouples in the last step. That means that the generation is deterministic.

Proof. Let H_B ≃ C^d be the Hilbert space of one site and let a spin chain of N sites be in the zero state, |0⟩^⊗N ∈ H_B^⊗N. Let |φ_I⟩ ∈ H_A be the initial state of the ancillary system, where H_A ≃ C^D. Define the following isometries

V = Σ_{i,α,β} V_{α,β,i} |α, i⟩⟨β|, (2.9)

with V_i a D × D matrix, i running over the d dimensions of the system particles and {|α⟩, |β⟩} a basis of the D ancillary levels. The isometry condition is given by

V†V = Σ_{i=0}^{d−1} V_i† V_i = 1_D. (2.10)

After the process of sequential generation, the resulting N-particle state is

|ψ⟩ = Σ_{i_1,...,i_n=0}^{d−1} ⟨φ_F| V^[n]_{i_n} · · · V^[1]_{i_1} |φ_I⟩ |i_1, . . . , i_n⟩. (2.11)

This state is an MPS with open boundary conditions specified by the initial and final state of the ancillary system. This means that the set of MPS states includes the set of sequentially generated states.

Now let us turn to the other inclusion: is the set of MPS states included in the set of sequentially generated states? For that we have to show that every MPS state

|ψ̃⟩ = ⟨φ̃_1| A^(1) A^(2) · · · A^(N) |φ̃_2⟩ (2.12)

(where the A^(k) are arbitrary maps and the physical indices are suppressed) can be generated in a sequential manner by isometries of the same dimension. The proof is based on a repeated use of the singular value decomposition for every matrix A^(k). Start with the left matrix by writing

M = (⟨φ̃_1| ⊗ 1_d) A^(1) = V^[n]′ s U = V^[n]′ M^[n].

The first step decomposes the product of the left vector with the left matrix into a left unitary V^[n]′, a singular value matrix s and a right unitary U; M^[n] is the product of the last two. Then we multiply the matrix M^[n] with the next matrix A^(2) and repeat the singular value decomposition. So to get all the isometries we use the induction step

M^[k+1] A^(n−k+1) = V^[k]′ M^[k]. (2.13)


Figure 2.5.: The singular value decomposition of M into U · Σ · V*. On the top the direct action of M on the unit disc with unit vectors e1 and e2. On the left the action of the right unitary V*, a rotation of the unit disc. On the bottom the action of Σ, the singular value matrix: the unit disc is stretched into an ellipse with horizontal and vertical scaling given by the singular values. On the right the action of the left unitary matrix U, again a rotation. [1]

For the last step, set |φ_I⟩ = M^[1] |φ̃_2⟩. Then we get for the MPS state the expression

|ψ̃⟩ = V^[n]′ · · · V^[1]′ |φ_I⟩. (2.14)

Because at the next-to-last step only d levels of H_A are occupied, the entire state can be mapped onto B. This means that the ancilla decouples at the last step, in the application of V^[n].

As this procedure shows that every MPS state can be generated sequentially with the help of a D-dimensional ancillary system, this proves the equivalence.
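The sketch below (my own minimal left-canonicalization in numpy, written in the spirit of this recipe rather than taken from the thesis) sweeps through a random bond-dimension-2 MPS, splits off an isometric tensor at every site with an SVD, absorbs the remainder into the next site, and checks that (i) the state vector is unchanged and (ii) the kept tensors satisfy the isometry condition of equation (2.10).

```python
# Sketch of the SVD recipe behind theorem 5 (illustrative, not thesis code).
import numpy as np

def mps_to_state(tensors, vL, vR):
    """Contract an OBC MPS (tensors of shape (d, D_left, D_right)) to a vector."""
    psi = vL.reshape(1, -1)                        # accumulated (physical, bond)
    for A in tensors:
        psi = np.einsum('pa,iab->pib', psi, A)     # attach the next site
        psi = psi.reshape(-1, A.shape[2])
    return psi @ vR

def left_canonicalize(tensors, vL):
    """Split off an isometry at every site via SVD; return isometries and remainder."""
    iso, M = [], vL.reshape(1, -1)                 # absorb the left boundary vector
    for A in tensors:
        B = np.einsum('ab,ibc->iac', M, A)         # remainder times next tensor
        d, m, Dr = B.shape
        U, s, Vh = np.linalg.svd(B.reshape(d * m, Dr), full_matrices=False)
        iso.append(U.reshape(d, m, -1))            # isometric part, kept
        M = np.diag(s) @ Vh                        # absorbed into the next site
    return iso, M

rng = np.random.default_rng(1)
tensors = [rng.normal(size=(2, 2, 2)) for _ in range(5)]   # random D = 2 MPS
vL, vR = rng.normal(size=2), rng.normal(size=2)

iso, M = left_canonicalize(tensors, vL)
psi_orig = mps_to_state(tensors, vL, vR)
psi_seq = mps_to_state(iso, np.array([1.0]), M @ vR)
print(np.allclose(psi_orig, psi_seq))              # True: same state

T = iso[2]                                          # check sum_i V_i^dag V_i = 1
print(np.allclose(sum(T[i].conj().T @ T[i] for i in range(T.shape[0])),
                  np.eye(T.shape[2])))              # True
```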

In some situations it is possible to sequentially generate an MPS state without the help of an ancillary system. Then the spins interact sequentially with each other: spin 1 with spin 2, spin 2 with spin 3, and so on. To bring site k into the right state, site k + 1 is used as a temporary ancillary system (see figure 2.6). This is only possible for matrix product states with small bond dimension, bounded by d. This is formalized in the following theorem, whose proof comes from [22].

Theorem 6 (Sequential generation without ancilla). The following two sets are equiv-alent sets of N -qubit states:

1. States with MPS representation with open boundary conditions and maximal bond dimension D ≤ d.

2. States which are generated by a sequential scheme without ancilla, by either deterministic or probabilistic schemes.


(a) With ancilla. The ancilla is decoupled in the last step.

(b) Without ancilla.

Figure 2.6.: A circuit to generate the same many-body state with and without ancilla. [25]

Proof. Consider a sequentially generated state. Let U^[k] be the map that models the interaction between site k and site k + 1. For k < N − 1, construct MPS matrices as follows:

A^(k)_{i,α,β} = ⟨i, β| U^[k] |α, 0⟩, (2.15)

where A^[1], A^[N+1] are the boundary vectors. This means that a sequentially generated state can be written as an MPS with bond dimension D ≤ d.

Now the other inclusion. We follow the line of the proof of theorem 5, but for site k we use site k + 1 as 'ancilla'. To store this in site k + 1, the bond dimension must be smaller than or equal to d. After the application of U^[k] we swap site k + 1 with site k + 2. As the last step of the proof of theorem 5 is only a swap between site N and the ancilla, N − 1 steps are sufficient. This proves that every such MPS state can be sequentially and deterministically generated.

2.4. AKLT

The father of all exact MPS models is the AKLT ground state, named after Affleck, Kennedy, Lieb and Tasaki, who first studied this model in [2]. There it was proven to be the exact ground state of the spin-1 chain (so d = 3) with parent Hamiltonian

H_AKLT = Σ_{⟨i,j⟩} [ S_i · S_j + (1/3)(S_i · S_j)^2 + 2/3 ], (2.16)

where S_i = (S_i^1, S_i^2, S_i^3) is the spin operator at site i and ⟨i, j⟩ denotes nearest neighbours. The ground state of this Hamiltonian has an MPS representation given by [22]

{A_i} = { σ^z = [[1, 0], [0, −1]], √2 σ^+ = √2 [[0, 1], [0, 0]], −√2 σ^− = −√2 [[0, 0], [1, 0]] }. (2.17)

The two edge states for open boundary conditions are w_1 = (1, 0)^T and w_2 = (0, 1)^T. This ground state is constructed from the Hamiltonian following the valence bond picture [30, 2]. Define the projectors ((P_{ij})^2 = P_{ij})

P_{ij} = S_i · S_j + (1/3)(S_i · S_j)^2 + 2/3. (2.18)

P_{ij} is the orthogonal projection onto the (5-dimensional) spin-2 subspace of two spin-1's located at the sites i and j. The AKLT Hamiltonian is the sum of these projectors, H_AKLT = Σ_{⟨i,j⟩} P_{ij}. Since projectors are positive, this Hamiltonian is positive as well. Now imagine the 3-dimensional Hilbert space of every spin-1 in the chain as the symmetric subspace of two spin-1/2's (the circles in figure 2.3). To ensure the global state of the spin chain has spin 0, let each of the spin-1/2's be in a singlet state with a spin-1/2 of its neighbour (the lines in figure 2.3). Now form the AKLT state by locally projecting each pair of spin-1/2's in a symmetric subspace onto the spin-1 basis with the following projector:

P = |−1⟩ (⟨00| − ⟨11|)/√2 + |0⟩ (⟨01| + ⟨10|)/√2 + |1⟩ (⟨00| + ⟨11|)/√2 = Σ_{α=x,y,z} |α⟩ (⟨00| + ⟨11|)/√2 (σ_α ⊗ σ_y), (2.19)

where the σ_α are the Pauli matrices and we have identified {−1, 0, +1} with {x, y, z}. In terms of qutrits, the projectors P_{ij} in the sum of the Hamiltonian project onto the subspace spanned by the following five basis states:

|ψ1⟩ = (1/√2)(|0, −1⟩ + |−1, 0⟩),
|ψ2⟩ = (1/√2)(|0, 1⟩ + |1, 0⟩),
|ψ3⟩ = (1/√6)(|1, −1⟩ + |−1, 1⟩ + 2 |0, 0⟩),     (2.20)
|ψ4⟩ = |1, 1⟩,
|ψ5⟩ = |−1, −1⟩,

where a ket describes neighbouring qutrits. The total spin of this subspace is S = 2. The ground state of the AKLT Hamiltonian has only short-range correlation. Its correlation length is 1/ ln(3). In the limit of an infinite chain the model is frustration-free and has a non-vanishing gap.
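The correlation length quoted here can be checked directly from the MPS matrices of equation (2.17). The short numpy sketch below (my own check, not thesis code) builds the transfer matrix E = Σ_i A_i ⊗ A_i* and reads off ξ = 1/ln(λ_1/|λ_2|) = 1/ln(3) from its two largest eigenvalues.

```python
# Quick numerical check of the AKLT correlation length (illustrative only).
import numpy as np

sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])
A = [sz, np.sqrt(2) * sp, -np.sqrt(2) * sm]        # MPS matrices of eq. (2.17)

E = sum(np.kron(a, a.conj()) for a in A)           # transfer matrix
lam = sorted(np.abs(np.linalg.eigvals(E)), reverse=True)
xi = 1.0 / np.log(lam[0] / lam[1])
print(lam[:2])                                     # [3.0, 1.0]
print(xi, 1.0 / np.log(3.0))                       # 0.910..., 0.910...
```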

2.5. PXP

In an experiment with Rydberg atoms, unusual behaviour was observed: a non-thermalizing oscillation. It was discovered that these oscillations could be explained as quantum scar states. [5] A good model to describe these dynamics is the PXP model. The PXP model has several eigenstates at infinite temperature which have an exact MPS representation with finite bond dimension.


A specific characteristic of the Rydberg chain is that there cannot be two excited atoms next to each other. Consider N Rydberg atoms on a chain and let |0⟩ be the atomic ground state and |1⟩ the Rydberg excitation. The Hamiltonian ensures there will not be two states |1⟩ on neighbouring sites, so some states are excluded from the Hilbert space of 2^N tensor-product states. The cause of this restriction is the Rydberg blockade. The dynamics in the Rydberg atom chain are described by the PXP model with the following Hamiltonian:

H_PXP = Σ_{j=2}^{N−1} P_{j−1} X_j P_{j+1} + H_1 + H_N, (2.21)

with P_j = |0⟩⟨0| the projector on the atomic ground state and X = |0⟩⟨1| + |1⟩⟨0| the transition between the ground state and the excited state. H_1 and H_N depend on the boundary conditions: H_1 = P_N X_1 P_2 and H_N = P_{N−1} X_N P_1 for periodic boundary conditions, and H_1 = X_1 P_2 and H_N = P_{N−1} X_N for open boundary conditions. In this thesis we only work with the PXP model with open boundary conditions. The Hamiltonian for open boundary conditions has inversion symmetry around the middle of the chain. The Hamiltonian is hard to solve; it is nonintegrable, but [13] suggests that (2.21) could be a deformation of an integrable Hamiltonian.

For an even number of sites, certain eigenstates of the Hamiltonian have an exact MPS representation. Define the following matrices:

B_0 = [[1, 0, 0], [0, 1, 0]],   B_1 = √2 [[0, 0, 0], [1, 0, 1]],
C_0 = [[0, −1], [1, 0], [0, 0]],   C_1 = √2 [[1, 0], [0, 0], [−1, 0]], (2.22)

and the following boundary vectors:

v_1 = (1, 1)^T,   v_2 = (1, −1)^T. (2.23)

A few highly specific eigenstates in the middle of the energy spectrum (around E = 0) with open boundary conditions can then be defined as

|Γ_{α,β}⟩ = Σ_σ v_α^T B_{σ1} C_{σ2} · · · B_{σN−1} C_{σN} v_β |σ1 . . . σN⟩, (2.24)

with σ_i ∈ {0, 1} (telling whether the site is in the atomic ground state or the Rydberg excited state) and α, β ∈ {1, 2}. Because the products C_1 B_1 and B_1 C_1 are zero, there will not be excited Rydberg atoms on neighbouring sites. As there are four possible combinations of boundary vectors, this gives four different eigenstates: the |Γ_{α,α}⟩ have eigenenergy E = 0, the state |Γ_{1,2}⟩ has energy E = √2 and |Γ_{2,1}⟩ has energy E = −√2.
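Before turning these matrices into a circuit, it is easy to check equation (2.24) numerically. The sketch below (my own check, not part of the thesis code) builds |Γ_{1,1}⟩ for a small chain, verifies that B_1 C_1 = C_1 B_1 = 0, and confirms that no basis state with two neighbouring excitations receives any amplitude.

```python
# Numerical check of the exact MPS form (2.24) for a small chain (illustrative).
import numpy as np
from itertools import product

s2 = np.sqrt(2.0)
B = [np.array([[1, 0, 0], [0, 1, 0]], float),
     s2 * np.array([[0, 0, 0], [1, 0, 1]], float)]          # B_0, B_1
C = [np.array([[0, -1], [1, 0], [0, 0]], float),
     s2 * np.array([[1, 0], [0, 0], [-1, 0]], float)]       # C_0, C_1
v = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]           # v_1, v_2

print(np.allclose(B[1] @ C[1], 0), np.allclose(C[1] @ B[1], 0))  # True True

N, alpha, beta = 6, 0, 0                                     # |Gamma_{1,1}> on 6 sites
psi = np.zeros(2**N)
for idx, s in enumerate(product(range(2), repeat=N)):
    M = np.eye(2)
    for k in range(0, N, 2):
        M = M @ B[s[k]] @ C[s[k + 1]]
    psi[idx] = v[alpha] @ M @ v[beta]
psi /= np.linalg.norm(psi)

blocked = [idx for idx, s in enumerate(product(range(2), repeat=N))
           if any(s[j] == s[j + 1] == 1 for j in range(N - 1))]
print(np.allclose(psi[blocked], 0))     # True: the Rydberg blockade holds
```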


Figure 2.7.: The energy density profiles ⟨X_j⟩_{α,β} for the four exact eigenstates |Γ_{α,β}⟩ of the PXP model with open boundary conditions and a chain length of N = 50. [16]

To calculate the exact energy it is easier to work in a blocked reformulation of the Hamiltonian, where two sites 2b − 1, 2b are 'blocked' into one site. The allowed block states are (00), (01), (10) (denoted as O, R, L) and the number of blocks is L_b = N/2 (always an integer, as N has to be even throughout). This leads to an MPS of bond dimension 2 defined by the matrices A^(σ1σ2) = B_{σ1} C_{σ2},

A_O = [[0, −1], [1, 0]],   A_R = [[√2, 0], [0, 0]],   A_L = [[0, 0], [0, −√2]]. (2.25)

The Hamiltonian can be rewritten as a sum of two-body terms on blocked sites:

H = Σ_{b=1}^{L_b} h_{b,b+1}, with (2.26)

h_{b,b+1} = (|R⟩⟨O| + |O⟩⟨R|)_b ⊗ (I − |L⟩⟨L|)_{b+1} + (I − |R⟩⟨R|)_b ⊗ (|L⟩⟨O| + |O⟩⟨L|)_{b+1}.

This blocked formulation is very useful in computations of characteristics of the PXP state. The states have quite characteristic energy density profiles. They all have a bump at the edges, ‘localized energy humps’ (see figure 2.7). These humps are the boundary effects and decay exponentially into the bulk density with decay length 2 ln(3). Integrating the energy density profiles gives the eigenenergies.


2.6. Connection between the PXP and AKLT model

The PXP model has a connection with the AKLT model both on the level of the MPS representations [16] and on the Hamiltonian level [28].

In [16] it was shown that the exact eigenstate of the PXP model with periodic boundary conditions in the blocked representation has a close relation with the AKLT ground state: after a gauge transformation of the PXP state the matrices become exactly the matrices used in the MPS representation of the AKLT state. With periodic boundary conditions no boundary vectors are needed and equation (2.24) becomes a trace; in the blocked reformulation of equation (2.25) this is |ψ⟩ = Tr(A^{s1} A^{s2} · · ·). Now we can perform the following gauge transformation, where U is the Pauli matrix σ_x:

Tr(A^{s1} A^{s2} A^{s3} · · ·) = Tr(A^{s1} U U^{−1} A^{s2} A^{s3} U U^{−1} · · ·) = Tr((A′)^{s1} (A′′)^{s2} (A′)^{s3} · · ·),

with (A′)^s = A^s U and (A′′)^s = U^{−1} A^s. When we identify s = O, R, L with S_z = 0, 1, −1 in the spin-1 chain of AKLT, the (A′)^s are precisely the matrices used in the MPS representation of AKLT in equation (2.17). Swapping the R and L states by a unitary transformation on the physical state transforms the (A′′)^s matrices into (A′)^s. This shows a clear equivalence between the MPS state of the PXP model and the AKLT model. This equivalence can be seen in characteristics of the state: from section 2.4 we know that the correlation length of AKLT is 1/ln(3), and in section 4.2.1 we derive that the correlation length of the PXP state in the blocked formulation is the same.

In [28] a connection between the two Hamiltonians is also found, through the method of embedded Hamiltonians. This method constructs a non-integrable Hamiltonian with the desired states as its non-thermal eigenstates; it is explained in [29]. The gist is that the PXP model is expressed as a sum of local operators sandwiched between the same projection operators as the ones in the AKLT Hamiltonian (equation (2.18)). Starting from the blocked formulation of the PXP Hamiltonian (equation (2.26)), we introduce a projection operator P^PXP similar to the AKLT projector. It projects onto the subspace spanned by the states

|φ1⟩ = (1/√2)(|OR⟩ + |LO⟩),
|φ2⟩ = (1/√2)(|OL⟩ + |RO⟩),
|φ3⟩ = (1/√6)(|RR⟩ + |LL⟩ + 2 |OO⟩),     (2.27)
|φ4⟩ = |RL⟩,
|φ5⟩ = |LR⟩,

which are of the same form as (2.20). Due to the correspondence between the MPS states of the AKLT and PXP Hamiltonians, the projector operating on the PXP eigenstate also gives zero. It is possible to rewrite the Hamiltonian as

H_PXP = Σ_b [ −h^(2)_{b,b+1} + (1/2) h^(1)_{b,b+1} ], (2.28)

h^(2)_{b,b+1} = (|R⟩⟨O| + |O⟩⟨R|)_b ⊗ (|L⟩⟨L|)_{b+1} + (|R⟩⟨R|)_b ⊗ (|L⟩⟨O| + |O⟩⟨L|)_{b+1},

h^(1)_{b,b+1} = (|R⟩⟨O| + |O⟩⟨R| + |L⟩⟨O| + |O⟩⟨L|)_b + (|R⟩⟨O| + |O⟩⟨R| + |L⟩⟨O| + |O⟩⟨L|)_{b+1}.

In [28] Shiraishi proves that h^(2)_{b,b+1} and h^(1)_{b,b+1} operating on the MPS PXP eigenstate give zero. Now the PXP Hamiltonian can be interpreted as the embedded Hamiltonian

H_PXP = Σ_b P^PXP_{b,b+1} H_b P^PXP_{b,b+1} + H_0, (2.29)

where H_0 = h^(1)_{b,b+1} and

H_b = 2√2 (|φ4⟩⟨φ2| + |φ2⟩⟨φ4|)_{b,b+1} + √2 (|φ1⟩⟨φ5| + |φ5⟩⟨φ1|)_{b,b+1} + √3 (|φ3⟩(⟨φ1| + ⟨φ2|) + (|φ1⟩ + |φ2⟩)⟨φ3|)_{b,b+1}.

In the AKLT gauge, the AKLT Hamiltonian is represented as

H_AKLT = Σ_{⟨i,j⟩} P_{ij}, (2.30)

and the PXP Hamiltonian becomes in this gauge [28]

H_PXP = Σ_{⟨i,j⟩} P_{ij} H_i P_{ij} + Σ_i S^x_i, (2.31)

where H_i is a proper local Hamiltonian. So the PXP Hamiltonian is a sum of the same projection operators as the AKLT Hamiltonian, but with local Hamiltonian terms sandwiched in between and a magnetic field in the x direction added. The first term still has the AKLT ground state as an eigenstate with zero energy, and the magnetic field does not disturb the AKLT state due to the spatial symmetry in the x direction. So the PXP Hamiltonian reflects the frustration-free structure and spatial symmetry of the AKLT Hamiltonian.


Chapter 3

Quantum simulation

Quantum simulation is a fast-growing field of research, as it is a promising first use case for NISQ devices, and use cases are important in the progress towards societal relevance. For a thorough introduction to quantum computing and quantum simulation we refer to the book of Nielsen and Chuang [8]. In the next section we cover some of the needed concepts. In section 3.2 the quantum circuit to generate a PXP state is developed, and in section 3.3 the one for the AKLT state. Then we work towards a quantum circuit that is noise-resilient.

3.1. Preliminary concepts

The concept of the quantum computer was first proposed by Richard Feynman. In a seminal lecture in 1981 he spoke the memorable words "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy." [11] Almost forty years later we have access to working quantum computers; Google even reached 'quantum supremacy'. The quantum devices that are available belong to NISQ technology: Noisy Intermediate Scale Quantum technology. These are devices in the range of 50–100 qubits that are subject to noise. It is hard to make stable qubits and to perform operations on them, but they can be useful for specific needs, for example simulating many-body physics. [23] In this section we discuss a few important concepts.

A quantum circuit is a graph consisting of wires which represent qubits and quantum gates which represent unitary operations. Some important elementary gates are the Pauli gates X, Y and Z, the Hadamard gate H, the rotation gate Rφ, the Toffoli gate T , the swap gate SWAP and the controlled-not gate CNOT. A universal gate set is a set of elementary gates from which every unitary transformation can be built. One example of a universal gate set is the set consisting of CNOT and all single qubit gates.

Two important measures for the complexity of a circuit are size and depth. The size of a circuit is the number of quantum gates in the full circuit. The depth of a circuit is the length of the longest path in the circuit from input to output node.

When you have simulated a many-body state with a quantum circuit, you will probably want to know some characteristics of the quantum state. These characteristics can be measured by an observable. An observable M is a Hermitian operator on the state space of the observed system. The observable has a spectral decomposition, M = Σ_m m P_m, where P_m is the projector on the eigenspace of M with eigenvalue m. A term that we will use frequently is the support of an observable.

Definition 7 (Support). Let M be an observable on a state space V. The support of the observable is the vector space spanned by the eigenvectors of M with nonzero eigenvalues. For a quantum state generated by a quantum circuit this means the qubits on which the observable gives information. An observable that is k-local has support k.

3.2. Quantum circuit for PXP state

3.2.1. The matrices

In section 2.3.1 we proved that every MPS state has a sequential generation, and the proof provides a recipe to find the matrices for this generation. Using this recipe it is easy to find the unitary matrices for the PXP state. The first matrix found with the protocol is always a 4 × 4 matrix. The next matrices are of growing size, but with a lot of zeroes; after truncation of the zero rows we are left with 8 × 8 matrices. All the matrices can be computed sequentially from the start. In this computation it becomes clear that the M matrices (using the nomenclature of the proof of theorem 5) follow an analytical pattern when computed per iteration of two sites. This leads to two series of matrices that form the sequence: W_odd and W_even. Every matrix belongs to a site in the spin chain, and even and odd alternate. Which site is odd and which is even depends on the left vector of the MPS: when this is v_1, W_1 is even, and when the left vector is v_2, W_1 is odd. Furthermore the matrices depend on the site i via the variable a = √f_i, where

f_i = 3 f_{i−1} − 1, with f_1 = 1. (3.1)

So every odd (even) matrix has the same form, but due to the value of a the numbers in the matrix differ per site. The analytical description of the matrices is given in appendix B.
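For reference, the recursion (3.1) is trivial to tabulate; the snippet below (a hypothetical helper, mirroring the equation rather than copied from appendix C) lists the site-dependent values a = √f_i used in the rotation gates defined in the next section.

```python
# Hypothetical helper mirroring eq. (3.1): a = sqrt(f_i), f_i = 3 f_{i-1} - 1.
import numpy as np

def f_values(n_sites):
    f = [1.0]
    for _ in range(n_sites - 1):
        f.append(3.0 * f[-1] - 1.0)
    return f

a = [np.sqrt(f) for f in f_values(5)]
print(a)   # [1.0, 1.414..., 2.236..., 3.741..., 6.403...]
```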

3.2.2. The quantum circuit

Because the matrices are unitary they can immediately be turned into custom quantum gates. The 4 × 4 matrix (W_1) is a two-qubit gate, the 8 × 8 matrices are three-qubit gates. The third qubit (or the second qubit in the case of the 4 × 4 matrix) is the ancilla qubit. Because d = 2 and, in the blocked formulation, D = 2, this MPS fulfills the D ≤ d requirement of theorem 6, so it is possible to sequentially generate the state without an ancilla. For qubit k, qubit k + 1 is used as temporary ancilla. This means that the PXP circuit gets a stairs-like form (see figure 3.1): the third qubit line coming out of a gate is one of the input qubits for the next gate, while the other two qubits are still in the zero state. The right vector v_R is either (1, 0, 0, . . .)^T or (0, 1, 0, . . .)^T (when normalized), depending on the combination of boundary vectors. This v_R is the input state of our quantum circuit.

It is possible in Cirq to define custom quantum gates using a unitary matrix, and the first circuit we built used these custom gates. But to get as general a circuit as possible, we want to translate the circuit into a universal gate set: CNOT and single-qubit gates.

Figure 3.1.: Schematic sketch of the full PXP circuit for the state |Γ_{1,1}⟩ on eight sites.

Figure 3.2.: Schematic sketch of the full AKLT circuit for the state on eight sites with edge states v_L = w_1, v_R = w_1.

Because of the structure of the circuit (see figure 3.1) the input of a gate is always |000⟩ or |001⟩. This means that only the first two columns of the W matrices matter for the final state: the rest of the columns are only there to make the full matrix unitary. The first two columns can be split up into controlled (by |0⟩ or by |1⟩) X, Z, H and rotation gates. One matrix from the sequential description of the MPS state becomes one layer of a series of gates on three qubits in the quantum circuit. The dependence on the variable a is then concentrated in two controlled rotation gates. The rotation gates Rot1 and Rot2 are given by

Rot1 = 1/√(6a² − 2) · [[−2a, √(2a² − 2)], [−√(2a² − 2), −2a]], (3.2)
Rot2 = 1/√(6a² − 4) · [[−2√(a² − 1), √2 a], [−√2 a, −2√(a² − 1)]]. (3.3)

Rot1 is controlled by a qubit being 1, Rot2 by a qubit being 0.

The gates that need to be translated into universal gates are the SWAP gate, the controlled Pauli Z and controlled Hadamard gates, gates controlled by a qubit being |0⟩, and the two controlled rotation gates that encapsulate the essence of every layer and carry the parameter a. Figure 3.3 shows how to decompose the first gates. For the rotation gates we used a lemma of Barenco et al. [4, Lemma 5.4]. The proof of this lemma can be found in appendix A.

Figure 3.3.: The decomposition of various gates into elementary gates: (a) the SWAP gate, (b) the controlled Z gate, (c) the controlled H gate, and (d) a CNOT controlled on a qubit being in state |0⟩.

Lemma 8 (Decomposition of controlled rotation gates). A controlled rotation gate based on a rotation matrix R can be simulated by a circuit built from CNOTs and single-qubit gates A and B on the target qubit, with A, B ∈ SU(2), if and only if R is of the form

R = Rz(α) · Ry(θ) · Rz(α) = [[e^{iα} cos(θ/2), sin(θ/2)], [−sin(θ/2), e^{−iα} cos(θ/2)]], (3.4)

with α, θ ∈ R.

In our case e^{iα} = 1, so α = 0 will do and Rot1 simply becomes R = Ry(θ) = Ry(2 arcsin(√(2(a² − 1)/(6a² − 2)))). Rot2 has the same shape, but with a different angle, R = Ry(2 arcsin(a√2/√(6a² − 4))). Now, following the proof of the lemma, the single-qubit rotation gates needed in the decomposed circuit are

A1 = Ry(φ) = Ry(arcsin(√(2a² − 2)/√(6a² − 2))),
B1 = Ry(−φ) = Ry(−arcsin(√(2a² − 2)/√(6a² − 2))),
A2 = Ry(θ) = Ry(arcsin(a√2/√(6a² − 4))),
B2 = Ry(−θ) = Ry(−arcsin(a√2/√(6a² − 4))).

Figure 3.1 gives a schematic view of the full circuit, figure 3.4 the decomposed circuit for the layers of W1 and figure 3.5 the decomposed circuit for the rest of the odd and even layers.
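To make the decomposition concrete, the Cirq sketch below (my own verification, using the standard two-CNOT form of the controlled rotation rather than the exact layout of appendix A) builds a controlled Ry(θ) for the Rot1 angle at a = √2 out of single-qubit Ry gates and CNOTs, and checks it against Cirq's native controlled gate.

```python
# Illustrative verification of a controlled Ry built from Ry gates and CNOTs.
import numpy as np
import cirq

a = np.sqrt(2.0)                                   # site parameter a = sqrt(f_2)
theta = 2 * np.arcsin(np.sqrt(2 * a**2 - 2) / np.sqrt(6 * a**2 - 2))  # Rot1 angle

ctrl, tgt = cirq.LineQubit.range(2)
decomposed = cirq.Circuit([
    cirq.ry(theta / 2).on(tgt),
    cirq.CNOT(ctrl, tgt),
    cirq.ry(-theta / 2).on(tgt),
    cirq.CNOT(ctrl, tgt),
])
native = cirq.Circuit(cirq.ry(theta).on(tgt).controlled_by(ctrl))

print(np.allclose(cirq.unitary(decomposed), cirq.unitary(native)))   # True
```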


Figure 3.4.: Decomposed circuit for the first matrix of the PXP state: (a) the quantum circuit for W_1 odd, (b) the quantum circuit for W_1 even. This is always the last layer in the circuit.

Figure 3.5.: The decomposed circuit for all layers of the PXP circuit except W_1: (a) the quantum circuit for W_odd, (b) the quantum circuit for W_even.


3.3. Quantum circuit for AKLT state

For the AKLT state we can follow the same recipe to find the matrices that sequentially generate the state. From the matrix representation we know D = 2. When we apply the recipe for sequential generation, the first matrix we find is a 3 × 3 matrix; the next matrices are all 9 × 9 matrices. The physical dimension d is 3, thus the spins are qutrits, so the bulk of the circuit is built out of two-qutrit gates. The matrices have a quite simple structure. The first matrix is either

V_{1,odd} = [[0, 0, 1], [0, −1, 0], [−1, 0, 0]] if v_L = (1, 0)^T, or V_{1,even} = 1 if v_L = (0, 1)^T.

The next matrices again alternate between two structures, labelled even and odd; the left vector thus determines the sequence. The AKLT state can, just like the PXP state, be generated without an ancillary system, because D = 2 ≤ d = 3. This means that we get the same stairs-like circuit as for the PXP state, but now with qutrit lines. A schematic sketch of the circuit is given in figure 3.2.

3.4. Noise

In the NISQ era noise is a great problem. The available resources are too small to do proper error correction. Therefore it is important to improve the noise-resilience of algorithms and quantum circuits.

Noise is described by quantum operations in the operator-sum representation. Simple one-qubit errors are the bitflip and the phaseflip. Other noise models include the depolarizing channel, amplitude damping and phase damping. Amplitude and phase damping are idealized models of quantum noise which capture most of the important features of noise occurring in current quantum systems. [8]

The depolarizing channel acts on a single qubit. With probability 1 − p the qubit is left untouched and with probability p it is replaced by a completely mixed state. This can be modeled by applying an X, Y or Z gate on the qubit, each with probability p/3.

Amplitude damping is the model for the dissipation of energy to the environment. The action on the density matrix ρ is E_0 ρ E_0† + E_1 ρ E_1†, with

E_0 = [[1, 0], [0, √(1 − γ)]],   E_1 = [[0, √γ], [0, 0]]. (3.5)

E_0 leaves the |0⟩ state untouched but damps the amplitude of |1⟩; E_1 changes a |1⟩ into a |0⟩ with probability γ.

Phase damping is a type of noise that is unique to quantum systems. In this process the energy eigenstates of the system do not change, but they do accumulate a relative phase. With the passage of time part of the quantum information in the state is lost due to this process; this is what we call decoherence of the state. Due to decoherence we are dealing with a time limit on our quantum algorithms: the coherence time, the time that qubits stay stable without being disturbed by decoherence. The matrices that model the phase damping process are

E_0 = [[1, 0], [0, √(1 − λ)]],   E_1 = [[0, 0], [0, √λ]]. (3.6)
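The Kraus operators above are easy to check for trace preservation, and recent versions of Cirq expose the corresponding built-in channels (used later for the noisy simulations). A minimal sketch, assuming damping parameters in [0, 1]:

```python
# Illustrative check that eqs. (3.5)/(3.6) define trace-preserving channels.
import numpy as np
import cirq

gamma, lam = 0.1, 0.2

amp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
       np.array([[0, np.sqrt(gamma)], [0, 0]])]
phase = [np.array([[1, 0], [0, np.sqrt(1 - lam)]]),
         np.array([[0, 0], [0, np.sqrt(lam)]])]

for kraus in (amp, phase):
    print(np.allclose(sum(E.conj().T @ E for E in kraus), np.eye(2)))  # True

# Cirq's built-in versions of the same channels:
print(cirq.kraus(cirq.amplitude_damp(gamma)))
print(cirq.kraus(cirq.phase_damp(lam)))
```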

To know how much influence noise has on a final state, we need to quantify how far the final state is away from the desired quantum state. We need some measure of distance. There are two widely used concepts for this in quantum information: trace distance and fidelity. Both also have a classical equivalent. [8]

Definition 9 (Trace distance). The trace distance between two quantum states ρ and σ is given by

D(ρ, σ) = (1/2) Tr |ρ − σ|, (3.7)

where |A| = √(A†A).

The trace distance satisfies the properties of a metric on the space of density operators. When the trace distance between two density operators is small, the probability distributions of measurement outcomes on these quantum states will be close in a classical sense.

The second distance measure is the fidelity.

Definition 10 (Fidelity). The fidelity between two quantum states ρ and σ is given by

F(ρ, σ) = Tr √(ρ^{1/2} σ ρ^{1/2}). (3.8)

The fidelity is symmetric and bounded between 0 and 1. When ρ = σ, F (ρ, σ) = 1. When ρ and σ have their support on orthogonal subspaces, F (ρ, σ) = 0. In the case of orthogonal support, ρ and σ are perfectly distinguishable and thus it is logical that the fidelity goes to zero. The fidelity itself is not a metric. A metric can be derived from it though, and it is called the Bures distance.

Definition 11 (Bures distance). The Bures distance between two quantum states ρ and σ is given by

D_B(ρ, σ) = √(2 − 2√F(ρ, σ)), (3.9)

with F(ρ, σ) the fidelity between ρ and σ. [17]

In the context of gates we often talk about error rates. The error rate of a gate is the probability that the gate will produce an erroneous state. This is the same as the infidelity of the gate.
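Both distance measures are straightforward to implement; the numpy sketch below (my own helper functions, not the thesis code) follows definitions 9–11 directly and evaluates them for |0⟩⟨0| against the maximally mixed state.

```python
# Illustrative implementations of the distance measures of definitions 9-11.
import numpy as np

def psd_sqrt(m):
    """Square root of a positive semidefinite matrix via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def trace_distance(rho, sigma):
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(evals))

def fidelity(rho, sigma):
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))))

def bures_distance(rho, sigma):
    return np.sqrt(2.0 - 2.0 * np.sqrt(fidelity(rho, sigma)))

rho = np.array([[1.0, 0.0], [0.0, 0.0]])       # |0><0|
sigma = 0.5 * np.eye(2)                         # maximally mixed state
print(trace_distance(rho, sigma))               # 0.5
print(fidelity(rho, sigma))                     # ~0.707
print(bures_distance(rho, sigma))               # ~0.56
```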


3.5. Concepts to improve noise-resilience

Because error correction is not a viable option on NISQ devices, it is important to make quantum algorithms noise-robust. A few candidates are the variational quantum eigensolver (VQE) [18] and the quantum approximate optimization algorithm (QAOA), both based on variational optimization and both suited to finding the expectation value of local Hamiltonians. But for all algorithms there are general measures that can improve efficiency. Recently some articles were published that introduce general measures to make the exploration of many-body states more noise-robust. Kim and Swingle introduce DMERA in [15], a MERA-like tensor network that can be efficiently simulated on a noisy quantum device using fewer qubits than the system size. In [14] Kim proves error bounds for this type of state. Another article that proposes an efficient and noise-robust way to compute many-body states on a quantum computer is by Borregaard et al. [6]. It proposes a more general framework for many-body states, in which both DMERA and MPS fit. We will bring the concepts of this article into practice and test them on the PXP state.

3.5.1. Explanation and practical use

We will bring two measures into action to reduce the effect of noise on the PXP state. The first is the concept of the past causal cone [27, 10], the second the concept of the mixing rate [6]. To be able to prove that these concepts work, the generation of a quantum state has to fit in a certain framework (described in section 3.5.2). Then the necessary number of qubits and quantum gates to compute a local observable can be bounded above by a constant number, instead of scaling with N, the size of the many-body state.

3.5.2. Framework

To prove the claims about noise-resilience, Borregaard et al. [6] propose a framework for generating many-body states on a quantum computer. To be a candidate for the noise-reducing concepts, the generation of the many-body state has to fit in this framework. The framework consists of a repeating series of three steps. In the first step qubits are added. In the second step interaction between the qubits is possible, with some restrictions. In the third step some qubits may be discarded.

The system S0 of n0 qubits starts in the state ρ0, and we have a bath B of nB qubits in a fixed state ρB. The t-th iteration consists of the following steps:

1. A subsystem S_t of n_t qubits and an ancillary system A_t of a_t qubits are added. The total number of qubits at iteration t is thus s_t = n_B + Σ_{j=0}^{t} n_j (not counting the ancillary systems). The quantum channel describing this step is A_t.

2. The qubits interact with the new subsystem following an interaction scheme based on how the qubits are connected. The interaction is modeled by a graph G_t = (V_t, E_t). This graph has s_t + a_t vertices. Every vertex stands for a qubit, and an edge between vertices means that we can apply two-qubit gates between those qubits. In total g two-qubit gates are applied. To keep track of the position of every qubit in the graph, we have injective functions φ_t : [s_t] → V_{t+1}. The quantum channel for this interaction is U_t; it consists of k two-qubit gates applied to each edge in G_t.

3. The ancillary systems are discarded. The quantum channel tracing out the ancillary systems is D_t.

Figure 3.6.: One iteration of the framework. The first step is the quantum channel A, which adds new system qubits and an ancillary system A. In the next step, U, unitaries U are applied to the edges of the graph G_t; these edges are the places where qubits are connected. In the third step, D, the ancillary systems are discarded. [6]

Figure 3.7.: Illustration of the mixing of an observable: the backward evolution of an observable. In the first step we see an observable at time T with support 1. In step 2 U* is applied, here two unitary gates between the two neighbouring qubits; this spreads the support to these qubits. Then we have A*, and half of the qubits are discarded. This causes the observables to mix: they get closer to the identity (shown by half-filled circles and triangles). In step 4 the unitaries again act on neighbouring qubits and the support spreads, but the circles cannot be filled again. [6]

These steps are illustrated in figure 3.6. This series is repeated T times to generate the full many-body state. Every step is a quantum channel of the form Φ_t = D_t ◦ U_t ◦ A_t.


Figure 3.8.: The purple indicates the past causal cone of a two-qubit observable (the blue box).

The final state of the system is given by

ρ = Tr_B[ Φ_{[0,T]}(ρ_0 ⊗ ρ_B) ], (3.10)

where Φ_{[0,T]} = Φ_T ◦ Φ_{T−1} ◦ . . . ◦ Φ_0.

3.5.3. Past causal cone

The first concept revolves around the past causal cone.

Definition 12 (Past causal cone). The past causal cone of an outgoing wire is the set of wires and gates that can affect the state on that wire. The past causal cone of a local observable is thus the set of wires and gates that can influence the expectation value of this observable (see figure 3.8).

In [27] Shehab et al. divided the circuit of a full state into parts, based on the past causal cones of different terms of the Hamiltonian. As every part was a much smaller circuit than the circuit for the full Hamiltonian, it was affected less by noise, so together these smaller circuits produced a more exact result than computing the full circuit at once. Thus they showed that restricting to the past causal cone of an observable can make a computation more noise-resilient.

When the causal cone involves at any fixed moment in time only a constant number of wires and gates (so it does not scale with N), the past causal cone is of bounded width [10]. So if you generate a many-body state to measure some local observable, it is not necessary to compute the full state. This has two advantages. First, as we are in the NISQ era we only have access to a limited number of qubits, and using only the past causal cone of an observable reduces the number of qubits needed to compute the observable, even if the many-body state is large. Secondly, reducing the number of gates in the circuit reduces the effect of noise. In section 3.5.5 we quantify the reduction of the error. Restricting the computation of an observable to its past causal cone does not introduce a new error, as no information about the observable is lost (information about the full many-body state is lost, though).


Now we can use the framework introduced in section 3.5.2 to keep track of the qubits and unitaries in the past causal cone of an observable. Let O_T be an observable at time T. Then the expectation value of O_T is given by

Tr(ρ O_T) = Tr[ Φ*_{[0,T]}(O_T ⊗ 1_B) (ρ_0 ⊗ ρ_B) ], (3.11)

where Φ*_{[0,T]} is the adjoint (reversed) map of Φ_{[0,T]}, with Φ*_t = A*_t ◦ U*_t ◦ D*_t. In other words, it works backwards from time T to time zero, following the steps introduced in this framework in reverse.

3.5.4. Mixing rate

The second measure involves the concept of a mixing rate. The past causal cone of an observable consists of many different layers. We want to know how much each step contributes to the expectation value of the observable. When this value stabilizes after a few layers, it might not be necessary to produce the whole causal cone. Then we can find a circuit smaller than the whole past causal cone which approximates the expectation value. This reduces the size of the circuit. But how do we know if the observable stabilizes? This is where the mixing rate enters the picture. The mixing rate measures how many steps contribute to the expectation value until it stabilizes.

Definition 13 (Mixing rate). Let O_T be a local observable on the final state, t ∈ [0, T] a moment in time, R(O_T) the radius of the observable O_T and Φ*_{[t,T]} a reversed transfer operator. The mixing rate is defined as

δ(t, r) = sup_{R(O_T) ≤ r, ‖O_T‖_∞ ≤ 1} inf_{c ∈ R} ‖ Φ*_{[t,T]}(O_T) − c·1 ‖. (3.12)

The supremum is taken over all observables with radius smaller than or equal to r, so the mixing rate does not differ between specific observables, only between radii. The norm measures how close an observable is to the identity 1 at time t. When this number is small, the steps before t do not contribute much to the final value of the observable. This means that only the last T − t steps are necessary to compute the expectation value of the local observable up to some error. This error is quantified in the next theorem.

Theorem 14. Let O_T be an observable supported in a ball of radius r. Then for all ρ_0

| Tr[Φ_{[t,T]}(ρ_0) O_T] − Tr(ρ O_T) | ≤ 2δ(t, r) ‖O_T‖, (3.13)

with ρ = Tr_B[Φ_{[0,T]}(ρ_0 ⊗ ρ_B)] the final state of the system.

The proof of this theorem is given in [6]. Figure 3.7 gives an intuitive understanding of what mixing means for an observable: getting closer to the identity.


3.5.5. Bounding the error rate

Now that we have the two buildingblocks of our noise-resilient protocol, we can add them together. Limiting the circuit to the past causal cone of an observable limits the amount of qubits and gates needed. The mixing rate tells us which part of the past causal cone has the biggest contribution to the expectation value of the observable. When the expectation value stabilizes after a few iterations going back in the past causal cone we can limit the amount of gates even more. By excluding part of the causal cone, we do have to accept a small error 2δ(t, r).

This result is formalized in the following theorem [6]. Let $N_Q(t, r)$ be the number of qubits in the past causal cone of an observable with support $r$ at time $t$ and $N_U(t, r)$ the number of unitaries.

Theorem 15. Let $O_T$ be an observable supported in a ball of radius $r$ and $\rho_0$ be a state on the qubits that are in the support of $O_T$. It is possible to compute $\mathrm{Tr}(\rho\, O_T)$ up to an additive error $2\delta(t, r)$ by implementing a circuit consisting of $N_U(t, r)$ two-qubit gates on $N_Q(t, r)$ qubits.

But what do we gain in noise resilience? When the full causal cone is used, the numbers of qubits and unitaries are $N_Q(0, r)$ and $N_U(0, r)$. Now suppose that all two-qubit gates have an error rate $\epsilon_U$ and that the preparation of the qubits comes with an error $\epsilon_Q$. If we have a two-local observable, for example a Hamiltonian with only nearest-neighbour interactions or a spin-correlation function, then the total error is bounded by

$$N_Q(0, 2)\,\epsilon_Q + N_U(0, 2)\,\epsilon_U. \tag{3.14}$$

When we use the concepts introduced above and apply theorem 15, the total error is bounded by

$$N_Q(t, 2)\,\epsilon_Q + N_U(t, 2)\,\epsilon_U + 2\,\delta(t, 2). \tag{3.15}$$

So when there is a $t$ such that 3.15 is smaller than 3.14, we have gained more than the error that is added by the mixing rate. This makes it a more efficient way to compute expectation values of observables!
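As a purely illustrative sketch (the causal-cone sizes and mixing rate below are made-up numbers, not values for the PXP circuit), the comparison between 3.14 and 3.15 amounts to:

# Hypothetical comparison of the full-cone bound (3.14) and the truncated-cone
# bound (3.15); all cone sizes and the mixing rate are invented for illustration.
eps_Q, eps_U = 0.001, 0.02              # preparation and two-qubit gate errors

def full_cone_bound(N_Q0, N_U0):
    return N_Q0 * eps_Q + N_U0 * eps_U                # eq. (3.14)

def truncated_bound(N_Qt, N_Ut, delta_t):
    return N_Qt * eps_Q + N_Ut * eps_U + 2 * delta_t  # eq. (3.15)

print(full_cone_bound(N_Q0=24, N_U0=60))              # ~1.22
print(truncated_bound(N_Qt=8, N_Ut=20, delta_t=0.02)) # ~0.45, a smaller bound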

In [6] an even stricter bound is derived:

$$2\,\delta(t, r) + \sum_{k=t+1}^{T} \delta(k-1, r)\,\epsilon_U\,\bigl(N_U(k, r) - N_U(k-1, r)\bigr) + \sum_{k=t+1}^{T} \delta(k-1, r)\,\epsilon_Q\,\bigl(N_Q(k, r) - N_Q(k-1, r)\bigr). \tag{3.16}$$

3.6. Fitting PXP in the noise-resilient framework

Both AKLT and PXP have a sequential MPS generation. This means they fit the framework introduced in section 3.5.2.

The initial system $S_0$ consists of one qubit. Normally the dimension of the bath equals the maximal bond dimension, so the bath would consist of one qubit. But since we perform the sequential generation without an ancilla qubit, there is no bath; instead we add an extra system qubit. At each iteration we add two qubits, so $n_t = 2$, and apply unitaries between neighbouring qubits. There is no step in which we discard qubits, so the next step is again to add two system qubits. In the last step we add only one qubit. $\rho_0$ is either $|0\rangle$ or $|1\rangle$, depending on $v_R$. The maximum number of two-qubit unitaries per step is the maximum number of CNOT gates in a layer, which is $g = 16$.
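As an illustration only (not the actual PXP circuit construction; the helper name and the layer unitary are placeholders), the sequential layout described above, where each layer spans three qubits and overlaps the previous layer on one qubit, can be sketched in Cirq as:

import numpy as np
import cirq

# Illustrative sketch of the sequential-generation layout: each layer acts on
# three neighbouring qubits and overlaps the previous layer on one qubit, with
# two new qubits added per step. The 8x8 identity is a placeholder for the
# actual layer unitary built from the MPS tensors (which in the real circuit is
# compiled into single-qubit and CNOT gates).
def staircase_circuit(n_layers, layer_unitary=None):
    if layer_unitary is None:
        layer_unitary = np.eye(8, dtype=complex)    # placeholder 3-qubit unitary
    qubits = cirq.LineQubit.range(2 * n_layers + 1)
    circuit = cirq.Circuit()
    for k in range(n_layers):
        trio = qubits[2 * k: 2 * k + 3]             # overlap of one qubit
        circuit.append(cirq.MatrixGate(layer_unitary).on(*trio))
    return circuit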


Chapter 4

Results

To test the theory of Borregaard et al. [6] we performed a range of simulations. In this chapter we give the most important results. We found the correlation length of the PXP state by simulating the spin correlator and reproduced the energy density by measuring the local energy. We tested the locality of noise in the PXP state with the energy density. Finally, we show the results of simulations with noise and the effect of introducing the concepts of Borregaard et al. [6].

4.1. Technical aspects

There are many packages that allow you to write quantum programs and simulate them on your own device or on real qubits. The one used in this thesis is Cirq. This package is specialized in quantum computing (in contrast to e.g. qiskit, which is specialized in quantum information science). It covers all the basics and provides different simulators. It has an extensive library of qubit gates, but also magic methods that let you extend the same concepts to qids of other dimensions, such as qutrits. Next to the standard gates it is also possible to define a custom gate, based either on a unitary matrix or on the effect it should have. This makes it easy to turn an MPS state into a quantum circuit.
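As a minimal sketch (not the thesis code; the matrix below is a placeholder rather than an actual MPS tensor), a custom gate can be defined from a unitary as follows:

import numpy as np
import cirq

# Wrap a unitary matrix as a custom Cirq gate; this is the mechanism used to
# turn MPS tensors into circuit gates. The 4x4 matrix here is a placeholder.
class MPSGate(cirq.Gate):
    def __init__(self, unitary: np.ndarray):
        self._u = unitary

    def _num_qubits_(self) -> int:
        return int(np.log2(self._u.shape[0]))

    def _unitary_(self) -> np.ndarray:
        return self._u

    def _circuit_diagram_info_(self, args):
        return cirq.CircuitDiagramInfo(wire_symbols=("MPS",) * self._num_qubits_())

u = np.kron(np.eye(2), np.array([[0, 1], [1, 0]]))  # placeholder: X on second qubit
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(MPSGate(u).on(q0, q1))
# Equivalently, cirq.MatrixGate(u) wraps a unitary without defining a class.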

Another feature of Cirq that we use is its noise models. The Cirq library contains different kinds of noise, and all the noise types introduced in section 3.4 can be modeled by Cirq. It is possible to apply noise to the whole circuit, to a single place in the circuit, or to define a custom noise model that specifies exactly in which situations noise should be applied.

Single-qubit gates can be applied almost perfectly: noise rates vary between 0.001 and 0.005 [27, 12, 24]. Two-qubit gates are harder to execute perfectly; here noise rates are estimated in the range 0.015–0.020 [27, 12, 32]. Thus we assume that single-qubit gates can be executed perfectly and two-qubit gates with a fidelity of 98%. We did simulations with amplitude damping, depolarization and phase damping. Our custom noise model applies one of these three noise channels to every two-qubit gate (in the PXP circuit the only two-qubit gates are controlled-NOT gates).
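A minimal sketch of such a custom noise model (assumed, not the thesis code), which corrupts only two-qubit gates with single-qubit depolarizing channels at the 2% level, could look like this:

import cirq

# Append single-qubit depolarizing channels to both qubits after every
# two-qubit gate; single-qubit gates and measurements are left noiseless.
class TwoQubitGateNoise(cirq.NoiseModel):
    def __init__(self, p: float = 0.02):
        self.p = p

    def noisy_operation(self, op: cirq.Operation):
        if len(op.qubits) == 2 and not cirq.is_measurement(op):
            return [op] + [cirq.depolarize(self.p).on(q) for q in op.qubits]
        return op

q = cirq.LineQubit.range(3)
clean = cirq.Circuit(cirq.H(q[0]), cirq.CNOT(q[0], q[1]), cirq.CNOT(q[1], q[2]))
noisy = clean.with_noise(TwoQubitGateNoise(0.02))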

For the simulations we did 5000 runs per estimate (so we take the average of 5000 measurements). We used the Simulator and the DensityMatrixSimulator of Cirq with the run option, which mimics real hardware on your own device. On the device used it was possible to simulate up to 24 qubits with the Simulator and up to 12 qubits with the DensityMatrixSimulator. The DensityMatrixSimulator is necessary to simulate noisy circuits, but requires more memory.
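For reference, the sampling workflow looks roughly as follows (the circuit here is only a placeholder):

import cirq

# Noiseless sampling with the state-vector simulator: `run` returns bitstrings,
# mimicking hardware measurements.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CNOT(q0, q1), cirq.measure(q0, q1, key="m"))
result = cirq.Simulator().run(circuit, repetitions=5000)
print(result.histogram(key="m"))

# Noisy circuits require the density-matrix simulator, which needs more memory.
noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.02))
noisy_result = noisy_sim.run(circuit, repetitions=5000)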


4.2. Correlation length PXP

Because the PXP model has an MPS representation we can use the extensive toolbox of matrix product representations to compute some of its properties. One of these properties is the correlation length. First we compute it using the theory of matrix product representations, then we estimate it from the simulations of the spin correlator.

4.2.1. Theory

To calculate the correlation length, it is easier to work with the blocked reformulation of the Hamiltonian introduced in section 2.5.

We start with the decay of correlations. The two-point correlator of an observable $O$ on a state $|\psi\rangle$ between sites $i$ and $i + r$ is given by

$$\langle\psi|\, O_i O_{i+r}\, |\psi\rangle. \tag{4.1}$$

The $O$-transfer matrix $E_O$ describes the contraction of a site with the observable at that site and is given by (see figure 4.1)

$$E_O = \sum_{i,j=0}^{d-1} O_{i,j}\, A_i^{*} \otimes A_j. \tag{4.2}$$

Usually when we say 'transfer matrix', $E_{\mathbb{1}}$ is meant, and it is just referred to as $E$ [7]. Following [20], powers of the transfer matrix can be written as

$$(E_{\mathbb{1}})^r = (\lambda_1)^r \sum_{i=1}^{D^2} \left(\frac{\lambda_i}{\lambda_1}\right)^{r} v_{R,i}^{T}\, v_{L,i}, \tag{4.3}$$

where $\lambda_i$ are the eigenvalues of the transfer matrix, ordered by magnitude from large to small, and $v_{R,i}$, $v_{L,i}$ the corresponding right and left eigenvectors. Let's assume $\lambda_1$ is non-degenerate. Then for $r \gg 1$ we have that

$$(E_{\mathbb{1}})^r \sim (\lambda_1)^r \left( v_{R,1}^{T}\, v_{L,1} + \left(\frac{\lambda_2}{\lambda_1}\right)^{r-1} \sum_{k=2}^{\omega+1} v_{R,k}^{T}\, v_{L,k} \right), \tag{4.4}$$

Figure 4.1.: The transfer matrix $E_O$ in the Penrose graphical notation.

Figure 4.2.: The transfer matrix of the identity, $E_{\mathbb{1}}$, in the Penrose graphical notation.


where $\omega$ is the degeneracy of $\lambda_2$. Going back to the two-point correlator 4.1, we can now express it as

$$\langle O_i O_{i+r}\rangle \sim \left(\frac{v_{L,1} E_O\, v_{R,1}^{T}}{\lambda_1}\right)^{2} + \left(\frac{\lambda_2}{\lambda_1}\right)^{r-1} \sum_{k=2}^{\omega+1} \frac{\bigl(v_{L,1} E_O\, v_{R,k}^{T}\bigr)\bigl(v_{L,k} E_O\, v_{R,1}^{T}\bigr)}{\lambda_1^2}. \tag{4.5}$$

As $\langle O_i\rangle = \frac{v_{L,1} E_O\, v_{R,1}^{T}}{\lambda_1}$, the first term is just $\langle O_i\rangle \langle O_{i+r}\rangle$. In terms of two-point functions,

the correlation function is given by

$$C(r) = \langle O_i O_{i+r}\rangle - \langle O_i\rangle \langle O_{i+r}\rangle, \tag{4.6}$$

and thus combining 4.5 and 4.6 gives

$$C(r) \sim \left(\frac{\lambda_2}{\lambda_1}\right)^{r-1} \sum_{k=2}^{\omega+1} \frac{\bigl(v_{L,1} E_O\, v_{R,k}^{T}\bigr)\bigl(v_{L,k} E_O\, v_{R,1}^{T}\bigr)}{\lambda_1^2}. \tag{4.7}$$

For large $r$ we can rewrite the first factor as an exponential function with $\log(|\lambda_2/\lambda_1|)$ in the exponent, while the remaining sum is a constant that depends on $\omega$. Thus

$$C(r) \sim f(r)\, a\, e^{-r/\xi}, \tag{4.8}$$

with a proportionality constant $a$, a site-dependent phase $f(r) = \pm 1$ if $O$ is Hermitian, and correlation length

$$\xi = -1/\log(|\lambda_2/\lambda_1|). \tag{4.9}$$

With this last equation we can compute the correlation length of the PXP state from its MPS representation. Using equation 4.2 and the blocked matrices 2.25, the transfer matrix for the PXP state is

$$E_A = \sum_{s\in\{O,R,L\}} (A^{s})^{*} \otimes A^{s} = \begin{pmatrix} 2 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 2 \end{pmatrix}. \tag{4.10}$$

This transfer matrix has eigenvalues $3, -1, 1, 1$, so $\lambda_1 = 3$ and $\lambda_2 = 1$. Thus the correlation length becomes

$$\xi_t = -1/\log(|\lambda_2/\lambda_1|) = -1/\log(1/3) = 1/\log(3). \tag{4.11}$$

This means that the correlation function scales as

$$C(r) \sim e^{-r\log(3)}. \tag{4.12}$$
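As a quick numerical check of equations 4.9–4.11, the eigenvalues of the transfer matrix 4.10 can be computed directly:

import numpy as np

# Correlation length from the two largest-magnitude eigenvalues of the blocked
# transfer matrix E_A of eq. (4.10).
E_A = np.array([[2,  0,  0, 1],
                [0,  0, -1, 0],
                [0, -1,  0, 0],
                [1,  0,  0, 2]], dtype=float)

eigvals = np.linalg.eigvals(E_A)
eigvals = eigvals[np.argsort(-np.abs(eigvals))]   # sort by |lambda|, descending
lam1, lam2 = eigvals[0], eigvals[1]               # 3 and (+/-)1

xi_t = -1.0 / np.log(abs(lam2 / lam1))
print(xi_t)                                       # 1/log(3) ~ 0.910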


4.2.2. Simulations

To get the correlation length from quantum simulations we first prepare the PXP state using the quantum circuit described in section 3.2. Then we perform measurements on the resulting state to compute the spin correlator. To know the spin correlation between two sites we need to know whether they are both in the 0 state or both in the 1 state. To measure whether they are both 1, we append a CCNOT gate acting on the two sites and an ancilla qubit. To measure whether they are both in the 0 state, we append an X gate, a CCNOT and another X gate to the control qubits; this CCNOT targets the same ancilla qubit. Measuring the ancilla qubit then tells us whether the two sites are in either the $|00\rangle$ or the $|11\rangle$ state.
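A minimal sketch of this measurement (assuming the PXP state has already been prepared on the qubits of `circuit`; the helper name is illustrative):

import cirq

# Append the CCNOT-based detection of the |00> / |11> events on sites i and j
# to an existing state-preparation circuit.
def append_correlator_measurement(circuit, qi, qj, ancilla):
    circuit.append(cirq.CCNOT(qi, qj, ancilla))      # fires when both sites are 1
    circuit.append([cirq.X(qi), cirq.X(qj)])
    circuit.append(cirq.CCNOT(qi, qj, ancilla))      # fires when both sites are 0
    circuit.append([cirq.X(qi), cirq.X(qj)])         # undo the flips
    circuit.append(cirq.measure(ancilla, key="corr"))
    return circuit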

When the ancilla qubit reads 1 the value $1$ is assigned; when it reads 0, the value $-1$ is assigned. With these values the correlation still jumps between positive and negative values for even and odd distances between the two sites. Taking the absolute value of the correlation gives values between 0 and 1 (see the first plot of figure 4.3). Fitting an exponential decay function to the data according to equation 4.8 gives a fit, but not a very smooth one. The correlation length found with this fit is $\xi_s = 0.42$. The decay is not smooth because there seems to be an oscillation over two sites. To explore this further we split the spin correlator into even and odd distances. Fitting only the even or only the odd distances does give smooth exponential functions (see plots (c) and (d) of figure 4.3). This leads to a correlation length of 0.5206 for even distances and 0.525 for odd distances. Comparing this with the theoretical correlation length 4.11, however, this is not the correlation length of PXP. Looking at $\xi_s$ again, we see that it is approximately half of the theoretical correlation length!

This has a logical explanation. The theoretical correlation length $\xi_t = 1/\log(3)$ was computed from the blocked formulation. That means that the correlation function measures the distance between two blocked sites in blocks: $r_b = 2r$, where $r_b$ is the distance measured in blocked sites and $r$ is the distance measured in single sites. As the correlation functions should be the same, we have

$$e^{-r_b/\xi_t} = e^{-r/\xi_s} \;\Rightarrow\; e^{-2r/\xi_t} = e^{-r/\xi_s} \;\Rightarrow\; e^{2/\xi_t} = e^{1/\xi_s} \;\Rightarrow\; \xi_s = \frac{\xi_t}{2} = \frac{1}{2\log(3)} \approx 0.455. \tag{4.13}$$

The reason that the decay of the simulated correlation function is not smooth is that the scale we work on is too small. At this size we still have to deal with boundary effects, which are probably responsible for the small oscillation.
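For reference, a minimal sketch of the exponential fit used for figure 4.3, with placeholder data in place of the measured correlations:

import numpy as np
from scipy.optimize import curve_fit

# Fit A*exp(-K*r) to the |correlation| data; `r` and `c` below are placeholders
# for the measured distances and absolute correlations.
def decay(r, A, K):
    return A * np.exp(-K * r)

r = np.arange(1, 24)
c = 0.76 * np.exp(-0.42 * r)                      # placeholder data

(A_fit, K_fit), _ = curve_fit(decay, r, c, p0=(1.0, 0.5))
print(A_fit, K_fit)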


Figure 4.3.: Several simulations of the spin correlation for the PXP state $|\Gamma_{1,1}\rangle$ of 24 sites. In each panel the correlation is measured between site 1 and site $i$; the x-axis gives the distance between site 1 and site $i$. (a) Absolute value of the correlation. (b) Same data with an exponential fit $Ae^{-Kt}$; estimated parameters $A = 0.76$, $K = 0.42$. (c) Even distances only, fit $Ae^{-Kt}$ with $A = 1.306$, $K = 0.526$. (d) Odd distances only, fit $-Ae^{-Kt}$ with $A = 0.754$, $K = 0.525$.


4.3. Energy density PXP

The next observable that we study for the PXP state is the local energy.

4.3.1. Theory

The Hamiltonian of the PXP model is a sum of $X$ operators sandwiched between projection operators on the neighbouring sites. So the local energy is given by

$$\langle X_j\rangle_{\alpha,\beta} \equiv \frac{\langle\Gamma_{\alpha,\beta}|\, X_j\, |\Gamma_{\alpha,\beta}\rangle}{\langle\Gamma_{\alpha,\beta}|\Gamma_{\alpha,\beta}\rangle}, \tag{4.14}$$

with $\alpha, \beta \in \{0, 1\}$. An exact expression is easily calculated using the blocked formulation and is given by [16]

$$\langle X_{2b-1}\rangle_{\alpha,\beta} = \langle X_{2b}\rangle_{\alpha,\beta} = \frac{\sqrt{2}}{1 + (-1)^{L_b+\alpha+\beta}\, 3^{-L_b}} \Bigl( (-1)^{\alpha+b}\, 3^{-b} + (-1)^{\beta+L_b-b}\, 3^{-(L_b-b+1)} \Bigr), \tag{4.15}$$

for sites $j = 2b-1$, $j+1 = 2b$, $b \in \{1, 2, \ldots, N/2\}$ and $L_b = N/2$. This formula gives the energy density profiles shown in figure 2.7.

4.3.2. Simulations

Using the simulated PXP state we can reproduce these energy density plots by measuring the local $X$ operator. On every site we attach a Hadamard gate and a measurement gate. The measurements are interpreted by an energy function that maps measurement outcome 0 to energy $1$ and outcome 1 to energy $-1$. To compute the expectation value of the local energy we did 5000 runs, resulting in an average over 5000 measurements per data point. This produces the same profile as the exact energy (see figure 4.4). The difference between the exact energy and the simulation becomes smaller when more runs are used. With 5000 runs the error is small enough to reproduce the PXP characteristics while still executing the program in a reasonable time span.
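A minimal sketch of this estimate (assuming `state_circuit` prepares the PXP state; the helper name is illustrative):

import cirq
import numpy as np

# Estimate <X_j>: a Hadamard rotates X into the computational basis, and the
# measurement outcomes 0/1 are mapped to energies +1/-1 and averaged.
def estimate_local_x(state_circuit, qubit, repetitions=5000):
    circuit = state_circuit.copy()
    circuit.append([cirq.H(qubit), cirq.measure(qubit, key="x")])
    result = cirq.Simulator().run(circuit, repetitions=repetitions)
    outcomes = result.measurements["x"][:, 0]
    return float(np.mean(1 - 2 * outcomes))       # 0 -> +1, 1 -> -1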

4.4. Noise-resilience

The PXP circuit has a very special form. The layers span three qubits in width and overlap by only one qubit, resulting in a staircase-like structure (see figure 3.1). This means that noise can only propagate from one layer to the next through a single qubit. Intuitively, the propagation of noise through the circuit should therefore be small.

4.4.1. Noise on part of the circuit

To get a feeling for the propagation of noise we tested the influence of noise applied to only a small part of the circuit. We concentrate the noise on the first layer ($W_{\mathrm{odd}}^4$ in figure 3.1). Due to the build-up of the circuit this means that the noise is concentrated
