
Supersymmetry,

or solving difficult potentials easily

THESIS

submitted in partial fulfillment of the requirements for the degree of

BACHELOR OF SCIENCE

in

MATHEMATICS AND PHYSICS

Author : M.G. Wellens

Student ID : s1329367

Supervisor : Prof. Dr. K.E. Schalm

2nd corrector : Dr. M.F.E. de Jeu


Supersymmetry,

or solving difficult potentials easily

M.G. Wellens

Huygens-Kamerlingh Onnes Laboratory, Leiden University, P.O. Box 9500, 2300 RA Leiden, The Netherlands

July 13, 2018

Abstract

We first introduce the concept of partner potentials in non-relativistic quantum mechanics, i.e. a pair of potentials with the same spectrum, possibly except for a zero-energy ground state. We use this to define a family of partner potentials, giving us a technique to calculate the entire spectrum of a potential. The mechanism of partner potentials is then used for a quantum mechanical model of supersymmetry. It turns out that a special class of potentials exists where the spectrum can be determined very quickly using the techniques developed. We explore some of these potentials, called shape invariant potentials or SIPs, and discover some simple properties of them. Finally, we take a quick look at a Hamiltonian with a $p^4$ term in it, discovering that for a small class of potentials, we can make a supersymmetric quantum mechanical model out of it.


Contents

1 Introduction 7

2 Supersymmetry 11

2.1 Simple supersymmetry 11

2.1.1 Partner potentials in quantum mechanical systems 11

2.1.2 Example: infinite square well 20

2.1.3 Families of partner potentials 21

2.2 Supersymmetric model 24

2.3 Singular potentials 28

3 Shape Invariant Potentials 31

3.1 Easily solvable potentials 31

3.1.1 Concept of shape invariant potentials 31

3.1.2 Example: Morse potential 36

3.1.3 Shape invariance in multiple steps 39

3.2 Translational SIPs 42

3.3 Multiplication SIPs 45

4 Further Topics 49

4.1 Non-linear oscillators 49

4.2 Reflection and transmission 52

4.3 Omitted topics 55

A Mathematics 57


Chapter 1

Introduction

Symmetries of systems are studied in all parts of physics. They give insight into the structure of a problem, such as in the study of crystals. Equations also become easier with them, for example those that describe the higher-dimensional oscillator. Noether's theorem makes explicit that continuous symmetries in a system give rise to conserved quantities, such as conservation of energy from the translational symmetry in time and conservation of angular momentum from rotational symmetry along an axis. In quantum mechanics symmetries give rise to degenerate eigenvalues, as is seen in the treatment of the quantum mechanical hydrogen atom. Symmetries are thus an integral part of physics.

The first subject of this thesis is one particular symmetry, namely supersymmetry, and a model of it in quantum mechanics. The prefix super comes from the fact that this symmetry behaves differently from the other symmetries we encounter. Normally, when symmetries are studied, they are studied in the context of Lie algebras, i.e. vector spaces equipped with a multiplication operator, the Lie bracket, which is the commutator of two elements¹. For supersymmetries, we generalise this so we may use the anti-commutator as well. For example, there are no longer only commuting generators of the Lie algebra, i.e. [A, B] = −[B, A], with A and B commuting generators and [·,·] a bilinear and alternating multiplication. Anti-commuting generators are now allowed as well, i.e. [C, D] = [D, C], with C and D anti-commuting generators and [·,·] the same bilinear bracket, but no longer alternating for every two generators². Furthermore, where commuting generators commute with themselves, [A, A] = −[A, A] = 0, anti-commuting generators anti-commute with themselves, [C, C] = [C, C] = 0. This is the concept behind supersymmetry.

The reason we use supersymmetries is because of one important group of continuous symmetries, the Poincaré group, and its connection to discrete symmetries, also called internal symmetries. The Poincaré group is the Lie group of continuous symmetries of Minkowski spacetime [28], i.e. the three boosts and three rotations along the space axes and the four translations along all the axes. The Hamiltonian and momentum operators are, for example, part of a representation of this group³. Discrete symmetries are for example time reversal (the laws of nature are the same if you go forward in time or backward) or charge symmetry (changing the electric charges in the hydrogen atom, for example, should not alter the behaviour of the system). Internal symmetries may depend on the coordinates, but they only change the physical system, not the space the system lives in [29].

¹ Other choices are possible, but normally the commutator is chosen.
² Details are found in Appendix A.1.
³ A representation of a group is a map from this group to the linear maps on a vector space.

One question one could ask is whether there is a connection between these symmetries, i.e. whether one can generate the total group of symmetries by only using the discrete ones, or only using the Poincaré group. This was tried, but in 1967 it was proven by Coleman and Mandula that, under some mild assumptions, a symmetry group containing both the Poincaré group and other symmetries will be a direct product of both parts [5]. So, there does not exist an interesting connection between these types of symmetries. In 1975 a loophole was discovered by Haag, Lopuszanski and Sohnius [18]. Instead of only using commuting symmetries, they allowed anti-commuting symmetries as well. This made it possible to have a group of anti-commuting discrete symmetries generate a group of symmetries including the Poincaré group. One application of this discovery is a description of quantum gravity by Freedman, Ferrara and Van Nieuwenhuizen [30].

Physically, anti-commuting symmetries can be seen as symmetries between bosons and fermions. The idea is as follows. An anti-commuting symmetry anti-commutes with itself, as we already saw, so for a symmetry C we have CC + CC = 0. If we look at a representation of this Lie algebra, so we look at the operators on states that correspond to these symmetries, we thus have an operator Q corresponding with C, such that QQ + QQ = 0. However, the multiplication of linear maps is given: it is just composition. This means we have Q² = 0.

For example, imagine a system with two non-interacting bosons in the same state. If we change one boson into a fermion, while preserving all the other properties of the particle, we expect the total energy to be the same. This is because in both cases we have two almost equal particles that behave the same in the system. However, if we let this symmetry act again on the system, we get an invalid system, because we end up with two fermions in the same state. It is therefore only possible to let the symmetry act on the system once.

The second subject of this thesis is partner Hamiltonians. In the model of quantum mechanical supersymmetry, a two-dimensional non-relativistic Hamiltonian is generated by a pair of real operators as H = (Q + Q*)², making the ground state energy of the Hamiltonian non-negative. Looking at the one-dimensional sub-Hamiltonians, H_0 and H_1, we see that they are given by the linking operator A and its adjoint, A*, by the relations H_0 = A*A and H_1 = AA*. These relations make it possible to reduce the differential equation for the zero-energy ground state from a second order equation (calculating the kernel of H_0) to a first order equation (calculating the kernel of A).

With these operators it is also possible to calculate the eigenstates of H_0 from the eigenstates of H_1, hence the name partner Hamiltonians. As we can easily calculate the partner of the partner of a Hamiltonian, we get a sequence of Hamiltonians, each with the same spectrum as the previous Hamiltonian, but with one state less. This sequence can then be applied to calculate the complete spectrum of the first Hamiltonian. Chapter 2 is dedicated to this model and the main concepts of partner Hamiltonians.

The last subject of this text is shape invariance. It turns out that the technique of partner Hamiltonians is especially useful for a special, and luckily large and interesting, class of potentials that have the shape invariance property. In short, shape invariance means that the partner potential of a given potential is just a parameter change of the original potential. Since we already calculated the partner Hamiltonian for every parameter, the partner of the partner of this first Hamiltonian is already known in terms of the first Hamiltonian, only with a parameter change. This allows us to easily calculate the entire sequence of Hamiltonians and therefore to calculate the complete spectrum of the first Hamiltonian. The details are found in Chapter 3.

There are a couple of interesting potentials that are shape invariant. The Morse potential for example, which is used to model the binding force between two atoms [10]. This potential is also given as an example in Subsection 3.1.2. Other interesting shape invariant potentials are the Pöschl-Teller potential, used to study non-linear behaviour such as second and third harmonic generation [24], or the Scarf potential, for example used to describe photonic crystals [27].

This thesis is for a large part based on the review of supersymmetric quantum mechanics written by F. Cooper, A. Khare and U. Sukhatme [8].


Chapter 2

Supersymmetry

In this chapter we will examine the basic mechanics of the supersymmetric model we will use. We start with the definition of partner potentials, which are potentials with (almost) the same spectrum as their partner. These partner potentials and Hamiltonians are the main topic of Chapter 3 and to a lesser degree of Chapter 4, because they can be used to easily solve the Schrödinger equation for the given potential. At the end of the chapter, we show the supersymmetric model that can be built with these partner potentials and linking operators, thus explaining why this method is called supersymmetric quantum mechanics.

Normally, Lie algebras are used to describe systems of a Hamiltonian and its symmetries, but our symmetries do not fit into this description. To accommodate these anti-commuting operators, we use super Lie algebras instead. In Appendix A.1 a brief introduction into this topic is given. For a rigorous treatment of this subject, which also goes into the mathematics of supergravity, we recommend Varadarajan's Supersymmetry for mathematicians: an introduction [31].

We strive for mathematically rigorous proofs, but for some it is simply not possible to do this within the scope of this thesis. Therefore, we sometimes give a more intuitive proof, leaving the details to the references. In these cases we point out where the main problem lies and give directions to solve it.

2.1 Simple supersymmetry

2.1.1 Partner potentials in quantum mechanical systems

In quantum mechanics the space of all possible states consists of all square integrable functions on the space the particles are in. Then, through the Schrödinger equation the Hamiltonian of the system, the operator related to the total energy, determines how the system evolves in time. The definitions of these objects we will use are as follows.


Definition 2.1.1 (State Space). Let $\mathcal{H} := L^2(\mathbb{R})$ be the square integrable, complex-valued functions on $\mathbb{R}$ with inner product
$$\langle f | g \rangle := \int_{\mathbb{R}} \overline{f(x)}\, g(x)\, dx.$$
Then we call $\mathcal{H}$ the one-dimensional, one-particle state space.

Definition 2.1.2 (Time-independent Hamiltonian). Let $H : \mathrm{dom}(H) \to \mathcal{H}$, with $\mathrm{dom}(H) \subset \mathcal{H}$ a linear subspace, be an operator given by
$$H := -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x),$$
where $V : \mathbb{R} \to \mathbb{R}$, called the potential, is an everywhere defined smooth function, $\hbar$ is the reduced Planck constant and m is the mass of the particle.¹

Note that the domain is not as rigorously defined as it should be, because we are differentiating in a space where the derivative is normally not well defined. To solve this, one can use Sobolev spaces to construct a weaker definition of differentiation, based on distributions rather than functions, to rigorously define the domain of the Hamiltonians and all other operators we use in quantum mechanics. However, although interesting, this process is highly technical and beyond the scope of the text. We refer to Fackler's text Mathematical Foundations of Quantum Mechanics [13] for further detail.

In this text, we will focus on time-independent Hamiltonians and ignore the much larger class of time-dependent Hamiltonians. The solutions of the time-independent Hamiltonian will therefore be time-independent as well. These solutions can be divided into two categories, bound states and scattered states. Bound state solutions are the square integrable solutions of the Schrödinger equation. These solutions can be understood as being the probability density functions of the position². Scattered states are unbound states and generally describe quantum waves being scattered by the potential. Most of this thesis is only concerned with bound states, Section 4.2 being the exception.

For one-dimensional quantum mechanical systems, there is a nice and simple result that the eigenvalues of the Hamiltonian are non-degenerate³. This result will be used in subsequent results, so we state it here.

Lemma 2.1.1. Let H be a one-dimensional Hamiltonian. Then the eigenvalues of the normalisable eigenfunctions are non-degenerate.

¹ Note that we use the notation V(x) both as the function V : x ↦ V(x) and as the linear operator V : φ(x) ↦ V(x)φ(x).
² It is entirely possible to use the momentum instead of the position, thus using the momentum wave function as the basic wave function. We however only use the position wave functions.
³ This only holds as long as we only look at the Schrödinger equation itself. If we take for example symmetries on spins or electrical charges into account, then it is possible to have degeneracy in the one-dimensional case.


Proof. Let E be an eigenvalue of H and φ and ψ be two eigenfunctions of H with eigenvalue E. Using the Schrödinger equations for both eigenfunctions,
$$E\varphi = -\frac{\hbar^2}{2m}\frac{d^2\varphi}{dx^2} + V(x)\varphi, \qquad E\psi = -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\psi,$$
we multiply the left equation with ψ, the right equation with φ and equate both Eφψ terms to get
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}\varphi + V(x)\psi\varphi = -\frac{\hbar^2}{2m}\frac{d^2\varphi}{dx^2}\psi + V(x)\varphi\psi.$$
Subtracting the potential parts, dividing out the common factors and using the product rule we get
$$\frac{d^2\psi}{dx^2}\varphi = \frac{d^2\varphi}{dx^2}\psi \;\Rightarrow\; \frac{d}{dx}\left(\varphi\frac{d\psi}{dx}\right) - \frac{d\varphi}{dx}\cdot\frac{d\psi}{dx} = \frac{d}{dx}\left(\frac{d\varphi}{dx}\psi\right) - \frac{d\psi}{dx}\cdot\frac{d\varphi}{dx}.$$
Subtracting again the common terms and integrating both sides gives
$$\varphi\frac{d\psi}{dx} = \psi\frac{d\varphi}{dx} + c,$$
with $c \in \mathbb{C}$ a constant. Now use the fact that solutions of the Schrödinger equation go to zero for $x \to \pm\infty$ to show that c = 0. This gives us
$$\frac{1}{\varphi}\frac{d\varphi}{dx} = \frac{1}{\psi}\frac{d\psi}{dx}.$$
From this we conclude that both eigenfunctions are multiples of each other, therefore proving the eigenvalue is non-degenerate. □

A more rigorous proof can be given by using the observation that the time-independent Schrödinger equation is in fact a Sturm-Liouville equation. If the space the functions are defined on is a compact interval, it can even be proven that every excited state has one more zero than the previous state (the state with the next lower eigenvalue) [3]. To show this result for non-compact intervals, we have to look at singular Sturm-Liouville problems, which are generally harder to solve. A reference is Krall's book on analysis, Applied Analysis, chapter 12 [19].

In this proof we also used a second simplification, namely the assumption that bound solutions of the Schrödinger equation go to zero in both the limits x → ±∞. Physically, this is clear, because a bound state represents a particle trapped in some region, so it should not be able to escape to infinity that easily [17]. If it could escape with a large probability, the particle would have more energy than possible for it to be trapped in the first place. This assumption can be proven for large classes of potentials and holds for almost every potential that we normally encounter. For further information see Agmon's paper on exponential bounds [1].

It is now time to define the main tools we will use to calculate the spectrum of H_0. The idea is to define for every Hamiltonian H_0 a partner Hamiltonian H_1 with the same spectrum (except possibly a zero-energy ground state). Every excited eigenstate of H_0 can be calculated from an eigenstate of H_1 using the adjoint of the linking operator A. If one then knows the spectrum of H_1, the excited spectrum of H_0 can be easily calculated. It turns out that the best way of defining the operator A and the partner Hamiltonian H_1 is by H_0 =: A*A and H_1 := AA*. The relation between the spectra of the two can then be easily proved, as we will do in Theorem 2.1.1. The next definition makes the linking operators, as we will call A and A* from now on, precise.

Definition 2.1.3 (Linking Operators and Superpotential). Let $\mathcal{H}$ be a state space and $H : \mathrm{dom}(H) \to \mathcal{H}$ a Hamiltonian, with V(x) its potential. Then we define the linking operators
$$A : \mathrm{dom}(H) \to \mathcal{H};\quad \psi \mapsto \frac{\hbar}{\sqrt{2m}}\frac{d\psi}{dx} + W(x)\psi, \qquad A^* : \mathrm{dom}(H) \to \mathcal{H};\quad \psi \mapsto -\frac{\hbar}{\sqrt{2m}}\frac{d\psi}{dx} + W(x)\psi,$$
with
$$V(x) = W^2(x) - \frac{\hbar}{\sqrt{2m}}W'(x).$$
The real function W(x) is called the superpotential⁴.

⁴ Although we speak of the superpotential of V(x), it can be shown there are many possible superpotentials for a given potential. For further detail see Section 2.3. The solution we will use will be the one without nodes, i.e. the one that is derived from the ground state. We will explain this issue later in more detail.

This is the main definition of the linking operators. However, we often like our Hamiltonian to have a ground state at zero energy, especially when we are going to calculate the partner of the partner of a Hamiltonian. Therefore, for a given Hamiltonian H with potential V(x) and ground state energy E_0 we often write
$$H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x) + E_0 = A^*A + E_0,$$
with $V(x) = V_0(x) + E_0$ and $V_0(x) = W^2(x) - \frac{\hbar}{\sqrt{2m}}W'(x)$. This essentially shifts the potential down to a ground state energy of zero. This also ensures that the ground state, although not of energy zero, is in the kernel of A.
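The factorisation above is easy to check by direct computation. Below is a minimal sympy sketch (an illustration, not part of the thesis) that verifies, in units ℏ = 2m = 1, that A*A reproduces −d²/dx² + (W² − W'); the superpotential W(x) = x is a hypothetical choice.

```python
# A minimal sympy sketch (not from the thesis): with hbar = 2m = 1, the linking
# operators A = d/dx + W and A* = -d/dx + W of Definition 2.1.3 compose to
# A*A = -d^2/dx^2 + (W^2 - W').  W(x) = x is a hypothetical superpotential.
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)
W = x

A = lambda f: sp.diff(f, x) + W * f        # A  = d/dx + W(x)
Astar = lambda f: -sp.diff(f, x) + W * f   # A* = -d/dx + W(x)

lhs = sp.expand(Astar(A(psi)))
rhs = sp.expand(-sp.diff(psi, x, 2) + (W**2 - sp.diff(W, x)) * psi)
print(sp.simplify(lhs - rhs))              # prints 0
```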

Before we take a look at the superpotential, we first have to show the linking operators are well defined and are actually each other’s adjoint. This is done in the following lemma.

Lemma 2.1.2. The linking operators are each other's adjoints.

Proof. We use the following definition of the adjoint: ⟨Af|g⟩ = ⟨f|A*g⟩. Let f, g, bounded solutions of H, be arbitrarily given. Then we have
$$\langle Af|g\rangle = \int_{\mathbb{R}} \overline{Af(x)}\,g(x)\,dx = \int_{\mathbb{R}} \left(\frac{\hbar}{\sqrt{2m}}\frac{df}{dx}(x) + W(x)f(x)\right)g(x)\,dx = \int_{\mathbb{R}} \frac{\hbar}{\sqrt{2m}}\frac{df}{dx}(x)g(x) + W(x)f(x)g(x)\,dx$$
and
$$\langle f|A^*g\rangle = \int_{\mathbb{R}} \overline{f(x)}\,A^*g(x)\,dx = \int_{\mathbb{R}} f(x)\left(-\frac{\hbar}{\sqrt{2m}}\frac{dg}{dx}(x) + W(x)g(x)\right)dx = \int_{\mathbb{R}} -\frac{\hbar}{\sqrt{2m}}f(x)\frac{dg}{dx}(x) + f(x)W(x)g(x)\,dx.$$
Note that $\overline{W(x)} = W(x)$ (W(x) is real), thus if we subtract the second inner product from the first, we lose the term with the superpotential. This means we get
$$\langle Af|g\rangle - \langle f|A^*g\rangle = \frac{\hbar}{\sqrt{2m}}\int_{\mathbb{R}} \frac{df}{dx}(x)g(x) + f(x)\frac{dg}{dx}(x)\,dx.$$
Using the product rule we get
$$\langle Af|g\rangle - \langle f|A^*g\rangle = \frac{\hbar}{\sqrt{2m}}\Big[f(x)g(x)\Big]_{-\infty}^{\infty}.$$

Now note that both limits go to zero⁵, because both functions are bound solutions of the Hamiltonian. This proves that A and A* are each other's adjoint. □

The mathematical problem with this proof is in the definition of adjoints. Formally, the adjoint of a bounded linear operator T : X → Y, with X and Y normed spaces, is a bounded operator T* : Y* → X*, with X* and Y* the dual spaces of X and Y, which satisfies for every x ∈ X and y* ∈ Y* the equality y*(Tx) = (T*y*)(x) [23]. The form of A* is thus dependent on the domain of A. In our case, however, these operators are each other's adjoint when the domain is well defined using Sobolev spaces. This also solves another problem, because we did not clarify whether im(A*) ⊂ dom(A) and im(A) ⊂ dom(A*).
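As a complement to the formal argument, the adjointness can also be checked numerically. The sketch below (an illustration, not from the thesis) compares ⟨Af, g⟩ and ⟨f, A*g⟩ for decaying test functions, in units ℏ = 2m = 1; the choices W(x) = x and the Gaussian-type test functions are hypothetical.

```python
# A quick numerical illustration (not from the thesis) that A = d/dx + W and
# A* = -d/dx + W are formal adjoints for functions vanishing at infinity,
# in units hbar = 2m = 1.  W(x) = x and the test functions are hypothetical.
import numpy as np

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
W = x
f = x * np.exp(-x**2 / 2)
g = np.exp(-x**2 / 2)

deriv = lambda u: np.gradient(u, x)       # simple numerical derivative
Af = deriv(f) + W * f
Astar_g = -deriv(g) + W * g

lhs = np.sum(Af * g) * dx                 # <Af, g>  (real test functions, no conjugation)
rhs = np.sum(f * Astar_g) * dx            # <f, A*g>
print(lhs, rhs)                           # both close to sqrt(pi) = 1.7724...
```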

Now this is settled, we can look at the other object defined, the superpotential. First look at the equation for the superpotential: this equation comes from the product rule when we calculate the product of A* with A. This is shown in the following calculation:
$$A^*A\psi = \left(-\frac{\hbar}{\sqrt{2m}}\frac{d}{dx} + W(x)\right)\left(\frac{\hbar}{\sqrt{2m}}\frac{d}{dx} + W(x)\right)\psi$$
$$= -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + W(x)\frac{\hbar}{\sqrt{2m}}\frac{d\psi}{dx} - W(x)\frac{\hbar}{\sqrt{2m}}\frac{d\psi}{dx} - \frac{\hbar}{\sqrt{2m}}\frac{dW(x)}{dx}\psi + W^2(x)\psi$$
$$= -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \left(W^2(x) - \frac{\hbar}{\sqrt{2m}}\frac{dW(x)}{dx}\right)\psi.$$
Equating this to H means $V(x) = W^2(x) - \frac{\hbar}{\sqrt{2m}}W'(x)$, which therefore gives the relation in Definition 2.1.3.


The partner Hamiltonian we were talking about is now easily calculated by reversing the order of A and A*, which is
$$AA^*\psi = -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \left(W^2(x) + \frac{\hbar}{\sqrt{2m}}\frac{dW(x)}{dx}\right)\psi.$$
As the potential of this new Hamiltonian we take $W^2(x) + \frac{\hbar}{\sqrt{2m}}\frac{dW(x)}{dx}$. This potential is creatively called the partner potential.

Definition 2.1.4 (Partner potential). Let $\mathcal{H}$ be a state space and let $H_0 = A^*A + E_0$ be a Hamiltonian, with E_0 its ground state energy⁶, written as a product of linking operators. Then we call the operator $H_1 = AA^* + E_0$ its partner Hamiltonian and the potential
$$V_1(x) = W^2(x) + \frac{\hbar}{\sqrt{2m}}W'(x),$$
with W(x) the superpotential taken from V_0(x), the partner potential of V_0(x).

It should be noted that often we do not factorise the Hamiltonian H directly, but first subtract its ground state energy from the potential, as in $H_0 = A^*A + E_0$, and thus $V(x) = V_0(x) + E_0$. The most important use of this is that the ground state is in the kernel of A, because for adjoints we have that ker(A) = im(A*)^⊥ and ker(A*) = im(A)^⊥.

A very interesting mathematical corollary is that if we know the exact ground state wave function of a potential, we know the potential and its superpotential as well.

Lemma 2.1.3 (Potentials and ground states). Let $\mathcal{H}$ be a state space with $H_0 = A^*A + E_0^{(0)}$ and $H_1 = AA^* + E_0^{(0)}$ partner Hamiltonians. Then the potentials and superpotentials can be written, with n ∈ {0, 1}, in the form
$$V_n(x) = \frac{\hbar^2}{2m}\frac{1}{\psi_0^{(n)}}\frac{d^2\psi_0^{(n)}}{dx^2}, \qquad\text{respectively}\qquad W_n(x) = -\frac{\hbar}{\sqrt{2m}}\frac{d}{dx}\ln\left(\psi_0^{(n)}\right).$$

Proof. We prove this for n = 0. The other case follows straightforwardly from this one. For the potential we use the Schrödinger equation of the ground state $\psi_0^{(0)}$:
$$E_0^{(0)}\psi_0^{(0)} = -\frac{\hbar^2}{2m}\frac{d^2\psi_0^{(0)}}{dx^2} + V_0(x)\psi_0^{(0)} + E_0^{(0)}\psi_0^{(0)} \;\Rightarrow\; V_0(x) = \frac{\hbar^2}{2m}\frac{1}{\psi_0^{(0)}}\frac{d^2\psi_0^{(0)}}{dx^2}.$$
For the superpotential we note that
$$E_0^{(0)}\psi_0^{(0)} = H_0\psi_0^{(0)} = A^*A\psi_0^{(0)} + E_0^{(0)}\psi_0^{(0)},$$
thus $A^*A\psi_0^{(0)} = 0$ and thus $A\psi_0^{(0)} = 0$, because A and A* are adjoints⁷. From Definition 2.1.3 we find
$$W_n(x) = -\frac{\hbar}{\sqrt{2m}}\frac{1}{\psi_0^{(n)}}\frac{d\psi_0^{(n)}}{dx} = -\frac{\hbar}{\sqrt{2m}}\frac{d}{dx}\ln\left(\psi_0^{(n)}\right).$$
This gives the result. □

⁶ As with the potential, we use the notation $E_m^{(n)}$ both as the energy of the m-th bound state of $V_n(x)$ and as the corresponding operator.
⁷ Remember: $\mathrm{im}(A)^\perp = \ker(A^*)$.

Note that if we know the superpotential of a system, we can directly calculate the ground state of it by using the proof of Lemma 2.1.3. This is because the ground state is in the kernel of A, which gives us the first-order linear differential equation
$$0 = \frac{\hbar}{\sqrt{2m}}\frac{d\psi}{dx} + W(x)\psi.$$
The solution is simply
$$\psi_0(x) = N_0\, e^{-\frac{\sqrt{2m}}{\hbar}W_I(x)}, \tag{2.1}$$
where $W_I'(x) = W(x)$ and where N_0 is a normalisation constant, if the solution is normalisable. If it is not normalisable, H_0 does not have a zero-energy ground state. What this physically means is described in Section 2.2. Mathematically, it means that every eigenstate of H_0 can be calculated from the eigenstates of H_1, even the ground state.
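A small sympy sketch (an illustration, not from the thesis) of Equation (2.1): in units ℏ = 2m = 1, so that the prefactor √(2m)/ℏ equals 1, the state ψ₀ = N₀ exp(−W_I(x)) built from the hypothetical superpotential W(x) = x is indeed annihilated by A.

```python
# Illustration (hbar = 2m = 1): the zero-energy ground state of Eq. (2.1),
# psi_0 = N_0 exp(-W_I(x)) with W_I' = W, lies in the kernel of A = d/dx + W.
# The superpotential W(x) = x is a hypothetical choice.
import sympy as sp

x, N0 = sp.symbols('x N_0', real=True, positive=True)
W = x
W_I = sp.integrate(W, x)             # an antiderivative of W
psi0 = N0 * sp.exp(-W_I)             # Equation (2.1) with the prefactor equal to 1

print(sp.simplify(sp.diff(psi0, x) + W * psi0))   # A psi_0 = 0
```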

Equation (2.1) also gives the reason why the superpotential is often used in contrast to the normal potential: when we know the superpotential, we do not only know the potential but also its ground state. These two results are interesting, because they show that the existence of a bound ground state is equivalent to the existence of a superpotential. This can also be shown in a different way: the equation for the superpotential is a Riccati equation, and by a careful change of coordinates it can be shown that this equation is equivalent to the time-independent Schrödinger equation.

Now that we have learned enough about the linking operators and the superpotential, we can finally show why we want these partner Hamiltonians. It turns out that if φ is an eigenfunction of H_0 with eigenvalue E ≠ 0, then Aφ is an eigenfunction of H_1 with the same eigenvalue, E. Noting that H_1 has no ground state at energy $E_0^{(0)}$, it follows that the ground state of H_1 corresponds with the first excited state of H_0, the first excited state of H_1 corresponds with the second excited state of H_0, etc. In this way, every state of H_1 corresponds to an excited state of H_0, with the same energy. In Figure 2.1 you can see a graphical representation of this degeneracy. The degeneracy is proven in Theorem 2.1.1.

[Figure 2.1: Left we have the potential V_0(x), with on the right its partner potential V_1(x). A partner potential has the same energy levels as its partner, except for a possible zero-energy ground state; in this figure we thus have $E_0^{(0)} = 0$. The linking operators A and A* are also shown, where A transforms an eigenstate of V_0(x) into an eigenstate of V_1(x) and A* transforms an eigenstate of V_1(x) into an eigenstate of V_0(x).]

Theorem 2.1.1. Let $\mathcal{H}$ be a state space and let H_0 and H_1 be partner Hamiltonians. Then their normalised, nonzero-energy eigenfunctions $\psi_{n+1}^{(0)}$ and $\psi_n^{(1)}$, n ∈ ℕ_0, are related by
$$\psi_n^{(1)} = \left(E_{n+1}^{(0)} - E_0^{(0)}\right)^{-\frac{1}{2}} A\psi_{n+1}^{(0)}, \qquad \psi_{n+1}^{(0)} = \left(E_n^{(1)} - E_0^{(0)}\right)^{-\frac{1}{2}} A^*\psi_n^{(1)},$$
where $\psi_{n+1}^{(0)}$ and $\psi_n^{(1)}$ have the same eigenvalue.

Proof. This proof follows Cooper's explanation [7]. Let $\psi_0^{(0)}$ be the ground state. First, let $\psi_{n+1}^{(0)}$, n ∈ ℕ_0, with $E_{n+1}^{(0)} > 0$, be an arbitrarily given eigenfunction of H_0. Then $A\psi_{n+1}^{(0)}$ is an eigenfunction of H_1, as we have
$$H_1\left(A\psi_{n+1}^{(0)}\right) = AA^*A\psi_{n+1}^{(0)} + E_0^{(0)}A\psi_{n+1}^{(0)} = A\left(H_0 - E_0^{(0)}\right)\psi_{n+1}^{(0)} + E_0^{(0)}A\psi_{n+1}^{(0)} = A\left(E_{n+1}^{(0)} - E_0^{(0)}\right)\psi_{n+1}^{(0)} + E_0^{(0)}A\psi_{n+1}^{(0)} = E_{n+1}^{(0)}A\psi_{n+1}^{(0)}.$$
The proof for $A^*\psi_n^{(1)}$ goes the same.

To show $\psi_{n+1}^{(0)} \propto A^*\psi_n^{(1)}$ and $\psi_n^{(1)} \propto A\psi_{n+1}^{(0)}$, note that the proof above also states that every eigenfunction of H_1 corresponds to an excited eigenfunction of H_0 with the same energy and that every excited eigenfunction of H_0 corresponds to an eigenfunction of H_1, also with the same energy. From Lemma 2.1.1 we know that the energies of both Hamiltonians are non-degenerate, and from the proof of Lemma 2.1.3 we know that the kernel of A is the ground state of H_0 and the kernel of A* is empty (we only work with bounded solutions), so A is a one-to-one function from the excited eigenfunctions of H_0 to the eigenfunctions of H_1, and A* is a one-to-one function from the eigenfunctions of H_1 to the excited eigenfunctions of H_0. Both A and A* preserve the energy, so using the fact that the energies are increasing in n, we find that $\psi_{n+1}^{(0)} \propto A^*\psi_n^{(1)}$ and $\psi_n^{(1)} \propto A\psi_{n+1}^{(0)}$.

For the normalisation constant we exploit the fact that A and A* are each other's adjoint, as stated in Lemma 2.1.2. Using the inner product we have
$$\langle A\psi_n^{(0)} | A\psi_n^{(0)} \rangle = \langle \psi_n^{(0)} | A^*A\psi_n^{(0)} \rangle = \langle \psi_n^{(0)} | \left(H_0 - E_0^{(0)}\right)\psi_n^{(0)} \rangle = \langle \psi_n^{(0)} | \left(E_n^{(0)} - E_0^{(0)}\right)\psi_n^{(0)} \rangle = \left(E_n^{(0)} - E_0^{(0)}\right)\langle \psi_n^{(0)} | \psi_n^{(0)} \rangle = E_n^{(0)} - E_0^{(0)},$$
for $E_n^{(0)} \neq 0$. Taking the square root of this energy difference gives the factor by which $A\psi_n^{(0)}$ has to be divided to be normalised again. □

Theorem 2.1.1 holds for every state except the ground state of H_0. To understand this, remember that the ground state of H_0 is given by the superpotential via Equation (2.1), which was derived from the fact that the ground state is in the kernel of A. The simple answer is thus that by multiplying with A, the ground state goes to zero, which is not normalisable.

A better explanation is the following. The eigenstate of H_1 with the same energy as the ground state of H_0 should be in the kernel of A*, because we have $H_1 = AA^* + E_0^{(0)}$ and im(A*) = ker(A)^⊥. This means that this state, we call it φ, is of the form
$$\varphi = N\, e^{\frac{\sqrt{2m}}{\hbar}W_I(x)}. \tag{2.2}$$
We also know that the ground state of H_0 goes to zero for x → ±∞ (remember the assumption). This means that $W_I(x) \to \infty$ for x → ±∞, if we want the ground state of H_0, given by Equation (2.1), to be normalisable. Forcing Equation (2.1) to be normalisable thus forces φ → ∞ for x → ±∞. This means φ is not normalisable and not a valid bound solution.

In this subsection we have shown the basics of the mechanics we will use in this text, especially how one can construct a partner potential with almost the same spectrum as its predecessor. In the next subsection, we will give an example of a factorisation of a Hamiltonian and the partner potential you get by this process.

2.1.2 Example: infinite square well

In this example we will work out a simple system, namely the infinite square well. The state space of this system is $\mathcal{H}_0 = L^2([0, L])$, $L \in \mathbb{R}_{>0}$, with the potential V(x) = 0. The particle is thus contained in a small, compact 'container' or well. This quantum mechanical system is often the first potential encountered in the study of quantum mechanics, thus we can safely quote the eigenstates and energies of this system [17]:
$$E_n = \frac{\hbar^2\pi^2}{2mL^2}(n+1)^2, \qquad \psi_n(x) = \sqrt{\frac{2}{L}}\sin\left(\frac{\pi}{L}(n+1)x\right), \qquad n \in \mathbb{N}_0.$$

Separating the ground state energy out from V(x) and renumbering we get $V_0(x) = V(x) - E_0^{(0)} = -E_0^{(0)} = -\frac{\hbar^2\pi^2}{2mL^2}$, thus our Hamiltonian becomes
$$H_0 = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x) + E_0^{(0)}.$$
Now V_0(x) has a zero-energy ground state. Using Lemma 2.1.3 on the ground state of V_0(x), which is the same as the ground state of V(x), we get the superpotential
$$W(x) = -\frac{\hbar}{\sqrt{2m}}\frac{\pi}{L}\frac{\cos(\frac{\pi}{L}x)}{\sin(\frac{\pi}{L}x)} = -\sqrt{E_0^{(0)}}\,\frac{\cos(\frac{\pi}{L}x)}{\sin(\frac{\pi}{L}x)}.$$
The linking operators therefore become
$$A = \frac{\hbar}{\sqrt{2m}}\frac{d}{dx} - \sqrt{E_0^{(0)}}\,\frac{\cos(\frac{\pi}{L}x)}{\sin(\frac{\pi}{L}x)}, \qquad A^* = -\frac{\hbar}{\sqrt{2m}}\frac{d}{dx} - \sqrt{E_0^{(0)}}\,\frac{\cos(\frac{\pi}{L}x)}{\sin(\frac{\pi}{L}x)},$$

and the partner potentials (using Definitions 2.1.3 and 2.1.4):
$$V_0(x) = W^2(x) - \frac{\hbar}{\sqrt{2m}}W'(x) = E_0^{(0)}\frac{\cos^2(\frac{\pi}{L}x)}{\sin^2(\frac{\pi}{L}x)} - E_0^{(0)}\frac{1}{\sin^2(\frac{\pi}{L}x)} = -E_0^{(0)},$$
$$V_1(x) = W^2(x) + \frac{\hbar}{\sqrt{2m}}W'(x) = E_0^{(0)}\frac{\cos^2(\frac{\pi}{L}x)}{\sin^2(\frac{\pi}{L}x)} + E_0^{(0)}\frac{1}{\sin^2(\frac{\pi}{L}x)} = E_0^{(0)}\,\frac{1+\cos^2(\frac{\pi}{L}x)}{\sin^2(\frac{\pi}{L}x)}.$$
We now have two Hamiltonians on [0, L],
$$H_0 = A^*A + E_0^{(0)} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x) + E_0^{(0)} \quad\text{and}\quad H_1 = AA^* + E_0^{(0)} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_1(x) + E_0^{(0)},$$
that have, apart from the ground state of H_0, the same energies. Their eigenstates are also related to each other, because multiplying the excited eigenstates of H_0 with A gives the eigenstates of H_1. By Theorem 2.1.1, for n ∈ ℕ_0 we have:
$$\psi_n^{(1)} = \left(E_{n+1}^{(0)} - E_0^{(0)}\right)^{-\frac{1}{2}} A\psi_{n+1}^{(0)} = \left(\frac{\hbar^2\pi^2}{2mL^2}(n+1)(n+3)\right)^{-\frac{1}{2}}\left(\frac{\hbar}{\sqrt{2m}}\frac{d}{dx} - \sqrt{E_0^{(0)}}\,\frac{\cos(\frac{\pi}{L}x)}{\sin(\frac{\pi}{L}x)}\right)\sqrt{\frac{2}{L}}\sin\left(\frac{\pi}{L}(n+2)x\right)$$
$$= \sqrt{\frac{2}{(n+1)(n+3)L}}\left((n+2)\cos\left(\frac{\pi}{L}(n+2)x\right) - \frac{\cos(\frac{\pi}{L}x)\sin(\frac{\pi}{L}(n+2)x)}{\sin(\frac{\pi}{L}x)}\right).$$
This example shows that the spectrum of one of the partner Hamiltonians can be used to calculate the spectrum of the other quite easily. However, you have to be lucky to have a partner you know everything about. To really benefit from this technique, we would like to factorise H_1 again, so that we have another pair of linking operators, A_1 and A_1^*, such that $H_1 = A_1^*A_1 + E_0^{(1)}$. In this way we can solve more easily for the ground state of H_1 using the kernel of A_1, and then calculate the first excited state of H_0. How we then go on to calculate the other excited states of H_0 is the topic of Subsection 2.1.3.
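The degeneracy in this example can also be checked numerically. The sketch below (an illustration, not from the thesis) sets ℏ = 2m = 1 and L = π, so that E_n = (n+1)² and V_1(x) = (1 + cos²x)/sin²x, discretises H_1 = −d²/dx² + V_1(x) + E_0 with a simple finite-difference scheme, and finds its lowest eigenvalues, which should come out close to the excited square-well energies 4, 9, 16.

```python
# Numerical sanity check (illustration, not from the thesis): with hbar = 2m = 1
# and L = pi, the square well has E_n = (n+1)^2 and the partner potential is
# V_1(x) = (1 + cos^2 x)/sin^2 x.  The lowest eigenvalues of the discretised
# H_1 = -d^2/dx^2 + V_1(x) + 1 should approximate 4, 9, 16, ...
import numpy as np

N = 2000
x = np.linspace(0.0, np.pi, N + 2)[1:-1]         # interior points, Dirichlet boundaries
h = x[1] - x[0]
V1 = (1.0 + np.cos(x)**2) / np.sin(x)**2 + 1.0   # partner potential plus E_0 = 1

# tridiagonal finite-difference matrix for -d^2/dx^2 + V1
H1 = (np.diag(2.0 / h**2 + V1)
      + np.diag(-np.ones(N - 1) / h**2, 1)
      + np.diag(-np.ones(N - 1) / h**2, -1))

print(np.linalg.eigvalsh(H1)[:3])                # approximately [4, 9, 16]
```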

2.1.3 Families of partner potentials

In Subsection 2.1.1, we saw how we can define partner potentials for a given Hamiltonian, such that the partner Hamiltonians have the same eigenvalues and the eigenstates are related by linking operators. In this section we will take this idea one step further. Instead of just one partner Hamiltonian we define a family of them having, excluding some initial states, the same eigenvalues and related eigenstates.

The idea is the same as in Subsection 2.1.1, we only go further and calculate a sequence of partner Hamiltonians H_n, where every two adjacent Hamiltonians are each other's partner. So if $H_n = A_n^*A_n + E_0^{(n)}$, then
$$H_{n+1} = A_nA_n^* + E_0^{(n)} = A_{n+1}^*A_{n+1} + E_0^{(n+1)},$$
with $A_n$ and $A_n^*$ the linking operators between $H_n$ and $H_{n+1}$.

For example, if we use A_2 to calculate the ground state of H_2, i.e. $\psi_0^{(2)}$ (remember Equation (2.1)), we can go on and calculate the first excited state of H_1 by multiplying with $A_1^*$, as is given by Theorem 2.1.1:
$$\psi_1^{(1)} = \left(E_0^{(2)} - E_0^{(1)}\right)^{-\frac{1}{2}} A_1^*\psi_0^{(2)}.$$
If we now use the linking operator between H_0 and H_1, namely $A_0^*$, on $\psi_1^{(1)}$, then Theorem 2.1.1 gives us the second excited state of H_0:
$$\psi_2^{(0)} = \left(E_1^{(1)} - E_0^{(0)}\right)^{-\frac{1}{2}} A_0^*\psi_1^{(1)} = \left(E_1^{(1)} - E_0^{(0)}\right)^{-\frac{1}{2}} A_0^*\left(E_0^{(2)} - E_0^{(1)}\right)^{-\frac{1}{2}} A_1^*\psi_0^{(2)} = \left(E_2^{(0)} - E_0^{(0)}\right)^{-\frac{1}{2}}\left(E_2^{(0)} - E_1^{(0)}\right)^{-\frac{1}{2}} A_0^*A_1^*\psi_0^{(2)}.$$

Of course, we could go on, starting with H_3, H_4, etc., but the main idea is clear from this calculation. In this subsection, we explore the details of this method and prove this idea works, but the above calculation basically shows what is going on. In Figure 2.2 the eigenstates and energies of a family of partner potentials are drawn.

Definition 2.1.5 (Family of partner Hamiltonians). Let $\mathcal{H}$ be a state space and H_0 a Hamiltonian on this space. Then the n-th partner Hamiltonian H_n, n ∈ ℕ, of H_0 is, if the ground state of H_{n-1} is normalisable, recursively defined by $H_n := A_{n-1}A_{n-1}^* + E_0^{(n-1)}$, where $H_{n-1} = A_{n-1}^*A_{n-1} + E_0^{(n-1)}$ is the previous partner Hamiltonian with $E_0^{(n-1)}$ its ground state energy. Otherwise H_n is not defined.

If the Hamiltonian has a finite number of normalisable states, then the family of partner Hamiltonians consists of only a finite number of Hamiltonians. Note that the partner Hamiltonian H_n misses the first n eigenstates of H_0. From this we get a useful lemma, where we see that the potential and superpotential can be written in terms of their ground state.

The following result can be useful for calculations.

Corollary 2.1.1 (Family of Partner Potentials). Let $\mathcal{H}$ be a state space and H_n, n ∈ ℕ_0, a family of partner Hamiltonians, possibly finite. Then the potentials satisfy the relations
$$V_{n+1}(x) = V_n(x) - 2\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\ln\left(\psi_0^{(n)}\right),$$
with n ∈ ℕ_0 and $\psi_0^{(n)}$ the ground state of H_n.


[Figure 2.2: We see here the family of the potential V_0(x). The linking operators are drawn between the eigenstates. Note that every partner has one bound eigenstate less than the previous one. The last partner (not drawn), V_4(x), would have no bound states left. This figure is based on a figure from [8].]

Proof. Let n ∈ ℕ_0 be arbitrarily given. Let W_n(x) be the superpotential with $V_n(x) = W_n^2(x) - \frac{\hbar}{\sqrt{2m}}W_n'(x) + E_0^{(n)}$ and $V_{n+1}(x) = W_n^2(x) + \frac{\hbar}{\sqrt{2m}}W_n'(x) + E_0^{(n)}$. Then we have
$$V_{n+1}(x) = W_n^2(x) + \frac{\hbar}{\sqrt{2m}}W_n'(x) + E_0^{(n)} = W_n^2(x) - \frac{\hbar}{\sqrt{2m}}W_n'(x) + E_0^{(n)} + 2\frac{\hbar}{\sqrt{2m}}W_n'(x) = V_n(x) + 2\frac{\hbar}{\sqrt{2m}}W_n'(x) = V_n(x) - 2\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\ln\left(\psi_0^{(n)}\right),$$
where we used Lemma 2.1.3 in the last equation. □
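Corollary 2.1.1 is easy to test on a concrete case. The sympy sketch below (an illustration, not from the thesis) uses the hypothetical superpotential W(x) = x, whose zero-energy ground state is exp(−x²/2), and checks that, in units ℏ = 2m = 1, the partner potential equals V_n − 2 (d²/dx²) ln ψ₀⁽ⁿ⁾.

```python
# Illustration (hbar = 2m = 1) of Corollary 2.1.1 for a hypothetical example:
# V_{n+1} = V_n - 2 (d^2/dx^2) ln(psi_0^{(n)}), with W(x) = x, V_0 = W^2 - W',
# V_1 = W^2 + W' and ground state psi_0 = exp(-x^2/2).
import sympy as sp

x = sp.symbols('x', real=True)
W = x
V0 = W**2 - sp.diff(W, x)              # x^2 - 1
V1 = W**2 + sp.diff(W, x)              # x^2 + 1, the partner potential
psi0 = sp.exp(-sp.integrate(W, x))     # ground state of V0, up to normalisation

rhs = V0 - 2 * sp.diff(sp.log(psi0), x, 2)
print(sp.simplify(V1 - rhs))           # prints 0
```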

The relations between eigenvalues and eigenstates are the same as those of a single partnership, as the following theorem states.

Theorem 2.1.2 (Degeneracy of a Family). Let $\mathcal{H}$ be a state space and H_n, n ∈ ℕ_0, a family of partner Hamiltonians, possibly finite. Then the eigenvalues satisfy the relations
$$E_m^{(n+l)} = E_{m+l}^{(n)}, \qquad n, m, l \in \mathbb{N}_0,$$
and the eigenstates the relations
$$\psi_m^{(n+l)} = \prod_{i=1}^{l}\left[\left(E_{m+i}^{(n+l-i)} - E_0^{(n+l-i)}\right)^{-\frac{1}{2}} A_{n+l-i}\right]\psi_{m+l}^{(n)}, \qquad \psi_{m+l}^{(n)} = \prod_{i=1}^{l}\left[\left(E_{m+l-i}^{(n+i)} - E_0^{(n+i)}\right)^{-\frac{1}{2}} A_{n+i-1}^*\right]\psi_m^{(n+l)},$$
with n, m ∈ ℕ_0 and l ∈ ℕ, for all defined Hamiltonians.

Proof. Let n, m ∈ ℕ_0 be arbitrarily given. Note that if the formula holds for l = 1, it also holds for l > 1, because the case l > 1 is just the case l = 1 applied multiple times, each time with different values of n and m. We thus have to prove that $E_m^{(n+1)} = E_{m+1}^{(n)}$ and
$$\psi_m^{(n+1)} = \left(E_{m+1}^{(n)} - E_0^{(n)}\right)^{-\frac{1}{2}} A_n\psi_{m+1}^{(n)}, \qquad \psi_{m+1}^{(n)} = \left(E_m^{(n+1)} - E_0^{(n+1)}\right)^{-\frac{1}{2}} A_n^*\psi_m^{(n+1)}.$$
However, this follows directly from Theorem 2.1.1. □

Theorem 2.1.2 tells us that if we know the ground states of every Hamiltonian in the family, we know all the (bounded) eigenstates of every Hamiltonian in the family. The ground states can be calculated from the linking operators, because the ground states of the Hamiltonians in the family are in the kernels of the operators A_n. This is a good result, because one does not have to solve a second order differential equation with this technique.

The downside of this method is that you have to solve a non-linear first order differential equation to get the linking operators, as Definition 2.1.3 requires a superpotential for the linking operators. In most cases, this slows down the calculation, so we would like to have a shortcut, so that we do not have to calculate the superpotential or potential again every time we add another partner to the family. A possible shortcut would be to write the partner potential of a given potential in terms of the given potential, such that we only have to solve for the superpotential once. This idea is explored in Chapter 3.

2.2 Supersymmetric model

For some background information on super linear algebra and super Lie algebras, we refer to Appendix A.1.

We are now at a point where we can introduce a supersymmetric quantum mechanical model. Normally, a symmetry group is described by a Lie group, a manifold that also has a group structure. However, because there is a close correspondence between Lie groups and Lie algebras (the latter can mathematically be seen as the tangent space of the identity element or physically as the logarithm around the identity element), we can also say that this Lie group generates the Lie algebra. In the general case, this algebra contains both the Poincaré group of Minkowski symmetries and a group of internal symmetries. The question asked by Coleman and Mandula was whether there were internal symmetries that could generate the Lie algebra containing the Poincaré group. Their answer was no, because they only used commuting symmetries. However, when using the loophole discovered by Haag, Lopuszanski and Sohnius, i.e. allowing anti-commuting symmetries, it is possible to have some internal symmetries generate the total Lie algebra, or now better called the total super Lie algebra.

In our quantum mechanical case, we only use time translation in our Poincaré group of symmetries, so the representation of our symmetry group will only contain the Hamiltonian, not the momenta or other continuous symmetries. As anti-commuting symmetries, we will use two supersymmetries, which will be represented by an operator and its adjoint. Our super Lie algebra will therefore contain only three base vectors, which represent the Hamiltonian and the two supersymmetries. The supersymmetries will be the odd vectors, named B and C, and the Hamiltonian the even vector, A. The commutation and anti-commutation relations are therefore given by
$$[A, B] = [A, C] = 0, \qquad [B, C] = A, \qquad [A, A] = [B, B] = [C, C] = 0.$$
Note that, because B and C are odd, we have [B, C] = [C, B], in contrast to A, where we have [A, B] = −[B, A] and [A, C] = −[C, A]. As the commutation relations show, our super Lie algebra is generated by only the anti-commuting symmetries, B and C, because we have [B, C] = A. This is the super Lie algebra we will use.

The only task we have to do now is to define a representation of this super Lie algebra in terms of quantum operators, which is done in Definition 2.2.1, and to show this actually is a representation of our super Lie algebra. As operators we will use a two-dimensional Hamiltonian composed of a one-dimensional Hamiltonian and its partner, and two charges, each composed of a matrix multiplied with one of the two linking operators. It turns out that this choice is a perfect representation of our little super Lie algebra and is therefore a nice model for supersymmetric quantum mechanics.

Definition 2.2.1 (Charge operators). Let $\mathcal{H}_0$ be a state space and let H_0, H_1 be partner Hamiltonians given by linking operators $H_0 = A^*A$ and $H_1 = AA^*$. Define a new state space $\mathcal{H} := \mathcal{H}_0 \oplus \mathcal{H}_0$ with Hamiltonian $H := H_0 \oplus H_1$. Then the charge operators Q and Q* and the Hamiltonian H are given by
$$Q := \begin{pmatrix} 0 & 0 \\ A & 0 \end{pmatrix}, \qquad Q^* := \begin{pmatrix} 0 & A^* \\ 0 & 0 \end{pmatrix} \qquad\text{and}\qquad H := \begin{pmatrix} A^*A & 0 \\ 0 & AA^* \end{pmatrix}.$$

First notice that these charges are each other's adjoint. Second, notice that Q*Q + QQ* = H and Q² = (Q*)² = 0. These two relations are in line with the anti-commutation relations, because QQ + QQ = 2Q² = 0, Q*Q* + Q*Q* = 2(Q*)² = 0 and Q*Q + QQ* = H hold. We have to use the anti-commutator, because B and C are odd, and thus Q and Q* should also be odd. The commutators of these charges with the Hamiltonian are given by
$$HQ - QH = Q^*QQ + QQ^*Q - QQ^*Q - QQQ^* = Q^*QQ - QQQ^* = 0 - 0 = 0,$$
$$HQ^* - Q^*H = Q^*QQ^* + QQ^*Q^* - Q^*Q^*Q - Q^*QQ^* = QQ^*Q^* - Q^*Q^*Q = 0 - 0 = 0.$$
The operators comply with the relations of the super Lie algebra, if we use the standard commutator and anti-commutator as the super Lie bracket. Note that we have used commutation relations and the super Lie bracket almost as the same concept. They are however not. Commutation relations are only defined when there is a normal multiplication. The super Lie bracket is not a commutation relation; it is actually the multiplication in the super Lie algebra. Luckily for us, the commutation/anti-commutation relations can be proven to comply with the axioms for a super Lie bracket. We will come back to this later in Theorem 2.2.1.
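These relations are pure matrix algebra and can be checked mechanically. The sketch below (an illustration, not from the thesis) treats A and A* as non-commuting symbols and verifies Q² = 0, Q*Q + QQ* = H and [H, Q] = 0 for the matrices of Definition 2.2.1.

```python
# Illustration of Definition 2.2.1: with A and A* as non-commuting symbols, the
# charges satisfy Q^2 = 0, Q*Q + QQ* = H and [H, Q] = 0.
import sympy as sp

A, Ad = sp.symbols('A A_dag', commutative=False)   # stand-ins for A and A*
Q  = sp.Matrix([[0, 0], [A, 0]])
Qd = sp.Matrix([[0, Ad], [0, 0]])
H  = sp.Matrix([[Ad * A, 0], [0, A * Ad]])

print(Q * Q)                    # zero matrix: Q^2 = 0
print(Qd * Q + Q * Qd - H)      # zero matrix: Q*Q + QQ* = H
print(H * Q - Q * H)            # zero matrix: Q commutes with H
```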

Going back to our operators, we see that our Hamiltonian is two-dimensional, but we would rather describe it as having a bosonic and a fermionic part. The n-th excited state can be interpreted as the n-particle state, where the degeneracy determines whether one of the particles is a boson or a fermion. The Q-charge alters a boson into a fermion and Q* alters a fermion into a boson. For the energy this change does not matter, so there is a symmetry between bosons and fermions. As the commutation relations between Q, Q* and H already showed, Q and Q* commute with H and are therefore symmetries of H. The exact degeneracy is shown in Lemma 2.2.1.

Lemma 2.2.1 (Degeneracy). Let $\mathcal{H}$, H, Q and Q* be as in Definition 2.2.1. Then every non-zero eigenvalue of H is two-fold degenerate. The zero eigenvalue, if it exists, is non-degenerate.

Proof. First, let E ≠ 0 and ψ ∈ dom(H)⁸ with Hψ = Eψ be arbitrarily given. Then we have for its components that
$$\begin{pmatrix} E\psi^{(0)} \\ E\psi^{(1)} \end{pmatrix} = \begin{pmatrix} H_0 & 0 \\ 0 & H_1 \end{pmatrix}\begin{pmatrix} \psi^{(0)} \\ \psi^{(1)} \end{pmatrix} = \begin{pmatrix} H_0\psi^{(0)} \\ H_1\psi^{(1)} \end{pmatrix},$$
thus $H_0\psi^{(0)} = E\psi^{(0)}$ and $H_1\psi^{(1)} = E\psi^{(1)}$. Looking at the first equality, this only exists if $E = E_{n+1}^{(0)}$, n ∈ ℕ_0, thus $\psi^{(0)} = c_1\psi_{n+1}^{(0)}$, $c_1 \in \mathbb{C}$. Then using Theorem 2.1.1 we get that $\psi^{(1)} = c_2\psi_n^{(1)}$, $c_2 \in \mathbb{C}$. This shows that
$$\psi = c_1\begin{pmatrix} \psi_{n+1}^{(0)} \\ 0 \end{pmatrix} + c_2\begin{pmatrix} 0 \\ \psi_n^{(1)} \end{pmatrix},$$
thus it lies in the span of $\begin{pmatrix} \psi_{n+1}^{(0)} \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ \psi_n^{(1)} \end{pmatrix}$. This completes the first part.

For the second part, notice that if ψ is in the kernel, $H_0\psi^{(0)} = 0$ and $H_1\psi^{(1)} = 0$. We already saw that if a non-zero $\psi_0^{(0)}$ exists, the kernel of H_1 is empty. Thus the only way for a non-zero ψ to be in the kernel of H is to be of the form
$$\psi = c_1\begin{pmatrix} \psi_0^{(0)} \\ 0 \end{pmatrix}.$$
This makes the kernel one-dimensional and thus the eigenvalue 0 is non-degenerate. □

⁸ See the assumption in Subsection 2.1.1.

To complete our model, we only have to prove that our system is a super Lie algebra and therefore a true representation of a supersymmetric system.

Theorem 2.2.1. Let $\mathcal{H}_0$ be a state space with partner Hamiltonians H_0 and H_1. Let $\mathcal{H} := \mathcal{H}_0 \oplus \mathcal{H}_0$ and $H = H_0 \oplus H_1$. Then $\mathcal{H}$ is a super vector space and the super Lie algebra generated by the charges contains H.

Proof. The fact that $\mathcal{H}$ is a super vector space is trivial, because of its definition.

The space spanned by Q, Q* and H is a super vector space V, with $V_0 := \mathrm{span}(H)$ and $V_1 := \mathrm{span}(Q, Q^*)$. Note that from $Q^2 = (Q^*)^2 = 0$ and $Q^*Q + QQ^* = H$ we see Q and Q* are odd, because the composition of two odd functions should be, and is, even. H is obviously even.

We thus only have to show that this space is a super Lie algebra generated by the charges. We define the bracket of the super Lie algebra by $[A, B] = AB - (-1)^{p(A)p(B)}BA$, where the multiplication is the composition of maps. This definition clearly is bilinear. The charges are odd, meaning the bracket of the charges equals $[Q^*, Q] = Q^*Q + QQ^* = H$. H is thus generated by the charges. Note that the super vector space is closed under the bracket, because the bracket of a charge with the Hamiltonian is equal to the commutator of the two and therefore equal to zero.

We only have to show our commutator/anti-commutator is a super Lie bracket. For the super anti-symmetry we have (A, B ∈ V):
$$[A, B] = AB - (-1)^{p(A)p(B)}BA = -(-1)^{p(A)p(B)}\left(BA - (-1)^{p(A)p(B)}AB\right) = -(-1)^{p(A)p(B)}[B, A].$$
For the last property, notice that we only have three generators, so we may just calculate the identity with these three:
$$[H,[Q, Q^*]] + (-1)^{p(H)p(Q)+p(H)p(Q^*)}[Q,[Q^*, H]] + (-1)^{p(H)p(Q^*)+p(Q)p(Q^*)}[Q^*,[H, Q]] = [H, H] + [Q, 0] - [Q^*, 0] = 0.$$
The other permutations are similar, because every inner bracket is either the zero bracket or H. This means that the outer bracket is given by [H, H], [Q*, H] or [Q, H] (the elements in the brackets can be switched). All these brackets are zero, so the identity always holds. This means our commutator/anti-commutator is indeed a super Lie bracket. □

We therefore have a super Lie algebra of a Hamiltonian and charges, generated by the charges, that is constructed from the linking operators and partner Hamiltonians of Subsection 2.1.1. This is the reason why the method described in that section is called supersymmetric.

Now that we have the supersymmetric quantum mechanical model, there is only one issue to address: what is the difference between having a zero-energy ground state and not having such a state? The answer lies in the difference between broken and unbroken supersymmetry. As with every other symmetry, you would want to know if there are any states that are invariant under the symmetry. For example, the even states (ground state, second excited state, etc.) of the infinite square well are invariant under the symmetry x ↦ L − x, the symmetry of mirroring the potential around x = ½L. For supersymmetry, there is only one candidate, the zero-energy state. This would directly be the ground state, because we can write
$$H = Q^*Q + QQ^* = (Q + Q^*)^2,$$
thus every eigenvalue is non-negative⁹. This means that the vacuum state, the physical interpretation of the ground state, is invariant under this supersymmetry, if it has zero energy. This is called unbroken supersymmetry. If the ground state has non-zero energy, then it is not invariant under supersymmetry, so we call this case broken supersymmetry. The physical significance lies in the fact that in the first case, bosons and fermions have the same mass [32]. If the vacuum energy is not invariant under this symmetry, it can be proven that a massless fermion exists, the so-called Goldstone fermion [25]. Thus, having a zero-energy ground state is physically significant.

2.3 Singular potentials

Until now, we used that V(x) is regular, i.e. it has no singular points. This allowed us to build up a theory where we can prove a degeneracy theorem, Theorem 2.1.1. It would be interesting to see if this theorem would hold even if V(x) has one or more singularities. This question turns out to be related to another topic, namely the definition of the linking operators in Definition 2.1.4. Here we took the linking operators A and A* such that $H_0 = A^*A + E_0$, with E_0 the ground state energy. This meant that the ground state of this Hamiltonian is part of the kernel of A. We could however choose another term, instead of E_0. This choice determines the behaviour of our system and gives rise to singular potentials¹⁰.

The main idea is to look for solutions of the Schrödinger equation without caring about normalisability. With e the parameter we use to modify our choices and φ a possible solution, the Schrödinger equation becomes
$$-\frac{d^2\varphi}{dx^2} + V(x)\varphi = e\varphi.$$

⁹ This is also the case in quantum field theory [14].
¹⁰ For more information on singular potentials in general, we recommend the review written by Frank,

Here we have taken ℏ = 2m = 1 for clarity. For e = E_0 we get our ground state solution back. Writing V_0(x) = V(x) − E_0 then gives the situation of Definition 2.1.4. This case is called unbroken supersymmetry, as was discussed in the last section. This is also the assumed case in this text (except in this section).

The case e < E_0 is called broken supersymmetry. Note that in this case V_0(x) has a ground state energy larger than zero, because it is just an increased V(x). Although a simple translation, this displacement has large consequences. This was also discussed in the previous section.

The case e > E_0 is the case where singular potentials come into play. If we write $\varphi_e$ for the solution with energy e, then we have
$$V(x) - e = \frac{1}{\varphi_e}\frac{d^2\varphi_e}{dx^2} = \frac{1}{\varphi_e^2}\left(\frac{d^2\varphi_e}{dx^2}\varphi_e - \left(\frac{d\varphi_e}{dx}\right)^2 + \left(\frac{d\varphi_e}{dx}\right)^2\right) = \left(-\frac{1}{\varphi_e}\frac{d\varphi_e}{dx}\right)^2 - \frac{d}{dx}\left(-\frac{1}{\varphi_e}\frac{d\varphi_e}{dx}\right) = W_e^2(x) - W_e'(x),$$
where the first step is directly seen from the Schrödinger equation. This shows us that $W_e(x) = -\frac{1}{\varphi_e}\frac{d\varphi_e}{dx}$ is a superpotential that gives V(x) − e. Note that the ground state does not have zeros [4], but solutions after this state do. Therefore, W_e(x) has at least one singularity, and consequently the partner potential V_1(x) has at least one too.

The question remains whether Theorem 2.1.1 holds. This is not the case. The problem is that the superpotential is part of the linking operator, making the linking operator singular at particular values of x. To have meaningful wave functions, every wave function should be zero at that particular point, so as not to have singular wave functions. In general, this is not possible, as the example of the infinite square well shows.

However, partial degeneracy can be possible if the potentials are symmetric around their singularity [9]. This can be done if we use the first excited state of V_0(x), because it only has one zero¹¹. In this case some wave functions arise that are symmetric around the singularity and have a zero there. These wave functions can nullify the singularity in the linking operator, making it possible to have a meaningful partner function. An example is the infinite square well from Example 2.1.2. If we take L = π, the first excited state of the infinite square well becomes
$$\psi_1^{(0)}(x) = \sqrt{\frac{2}{\pi}}\sin(2x).$$
The superpotential is then
$$W(x) = -\frac{1}{\psi_1^{(0)}}\frac{d\psi_1^{(0)}}{dx} = -\frac{2\cos(2x)}{\sin(2x)}.$$
The partner potentials are
$$V_0(x) = W^2(x) - W'(x) = -4, \qquad V_1(x) = W^2(x) + W'(x) = 4\,\frac{\cos^2(2x) + 1}{\sin^2(2x)}.$$

¹¹ For solutions with e > E_1, this will not be possible, because they have at least two singularities,
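The computation of these two potentials can be reproduced symbolically. The sympy sketch below (an illustration, not from the thesis, with ℏ = 2m = 1) builds the superpotential from the first excited state sin(2x) and recovers V₀ = −4 and the singular partner potential.

```python
# Illustration (hbar = 2m = 1): the superpotential built from the first excited
# state of the infinite square well (L = pi) gives V_0 = -4 and the singular
# partner V_1 = 4 (cos^2(2x) + 1)/sin^2(2x).
import sympy as sp

x = sp.symbols('x', real=True)
psi1 = sp.sin(2 * x)                      # first excited state, up to normalisation
W = -sp.diff(psi1, x) / psi1              # W = -psi_1'/psi_1 = -2 cos(2x)/sin(2x)

V0 = sp.simplify(W**2 - sp.diff(W, x))
V1 = sp.simplify(W**2 + sp.diff(W, x))
print(V0)                                                            # -4
print(sp.simplify(V1 - 4 * (sp.cos(2*x)**2 + 1) / sp.sin(2*x)**2))   # 0
```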

The second potential definitely has a strong singularity at x = ½π. Now note that V_1(x) is just twice the standard partner potential of V_0(x), one on [0, ½π) and one on (½π, π]. At x = ½π it forces the wave functions to zero, as the 'doubled' potential shows. Now look how the eigenfunctions of V_0(x) are transformed to 'eigenfunctions' of V_1(x) with the help of the linking operators:
$$A\psi_{n+1}^{(0)}(x) \propto \left(\frac{d}{dx} - \frac{2\cos(2x)}{\sin(2x)}\right)\sin((n+1)x) = (n+1)\cos((n+1)x) - \frac{2\cos(2x)\sin((n+1)x)}{\sin(2x)} \propto\; \text{"}\psi_n^{(1)}(x)\text{"}.$$
Remember that n = 1 labels the ground state of the infinite square well here (convention), and we basically subtracted the first excited energy of the potential, so the energy of the first excited state should be zero. This means the first excited state should be nullified by the linking operator A, as is indeed the case. Evaluating $A\psi_n^{(0)}(x)$ at x = ½π gives the limit
$$\lim_{x \to \frac{1}{2}\pi}\; n\cos(nx) - \frac{2\cos(2x)\sin(nx)}{\sin(2x)}.$$
For n odd we have the first cosine zero and the second equal to 1, but the fraction becomes ±∞, depending on n, thus the singularity of the linking operators does not vanish. However, for n even, this limit does go to zero. In this case, the infinite square well can just be described as two infinite square wells combined, just as the partner potential is described. There are some sign problems, but those can be fixed by the right factor for A. This means there is a partial degeneracy. However, as we already stated, this is an exception, because in general, degeneracy is broken.


Chapter 3

Shape Invariant Potentials

In this chapter we will look at a class of potentials that can be solved easily with algebraic methods, called shape invariant potentials. First we describe the basic case of this class, how it is defined and how it is used. Then we give an example to illustrate a way of using them by calculating the excited states of a slightly modified Morse potential. Using a generalisation of shape invariance, called multiple-step shape invariance, we open up a huge class of potentials, only limited by the fact that those potentials can be rather exotic. We end the chapter with a short introduction to ways of finding shape invariant potentials and with a remark on how some shape invariant potentials are related to each other by coordinate changes.

3.1 Easily solvable potentials

3.1.1 Concept of shape invariant potentials

In Chapter 2 we saw the definition of partner potentials and how they can be used to solve for the spectrum of a Hamiltonian. It became clear that this method, although useful, has its drawbacks, because we would still have to solve for the superpotential every time we add a new partner to the sequence. This chapter is about a class of potentials where this problem does not occur. These potentials, called shape invariant potentials (SIPs), have the property that their partners are of the same form. This means that when you have to solve for the superpotential to add a new state, the equation becomes a lot easier, because you have already solved it for a similar potential. Before we go into details, we start with an example.

Shape invariant potentials, take V_0(x, a), depend on one or more parameters, a ∈ ℝⁿ, such that their partner has the same form as V_0(x, a), but only with a different parameter: V_1(x, a) = V_0(x, f(a)) + R(a)¹. Here f : ℝⁿ → ℝⁿ is a parameter change and R : ℝⁿ → ℝ a vertical shift. This parameter change is why this class is called shape invariant, because we only change the settings of the potential, not the overall shape. We allow the partner potential to be vertically translated, because it turns out that we can calculate the eigenvalues using R(a).

¹ Here we use R(a) both as a function R : a ↦ R(a) and as an operator R : φ ↦ R(a)φ.

Note that we do not have to stop here: the entire family of this potential can easily be calculated. This is because we can just separate the ground state out of the potential. If we start with a Hamiltonian of the form²
$$H_0 = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, a) + E_0^{(0)}(a) = A_0^*(x, a)A_0(x, a) + E_0^{(0)}(a),$$
the next two Hamiltonians in the family of H_0 become
$$H_1 = A_0(x, a)A_0^*(x, a) + E_0^{(0)}(a) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_1(x, a) + E_0^{(0)}(a) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, f(a)) + R(a) + E_0^{(0)}(a) = A_0^*(x, f(a))A_0(x, f(a)) + R(a) + E_0^{(0)}(a);$$
$$H_2 = A_0(x, f(a))A_0^*(x, f(a)) + R(a) + E_0^{(0)}(a) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_1(x, f(a)) + R(a) + E_0^{(0)}(a) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, f^2(a)) + R(f(a)) + R(a) + E_0^{(0)}(a) = A_0^*(x, f^2(a))A_0(x, f^2(a)) + R(f(a)) + R(a) + E_0^{(0)}(a);$$
and in general
$$H_n = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, f^n(a)) + \sum_{i=1}^{n} R(f^{i-1}(a)) + E_0^{(0)}(a) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, f^n(a)) + E_0^{(n)}(a).$$

The last step will be proven in Corollary 3.1.1. As an example, take the Hamiltonian³
$$H_0 = -\frac{d^2}{dx^2} + V_0(x, a) = -\frac{d^2}{dx^2} + \frac{a(a-1)}{\cos^2(x)} - a^2,$$
with a ≥ 0, defined on [−½π, ½π]. This is a special case of the trigonometric Scarf I potential [9] and has zero ground state energy. The superpotential of this potential is W(x, a) = a tan(x). The linking operators of Definition 2.1.3 are given by
$$A(x, a) = \frac{d}{dx} + a\tan(x), \qquad A^*(x, a) = -\frac{d}{dx} + a\tan(x).$$

² As with R(a), $E_m^{(n)}(a)$ is a function as well as an operator, giving the m-th energy level of the (n+1)-th potential in the family, because we started with V_0(x, a).
³ We set ℏ = 2m = 1 for clarity.


The partner Hamiltonians become
$$H_0 = A^*(x, a)A(x, a) = -\frac{d^2}{dx^2} + W^2(x, a) - W'(x, a) = -\frac{d^2}{dx^2} + \frac{a^2\sin^2(x)}{\cos^2(x)} - \frac{a}{\cos^2(x)} = -\frac{d^2}{dx^2} + \frac{a(a-1)}{\cos^2(x)} - a^2;$$
$$H_1 = A(x, a)A^*(x, a) = -\frac{d^2}{dx^2} + W^2(x, a) + W'(x, a) = -\frac{d^2}{dx^2} + \frac{a^2\sin^2(x)}{\cos^2(x)} + \frac{a}{\cos^2(x)} = -\frac{d^2}{dx^2} + \frac{a(a+1)}{\cos^2(x)} - a^2,$$
so we have
$$V_1(x, a) = \frac{a(a+1)}{\cos^2(x)} - a^2.$$

These two definitely look like each other. In fact, we can write V_1(x, a) as
$$V_1(x, a) = \frac{a(a+1)}{\cos^2(x)} - a^2 = \frac{a(a+1)}{\cos^2(x)} - (a+1)^2 + (2a+1) = V_0(x, a+1) + R(a).$$

Here we took R(a) = 2a + 1. Now we separate out the ground state energy, such that
$$H_1 = -\frac{d^2}{dx^2} + V_0(x, a+1) + R(a) = A^*(x, a+1)A(x, a+1) + R(a).$$
The next Hamiltonian in the family now becomes
$$H_2 = A(x, a+1)A^*(x, a+1) + R(a) = -\frac{d^2}{dx^2} + W^2(x, a+1) + W'(x, a+1) + R(a) = -\frac{d^2}{dx^2} + \frac{(a+1)^2\sin^2(x)}{\cos^2(x)} + \frac{a+1}{\cos^2(x)} + 2a + 1 = -\frac{d^2}{dx^2} + \frac{(a+1)(a+2)}{\cos^2(x)} - a^2.$$


Again, this potential is just a parameter shift (or two) away from V_0(x, a):
$$V_2(x, a) = \frac{(a+1)(a+2)}{\cos^2(x)} - a^2 = \frac{(a+1)(a+2)}{\cos^2(x)} - (a+1)^2 + (2a+1) = V_1(x, a+1) + R(a)$$
$$= \frac{(a+1)(a+2)}{\cos^2(x)} - (a+2)^2 + (2a+3) + (2a+1) = V_0(x, a+2) + R(a+1) + R(a).$$
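The shape-invariance relation for this example can also be verified symbolically. The sympy sketch below (an illustration, not from the thesis, with ℏ = 2m = 1 as in the text) checks that the partner of V₀(x, a) equals V₀(x, a+1) + R(a) with R(a) = 2a + 1.

```python
# Illustration (hbar = 2m = 1) of shape invariance for the Scarf example: with
# W(x, a) = a tan(x), the partner of V_0(x, a) = a(a-1)/cos^2(x) - a^2 equals
# V_0(x, a+1) + R(a) with f(a) = a + 1 and R(a) = 2a + 1.
import sympy as sp

x, a = sp.symbols('x a', real=True, positive=True)
W = a * sp.tan(x)

V0 = sp.simplify(W**2 - sp.diff(W, x))           # a(a-1)/cos^2(x) - a^2
V1 = sp.simplify(W**2 + sp.diff(W, x))           # partner potential
V0_shifted = V0.subs(a, a + 1) + (2 * a + 1)     # V_0(x, f(a)) + R(a)

print(sp.simplify(V1 - V0_shifted))              # 0: shape invariance holds
```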

As we can see in this example, the family of a SIP is very easily calculated. In Definition 3.1.1 and Lemma 3.1.1 we will make this idea precise.

Definition 3.1.1 (Shape Invariant Potential). Let $\mathcal{H}$ be a state space, $H_0 = A^*A + E_0^{(0)}$ and $H_1 = AA^* + E_0^{(0)}$ partner Hamiltonians, and V_0(x, a) and V_1(x, b) their potentials with parameters a, b ∈ ℝⁿ. Then V_0(x, a) is shape invariant if it satisfies the relation
$$V_1(x, a) = V_0(x, f(a)) + R(a),$$
where f : ℝⁿ → ℝⁿ and R : ℝⁿ → ℝ are continuous functions.

Note that we assume V_0(x, a) has a zero-energy ground state, because we split the Hamiltonian into a factorisation term and a ground state energy term. We will therefore often use $E_0^{(0)}(a) = 0$. Later in Subsection 3.1.3 this will be used again, but more explicitly. In Section 2.3 we have gone deeper into this topic, studying potentials with non-zero ground state energy.

If we use this shape invariance condition multiple times, we get a family of partner potentials that all have the same form. This is the topic of Lemma 3.1.1.

Lemma 3.1.1 (Family of SIPs). Let $\mathcal{H}$ be a state space and V_0(x, a), a ∈ ℝⁿ, a shape invariant potential with zero ground state energy for all a ∈ ℝⁿ, thus $E_0^{(0)}(a) = 0$. Then the family of Hamiltonians H_n, n ∈ ℕ_0, given by
$$H_n = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, f^n(a)) + \sum_{i=1}^{n} R(f^{i-1}(a)), \quad n \in \mathbb{N},$$
and $H_0 = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, a)$, is a family of partner Hamiltonians.

Proof. The case n = 0 is obvious. So let n ∈ ℕ be arbitrarily given and assume the statement holds for every 0 ≤ m ≤ n−1. First, write the Hamiltonians H_n and H_{n+1} as
$$H_n = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, f^n(a)) + \sum_{i=1}^{n} R(f^{i-1}(a)) =: -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \tilde{V}_n(x, a) + \sum_{i=1}^{n} R(f^{i-1}(a)),$$
$$H_{n+1} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \left(V_0(x, f^{n+1}(a)) + R(f^n(a))\right) + \sum_{i=1}^{n} R(f^{i-1}(a)) =: -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \tilde{V}_{n+1}(x, a) + \sum_{i=1}^{n} R(f^{i-1}(a)).$$
We have to prove H_n and H_{n+1} are partner Hamiltonians, thus that $\tilde{V}_n(x, a)$ and $\tilde{V}_{n+1}(x, a)$ are partner potentials. Note that for $V_0(x, f^n(a))$ we have $E_0^{(0)}(f^n(a)) = 0$, thus the ground state energy of H_n equals $E_0^{(n)}(a) = \sum_{i=1}^{n} R(f^{i-1}(a))$. This means we can write H_n as
$$H_n = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \tilde{V}_n(x, a) + E_0^{(n)}(a) = A_n^*(a)A_n(a) + E_0^{(n)}(a).$$
Using the shape invariance condition on $V_0(x, f^n(a))$ we get
$$H'_{n+1} = A_n(a)A_n^*(a) + E_0^{(n)}(a) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V_0(x, f^{n+1}(a)) + R(f^n(a)) + E_0^{(n)}(a) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \tilde{V}_{n+1}(x, a) + E_0^{(n)}(a) = H_{n+1}.$$
This means H_n and H_{n+1} are partner Hamiltonians. □

Corollary 3.1.1 (Energies of a SIP family). The eigenvalues of a family of SIP-partner Hamiltonians are given by
$$E_m^{(n)} = \sum_{i=1}^{n+m} R(f^{i-1}(a)), \qquad n, m \in \mathbb{N}_0,\; n+m \geq 1,$$
and $E_0^{(0)} = 0$.

Proof. This follows directly from the proof of Lemma 3.1.1. □

From the previous discussion it is clear that we can very easily generate the family of partner Hamiltonians of a SIP. The reason why the spectrum of a SIP is very easily calculated becomes clear when we note that if one solves for the ground state of V_0(x, a), for every a, one directly gets the ground state of every other potential in the family: if $\psi_0^{(0)}(x, a)$ is the ground state of V_0(x, a), then $\psi_0^{(0)}(x, f(a))$ is the ground state of V_0(x, f(a)). Shifting the potential up by a constant does not change the behaviour of the eigenstates, so $\psi_0^{(0)}(x, f(a))$ is also the ground state of V_0(x, f(a)) + R(a), only with a different energy. Using the shape invariance condition we therefore see that $\psi_0^{(0)}(x, f(a))$ is the ground state of V_1(x, a).

Now that we have the ground state of every Hamiltonian in the family of partner Hamiltonians, getting the complete spectrum of H_0 is easy. The linking operators are all known because of the shape invariance condition on the superpotentials. So taking the ground state of a Hamiltonian, say H_2, and multiplying it with the appropriate linking operators, here $A^*(x, a)A^*(x, f(a))\psi_0^{(0)}(x, f^2(a))$, we can get every excited state of H_0, here the second excited state. Theorem 3.1.1 makes this idea precise.

Theorem 3.1.1 (Degeneracy of a SIP-family). Let $\mathcal{H}$ be a state space and H_n, n ∈ ℕ_0, a family of SIP-partner Hamiltonians, possibly finite. Then the eigenvalues satisfy the relations
$$E_m^{(n+l)} = E_{m+l}^{(n)} = \sum_{i=1}^{n+m+l} R(f^{i-1}(a)), \qquad n, m, l \in \mathbb{N}_0,\; n+m+l \geq 1,$$
$E_0^{(0)}(a) = 0$, and the eigenstates the relations
$$\psi_m^{(n)}(x, a) = \prod_{i=1}^{m}\left[\left(\sum_{j=n+i}^{n+m} R\left(f^{j-1}(a)\right)\right)^{-\frac{1}{2}} A^*\left(x, f^{n+i-1}(a)\right)\right]\psi_0\left(x, f^{n+m}(a)\right),$$
with n ∈ ℕ_0 and m ∈ ℕ, for all defined Hamiltonians.

Proof. Clear from Theorem 2.1.2, Corollary 3.1.1 and the observation that $\psi_0^{(n)}(x, a) = \psi_0^{(0)}(x, f^n(a))$. □

Although the condition on this class is strong, some rich families of potentials have been found in it. Simple examples are the radial Coulomb potential and the harmonic oscillator. It is not known if all families of SIPs have been found, nor if shape invariance is necessary for a potential to be analytically solvable. There are results that may indicate there are analytically solvable potentials that are not shape invariant [6].
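To see how little work the energy formula of Corollary 3.1.1 requires in practice, the short sketch below (an illustration, not from the thesis) evaluates the recursion for the Scarf example of this subsection, with f(a) = a + 1 and R(a) = 2a + 1; the closed form (a + n)² − a² used for comparison is a simple arithmetic consequence of that choice, not a formula quoted from the text.

```python
# Illustration of Corollary 3.1.1 for the Scarf example: with f(a) = a + 1 and
# R(a) = 2a + 1, the energies E_n = sum_{i=1}^{n} R(f^{i-1}(a)) sum to
# (a + n)^2 - a^2 (a direct arithmetic consequence, used here as a cross-check).
f = lambda a: a + 1          # parameter change of the Scarf example
R = lambda a: 2 * a + 1      # vertical shift of the Scarf example

def energy(n, a):
    """E_n^{(0)} of V_0(x, a) via the shape-invariance recursion."""
    total, par = 0.0, a
    for _ in range(n):
        total += R(par)
        par = f(par)
    return total

a = 3.0
for n in range(5):
    print(n, energy(n, a), (a + n)**2 - a**2)   # the two columns agree
```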

3.1.2 Example: Morse potential

To show the power of SIPs we will now calculate the bound eigenstates of an important potential, namely the Morse potential. This potential, normally given by [10]
$$V(x) = D\left(1 - e^{-\alpha x}\right)^2,$$
