Fractals and Scale Invariance in Physics

N/A
N/A
Protected

Academic year: 2021

Share "Fractals and Scale Invariance in Physics"

Copied!
55
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst

(1)

MSc Physics & Astronomy

Theoretical Physics

Master Thesis

Fractals and Scale Invariance in Physics

by Stan de Lange 11015233

Supervisor / Examiner

Dr. Vladimir Gritsev

Examiner

Prof. Dr. Kareljan Schoutens

Daily supervisor

Ward Vleeshouwers


Abstract

We study a variety of mathematical concepts through which physics on fractal objects can be realized. We first provide a review of the theory of fractals in a physical context. We investigate q-deformation as an aspect of discrete scale invariance, in particular q-boson theory. We then study the inverse square potential in detail as an exemplary case of scale-invariant physics, and show how discrete scale invariance follows from renormalization. In the same Chapter we calculate the adiabatic gauge potential of such scale invariant systems. An overview of supersymmetry is given, which relates to properties of the inverse square potential and whose shape-invariance encompasses an abstract sense of discrete scale invariance. Finally we provide a brief commentary on some of the articles that have motivated this research project.


Contents

1 Introduction
2 Fractals
  2.1 Hausdorff dimension
  2.2 Walk dimension
  2.3 Resistance exponent
  2.4 Discrete scale invariant functions
  2.5 De Rham curves
3 q-calculus and q-deformation
  3.1 Preliminaries of q-calculus
  3.2 The quantum plane
  3.3 q-boson algebra
  3.4 q-analytic fractals
4 The inverse square potential
  4.1 E = 0
  4.2 E > 0
  4.3 E < 0
  4.4 Practical instances of the inverse square potential
  4.5 Gauge potential and the quantum geometric tensor
  4.6 Calculation
5 Supersymmetry
  5.1 Shape invariance
6 Recent related papers
  6.1 Fractal strings
  6.2 Supersymmetry of the inverse square potential
7 Conclusion


Acknowledgements

My first and foremost thanks go out to my supervisor, Vladimir Gritsev. His ideas have sent me down a path through physics and mathematics that has led me to understand and appreciate a wide variety of research fields, theories and problem solving techniques. Without his knowledge and patience, this thesis would have been a lot shorter and less interesting than it is now. I would also like to thank Ward Vleeshouwers for being my daily supervisor. He was always able to attend our meetings and delivered keen remarks and insights. Wouter Buijsman deserves credit for patiently sitting down with me and explaining how to numerically compute quantum state fidelity; unfortunately, this topic has not made it into the final product.

Less formally, I thank my fellow students Jelle Conijn, Ferran Faura Iglesias, Jasper Kager, and Hoang Vu. They have sat beside me in the Master room for the first half-year, keeping up spirits, discussing one another’s problems whenever necessary and encouraging me to take breaks. Bart Verdonschot and Karel Zijp are two friends whom I’ve seen less, but whose company I am nevertheless grateful for. All these people have made this year fun and memorable.


Chapter 1

Introduction

The word “fractal” was coined by Benoit Mandelbrot in 1975 [1] and became the elemental idea of his book, The Fractal Geometry of Nature. He found that scientific models often rely on highly idealized mathematical objects like lines and spheres, even though the things one encounters in nature are often irregular and fractured. Mandelbrot’s aim was to describe objects like clouds, branches, galaxy clusters and coastlines in a way that respects their inherent grittiness. The essential property that all these objects (and many more) have in common is their self-similar structure: whether we view the fractal object from up close or afar, it looks the same. One could look at a map of Great Britain such as Figure 1.1 and draw the coastline; but zoom in and one finds that the coastline has a microscopic structure that has been neglected. One can re-draw the coastline, measuring it to be longer than in the first instance, but another zoom reveals irregularities once more. Still, nature is imperfect, and by enhancing too far (say, to a resolution of 10 meters) one will observe a regular coastline; but the point is that this self-similar property holds steadily over several orders of magnitude [2].

Figure 1.1: The measured length of the coastline of Great Britain strongly depends on the scale at which it is observed. This is here exemplified through the box-counting method (see Section 2.1). For a straight line, if the side of the boxes were halved, the number of boxes would double; for a flat plane it would increase fourfold. For irregular shapes, the number lies somewhere in-between. Image source: Kris Gurung [3]

Likewise, the behaviour of a Brownian particle is remarkably erratic: as it walks randomly through its ambient space (typically the 2D or 3D Euclidean space) it displays a coarse pattern at all scales. If we were to try to calculate the amount of time spent inside a demarcated box, we would find that this quantity has a particular scaling relation to the box size; this gives a notion of dimension that is different from the familiar, integer-valued topological dimension, and we will define it more precisely in the subsequent Chapter.

Though Mandelbrot conceived his idea of fractals as irregular shapes in nature in the 1970s, there is also a more abstract notion of fractals that has a longer history. About a century earlier Karl Weierstrass described a function with self-similar properties: the eponymous Weierstrass function, and a few years later followed Georg Cantor’s publication on the Cantor set. Such objects had been deemed troublesome, since they lacked intuitive mathematical properties such as density and differentiability. Their work provoked a flurry of activity through which fractals were rigorously defined and classified.

Figure 1.2: An elementary example of a mathematical fractal is the Cantor set. This graphic shows how the fractal is generated: one starts with a straight line, which is divided into three equal parts. The middle part is removed, leaving two identical lines. This procedure is then repeated: remove the middle parts so there are four lines, and so on. The Cantor fractal is obtained when this is done an infinite number of times. The curious property this object has is that making it three times as big yields two copies of the original. Image source: 127 “rect”, Wikimedia Commons [4]

In 1975–1976, Alexander Migdal and Leo Kadanoff developed a technique called Migdal–Kadanoff bond moving renormalization [5], [6]. This allows one to solve spin systems by decimating the spins in such a manner that the same system is retrieved with different coupling. It soon became apparent to theoretical physicists that this technique was quite natural (indeed an exact procedure) for analyzing spin systems defined on certain fractals: Mandelbrot, Gefen and Aharony published an article in 1980 in which the Ising model was solved on Koch curves and the Sierpiński figures [7], and two years later Griffiths and Kaufmann derived the Ising thermodynamics on diamond fractals [8]. In 1985, Domany, Kadanoff and others applied a renormalization technique to solve the tight binding problem defined on the Sierpiński gasket [9].

Around the same time an interest was sparked in the density of states on a fractal, as pioneered by Alexander and Orbach in 1982 [10]. Motivated by the physics of percolation clusters, Rammal and Toulouse released a paper that same year on the behaviour of random walkers on fractals. This led to an analytical investigation into specific fractals; Kigami is one of the most significant contributors in this field, cf. his book Analysis on fractals [11]. Other well-cited authors in this field are Michel Lapidus (cf. [12]), Martin Barlow ([13]) and Robert Strichartz ([14]).

In this thesis we will cover the various ways in which fractals, as well as a more general notion of scale invariance, have been explored in theoretical physics. This dissertation is set up as follows: In the next Chapter I provide the mathematical background for fractals. This includes the various fractal dimensions and the notion of discrete scale-invariant functions. The third Chapter contains an overview of q-calculus as well as its applications in quantum mechanics. Then in the fourth, I analyze the inverse square potential, which is scale invariant but needs to be renormalized, after which it displays even more exotic properties. In the fifth Chapter I consider supersymmetric quantum mechanics with a special interest in scaling shape invariant potentials. Lastly, I present and analyze the results of several specific papers on the aforementioned topics, which have motivated this research project.


Chapter 2

Fractals

2.1 Hausdorff dimension

Mandelbrot’s concept of fractals is motivated by the question: “How long is the coast of Britain?” One could consult a map of the island, trace the coastline and measure its length. This answer will prove to be inadequate, for if one were to use a smaller-scale map of (part of) the coast one obtains a different value: by zooming in we have revealed a microscopic structure that was invisible on the larger-scale map. We therefore find that the coast is longer than originally thought. This procedure can be repeated by zooming in once more; we now identify wrinkles on top of the wrinkles and our answer becomes greater yet.

A pedantic, practical-minded listener will point out that this is merely a complication due to experimental error: one could simply walk along the coast and calculate its length thusly. For if one were to pave a road along the shores, surely there is a certain amount of asphalt that is sufficient. Even if the question is posed for more meticulous ends, he continues, the irregularities must stop at the atomic level.

This person would be correct, but his objections ignore the remarkable aspects of the problem. Despite the phrasing of the question, it is not the actual length of the coast that we are interested in, but rather the scaling property of the answer, for it turns out that fractal objects like this one have a linear log–log relation between the scaling and the measurement ratio. It is certainly true that this relation does not apply at scales too great or too small, but it does hold steadily over a scale factor of a thousand. At any point in this range, the coastline looks nearly the same.

It is this scaling property that characterizes a fractal. Familiar geometric objects, such as lines, triangles and spheres, all scale according to their topological dimension: double a sphere's radius and its volume increases by a factor 2^3; for a triangle the area increases by a power of two; and a line's length quite tautologically increases by the scaling factor itself. Fractals are special, in that their scaling exponent is not necessarily an integer. This quantity is called the fractal dimension, and indeed we call something a fractal if the fractal dimension is strictly smaller than the topological dimension of the embedding space.

How does one measure this fractal dimension? One way is through box-counting. The idea is to cover the object with ‘boxes’ (any shape will do) of size r and count how many are needed, let this number be N . We expect the following relation:

N(r) ∝ r^{−d_h}, (2.1)

where d_h is called the Hausdorff dimension, which for our purposes is the same as the box-counting dimension. This exponent is roughly constant over several orders of magnitude, and it tends to 1 for very large r and to d (the topological dimension) for very small r. Simple algebraic manipulation yields the following explicit formula for the Hausdorff dimension

d_h = lim_{r→0} ln N(r) / ln(1/r). (2.2)

Through this approach, the coastline of Britain has been measured to have a Hausdorff dimension of approximately 1.25, as cited by Mandelbrot in his seminal 1967 paper.

The refined mathematical definition of the Hausdorff dimension also lends itself to ‘true’ fractals. The canonical example of this is the Sierpiński gasket, see Figure 2.1. This fractal is generated by starting off with an equilateral triangle, dividing it into four equal triangles and removing the one in the middle. This leaves three triangles, all connected at single points, from each of which we again remove the middle triangle, etc. The fractal is obtained when this process is repeated to infinity. One can therefore say that the mass of the Sierpiński gasket S is equal to three times the mass of the same fractal, but half as large,

M (S) = 3M (S/2). (2.3)

M denotes the mass function, which refers to any extensive quantity and avoids the confusion that the term ‘fractal volume’ would bring (because the area—specifically the Lebesgue measure—of the Sierpiński gasket is zero). When the boxes covering S are half the size, three times as many are needed, so the Hausdorff dimension of the Sierpiński gasket is ln(3)/ln(2) ≈ 1.585, though it is in this case much simpler to infer this from Equation 2.3. Note that fractals like the Sierpiński gasket can be realized either as a decimated planar surface (2-dimensional in origin) or as an intricate graph (1-dimensional).
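The box-counting procedure of Equation 2.2 can be tried out on an exact discrete representation of the Sierpiński gasket. The sketch below is illustrative: it uses the standard Pascal-triangle-mod-2 encoding (cell (i, j) of a 2^k × 2^k grid is occupied iff the bitwise AND of i and j vanishes), a representation chosen here for convenience rather than taken from the text.

```python
import math

def occupied_boxes(k):
    """Number of occupied boxes when the gasket is covered by a 2^k x 2^k grid.

    Discrete gasket: cell (i, j) is occupied iff i & j == 0 (the odd entries
    of Pascal's triangle), so halving the box size triples the count.
    """
    n = 2 ** k
    return sum(1 for i in range(n) for j in range(n) if i & j == 0)

# Box size r = 2^-k; estimate d_h = ln N(r) / ln(1/r) as in Equation 2.2
for k in range(1, 8):
    N = occupied_boxes(k)
    assert N == 3 ** k          # halving the boxes triples their number
    d_h = math.log(N) / math.log(2 ** k)
    assert abs(d_h - math.log(3) / math.log(2)) < 1e-9
```

For this exact fractal the estimate is scale-independent; for empirical data like a coastline one would instead fit the slope of ln N(r) against ln(1/r) over the range where the log–log relation is linear.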

Figure 2.1: The Sierpiński gasket, a triangular fractal that consists of three copies of itself. It is connected (unlike the Cantor set) and finitely ramified, meaning that removing an arbitrary section of the gasket requires only a finite number of cuts (in contrast to its square cousin, the Sierpiński carpet). This, in addition to its elementary construction and high degree of symmetry, has made it the prototypical fractal to analyze and with which to test theories. Image source: Beojan Stanislaus, Wikimedia Commons [15]

Let us look at one more fractal, the Koch curve (Figure 2.2). Though a mere line in the topological sense, it is in fact a rich shape. We see immediately that rescaling the curve by a factor of 3 is the same as putting 4 copies of the curve together in the appropriate shape; hence d_h = ln(4)/ln(3). (The Koch curve can also be seen to consist of two smaller curves: the scaling factor is then √3, which naturally yields the same value for the dimension.)

Figure 2.2: This is the Koch curve. It is a continuous curve which consists of 4 copies of itself that are 3 times as small. Its Hausdorff dimension is therefore ln(4)/ ln(3) ≈ 1.262. Image source: Fibonacci, Wikimedia Commons [16]

2.2 Walk dimension

As mentioned in the Introduction, a random walk is an example of a (non-deterministic) fractal. Like the deterministic fractals, it lacks a characteristic length scale, which makes it impossible to tell the difference between a fast-moving walker that makes small steps (‘high-resolution’) and one that makes slow, long strides (‘low-resolution’). When taken to the scaling limit, the random walk in ordinary d-dimensional space becomes a Wiener process; and it has applications in thermodynamics and spectral geometry. The root mean square displacement r_rms can be shown to be proportional to the square root of the time elapsed,

r_rms ≡ √⟨r²(t)⟩ ∝ √t. (2.4)

When this process is enacted on a graph fractal, the walker’s progress is inhibited by the gaps, which exist at all length scales. This results in anomalous diffusion, whereby the mean-square distance is given by a power law

⟨r²(t)⟩ = t^{2/d_w} ↔ t = (r_rms)^{d_w}. (2.5)

The constant d_w in the exponent is called the walk dimension. It is an intrinsic property of the fractal: whereas the Hausdorff dimension relates distance to mass under scaling, the walk dimension relates distance to time scaling. This manifests in the probability distribution: P(r, t) = P(λr, λ^{d_w} t). Its value can be evaluated by considering normalization on the lattice:¹ ∫ P(r, t) dr^{d_h} = 1, and iteratively dividing the measure into parts of equal probability.

Thanks to its high degree of symmetry, the walk dimension of the Sierpiński gasket can be found by calculating the expected ‘escape time’ for a random walk on the graph [18]; it equals ln(5)/ln(2). This means that when the gasket is twice as big, it takes a random walker five times as long to escape the gasket from its starting point.
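The factor of five can be reproduced by solving the mean-exit-time equations T(v) = 1 + (1/deg v) Σ_{w∼v} T(w) exactly on small gasket graphs. The sketch below is illustrative (the node labels and the little solver are my own choices, not notation from the text):

```python
from fractions import Fraction

def mean_exit_time(adj, start, absorbing):
    """Expected number of steps before a walker starting at `start`
    first hits the absorbing set; solved exactly over rationals."""
    nodes = [v for v in adj if v not in absorbing]
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    # Equations: T(v) - (1/deg v) sum over interior neighbours T(w) = 1
    A = [[Fraction(0) for _ in range(n)] for _ in range(n)]
    b = [Fraction(1) for _ in range(n)]
    for v in nodes:
        i = idx[v]
        A[i][i] = Fraction(1)
        for w in adj[v]:
            if w not in absorbing:
                A[i][idx[w]] -= Fraction(1, len(adj[v]))
    # Gauss-Jordan elimination (exact arithmetic)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                for c in range(n):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
    return b[idx[start]] / A[idx[start]][idx[start]]

# Level-0 gasket: a single triangle; start at corner A, escape at B or C.
tri = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
t0 = mean_exit_time(tri, "A", {"B", "C"})

# Level-1 gasket: corners A, B, C and edge midpoints ab, ac, bc.
g1 = {
    "A": ["ab", "ac"],
    "ab": ["A", "ac", "bc", "B"],
    "ac": ["A", "ab", "bc", "C"],
    "bc": ["ab", "ac", "B", "C"],
    "B": ["ab", "bc"],
    "C": ["ac", "bc"],
}
t1 = mean_exit_time(g1, "A", {"B", "C"})

assert t0 == 1 and t1 == 5   # doubling the gasket multiplies escape time by 5
```

The ratio t1/t0 = 5 is exactly the renormalization factor behind d_w = ln(5)/ln(2).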

2.3 Resistance exponent

A third quantity is the resistance exponent; this is encountered in the study of electrical circuits, where it relates resistance at various length scales, but also in the study of Dirichlet forms on graphs. Consider a circuit that has the shape of a perfect Sierpiński gasket, with a length l = l_0.

¹The usual notion of integration over an area does not work here, since a fractal has a volume of zero. Instead this definition relies on a more refined concept of the measure, akin to a probabilistic calculation: the integral of f over a subdomain yields the expected value of f with respect to the said subdomain. [17]

We want to know how the resistance scales with the circuit’s size. We first posit that it is effectively equivalent to a Y-shaped circuit with effective resistances r_0, see Figure 2.3a; then there is an effective resistance 2r_0 between two endpoints.

If one scales the system by a factor 2, so that l = 2l_0, one finds a circuit that is identical to three copies of the original Sierpiński circuit connected at the vertices: the first shape of Figure 2.3b. Writing these in effective form, one obtains the second shape, which can be simplified by applying a delta–wye transform. The result is another Y-shaped graph, with resistances 5r_0/3. Thus doubling the length scale increases resistance by a factor 5/3, and the resistance exponent is therefore d_Ω = ln(5/3)/ln(2).

Figure 2.3: Since the effective resistance between the three end-points of a Sierpiński gasket is the same as that of the ‘Y’ topology, one can show that this fractal has a resistance scaling of 5/3. This gives rise to the resistance exponent d_Ω. (Subfigure (a) shows the Y-equivalent circuit; subfigure (b) shows the rescaled circuit.)
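The same 5/3 scaling can be cross-checked by solving Kirchhoff’s equations directly, without the delta–wye transform. In the sketch below each Y with arms r_0 = 1 Ω is represented by its delta equivalent, a triangle of 3 Ω resistors; the node names are illustrative choices:

```python
def effective_resistance(edges, a, b):
    """Effective resistance between nodes a and b of a resistor network.

    edges: list of (node, node, resistance). Builds the weighted Laplacian,
    injects 1 A at a with b grounded, and solves by Gauss-Jordan elimination.
    """
    nodes = sorted({u for e in edges for u in e[:2]})
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    L = [[0.0] * n for _ in range(n)]
    for u, v, r in edges:
        g = 1.0 / r
        L[idx[u]][idx[u]] += g
        L[idx[v]][idx[v]] += g
        L[idx[u]][idx[v]] -= g
        L[idx[v]][idx[u]] -= g
    rhs = [0.0] * n
    rhs[idx[a]] = 1.0                       # inject 1 A at node a
    gi = idx[b]                             # ground node b: row -> v_b = 0
    L[gi] = [1.0 if j == gi else 0.0 for j in range(n)]
    rhs[gi] = 0.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(L[r][col]))
        L[col], L[piv] = L[piv], L[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(n):
            if r != col:
                f = L[r][col] / L[col][col]
                L[r] = [x - f * y for x, y in zip(L[r], L[col])]
                rhs[r] -= f * rhs[col]
    return rhs[idx[a]] / L[idx[a]][idx[a]]  # R_eff = v_a / 1 A

# Level-0: one triangle of 3-ohm resistors (delta equivalent of a Y with
# arms r0 = 1 ohm); corner-to-corner resistance is 2 ohm.
tri = [("A", "B", 3.0), ("B", "C", 3.0), ("A", "C", 3.0)]
R0 = effective_resistance(tri, "A", "B")

# Level-1: three such triangles glued at the midpoints ab, ac, bc.
g1 = [("A", "ab", 3.0), ("A", "ac", 3.0), ("ab", "ac", 3.0),
      ("ab", "B", 3.0), ("ab", "bc", 3.0), ("B", "bc", 3.0),
      ("ac", "bc", 3.0), ("ac", "C", 3.0), ("bc", "C", 3.0)]
R1 = effective_resistance(g1, "A", "B")

assert abs(R0 - 2.0) < 1e-9
assert abs(R1 / R0 - 5.0 / 3.0) < 1e-9   # doubling size scales R by 5/3
```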

The Hausdorff dimension, walk dimension, and resistance exponent are not independent quantities: they are all connected through the Einstein relation [19]

d_w = d_h + d_Ω − d + 2. (2.6)

This is a corollary of the acclaimed Einstein relation for kinetic theory, which unites the conductivity of a medium with its density and the diffusion constant:

σ = e²nD / (k_B T). (2.7)

The relation between the dimensions arises when one sees that conductivity is reciprocal to resistance (hence scaling with −d_Ω), the diffusion constant comes from Brownian motion (D = ⟨r²(t)⟩/t, so scaling with 2 − d_w) and density is mass divided by volume (scaling with d_h − d). Putting this all together yields Equation 2.6.
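For the Sierpiński gasket all of these exponents are known in closed form, so Equation 2.6 reduces to a one-line arithmetic check:

```python
import math

log2 = math.log(2)
d_h = math.log(3) / log2          # Hausdorff dimension, ln(3)/ln(2)
d_w = math.log(5) / log2          # walk dimension, ln(5)/ln(2)
d_O = math.log(5 / 3) / log2      # resistance exponent, ln(5/3)/ln(2)
d = 2                             # topological dimension of the embedding

# Einstein relation, Equation 2.6
assert abs(d_w - (d_h + d_O - d + 2)) < 1e-12
# Spectral dimension d_s = 2 d_h / d_w = 2 ln(3)/ln(5), cf. Section 2.3
assert abs(2 * d_h / d_w - 2 * math.log(3) / math.log(5)) < 1e-12
```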

Lastly, there is a spectral dimension, d_s, which is defined according to the scaling of the density of states (DoS) with system size. To see this, consider the density of states as a function, ρ, of energy, ε, with respect to a certain size, L; that is: ρ_L(ε). Scaling the system means ‘creating’ more space for states to occupy, so ρ_{bL}(ε) = b^{d_h} ρ_L(ε). At the same time, energy, being the eigenvalue of the Hamiltonian, is inversely proportional to time, which scales with the walk dimension: ε_{bL} = b^{−d_w} ε_L. If we now were to scale the measure as well, nothing should have changed: ρ_{bL}(ε_{bL}) dε_{bL} = ρ_L(ε_L) dε_L. This yields the scaling of the DoS with respect to the energy

ρ_L(bε) = b^{d_h/d_w} ρ_L(ε). (2.8)

ρ(bε) = b^{d_h/d_w − 1} ρ(ε) ≡ b^{d_s/2 − 1} ρ(ε). (2.9)

This finally gives the DoS function

ρ(ε) ∝ ε^{d_s/2 − 1}. (2.10)

This definition is chosen deliberately, because the DoS then has the same dimensional dependence as in Euclidean space. Thus, the spectral dimension is defined as d_s ≡ 2d_h/d_w.

2.4 Discrete scale invariant functions

At the end of the last Section we encountered an object that was, in some sense, similar to a rescaled version of itself. This notion of (discrete) self-similarity is an essential property, and we shall now treat it with more care. To be precise, we are interested in a function f that has the following scale symmetry:

f(x) = g(x) + (1/b) f(ax), (2.11)

where g is an arbitrary function and a and b are nonzero constants [20]. Any f that satisfies this equation is called discretely scale invariant, or DSI. In the particular (but common) case of g(x) = 0 the solution can be found using the ansatz f(x) = x^α F(x) for some other function F; when this is the case the function is also continuously scale invariant (CSI).² Inserting the ansatz gives f(ax) = a^α x^α F(ax), while Equation 2.11 demands f(ax) = b f(x). We thus require that a^α = b, i.e. α = ln(b)/ln(a), and F(ax) = F(x), which can be realized by defining F(x) = G(ln(x)/ln(a)) with G periodic with period 1. Thus, the general solution of the equation is

f(x) = x^{ln(b)/ln(a)} G(ln(x)/ln(a)). (2.12)

If a represents the linear scaling factor and b represents mass scaling, Equation 2.11 (with g = 0) yields the mass of a scale invariant fractal. α can then be identified as the Hausdorff dimension.

Log-periodicity is a characteristic feature of scale invariant functions. In the case where g(x) is nonvanishing, the equation can be solved by iteratively inserting Equation 2.11 into itself: each substitution yields a g-dependent term as well as an f -dependent term through which the procedure can be repeated. After an infinite number of iterations, one finds

f(x) = Σ_{n=0}^{∞} (1/b^n) g(a^n x). (2.13)
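Equation 2.13 can be sanity-checked numerically by truncating the series; the choices g = cos, a = 2, b = 3 below are illustrative:

```python
import math

a, b = 2.0, 3.0
g = math.cos                      # any bounded g will do; cos is illustrative

def f(x, terms=60):
    """Truncated series solution of f(x) = g(x) + f(a x)/b, Equation 2.13."""
    return sum(g(a ** n * x) / b ** n for n in range(terms))

# Check the defining functional equation at a few sample points; the
# truncation error is of order b^-60 and thus far below the tolerance.
for x in (0.3, 1.0, 2.7):
    assert abs(f(x) - (g(x) + f(a * x) / b)) < 1e-12
```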

One particular family of scale-invariant functions expressed in this form is the family of Weierstrass-type functions

W(x) = Σ_{n=0}^{∞} b^n cos(a^n π x). (2.14)

²Such CSI functions are rather trivial; a wider class can be obtained for multi-dimensional functions and relaxed conditions. For instance, the logarithmic spiral r(θ) = c exp(kθ) is identical to any rescaling of the function, up to rotations.

To the surprise and frustration of 19th century mathematicians, this function has the remarkable property that it is continuous yet nowhere differentiable. It is proven to have a Hausdorff dimension equal to 2 + ln(b)/ln(a) [21].

The relation between f and g can be further analyzed by means of the Mellin transform. This transformation is defined through the two-sided Laplace transform B applied to exponentially substituted arguments:

{Mf}(s) = {Bf(e^{−x})}(s). (2.15)

As such, it is the natural transformation with which to study scaling functions. Defined with an extra gamma function for convenience, it takes the explicit form

{Mf}(s) = (1/Γ(s)) ∫_0^∞ t^{s−1} f(t) dt. (2.16)

Its inverse, subject to conditions imposed by the Mellin inversion theorem, is

{M^{−1}φ}(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} t^{−s} Γ(s) φ(s) ds, (2.17)

for some real number c. Let us now find the Mellin transform of Equation 2.11:

f̂(s) = (1/Γ(s)) ∫_0^∞ dx x^{s−1} f(x) = (1/Γ(s)) ∫_0^∞ dx x^{s−1} [g(x) + (1/b) f(ax)] (2.18)

= (1/Γ(s)) ∫_0^∞ dx x^{s−1} g(x) + (1/Γ(s)) ∫_0^∞ du u^{s−1} (a^{−s}/b) f(u) (2.19)

= ĝ(s) + (a^{−s}/b) f̂(s). (2.20)

In the second line we performed the substitution u = ax. The resulting relation can now be rewritten to

f̂(s) = (a^s b / (a^s b − 1)) ĝ(s). (2.21)

The transformed function f̂ has a set of complex poles {s_n | n ∈ Z}, given by

s_n = (2πi/ln(a)) n − ln(b)/ln(a). (2.22)
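One can confirm numerically that the denominator of Equation 2.21 vanishes at these points; the values a = 2, b = 3 below are illustrative:

```python
import cmath
import math

a, b = 2.0, 3.0
ln_a, ln_b = math.log(a), math.log(b)

for n in range(-3, 4):
    s_n = 2j * math.pi * n / ln_a - ln_b / ln_a     # Equation 2.22
    # At each pole, a^s b = 1, so the denominator of Equation 2.21 vanishes
    assert abs(cmath.exp(s_n * ln_a) * b - 1) < 1e-12
```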

2.5 De Rham curves

A broad class of fractals is the class of de Rham curves. These are topologically one-dimensional shapes generated by a process that explicitly uses the self-similar property. One defines two contraction mappings,³ d_0 and d_1, on a plane M. The Banach fixed point theorem posits that each of these contraction mappings has a unique fixed point, which we call p_0 and p_1 respectively. The de Rham curve is the unique subset C of M that is equal to the union of its two mappings, d_0(C) ∪ d_1(C).

³By definition a contraction mapping f is an automorphism on a manifold M, f: M → M, with the property that for any x, y ∈ M, d(f(x), f(y)) ≤ k d(x, y) for some value of k between 0 and 1. That is, a contraction mapping maps points such that the distance between any pair of points is smaller by at least a factor k.

To see this, consider once more the Koch curve (Figure 2.2). It is the union of two upside-down copies of itself, scaled down by a factor 1/√3. The precise transformation can be realized on the complex plane by the following contraction mappings:

d_0(z) = a z̄, (2.23)

d_1(z) = a + (1 − a) z̄, (2.24)

where a = 1/2 + i√3/6. d_0 and d_1 will map the Koch curve to the left and the right halves, respectively. Conversely, one can generate the Koch curve by starting off with the interval I = [0, 1] and performing consecutive contractions: the nth stage of the Koch curve is then defined as the union of all 2^n compositions of d_0 and d_1. (For instance, the second stage Koch curve is d_0d_0(I) ∪ d_0d_1(I) ∪ d_1d_0(I) ∪ d_1d_1(I).) The true fractal is obtained in the limit n → ∞.
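The defining properties of these two maps (fixed points 0 and 1, a junction at the common image a, and contraction factor 1/√3 for both halves) are easy to confirm; the stage function below, generating the nth-stage point set, is an illustrative sketch:

```python
import math

a = 0.5 + 1j * math.sqrt(3) / 6

def d0(z):
    return a * z.conjugate()                 # Equation 2.23

def d1(z):
    return a + (1 - a) * z.conjugate()       # Equation 2.24

# Fixed points: d0 fixes 0, d1 fixes 1
assert d0(0j) == 0 and d1(1 + 0j) == 1 + 0j
# Continuity at the junction: both maps send an endpoint to the midpoint a
assert abs(d0(1 + 0j) - a) < 1e-15 and abs(d1(0j) - a) < 1e-15
# Both halves are contracted by |a| = |1 - a| = 1/sqrt(3)
assert abs(abs(a) - 1 / math.sqrt(3)) < 1e-15
assert abs(abs(1 - a) - 1 / math.sqrt(3)) < 1e-15

def stage(n):
    """Endpoints of [0, 1] pushed through all 2^n compositions of d0, d1."""
    pts = [0j, 1 + 0j]
    for _ in range(n):
        pts = [d0(z) for z in pts] + [d1(z) for z in pts]
    return pts

# First stage: [0, a] joined to [a, 1]
assert stage(1) == [0j, a, a, 1 + 0j]
```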

Curves for generic values of a are called Koch–Peano curves; Figure 2.4 is an example. When Re(a) ≠ 1/2, like here, the two halves will be of unequal size, and the mass is given as the sum of the two different scalings:

M(C) = M(κC) + M(λC), (2.25)

where κ = |a| and λ = |1 − a|.

Figure 2.4: A Koch–Peano curve for an ‘asymmetric’ value a = 0.6 + 0.45i. Image source: “A Gallery of de Rham curves” [22]

Various generalizations are possible. One would be to relax the continuity condition; this allows one to generate the Cantor set on the unit interval through the maps d_0(x) = x/3, d_1(x) = x/3 + 2/3.

Another is to preserve continuity, but use more than two—say, n—contraction mappings d_0, ..., d_{n−1}. This requires the continuity conditions

d_i(p_{n−1}) = d_{i+1}(p_0), for i = 0, ..., n − 2,

where p_0 and p_{n−1} are the fixed points of the first and last mappings. This allows one to generate the Sierpiński gasket as the limit of the arrowhead curve.

Now that the basic tenets of fractal mathematics have been established, we will proceed to further investigate their discrete scaling properties by means of q-calculus.


Chapter 3

q-calculus and q-deformation

3.1 Preliminaries of q-calculus

q-calculus is a type of calculus that is natural to discretely scale invariant functions. In q-calculus one considers not infinitesimal differences, but the q-differential, which contains a discrete multiplication [23]

d_q f(x) = f(qx) − f(x). (3.1)

In particular, d_q x = (q − 1)x. This expression can be used to construct the q-derivative (also known as the Jackson derivative),

D^q_x f(x) = d_q f(x) / d_q x = (f(qx) − f(x)) / ((q − 1)x). (3.2)

Using the Taylor expansion of f(qx) it can be shown that in the limit q → 1 this expression is identical to the ordinary derivative. The q-derivative is a linear operator and obeys the product rule

D^q_x(f(x)g(x)) = g(qx) D^q_x f(x) + f(x) D^q_x g(x). (3.3)

The q-dilation in the first term sets it apart from the usual Leibniz criterion for differential operators. There exists, furthermore, no general chain rule for q-derivatives. The derivative of a monomial can be computed:

D^q_x(x^n) = ((qx)^n − x^n) / ((q − 1)x) = ((q^n − 1)/(q − 1)) x^{n−1} ≡ [n] x^{n−1}. (3.4)
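Equations 3.2 and 3.4 translate directly into code; the sample values q = 1.5, x = 0.7, n = 4 in this sketch are arbitrary:

```python
def q_derivative(f, x, q):
    """Jackson derivative, Equation 3.2: (f(qx) - f(x)) / ((q - 1) x)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_number(n, q):
    """[n] = (q^n - 1)/(q - 1) = 1 + q + ... + q^(n-1)."""
    return sum(q ** k for k in range(n))

q, x, n = 1.5, 0.7, 4
lhs = q_derivative(lambda t: t ** n, x, q)
rhs = q_number(n, q) * x ** (n - 1)      # D_q x^n = [n] x^(n-1), Equation 3.4
assert abs(lhs - rhs) < 1e-12
# In the limit q -> 1 the q-derivative tends to the ordinary derivative
assert abs(q_derivative(lambda t: t ** n, x, 1 + 1e-8) - n * x ** (n - 1)) < 1e-6
```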

We see that q-derivatives act like an ordinary derivative if, rather than n, one works with [n], the q-analogue of n.¹ This number (when n is an integer) has a useful series expression

[n] = 1 + q + q² + ... + q^{n−1}. (3.5)

¹In the literature q-numbers are sometimes written [n]_q, but for our ends the meaning of this index is quite obvious, so we omit it.

In this form it is clear that taking the limit q → 1 will return the undeformed quantity, n: this is a rule that holds everywhere in q-calculus. With this knowledge one would next like to define the q-analogue of the Taylor expansion. It can be checked, however, that D^q_x(x − a)^n ≠ [n](x − a)^{n−1} for a ≠ 0. This can be remedied by defining the q-analogue of the binomial in a slightly different way:

(x − a)^n_q = (x − a)(x − qa)(x − q²a)···(x − q^{n−1}a), n ≥ 1. (3.6)

Trivially, the expression equals 1 when n = 0. Furthermore, the binomial can be extended to all integers by defining

(x − a)^{−n}_q = 1 / (x − q^{−n}a)^n_q. (3.7)

With this definition we can write down the expression for the q-derivative of the q-binomial:

D^q_x(x − a)^n_q = [n](x − a)^{n−1}_q. (3.8)

This holds for any integer n. Nota bene: because of the definition of the q-binomial, the two terms in the binomial are noncommutative, i.e. (x + a)^n_q ≠ (a + x)^n_q. Therefore, a different identity applies when taking the derivative with respect to the second term:

D^q_x(a + x)^n_q = [n](a + qx)^{n−1}_q. (3.9)

Now we will define two more special functions, the q-factorial and the q-binomial coefficient:

[n]! = [n] × [n − 1] × ... × [1], (3.10)

[n j] = [n]! / ([j]! [n − j]!). (3.11)

The degree-N q-Taylor expansion around a is defined as

f(x) = Σ_{j=0}^{N} ((D^q_x)^j f)(a) (x − a)^j_q / [j]!. (3.12)

In ordinary calculus, the function that is equal to its own derivative is the exponential function. In q-calculus, it turns out that there are two such functions, both of which can be called q-exponentials in their own right:

e^x_q = Σ_{j=0}^{∞} x^j / [j]!, (3.13)

E^x_q = Σ_{j=0}^{∞} q^{j(j−1)/2} x^j / [j]!. (3.14)

The latter obeys the slightly different identity D^q_x E^x_q = E^{qx}_q, but justifies itself in the order ambiguity of Equation 3.9. These two functions are dual, in the sense that they obey the identities e^x_q E^{−x}_q = 1 and e^x_{1/q} = E^x_q.

The q-binomial (x + a)^n_q admits the expansion

(x + a)^n_q = Σ_{j=0}^{n} [n j] q^{j(j−1)/2} a^j x^{n−j}. (3.15)

The addition of the powers of q makes this expansion different from the usual binomial expansion. However, if one were to consider two noncommutative variables x and y obeying the relation yx = qxy, the usual binomial simplifies to a similar form with q-numbers:

(x + y)^n = Σ_{j=0}^{n} [n j] y^j x^{n−j}. (3.16)

This is one of the several instances where the natural preference for noncommuting variables becomes apparent. A similar effect occurs when one takes the q-exponential of two independent variables:

e^x_q e^y_q ≠ e^{x+y}_q, (3.17)

unless the above noncommuting relation is obeyed. Moreover, it is the algebra of the dilation operator M̂_q and position operator x̂:

x̂ M̂_q f(x) = x f(qx), (3.18)

M̂_q x̂ f(x) = qx f(qx), (3.19)

M̂_q x̂ = q x̂ M̂_q. (3.20)

Operators that obey the above relation are said to live in an algebraic space called the quantum plane,² which we will briefly overview after defining two more concepts.

The inverse of the Jackson derivative is the Jackson integral. It can be derived by isolating F(x) in f(x) = D^q_x F(x):

f(x) = (M̂_q F(x) − F(x)) / ((q − 1)x) = (1/((q − 1)x)) (M̂_q − 1) F(x), (3.21)

F(x) = (M̂_q − 1)^{−1} ((q − 1)x f(x)) (3.22)

= −Σ_{i=0}^{∞} M̂^i_q ((q − 1)x f(x)) (3.23)

= (1 − q)x Σ_{i=0}^{∞} q^i f(q^i x) (3.24)

≡ ∫_0^x f(x′) d_q x′. (3.25)

(The geometric series for (M̂_q − 1)^{−1} converges for 0 < q < 1.) Clearly, the Jackson derivative of F(x) is the original f(x). The Jackson integral is also analogous to the ordinary integral in the sense that it measures area: given some function f(x) and an x-value b, it partitions the domain into pieces [qb, b], [q²b, qb], ..., [q^{n+1}b, q^n b] (with n → ∞), each of size (1 − q)q^i b. For each interval, f is evaluated at q^i b. As usual, in the limit q → 1 one retrieves the ordinary integral, as the intervals become infinitesimally small. By construction, Jackson integrals are only defined on [0, ∞) and not on the entire real line. Gluzman and Sornette pointed out a similarity with Equation 2.13 [24].

²This name has its origin in quantum physics. Both classical physics and quantum physics describe nature through states and observables. In the former, the observables always commute, while in the latter they do not (which leads to the idea that only ‘good’ states can be measured in a meaningful way). q can in this sense be seen as a tuning parameter that modulates a theory between classical and quantum.
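As a sketch, the series (3.24) can be implemented directly; monomials make a convenient test, since D^q_x x^n = [n] x^{n−1} implies that the Jackson integral of x^n is x^{n+1}/[n+1]:

```python
def jackson_integral(f, x, q, terms=200):
    """Jackson integral on [0, x], Equation 3.25:
    (1 - q) x sum_i q^i f(q^i x), valid for 0 < q < 1."""
    return (1 - q) * x * sum(q ** i * f(q ** i * x) for i in range(terms))

def q_number(n, q):
    return sum(q ** k for k in range(n))

q, x, n = 0.5, 1.3, 2
val = jackson_integral(lambda t: t ** n, x, q)
assert abs(val - x ** (n + 1) / q_number(n + 1, q)) < 1e-12
# As q -> 1 the result approaches the Riemann integral x^(n+1)/(n+1)
val = jackson_integral(lambda t: t ** n, x, 0.999, terms=20000)
assert abs(val - x ** 3 / 3) < 1e-2
```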

Figure 3.1: The geometric picture of the Jackson integral. Starting from the point b, one sums over rectangles whose widths decrease by a factor of q with each step. The closer q is to 1, the thinner the rectangles are and the more the Jackson integral resembles the usual Riemann integral. Image source: Victor Kac and Pokman Cheung, Quantum Calculus [23]

What remains is to define the improper Jackson integral, for which one cannot simply let x → ∞. Instead, one should observe how the pattern of Figure 3.1 continues for values greater than x. There is a complication, however: simply writing the integral as 'from 0 to infinity' would remove the reference point x on which this calculus still depends. Thus, the integral must be written in two terms:

\[
\int_0^x f(x')\, d_q x' + \int_x^\infty f(x')\, d_q x' = (1-q)\,x \sum_{i\in\mathbb{Z}} q^i f(q^i x). \tag{3.26}
\]
Lastly, one can consider complex q-analysis, which was first done by [25]. Let z = x + iy and \(\bar{z} = x - iy\). Then

\[
d_q z = d_q x + i\, d_q y. \tag{3.27}
\]

(The analogous derivations for \(\bar{z}\) will be omitted.) Let us also consider a generic complex function f(x, y), which transforms under \(d_q\) in both of its arguments:
\[
d_q f(x, y) = f(qx, qy) - f(x, y). \tag{3.28}
\]

In order to define the complex derivative, the differentials need to be rewritten,

\[
d_q f(x,y) = M^q_y D^q_x f(x,y)\, d_q x + D^q_y f(x,y)\, d_q y \tag{3.29}
\]
\[
= M^q_y D^q_z f(x,y)\, d_q z + M^q_y D^q_{\bar z} f(x,y)\, d_q \bar z, \tag{3.30}
\]


where the complex derivative \(D^q_z\) is defined as
\[
D^q_z = \frac{1}{2}\left(D^q_x - i M^{1/q}_y D^q_y\right). \tag{3.31}
\]

This expression may seem odd, so some explanation is in order. First, note that for q → 1 it reduces to the usual complex derivative \(d/dz = (d/dx - i\,d/dy)/2\). To understand the scale operator \(M^{1/q}_y\), observe that the complex q-derivatives act on powers of z in the Taylor expansion, i.e. on \((x + iy)^n_q\). We have seen above that q-derivatives act differently depending on whether the variable is the first or the second term in the binomial, so when we take the q-derivative with respect to y, we pick up extra factors of q as in Equation 3.9; \(M^{1/q}_y\) effectively acts to remove these factors. The factor −i takes care of the factor of i that is produced when \((x + iy)^n_q\) is q-differentiated with respect to y. Once this is appreciated, one can immediately verify that \(D^q_z\) acts on powers of z in the usual way:
\[
D^q_z z^n = [n]\, z^{n-1}. \tag{3.32}
\]
It is now possible to work with holomorphic functions, which are defined by
\[
f(x, y) = f(z) \;\rightarrow\; D^q_{\bar z} f(x, y) = 0. \tag{3.33}
\]

3.2 The quantum plane

Further investigation into the quantum plane would require rigorous algebra and deserves a detailed treatment that is beyond the scope of this thesis; we will only cover the basic ideas, with the purpose of uncovering q-deformed algebras as they occur in practical contexts such as q-deformed harmonic oscillators [26]. The quantum plane is inhabited by vectors called q-spinors, whose components obey the q-commutation property yx = qxy. The components are denoted \(v^\alpha\), with α ∈ {1, 2}; these can be realized concretely as operators, as in Equations 3.18–3.20, although the algebra can also be treated abstractly. The relation is enforced when the norm of the vector v = (x, y) with respect to a spinor metric ε vanishes:

\[
\langle v, v\rangle = x^\alpha \epsilon_{\alpha\beta} x^\beta = 0,
\]
which, given yx = qxy, holds precisely when \(\epsilon_{12} = -q\,\epsilon_{21}\); a common choice is
\[
\epsilon = \begin{pmatrix} 0 & q^{1/2} \\ -q^{-1/2} & 0 \end{pmatrix}. \tag{3.34}
\]

The spinor is a natural formulation of the algebra, as it efficiently expresses the defining relation in the language of vectors, with the important caveat that the entries are noncommutative. Suppose we have the spinor (X, Y) obeying YX = qXY, upon which acts a matrix M,
\[
M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \tag{3.35}
\]
giving a new vector (X′, Y′). Acting to the left is the same as acting with \(M^T\) to the right; call the resulting vector (X′′, Y′′). We want these new vectors likewise to obey the quantum relation, i.e. Y′X′ = qX′Y′ and Y′′X′′ = qX′′Y′′; that is to say, the quantum plane must be covariant under M. Covariance enforces the following (noncommutative) relations on the matrix entries [27]:

\[
ca = qac, \qquad db = qbd, \tag{3.36}
\]
\[
ba = qab, \qquad dc = qcd, \tag{3.37}
\]
\[
bc = cb, \qquad da - ad = (q - q^{-1})\,bc. \tag{3.38}
\]


The q-determinant of a quantum matrix is defined as \(\det_q(M) = ad - q^{-1}bc\); by showing that it commutes with each of a, b, c, d, it can be verified that \(\det_q(M)\,I\) lies in the center of the quantum matrix algebra. q-determinants obey the standard multiplication rule \(\det_q(AB) = \det_q(A)\det_q(B)\).

By defining a particular form of multiplication, we find that quantum matrices obey a group-like relation. To wit, define the coproduct of two matrices \(M_1\) and \(M_2\):
\[
M_1 \,\dot\otimes\, M_2 = \Delta_{12}(M) = \begin{pmatrix} A_1 & B_1 \\ C_1 & D_1 \end{pmatrix} \dot\otimes \begin{pmatrix} A_2 & B_2 \\ C_2 & D_2 \end{pmatrix} \tag{3.39}
\]
\[
= \begin{pmatrix} A_1\otimes A_2 + B_1\otimes C_2 & A_1\otimes B_2 + B_1\otimes D_2 \\ C_1\otimes A_2 + D_1\otimes C_2 & C_1\otimes B_2 + D_1\otimes D_2 \end{pmatrix} = \begin{pmatrix} \Delta_{12}(A) & \Delta_{12}(B) \\ \Delta_{12}(C) & \Delta_{12}(D) \end{pmatrix}. \tag{3.40}
\]
It can be checked that its components indeed satisfy the previously established (non)commutativity relations. The coproduct does not have an inverse, so this does not define a group. Instead, one can identify an antipode S for each element,

\[
S(M) = \frac{1}{\det_q(M)} \begin{pmatrix} d & -qb \\ -q^{-1}c & a \end{pmatrix}, \tag{3.41}
\]
so that \(S(M)M = M\,S(M) = \det_q(M)\,I\),

which gives the space the very similar structure of a pseudogroup [27]. Next, one can parameterize M as
\[
M = \begin{pmatrix} e^{\alpha} & e^{\alpha}\beta \\ \gamma e^{\alpha} & e^{-\alpha} + \gamma e^{\alpha}\beta \end{pmatrix}. \tag{3.42}
\]

This indeed satisfies the aforementioned covariance restrictions if α, β, γ obey the following Lie algebra:
\[
[\alpha, \beta] = \ln(q)\,\beta, \qquad [\alpha, \gamma] = \ln(q)\,\gamma, \qquad [\beta, \gamma] = 0. \tag{3.43}
\]
Now, it turns out that one can generate any \(T \in SL_q(2)\) as

\[
T = e_{q^{-2}}^{\gamma X_-}\, e^{2\alpha X_z}\, e_{q^{2}}^{\beta X_+}, \tag{3.44}
\]

similar to a Lie group, where the generators \(X_+, X_-, X_z\) satisfy the q-deformed algebra
\[
[X_\pm, X_z] = \pm X_\pm, \qquad [X_+, X_-] = \llbracket 2X_z \rrbracket_q. \tag{3.45}
\]
This introduces a different kind of q-number,
\[
\llbracket n \rrbracket_q = \frac{q^n - q^{-n}}{q - q^{-1}}, \tag{3.46}
\]
which is symmetric under the reciprocal, \(\llbracket n \rrbracket_q = \llbracket n \rrbracket_{1/q}\), and obeys the same classical limit \(\lim_{q\to 1}\llbracket n \rrbracket_q = n\). Equation 3.45 defines the q-deformed algebra \(sl_q(2)\).
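Both properties of the symmetric q-number are immediate to verify numerically; the helper below is our own sketch of Equation 3.46.

```python
# Symmetric q-number [[n]] = (q^n - q^-n)/(q - q^-1).
def qnum(n, q):
    return (q**n - q**(-n)) / (q - 1 / q)

q = 1.7
for n in range(1, 8):
    # invariance under q <-> 1/q
    assert abs(qnum(n, q) - qnum(n, 1 / q)) < 1e-12

# classical limit q -> 1 returns the ordinary integer
assert abs(qnum(5, 1.0001) - 5) < 1e-2
```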

3.3 q-boson algebra

We now turn our attention to the quantum harmonic oscillator and how it is affected by q-deformation. This theory is applied phenomenologically in nuclear physics and molecular spectroscopy [28]. q-deformed boson operators were defined by Macfarlane [29]. He opted to follow the 'symmetric' school of q-calculus, which instead uses the symmetric q-derivative


\[
D^q_x f(x) = \frac{f(qx) - f(q^{-1}x)}{(q - q^{-1})\,x}, \tag{3.47}
\]

and the identities that follow from it.³ We will also make use of it in this Section. The advantage is that this formalism contains the additional symmetry q ↔ q⁻¹. The boson q-algebra obeys the following axioms:

\[
a|0\rangle = 0, \tag{3.48}
\]
\[
a|n\rangle = \sqrt{\llbracket n \rrbracket}\,|n-1\rangle, \tag{3.49}
\]
\[
a^\dagger|n\rangle = \sqrt{\llbracket n+1 \rrbracket}\,|n+1\rangle, \tag{3.50}
\]
\[
N|n\rangle = n|n\rangle. \tag{3.51}
\]

The states |n⟩ are, with no a priori representation, presumed to span an orthogonal basis of the 'q-Hilbert space'. A closed-form expression for |n⟩ follows immediately:
\[
|n\rangle = \frac{(a^\dagger)^n}{\sqrt{\llbracket n \rrbracket !}}\,|0\rangle. \tag{3.52}
\]

From this one can derive the following algebra:

\[
[N, a^\dagger] = a^\dagger, \tag{3.53}
\]
\[
[N, a] = -a, \tag{3.54}
\]
\[
[a, a^\dagger]_q = a a^\dagger - q\, a^\dagger a = \llbracket N+1 \rrbracket - q\llbracket N \rrbracket = q^{-N}. \tag{3.55}
\]

Note the peculiar third relation: the usual commutator yields no useful algebra, so authors instead opt for the humorously dubbed q-mutator [30], which, as a result, does not generically generate a Lie algebra. The q-deformed Lie algebra is a generalization in the sense that one now uses the (generically non-alternating) q-mutator, and instead of the Jacobi identity one must satisfy the quantum Yang–Baxter equation. The proper Lie algebra is retrieved in the undeformed case q = 1.
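The axioms 3.48–3.51 and the q-mutator relation 3.55 can be verified concretely on a truncated Fock space; the matrices below are our own finite-dimensional sketch (the top basis state is cut off, so the relations are tested away from the truncation edge).

```python
import numpy as np

def qnum(n, q):
    # symmetric q-number [[n]]
    return (q**n - q**(-n)) / (q - 1 / q)

q, dim = 1.3, 12

# a|n> = sqrt([[n]]) |n-1>, adag = a^T, N|n> = n|n>
a = np.zeros((dim, dim))
for n in range(1, dim):
    a[n - 1, n] = np.sqrt(qnum(n, q))
adag = a.T
N = np.diag(np.arange(dim, dtype=float))

s = slice(0, dim - 1)   # stay away from the truncated top state
assert np.allclose((N @ adag - adag @ N)[s, s], adag[s, s])   # [N, adag] = adag
assert np.allclose((N @ a - a @ N)[s, s], -a[s, s])           # [N, a] = -a
qmut = a @ adag - q * adag @ a                                # q-mutator [a, adag]_q
assert np.allclose(qmut[s, s], np.diag(q ** (-np.arange(dim, dtype=float)))[s, s])
```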

The fundamental justification of this space lies in the fact that it parallels the Bargmann representation of the undeformed quantum harmonic oscillator, viz. \(a^\dagger \to x\), \(a \to \partial_x\).⁴ q-deformation is then naturally induced on the derivative:
\[
a^\dagger \to x, \qquad a \to D^q_x. \tag{3.56}
\]
Furthermore, the number operator is identified with the generator of scale transformations:
\[
N \to x\partial_x. \tag{3.57}
\]

It is a mild exercise to show that this representation indeed preserves the algebra:
\[
[x\partial_x, x] = x, \tag{3.58}
\]
\[
[x\partial_x, D^q_x] = -D^q_x, \tag{3.59}
\]
\[
[D^q_x, x]_q = q^{-x\partial_x}. \tag{3.60}
\]
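On monomials the representation can be checked at the level of coefficients: the symmetric q-derivative gives \(D^q_x x^n = \llbracket n \rrbracket x^{n-1}\), and \(x\partial_x\) multiplies \(x^m\) by m. The short sketch below (our own arithmetic, not from the thesis) confirms the second and third relations this way.

```python
def qnum(n, q):
    # symmetric q-number [[n]]
    return (q**n - q**(-n)) / (q - 1 / q)

q = 1.4
for n in range(1, 9):
    # [x d/dx, Dq] x^n = (n-1)[[n]] x^(n-1) - n[[n]] x^(n-1) = -[[n]] x^(n-1) = -Dq x^n
    assert abs((n - 1) * qnum(n, q) - n * qnum(n, q) - (-qnum(n, q))) < 1e-12
    # [Dq, x]_q x^n = ([[n+1]] - q [[n]]) x^n = q^-n x^n
    assert abs(qnum(n + 1, q) - q * qnum(n, q) - q**(-n)) < 1e-12
```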

³For the purposes of this Section, that amounts to replacing any instance of [n] with ⟦n⟧.

4For representations of operators to be well-defined in the respective Segal–Bargmann space one should instead


Alternatively, one can choose to define the creation and annihilation operators according to their relation with the position and momentum operators [31]:
\[
a = \sqrt{\tfrac{1}{2}}\,(X + iP), \tag{3.61}
\]
\[
a^\dagger = \sqrt{\tfrac{1}{2}}\,(X - iP), \tag{3.62}
\]
where X and P are now the deformed position and momentum operators, respectively:
\[
X = x, \tag{3.63}
\]
\[
P = -i D^q_x. \tag{3.64}
\]

Since these operators are observables, they should be self-adjoint. This forces us at last to consider the question of how the inner product on the 'q-Hilbert space' is realized. The problem is that the q-derivative is not commensurable with ordinary integration: as a consequence of Equation 3.3, integration by parts does not hold; nor does the fundamental theorem of calculus (the q-derivative has its own fundamental theorem with respect to the Jackson integral). This obviously has far-reaching implications for the analytical aspects of q-deformed quantum mechanics, including notions of norm and orthogonality. The idea of swapping integration in favour of q-integration has been raised, but this also suffers from complications (one of which being the fact that q-integrals are only defined on the positive real line), and this choice alters the fundamentals of quantum mechanics even more. An ad hoc solution would be to simply keep the ordinary integral. X is trivially self-adjoint, and for P self-adjointness follows from a change of variables:

\[
\langle\phi|P\psi\rangle = -i \int \phi^*(x)\, \frac{\psi(qx) - \psi(q^{-1}x)}{(q - q^{-1})x}\, dx \tag{3.65}
\]
\[
= -i \int \phi^*(x)\left(\frac{\psi(qx)}{(q - q^{-1})x} - \frac{\psi(q^{-1}x)}{(q - q^{-1})x}\right) dx \tag{3.66}
\]
\[
= -i \int \left(\frac{\phi^*(q^{-1}x')}{(q - q^{-1})x'} - \frac{\phi^*(qx')}{(q - q^{-1})x'}\right) \psi(x')\, dx' \tag{3.67}
\]
\[
= i \int \frac{\phi^*(qx') - \phi^*(q^{-1}x')}{(q - q^{-1})x'}\, \psi(x')\, dx' = \langle P\phi|\psi\rangle. \tag{3.68}
\]

Wave functions are naturally presumed to vanish at the boundaries; more particularly, the adoption of the symmetric q-derivative proved to be of the essence.

For the Bargmann approach, the suggestive dagger notation for the boson operators remains unjustified. When the creation and annihilation operators are given in the Bargmann representation (in undeformed quantum mechanics), they act on holomorphic functions that live in the Segal–Bargmann space. The inner product of two functions is defined as

\[
\langle f|g\rangle = \pi^{-1} \int_{\mathbb{C}} \bar{f}(z)\, g(z)\, e^{-|z|^2}\, dz. \tag{3.69}
\]

But if one adopts the complex versions of Equation 3.56,⁵ the proof that \(\langle\phi|a\psi\rangle = \langle a^\dagger\phi|\psi\rangle\) would still fail due to the lack of the chain rule and of integration by parts.

⁵That would be defining \(a^\dagger \to z M^q_y\), \(a \to D^q_z\). Again, the scale operator \(M^q_y\) might be unexpected. Recall that \(z^n = (x + iy)^n_q\): a q-binomial. Therefore \(z \cdot z^n = (x+iy)(x+iy)(x+iqy)\cdots(x+iq^{n-1}y) \neq z^{n+1}\). But when the scale operator is included, one finds \(z M^q_y z^n = (x+iy)(x+iqy)(x+iq^2y)\cdots(x+iq^n y) = z^{n+1}\), as desired. Thus the


3.4 q-analytic fractals

Since the Jackson derivative is an operator with a built-in property of discrete scaling, it has come to the attention of mathematicians and physicists looking for fractal functions. Erzan and Eckmann were among the first to entertain this idea [32]. They characterize fractal sets according to homogeneous functions, i.e. functions f(x) that obey \(f(qx) = q^\psi f(x)\) for some ψ. In that case the Jackson derivative acts on f as
\[
D^q_x f(x) = \frac{q^\psi f(x) - f(x)}{(q-1)x} = [\psi]\, \frac{f(x)}{x}. \tag{3.70}
\]

It is evident that f is an eigenfunction of \(x D^q_x\), and the equation
\[
D^q_x A(x) = 0 \tag{3.71}
\]
is solved when \(A_q(x)\) is a log-periodic function with period ln q. By using the product rule (Equation 3.3) one obtains the general solution to Equation 3.70:
\[
f(x) = A_q(x)\, x^\psi. \tag{3.72}
\]

This is essentially the same function as in Equation 2.12. In this context it can be understood through \(D^q_x\) being unable to discern the log-periodic factor, while the power \(x^\psi\) provides the eigenvalue [ψ].
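A quick numeric spot check of Equation 3.70, for the purely homogeneous case \(f(x) = x^\psi\) (our own sketch):

```python
q, psi = 1.5, 0.7
f = lambda x: x**psi
Dq = lambda f, x: (f(q * x) - f(x)) / ((q - 1) * x)   # Jackson derivative
bracket = (q**psi - 1) / (q - 1)                      # the q-number [psi]

# Dq f = [psi] f / x, at several sample points
for x in (0.3, 1.0, 2.7):
    assert abs(Dq(f, x) - bracket * f(x) / x) < 1e-12
```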

The function f has a representation as an infinite series,
\[
f(x) = \sum_{n\in\mathbb{Z}} \frac{g(q^n x)}{q^{\psi n}}, \tag{3.73}
\]

which holds for any g provided that it vanishes near the origin, as do its first \(n_0\) derivatives, where \(n_0\) is the smallest integer greater than ψ. In that case, and if ψ > 0 and q ≠ 1, \(A_q\) can be written in terms of Jackson integrals:
\[
A_q(x) = x^{-\psi} \sum_{n\in\mathbb{Z}} \frac{g(q^n x)}{q^{\psi n}} = \frac{1}{1-q} \int_0^x \frac{g(x')}{x'^{\psi+1}}\, d_q x' + \frac{1}{1-q} \int_x^\infty \frac{g(x')}{x'^{\psi+1}}\, d_q x'. \tag{3.74}
\]

A nontrivial example of this is the Mandelbrot–Weierstrass function,
\[
W_M(x) = \sum_{n\in\mathbb{Z}} b^{-n\psi}\bigl(1 - \cos(b^n x)\bigr), \tag{3.75}
\]
which is an adapted version of Equation 2.14 due to Mandelbrot, who sought to remove the absolute scale that the regular Weierstrass function contains due to its infinite sum starting at n = 0.
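The discrete scale invariance \(W_M(bx) = b^\psi W_M(x)\) holds exactly for the doubly infinite sum; a symmetric truncation satisfies it up to small boundary terms, as the following numeric sketch (our own truncation choices) illustrates.

```python
import math

def W(x, b=2.0, psi=0.5, N=40):
    # truncated Mandelbrot-Weierstrass sum over n in [-N, N]
    return sum(b**(-n * psi) * (1 - math.cos(b**n * x)) for n in range(-N, N + 1))

b, psi, x = 2.0, 0.5, 1.0
# W(b x) ~ b^psi W(x), up to truncation error at the ends of the sum
assert abs(W(b * x) - b**psi * W(x)) / abs(W(x)) < 1e-3
```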


Chapter 4

The inverse square potential

Instead of searching for discretely scale invariant wave functions, one could also study quantum systems that are intrinsically scale invariant and see what sort of properties (energies, eigenstates, etc.) they have. This approach has been studied extensively, as dilations are among the symmetries of a conformal theory. Consider therefore the following Hamiltonian:
\[
H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{a}{x^2}. \tag{4.1}
\]

This is called the inverse square or \(1/x^2\) potential. It shows up most prominently as the centrifugal barrier term of the radial Schrödinger equation in spherically symmetric systems, in which case it is always repulsive and can only assume the quantized values l(l + 1); but we shall investigate the generic case by replacing it with a real-valued a. This is because the interesting physics occurs at noninteger (and nonpositive) values; furthermore, we will impose a boundary condition near x = 0 (i.e. r = 0), which is not an actual boundary in spherical coordinates. The Hamiltonian of Equation 4.1 has several peculiar properties, the most remarkable of which is the lack of a quantity with dimensions of energy constructible from the constants at hand (ℏ, a and m). Also, the Hamiltonian is indeed, up to a prefactor, scale invariant, as seen in Schrödinger's equation by the substitution x → Lx:

\[
H(x)\psi(x) = E\psi(x) \tag{4.2}
\]
\[
\;\rightarrow\; H(Lx)\psi(Lx) = \frac{1}{L^2}\, H(x)\psi(Lx) = E\psi(Lx). \tag{4.3}
\]

What this means is that if ψ(x) is an eigenstate with energy E, then so is any rescaling \(\sqrt{L}\,\psi(Lx)\), with energy \(L^2 E\). The symmetry at hand is therefore continuous, not the discrete one we set out to investigate; this is something we will return to. These states are also, of course, not orthogonal.¹

The rescaling can be implemented by a unitary operator U [33]:
\[
U(\lambda)\psi(x) = \sqrt{\lambda}\,\psi(\lambda x). \tag{4.4}
\]
It is unitary, because
\[
\langle U(\lambda)\psi|U(\lambda)\psi\rangle = \int_0^\infty \lambda\,|\psi(\lambda x)|^2\, dx = \int_0^\infty |\psi(\lambda x)|^2\, d(\lambda x) = \langle\psi|\psi\rangle \tag{4.5}
\]

¹It may be remarked that L > 0 because the Hilbert space is restricted to (0, ∞) due to the singularity at the origin.


and

\[
U(\lambda_1)U(\lambda_2) = U(\lambda_1\lambda_2), \qquad U(\lambda)U(\lambda^{-1}) = 1. \tag{4.6}
\]

The issue lurking much deeper (and that is indeed responsible for all of its pathologies) is the fact that the Hamiltonian is not self-adjoint. Understanding this requires a little mathematical background. Say that we are interested in an inner product between two functions, one of which is acted upon by an operator O: \(\langle\phi, O\psi\rangle\). Here, φ and ψ are elements of the space of square-integrable functions, \(L^2\). There exists an adjoint operator \(O^\dagger\) that, when acting on the other function, yields the same inner product:
\[
\langle\phi, O\psi\rangle = \langle O^\dagger\phi, \psi\rangle. \tag{4.7}
\]
An operator is called Hermitian if \(O^\dagger = O\). This property justifies the notation of 'sandwiching' an operator between the bra and the ket: acting either to the left or to the right yields the same result. But O and \(O^\dagger\) may also differ in their domains. D(O) is the set of all functions ψ such that \(\langle\phi, O\psi\rangle\) is well-defined, and \(D(O^\dagger)\) is the set of all ψ such that \(\langle O\phi, \psi\rangle\) is well-defined. An operator whose action and domain match those of its adjoint is called self-adjoint.

By definition, \(D(O) \subseteq D(O^\dagger)\), so it may happen that \(D(O^\dagger)\) contains functions that D(O) does not. That is, it may be the case that ψ ∈ D(H) but Hψ ∉ D(H) (for instance because it does not vanish on the boundaries); then \(\langle\phi, H^2\psi\rangle\) is ill-defined. But at the same time \(\langle H\phi, H\psi\rangle\) may give a well-defined result. In that case, one can make H self-adjoint by extending D(H) to include Hψ and defining the inner product according to the adjoint. This is a self-adjoint extension. It is not generally possible to extend D(O) all the way to \(D(O^\dagger)\): all functions must mutually yield well-defined inner products. One therefore wishes to extend the domain maximally, and whether this extension is unique follows from a mathematical procedure invented by Stone and von Neumann. One could verify self-adjointness by writing out the inner product of two wave functions explicitly and then integrating by parts to get the derivatives working on the other function [34]. The problem lies in the boundary terms (specifically the terms at 0, since terms at ∞ are always presumed to vanish due to normalizability). The solution, then, is to restrict the domain of wave functions so that the boundary terms always vanish. For a full treatment of the topic see for instance [35].

We will opt not to take this rigorous approach; the issues that we encounter as a result of it are instructive and can still be fixed, albeit that we will have to jump through more mathematical hoops. First, we take Schrödinger's equation for Equation 4.1 and rewrite it, following the notation of Lima et al. [36]:
\[
-\frac{d^2}{dx^2}\psi + \frac{\alpha}{x^2}\psi = \frac{2mE}{\hbar^2}\psi, \tag{4.8}
\]

where \(\alpha = 2ma/\hbar^2\). The operator on the left-hand side is also called the Calogero operator, although we will insist on calling this the Hamiltonian, too. For reasons that will become apparent later, it will be elucidating to characterize the potential not by α, but by \(\nu \equiv \sqrt{\alpha + 1/4}\). The system is divided into three ranges of ν that characterize distinct phases: strongly repulsive (SR) with ν ∈ [1/2, ∞) (α ∈ [0, ∞)); weakly attractive (WA) with ν ∈ [0, 1/2) (α ∈ [−1/4, 0)); and strongly attractive (SA) with ν² ∈ (−∞, 0) (α ∈ (−∞, −1/4)).² The complete solution will furthermore depend on the sign of the energy.

²Some authors (see [36]) put the boundary at ν = 1; this seems to be motivated by the choice of boundary conditions. As we will impose ψ(0) = 0, this choice will affect the space of solutions. Note that if the alternative definition were taken, the free theory ν = 1/2 would be considered part of the weak regime.


      Strongly attractive    Weakly attractive    Strongly repulsive
α     (−∞, −1/4)             [−1/4, 0)            [0, ∞)
ν     imaginary (ν² < 0)     [0, 1/2)             [1/2, ∞)

4.1 E = 0

The zero-energy case of Equation 4.8 is an Euler–Cauchy equation and straightforward to solve with the ansatz \(\psi(x) = x^s\). Plugging it in yields the condition
\[
s(s-1) = \alpha. \tag{4.9}
\]

By writing \(\alpha \equiv -(1/2 - \nu)(1/2 + \nu)\),³ the non-normalized general solution is obtained:
\[
\psi(x) = A_{\nu,0}\, x^{1/2+\nu} + B_{\nu,0}\, x^{1/2-\nu}. \tag{4.10}
\]

This function is clearly non-normalizable and thus unphysical; its inclusion serves to sketch a picture for the limit cases when E ≠ 0. Note that the integration constants \(A_{\nu,0}\) and \(B_{\nu,0}\) carry dimensional units and can be combined to construct an intrinsic length scale L:
\[
L^{-2\nu} \equiv \frac{A_{\nu,0}}{B_{\nu,0}}. \tag{4.11}
\]
The above wave function is scale invariant when either \(A_{\nu,0}\) or \(B_{\nu,0}\) vanishes, such that the length scale becomes trivial. A nontrivial superposition allows for α no smaller than the critical value \(\alpha_* \equiv -1/4\), lest ν become imaginary; and ν can be no larger than 1/2, as this would violate the boundary condition.
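The indicial equation can be solved numerically as a sanity check on the exponents (our own sketch, for a weakly attractive value of α):

```python
import numpy as np

alpha = -0.16                          # inside [-1/4, 0): weakly attractive
nu = np.sqrt(alpha + 0.25)             # = 0.3
# s(s-1) = alpha  <=>  s^2 - s - alpha = 0, with roots 1/2 +- nu
roots = np.sort(np.roots([1.0, -1.0, -alpha]).real)
assert np.allclose(roots, [0.5 - nu, 0.5 + nu])
```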

4.2 E > 0

In this case, the wave always has enough energy to escape to infinity; therefore, these are always scattering states. The task is then to solve the differential equation
\[
-\frac{d^2}{dx^2}\psi(x) + \frac{\alpha}{x^2}\psi(x) = k^2\psi(x), \tag{4.12}
\]
where \(k \equiv \sqrt{2mE}/\hbar\) is the momentum of the wave. The general solution is expressed in terms of two Bessel functions of the first kind, \(J_\nu\) and \(J_{-\nu}\):⁴
\[
\psi_k(x) = A_{\nu,k}\sqrt{x}\, J_\nu(kx) + B_{\nu,k}\sqrt{x}\, J_{-\nu}(kx). \tag{4.13}
\]

The general solution can also be given in terms of Hankel functions, which are defined by \(H^{(1)}_\nu(x) = J_\nu(x) + iY_\nu(x)\) and \(H^{(2)}_\nu(x) = J_\nu(x) - iY_\nu(x)\):

³Some authors (e.g. Griffiths) adopt the more straightforward convention α ≡ ν(ν − 1), by which the solution has ν and −ν + 1 as exponents; most of the time the two conventions can be converted by subtracting 1/2 from any instance of ν, though not always. Caution is advised.

⁴It is actually \(J_\nu\) and \(Y_\nu\) which are the linearly independent solutions of Bessel's equation. However, on the specified range 0 < ν < 1/2 the functions \(J_\nu\) and \(J_{-\nu}\) do span the solution space, and this convention will yield a slightly more symmetric result.


\[
\psi_k(x) = A_{\nu,k}\sqrt{x}\, H^{(2)}_\nu(kx) + B_{\nu,k}\sqrt{x}\, H^{(1)}_\nu(kx). \tag{4.14}
\]

The merit of this lies in the fact that we are studying scattering states, and for large x the Hankel functions resemble plane waves:
\[
\sqrt{x}\, H^{(1)}_\nu(x) \approx \sqrt{\frac{2}{\pi}}\, e^{i(x - \pi/4 - \nu\pi/2)}, \tag{4.15}
\]
\[
\sqrt{x}\, H^{(2)}_\nu(x) \approx \sqrt{\frac{2}{\pi}}\, e^{-i(x - \pi/4 - \nu\pi/2)}. \tag{4.16}
\]

With these approximations, we can write down the wave function far away from the potential in terms of the incoming and the phase-shifted outgoing plane waves:
\[
\psi_k(x) \approx A_{\nu,k}\sqrt{\frac{2}{k\pi}}\, e^{-\pi\nu/2} e^{i\pi/4} \left(e^{-ikx} + e^{i(2\delta(k) + kx)}\right), \qquad \frac{i e^{\pi\nu} B_{\nu,k}}{A_{\nu,k}} \equiv e^{2i\delta(k)}. \tag{4.17}
\]

But here we encounter a peculiarity. Normally the boundary condition \(\psi_k(0) = 0\) imposes a constraint on the coupling \(A_{\nu,k}/B_{\nu,k}\), as we lose one of the degrees of freedom. Consider the small-x expressions for the Hankel functions for positive ν (assumed without loss of generality):
\[
H^{(1)}_\nu(kx) \approx -\frac{2^\nu i\,\Gamma(\nu)}{\pi}(kx)^{-\nu} + O(x^{-\nu+2}), \tag{4.18}
\]
\[
H^{(2)}_\nu(kx) \approx \frac{2^\nu i\,\Gamma(\nu)}{\pi}(kx)^{-\nu} + O(x^{-\nu+2}). \tag{4.19}
\]
For |ν| > 1/2, the theory is well-behaved: the boundary condition necessitates fixing the coupling through \(A_{\nu,k} = B_{\nu,k}\), which physically manifests as the scattering phase shift δ(k) = π/4. When |ν| < 1/2 with ν real, or when ν is imaginary, however, we see that the wave function always tends to zero. With no specific value fixed by the theory in itself, δ(k) is left undetermined.

The problem of the inverse square potential can be thought of as due to our ignorance of short-distance physics: while the theory assumes that the inverse square potential is accurate over the whole domain, we do not know what actually happens very close to the origin, where the wave function is singular and energies tend to infinity. This is an unphysical assumption, so we posit that the potential in fact behaves differently there. Let us therefore impose the boundary condition \(\psi_k(R) = 0\), realized by placing an infinite potential:

\[
\frac{2m}{\hbar^2}\, V_R(x) = \begin{cases} \alpha/x^2, & x > R \\ \infty, & x < R \end{cases} \tag{4.20}
\]

Let us take a look at the strongly attractive regime and define ν = iν̄. The boundary condition now reads \(A_{i\bar\nu,k} H^{(1)}_{i\bar\nu}(kR) + B_{i\bar\nu,k} H^{(2)}_{i\bar\nu}(kR) = 0\), so it follows that
\[
\delta(k) = -\arg H^{(1)}_{i\bar\nu}(kR) - \frac{\pi}{4}. \tag{4.21}
\]

Further small-x approximations lead to
\[
\tan\delta(k) \approx \frac{\tan\xi + \tan(\pi\bar\nu/2)}{\cdots} \tag{4.22}
\]


Here we find that the phase shift fluctuates log-periodically as we take R to the origin: this is limit cycle behaviour; we will encounter another instance of it for bound states. Due to the limit cycle, the system behaves identically at different scales (namely those values of R for which ν̄ ln(kR) differs by an integer multiple of 2π).

As was said earlier, the same problem occurs when we are in the weakly attractive regime 0 < ν < 1/2. We will treat this case in detail; to that end, let us introduce the following square-well regularization [37]:
\[
\frac{2m}{\hbar^2}\, V_R(x) = \begin{cases} \alpha/x^2, & x > R \\ -\lambda/R^2, & x < R \end{cases} \tag{4.23}
\]

There are many other regularization schemes that can be chosen, each being some sort of function or distribution which vanishes for R → 0. Different regularization schemes highlight different aspects of the theory. The system will be analyzed through this regularized potential, and only in the final step shall R be pushed to the origin, yielding what we will take to be the 'true' physical nature of Equation 4.1 when fitted to phenomenological data.

Figure 4.1: The chosen cutoff shape for the scattering case. Within the cutoff range a standing wave is formed, which (along with its derivative) must connect to the Bessel wave outside of it. In this plot, R = 1/4 and λ/R² = 5. The task is to first solve for the wave function at finite R, and then take this parameter to the limit 0.

The introduction of a cutoff automatically solves the problem of the system's not having an intrinsic length scale: distances can now be measured with respect to R. To find the solutions, Schrödinger's equation must be solved in both domains, with the solutions connected such that the wave function and its first derivative are continuous at R. The solution outside the regularized zone is given by Equation 4.13, and the solution inside is given by a sine wave: \(\psi(x) = C\sin(\sqrt{k^2 + \lambda/R^2}\,x)\).⁵ The continuity condition then takes on the form
\[
\sqrt{k^2R^2 + \lambda}\,\cot\sqrt{k^2R^2 + \lambda} = kR\,\frac{A_{\nu,k}J'_\nu(kR) + B_{\nu,k}J'_{-\nu}(kR)}{A_{\nu,k}J_\nu(kR) + B_{\nu,k}J_{-\nu}(kR)} + \frac{1}{2}. \tag{4.24}
\]

This equation tells us which momenta are allowed as a function of the coupling. Suppose kR is small so that the following Taylor expansion can be made:


\[
\lim_{kR \to 0} kR\,\frac{J'_\nu(kR)}{J_\nu(kR)} = \nu. \tag{4.25}
\]

This identity allows us to investigate the two extreme cases: for \(B_{\nu,k} = 0\) the equation reduces to
\[
\sqrt{\lambda}\cot\sqrt{\lambda} = \nu + \frac{1}{2} \equiv \nu_+, \tag{4.26}
\]
and for \(A_{\nu,k} = 0\) we find
\[
\sqrt{\lambda}\cot\sqrt{\lambda} = -\nu + \frac{1}{2} \equiv \nu_-. \tag{4.27}
\]

The values of λ for which these values are obtained we call \(\lambda_+\) and \(\lambda_-\), respectively. Since the left-hand side is monotonically decreasing, it is clear that \(\lambda_- > \lambda_+\).

Let us now turn Equation 4.24 around and write it as a function of the coupling:
\[
\frac{A_{\nu,k}}{B_{\nu,k}} = -\frac{kR\,J'_{-\nu}(kR) - J_{-\nu}(kR)\left(\sqrt{k^2R^2+\lambda}\cot\sqrt{k^2R^2+\lambda} - 1/2\right)}{kR\,J'_\nu(kR) - J_\nu(kR)\left(\sqrt{k^2R^2+\lambda}\cot\sqrt{k^2R^2+\lambda} - 1/2\right)} \tag{4.28}
\]
\[
\approx -2^{2\nu}\,\frac{\Gamma(1+\nu)}{\Gamma(1-\nu)}\,(kR)^{-2\nu}\, \frac{\sqrt{\lambda}\cot\sqrt{\lambda} - \nu_-}{\sqrt{\lambda}\cot\sqrt{\lambda} - \nu_+}\,\bigl(1 + O(kR)\bigr). \tag{4.29}
\]

As R is taken to 0, the factor \((kR)^{-2\nu}\) diverges. This would normally lead to trivial results, but from the regularization scheme we have another parameter, λ, with which we can counteract the divergence. The idea is to take λ as a function of R in such a way that the coupling \(A_{\nu,k}/B_{\nu,k}\) as a whole remains constant. These lines of constant values can be seen as flow lines when represented in an RG plot; they are lines of 'constant physics': the same observable values are produced. In the small-R limit, this means that the numerator must go to 0, i.e. λ → λ₋, and this holds true for arbitrary values of the coupling. This is why λ₋ is called an attractive RG fixed point. More precisely, it is called the IR fixed point: since R is the length scale of the theory, R → 0 corresponds to large wave functions.

Furthermore, when \(B_{\nu,k}\) is relatively small, the denominator of the RHS needs to be small as well: therefore these flow lines adhere closely to λ ≈ λ₊ until \((kR)^{-2\nu}\) becomes large, at which point they cross over to the IR fixed point λ₋.


Figure 4.2: A plot of parameter space in the variables λ and kR; each point is associated with a certain coupling. The curves represent theories of equal coupling, starting with A/B = 0 at the top and going down to A/B → ∞ at the bottom. As R is lowered to 0, each line is attracted towards the infrared fixed point (IRFP). In this plot α = −3/16. Image adapted from the review article by Paik [38], with the names of quantities altered to conform to our notation.

Let us investigate the small-cutoff regime further. Consider Equation 4.29 once more, with k fixed and R small, so that the linear term can be discarded. If we define \(\gamma(R) \equiv \sqrt{\lambda}\cot\sqrt{\lambda}\), it looks like
\[
\frac{A_{\nu,k}}{B_{\nu,k}} \approx C R^{-2\nu}\, \frac{\gamma - \nu_-}{\gamma - \nu_+}, \tag{4.30}
\]

where C contains all R-independent factors. The coupling must stay the same as R is varied, that is to say \(d(A_{\nu,k}/B_{\nu,k})/dR = 0\). This condition leads to the RG equation
\[
\beta(R) \equiv R\,\frac{d\gamma(R)}{dR} = -(\gamma - \nu_-)(\gamma - \nu_+). \tag{4.31}
\]

This equation tells us in the first place that ν± are fixed points (in the weakly attractive range): if γ equals either of these two values, it will no longer change with respect to the cutoff. Secondly, if γ lies between the fixed points, the RHS is strictly positive, meaning that γ increases monotonically as R increases. Conversely, γ decreases monotonically as the cutoff is pushed to 0, until it reaches one of the two fixed points. Any trajectory tends towards the IR fixed point: this is called IR flow. If R were instead taken to infinity, the theory would flow to the UV fixed point (UV flow), but this has no physical significance in this theory.
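The attraction to the IR fixed point can be seen by integrating the beta function numerically towards small R; the Euler scheme below (step size and starting point are our own choices) flows γ down to ν₋.

```python
nu = 0.3
nu_p, nu_m = nu + 0.5, -nu + 0.5       # the two fixed points (nu_m is the IR one)

gamma = 0.75 * nu_p + 0.25 * nu_m      # start somewhere between the fixed points
dt = 1e-3
for _ in range(200000):                # integrate towards t = ln R -> -infinity
    # dgamma/dt = beta = -(gamma - nu_m)(gamma - nu_p); step backwards in t
    gamma -= dt * (-(gamma - nu_m) * (gamma - nu_p))
assert abs(gamma - nu_m) < 1e-3
```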

As ν² decreases, the two fixed points move towards each other, until they merge at the critical value ν = 0. Once ν² < 0, the roots become imaginary and the beta function has no real zeros; with no fixed point to be attracted to, the flow becomes periodic in the complex plane [39]. Equation 4.31 has a rather simple analytic solution:


\[
\gamma(R) = \frac{1}{2}\Bigl(1 - 2\nu\tanh(4C\nu - \nu\ln R)\Bigr), \tag{4.32}
\]
where C ∈ ℂ is the integration constant. The real solution is of interest to us and is obtained by choosing ν = iν̄, with ν̄ ∈ ℝ:

\[
\gamma(R) = \frac{1}{2}\Bigl(1 - 2\bar\nu\tan(4C\bar\nu + \bar\nu\ln R)\Bigr). \tag{4.33}
\]
The RG cycle is thus given by a log-periodic function, with logarithmic period T = π/ν̄. What does this mean? When we started off, the system had continuous scale invariance, but as we renormalized the system in the WA range, we found that all RG flows are attracted to a single point. Only when λ is taken to the limit λ₋ does the system display nontrivial physical behaviour.

In the SA range, the RG flow is invariant under rescalings of the cutoff by discrete factors exp(π/ν̄). But since the system has no a priori length scale, this is equivalent to rescaling the system by the inverse amount. These scaling factors are now the only scaling factors that preserve the physics; in other words, because of the RG period, CSI has broken down into DSI. Since k was the scaling variable, the spectrum has also discretized: \(E_n \propto \exp(-2n\pi/\bar\nu)\), n ∈ ℤ. This is an infinite tower of energies with an accumulation point at zero [40].

4.3 E < 0

As mentioned before, the existence of bound states has catastrophic implications: if there is at least one eigenstate with negative energy, then the energy spectrum is unbounded from below. Nevertheless, let us search for solutions. The equation for bound states differs from Equation 4.12 only in a sign:
\[
-\frac{d^2}{dx^2}\psi(x) + \frac{\alpha}{x^2}\psi(x) = -\kappa^2\psi(x), \tag{4.34}
\]

where now \(\kappa \equiv \sqrt{-2mE}/\hbar\). The general solution to this equation is
\[
\psi_\kappa(x) = C_{\nu,\kappa}\sqrt{x}\, I_\nu(\kappa x) + D_{\nu,\kappa}\sqrt{x}\, K_\nu(\kappa x). \tag{4.35}
\]

\(I_\nu\) and \(K_\nu\) are the νth-order modified Bessel functions of the first and second kind, respectively. The first term diverges at large x for any of the allowed values of ν, so we set \(C_{\nu,\kappa} = 0\). Additionally, the Taylor expansion of \(K_\nu(\kappa x)\) shows that the second term diverges near the origin for real ν as well. The non-existence of bound states in the WA and SR ranges can also be demonstrated by the following argument. First, factor the Hamiltonian:

\[
H = \frac{\hbar^2}{2m}\left(-\frac{d}{dx} - \frac{\nu + 1/2}{x}\right)\left(\frac{d}{dx} - \frac{\nu + 1/2}{x}\right). \tag{4.36}
\]

It can be verified that these two factors are each other's Hermitian conjugates if and only if ν is a real number (assuming the boundary terms vanish). In that case, let us write
\[
A^\dagger = -\frac{d}{dx} - \frac{\nu + 1/2}{x}, \qquad A = \frac{d}{dx} - \frac{\nu + 1/2}{x}, \tag{4.37}
\]


so that
\[
H = \frac{\hbar^2}{2m}\, A^\dagger A. \tag{4.38}
\]

The energy \(E_k\) of a generic eigenstate \(|\psi_k\rangle\) can be calculated:
\[
E_k = \langle\psi_k|H|\psi_k\rangle = \frac{\hbar^2}{2m}\langle\psi_k|A^\dagger A|\psi_k\rangle = \frac{\hbar^2}{2m}\langle A\psi_k|A\psi_k\rangle \tag{4.39}
\]
\[
= \frac{\hbar^2}{2m}\int_0^\infty |A\psi_k(x)|^2\, dx > 0. \tag{4.40}
\]
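The positivity argument can be checked numerically for a concrete normalizable state; the trial state below is our own choice (\(\psi = x^{\nu+1/2}e^{-x}\), so all boundary terms vanish), in units ℏ²/2m = 1.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (portable across NumPy versions)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

nu = 0.3
alpha = nu**2 - 0.25           # = -0.16, weakly attractive
p = nu + 0.5

x = np.linspace(1e-6, 40.0, 200001)
psi = x**p * np.exp(-x)
dpsi = (p / x - 1.0) * psi                      # analytic psi'
d2psi = ((p / x - 1.0)**2 - p / x**2) * psi     # analytic psi''

energy = trapz(psi * (-d2psi + alpha / x**2 * psi), x)   # <psi|H|psi>
apsi = dpsi - (nu + 0.5) / x * psi                       # A = d/dx - (nu + 1/2)/x
norm_sq = trapz(apsi**2, x)                              # ||A psi||^2

assert energy > 0
assert abs(energy - norm_sq) / energy < 1e-3
```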

Since WA and SR yield no solutions, we proceed in the SA range, where ν = iν̄. The general, normalized bound state solution is given by [34]:
\[
\psi_\kappa(x) = \kappa\sqrt{\frac{2\sinh\pi\bar\nu}{\pi\bar\nu}}\,\sqrt{x}\, K_{i\bar\nu}(\kappa x). \tag{4.41}
\]

This family of wave functions has a countably infinite number of roots, which is at odds with our familiar picture of a wave function with n roots being the nth excited state (with the ground state having no roots). Furthermore, the spectrum is still unbounded from below. This will not do. For scattering states the strategy was to renormalize by keeping the coupling fixed. This problem requires a different approach, since we have no free integration constants. Instead, we proceed by stipulating an infinite potential wall, as we did in the positive-energy case:

\[
\frac{2m}{\hbar^2}\, V_R(x) = \begin{cases} \alpha/x^2, & x > R \\ \infty, & x < R \end{cases} \tag{4.42}
\]

This gives the system an intrinsic length scale R. The infinite wall fixes a boundary condition in the form of a node at x = R:
\[
K_{i\bar\nu}(\kappa R) = 0. \tag{4.43}
\]

As a result, only those values of κ for which the above holds are allowed. \(K_{i\bar\nu}\) has an infinite number of nodes, accumulating at the origin. When the cutoff is implemented, the above condition requires a node to lie at x = R. This reduces the space of eigenstates to a discrete spectrum. Furthermore, since the number of nodes to the right of the cutoff is always finite, there is a minimum value of κ for which the only zero lies at the cutoff: this is the regularized ground state.


Figure 4.3: The shape of the wave function K_{iν̄}(κx). Since the rightmost zero lies at the cutoff (represented by the dashed line), this is the ground state. The excited states correspond to the other values of κ for which a node lies at x = R. The values used are R = 2 and ν̄ = 2.

In order to identify the discretized spectrum, we approximate the Bessel function for small values of κR:

\[ K_{i\bar\nu}(\kappa R) \approx -\sqrt{\frac{\pi}{\bar\nu\sinh(\pi\bar\nu)}}\,\sin\!\left(\bar\nu\ln\frac{\kappa R}{2} - \arg\Gamma(1+i\bar\nu)\right). \tag{4.44} \]

The argument of the sine tells us for what values of κ the boundary condition is satisfied:

\[ \kappa_n = \frac{2}{R}\,\exp\!\left(\frac{\arg\Gamma(1+i\bar\nu) - (n+1)\pi}{\bar\nu}\right). \tag{4.45} \]

The +1 is added to n for our convenience: written in this way, the nth value of κ corresponds to a wave function with n crossings, so κ_0 is the ground state momentum. The other momenta form a geometric series:

\[ \kappa_n = \kappa_0\, e^{-n\pi/\bar\nu}. \tag{4.46} \]
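As a numerical sketch (not from the thesis), this geometric spacing can be checked directly by locating the zeros of K_{iν̄}: consecutive zeros should be separated by a factor e^{π/ν̄}, the inverse of the ratio in Equation 4.46. The code below uses mpmath, which supports Bessel functions of imaginary order; the choice ν̄ = 1 and the grid bounds are arbitrary:

```python
import mpmath as mp

nubar = mp.mpf(1)

def K(z):
    # K_{i*nubar}(z) is real for real z > 0
    return mp.re(mp.besselk(1j * nubar, z))

# The zeros accumulate geometrically at the origin, so scan a log-spaced grid.
grid = [mp.mpf(10) ** (-5 + 5 * k / 400) for k in range(401)]   # 1e-5 .. 1
zeros = []
for a, b in zip(grid, grid[1:]):
    if K(a) * K(b) < 0:
        zeros.append(mp.findroot(K, (a, b), solver="bisect"))

# Ratios of consecutive zeros should approach exp(pi/nubar) ~ 23.14.
ratios = [float(zeros[i + 1] / zeros[i]) for i in range(len(zeros) - 1)]
print(ratios)
```

The smallest zeros, where the approximation (4.44) is excellent, reproduce the factor e^{π/ν̄} to well within a percent.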

This result is reminiscent of the log-periodicity observed in Equation 4.33. But if we now take R to zero, all momenta diverge. We can only save the ground state, which is fixed per our renormalization condition. To counteract the divergence due to the 1/R factor, we must send ν̄ → 0 at the same time. Through the identity arg Γ(1 + ix) ≈ −γx, valid for |x| ≪ 1, where γ denotes the Euler–Mascheroni constant, the ground state momentum reads

\[ \kappa_0 = \frac{2e^{-\gamma}}{R}\, e^{-\pi/\bar\nu}. \tag{4.47} \]

Though we were looking for strictly negative energies, it now appears that all energies but the ground state go to zero in this limit, and can therefore no longer be considered solutions:

\[ \kappa_n = \kappa_0\, e^{-n\pi/\bar\nu} \to 0 \qquad (n > 0). \tag{4.48} \]

Though RG flow is typically used for scattering problems, it can be used here to retrieve the limit ν̄ → 0 as well [41]. We are curious how ν̄ should depend on the cutoff, so we write down its beta function:


\[ \beta(R) \equiv R\,\frac{d\bar\nu}{dR}. \tag{4.49} \]

The requirement R dE_0/dR = 0 yields the beta function explicitly:

\[ \beta(R) \approx \frac{\bar\nu^2}{\pi}. \tag{4.50} \]
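As a sketch (not in the thesis), the flow equation can be integrated in closed form, giving π/ν̄(R) = π/ν̄₀ + ln(R₀/R), and one can check numerically that ν̄ flows to zero as R → 0 while the ground state momentum of Equation 4.47 stays fixed; the initial data ν̄₀ and R₀ are arbitrary choices:

```python
# RG flow sketch: R d(nubar)/dR = nubar^2/pi integrates to
# pi/nubar(R) = pi/nubar0 + ln(R0/R), so nubar -> 0 as R -> 0
# while kappa_0 = (2 e^{-gamma}/R) e^{-pi/nubar(R)} is R-independent.
import math

gamma = 0.5772156649015329   # Euler-Mascheroni constant
nubar0, R0 = 0.5, 1.0        # arbitrary initial conditions for the flow

def nubar(R):
    return math.pi / (math.pi / nubar0 + math.log(R0 / R))

def kappa0(R):
    return (2 * math.exp(-gamma) / R) * math.exp(-math.pi / nubar(R))

for R in (1.0, 1e-3, 1e-9):
    print(R, nubar(R), kappa0(R))   # nubar shrinks, kappa0 stays constant
```

This makes the renormalization condition explicit: the cutoff dependence of the coupling exactly cancels the 1/R divergence of the ground state momentum.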

Just like before, we see that as the cutoff vanishes, ν̄ is attracted to the fixed point ν̄ = 0. In conclusion, the physics of the strongly attractive potential can be described by the well strength α = −1/4 and has a single bound state:

\[ \psi_0(x) = \sqrt{2}\,\kappa\,\sqrt{x}\,K_0(\kappa x). \tag{4.51} \]

Using all of this information, we can construct the following figure, which summarizes the nature of the spectra:

Figure 4.4: A graphic representation of the allowed energies in the parameter space. The horizontal line is the ν²-axis and the boundary values have been labelled. The dashed line in the center represents the free theory. The lines directed by an arrow symbolize the RG flow: renormalization in those areas leads to the fixed point ν = 0.

4.4 Practical instances of the inverse square potential

The problem of the inverse square potential recurs in several physical phenomena. Most notable is the Efimov effect, first predicted in 1970 [42]. It pertains to the interaction between three identical bosons: when two of them are close together, they exert an effective potential on the third that decays proportionally to the inverse square of the distance. For small values of the cutoff (we are considering the bound state case, cf. Equation 4.46) this gives rise to a geometric series of energy levels:

\[ E_n = E_0\,\lambda^{-2n}, \qquad \lambda = e^{\pi/s_0} \approx 22.694, \tag{4.52} \]

where s_0 is the specific value of ν̄, approximately equal to 1.00624 [43].
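A one-line numerical check (not from the references) of the quoted Efimov constant:

```python
import math

s0 = 1.00624                    # value of nubar for the Efimov problem [43]
lam = math.exp(math.pi / s0)    # discrete scaling factor of the spectrum
print(lam)                      # ≈ 22.694

# Ratio of consecutive Efimov binding energies, E_{n+1}/E_n = lambda^{-2}
print(lam ** -2)
```

Each successive three-body level is thus weaker in binding energy by a factor of roughly 515, which is why only a handful of Efimov states are observable in practice.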

Another instance of this potential is encountered when studying Q-operators in conformal field theory [44]. In his search for exactly solvable models, Baxter formulated a transfer matrix
