

Universiteit van Amsterdam

M.Sc. Mathematical Physics

Lax equations in lower triangular

N×N matrices.

Author:

Jeffrey A. Weenink

Examination date:

August 24, 2020

Supervisor:

Prof. dr. Gerard F. Helminck

Second examiner:

Prof. dr. Sergey V. Shadrin


Abstract

A brief history of hydrodynamics and the Korteweg-de Vries equation precedes the derivation of the KdV hierarchy from the compatibility conditions of the Schrödinger equation, using pseudo-differential operators. Thereafter the algebra of lower triangular N × N matrices over a commutative algebra R is discussed, characterised by the possession of a highest non-zero diagonal. The shift operator Λ therein, the analogue of the differential operator, is paired with a maximal commutative subalgebra h of the finite matrices gl_n(k), k = R or C. Together they create two hierarchies associated with h[Λ^n] in the lower triangular matrices over R, which are conjugated by appropriate lower triangular matrices of the form Id plus strictly negative diagonals. The evolution equations of the deformed generators are determined by Lax equations. Equivalence of the Lax equations with zero curvature forms is derived, as are the conditions required to relate them to Cauchy problems. Using the developed tools, the Lax equations are expressed as formal linearisation problems. Through this formalism a specific ring of smooth k-valued functions on an open subset of the bounded operators on l²(k) is constructed, wherein the Lax equations related to the two hierarchies are derived.


Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Science Park 105–107, 1098 XG Amsterdam, the Netherlands


Contents

Abstract

1 Introduction to Lax pairs: a sprint from Korteweg-de Vries' foundations to modern relevance
  1.1 History
  1.2 Deriving the Korteweg-de Vries equation
  1.3 The KdV & KP hierarchy

2 Algebra of lower triangular N × N matrices
  2.1 Notation
  2.2 Invertibility in LT(R)
  2.3 Integrable hierarchies in LT(R)
    2.3.1 Lax Equations
  2.4 Lax Pairs ⇐⇒ Zero Curvature
  2.5 Cauchy problems

3 Linearisation & Constructing Solutions
  3.1 Linearisation formalism
    3.1.1 Trivial ansatz
    3.1.2 Negative order wave solutions
    3.1.3 Relating the two hierarchy types
  3.2 Geometric construction

Popular summary


CHAPTER 1

Introduction to Lax pairs: a sprint from Korteweg-de Vries' foundations to modern relevance

§1.1 History

This historical section, up until the 20th century, is based primarily on [Darrigol, 2005], a book entirely dedicated to chronicling the development of hydrodynamics. A much shorter article on solitons forms the basis for the revival of the KdV equation [Marin, 2009; Liao, 2018].

The mathematically sound study of waves started after the development of general physical principles. Daniel Bernoulli, in 1740's Hydrodynamica, applied Huygens' conservation of energy and study of pendulums (Horologium Oscillatorium) in equating potential ascent and actual descent, in modern terms kinetic and potential energy respectively. This made him the first to apply general theories to water to derive its dynamics, and allowed him to implicitly derive the one-dimensional Euler equation. It led to little glory, as his father Johann Bernoulli released his improvements in Hydraulica three years later in 1743. Some malice might have been involved, as Johann "consciously withheld Hydraulica from his son", and dated it as having been written in 1732, most likely a falsification to discredit Daniel's work [Mikhailov].

In a 1752 memoir Leonhard Euler derived his eponymous equations of compressible fluids in three dimensions, spurred on by the works of d'Alembert and the Bernoullis. His treatment is so modern it still mirrors the introductions used today. Joseph Louis Lagrange further integrated these ideas through his analytical approach, most thoroughly in his legendary Méchanique Analitique. This unleashed contributions from the great French mathematicians Laplace, Poisson and Cauchy in the following decades. However, these were derivations from a mathematical framework. The more arcane predictions were confirmed by the physical experiments in the book Wellenlehre auf Experimente gegründet by the Weber brothers. Yet real-world observations would challenge the completeness of the analysis developed thus far.

Interest in a hitherto unknown wave phenomenon was kick-started when naval engineer John Scott Russell was studying boats in a canal, where horses drew the boats onto their own waves to reduce drag. By accident the boat stopped but the wave continued, "assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed" [Russell, 1845, p. 13]. After elucidating that the contemporary works of Lagrange^i,


Poisson, the Weber brothers, and Cauchy all failed to predict and describe his 'wave of the first order' (nowadays called a soliton), Russell challenges mathematicians:

"Having ascertained that no one had succeeded in predicting the phænomenon which I have ventured to call the wave of translation or wave of the first order to distinguish it from the waves of oscillation of the second order it was not to be supposed that after its existence had been discovered and its phænomena determined endeavours would not be made to reconcile it with previously existing theory or in other words to show how it ought to have been predicted from the known general equations of fluid motion. In other words it now remained to the mathematician to predict the discovery after it had happened i.e. to give an à priori demonstration à posteriori"

The gauntlet was picked up by Joseph Boussinesq in 1872 [Boussinesq], basing his treatment on the more thorough experiments performed by his compatriot Henry Bazin. Boussinesq employed the more advanced analysis of his time to include non-linear terms Lagrange was forced to exclude a century prior, allowing him to derive the velocity of steady periodic waves, which matched Russell's observations. His eponymous equation was but a footnote. Lord Rayleigh obtained similar results, which prompted the extension by Diederik Johannes Korteweg and his student Gustav de Vries, which now included the effects of surface tension. The work of Korteweg and de Vries was less restrictive than their predecessors', allowing for a broader class of waves and finally answering Russell's call in its entirety. It now appeared the solitary wave is not a temporary approximation and need not disappear at infinity in time or space, nor change its shape with the advancement of time and space, all of which were necessary assumptions for Boussinesq and Rayleigh [Jager, 2011]. For the first time the à posteriori became the à priori Russell sought. They address their predecessors and their conclusion [Korteweg-deVries]:

“[Our contemporaries appear] inclined to the opinion that the wave is only stationary to a certain approximation. It is the desire to settle this question definitively[...]” “We believe, indeed, that from [our calculations] the conclusion may be drawn, that in a frictionless liquid there may exist absolutely stationary waves and that the form of their surface and the motion of the liquid below it may be expressed by means of rapidly convergent series.”

They found solutions for the wave height h relative to the frame ξ = x − ωt following the wave of speed ω. With the "arbitrary constant" α small:

$$\frac{\partial h}{\partial t} = \frac{3}{2}\sqrt{\frac{g}{l}}\,\frac{\partial}{\partial \xi}\left(\frac{1}{2}h^2 + \frac{2}{3}\alpha h + \frac{\sigma}{3}\frac{\partial^2 h}{\partial \xi^2}\right) \tag{1.1}$$

^i In Joseph-Louis' own words [Lagrange, 1788, p. 492]: « On pourra toujours employer la théorie précédente, si on suppose que dans la formation des ondes l'eau n'est ébranlée & remuée qu'à une profondeur très petite, supposition qui est très plausible en elle-même, à cause de la ténacité & de l'adhérence mutuelle des particules de l'eau, & que je trouve d'ailleurs confirmée par l'expérience même à l'égard des grandes ondes de la mer », i.e. the preceding theory can always be employed if one supposes that, in the formation of waves, the water is only disturbed and stirred to a very small depth, a supposition very plausible in itself on account of the tenacity and mutual adherence of the water particles, and which he found confirmed by experience even for the great waves of the sea.


Note ∂h/∂t = 0 if one considers it in the wave frame of stationary waves, i.e. solitons. The constant σ = l³/3 − Tl/ρg "depends upon the depth l of the liquid, upon the capillary tension T at its surface and upon its density ρ", and the wave height h_δ measures the deviation from l [Korteweg-deVries, eq. (12), (13) & (17)]:

$$h(x,t) = h_\delta \operatorname{sech}^2\!\left(\xi\sqrt{\frac{h_\delta}{4\sigma}}\right), \qquad \omega = \sqrt{gl} + \frac{1}{2}\sqrt{\frac{g}{l}}\,h_\delta.$$
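The sech² profile can also be checked symbolically. A minimal sketch with sympy, using the standardised form of the KdV equation derived later in §1.2, ∂_tV = (3/2)V∂_xV + (1/4)∂_x³V; the amplitude 2B² and the speed B² are both tied to the width parameter B, mirroring the amplitude-speed dependence above:

```python
import sympy as sp

x, t, B = sp.symbols('x t B', positive=True)

# sech^2 solitary wave: amplitude 2B^2 and speed B^2 both set by the width B,
# so taller solitons are narrower and faster
V = 2*B**2 * sp.sech(B*(x + B**2*t))**2

# residual of the standardised KdV equation V_t = (3/2) V V_x + (1/4) V_xxx
residual = sp.diff(V, t) - sp.Rational(3, 2)*V*sp.diff(V, x) \
    - sp.Rational(1, 4)*sp.diff(V, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))
```

Rewriting sech in exponentials first makes the cancellation a purely rational simplification.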

Remarkable is the dependence of ω on h_δ: larger waves overtake smaller waves in time even as their shapes remain fixed. This means these stationary waves disperse towards infinity, and their particle-like behaviour would later earn them the title of solitons. In the context of the paper the solitary waves are actually only the limit of cnoidal waves, whose profiles are Jacobi elliptic functions. Nevertheless the mathematical explanation of Russell's physical discovery drew little attention from colleagues, until a problem connected to it appeared a century later.

With the development of the earliest computers, the physicists Enrico Fermi, John Pasta, Stan Ulam and Mary Tsingou sought to contribute to the emergent fields of nonlinear physics and computer simulation of scientific problems by simulating a system of connected springs [Dauxois, 2008]. By adding a small nonlinear perturbation, the initial single energy mode should spread among all possible modes, as stated by the equipartition theorem [Fermi, 1965]. Initially the energy did spread, but then, surprisingly, it returned to its initial state.^ii This unexpected periodic behaviour became known as the FPU paradox.

To great surprise, Norman Zabusky and Martin Kruskal opened their revolutionary soliton paper in 1965 [Zabusky Kruskal]:

"We have observed unusual nonlinear interactions among "solitary-wave pulses" propagating in nonlinear dispersive media. These phenomena were observed in the numerical solutions of the Korteweg-de Vries equation."

They resolved the FPU paradox via the defining properties of waves they dubbed "solitons", in reference to their particle-like behaviour:

"Because of the periodicity, two or more solitons eventually overlap spatially and interact nonlinearly. Shortly after the interaction, they reappear virtually unaffected in size or shape."

With the publication of Gardner, Greene, Kruskal and Miura's application of the inverse scattering transform to the initial value problem of the KdV equation [GGKM67], research exploded. Further work showed that the eigenvalues of the Sturm-Liouville operator are invariant if the potential satisfies the KdV equation (see equation (1.2) in the next section)

^ii "[T]he return [to the initial energy state] is not complete. The total energy is concentrated again essentially in the first Fourier mode, but the remaining one or two percent of the total energy is in higher modes."


[Miura, 1968], prompting Peter Lax to develop his eponymous Lax form of the KdV equation and other related systems [Lax, 1968, eq. (1.14)]. Zakharov and Faddeev showed that the KdV equations constitute "an infinite-dimensional completely integrable Hamiltonian system" [Zakharov, 1971]. The same inverse scattering was used in [AKNS73] to solve its titular AKNS system.

Su and Gardner originally encountered the KdV equation in their 1960 study of hydromagnetic waves in cold plasma, and derived it as a dispersive limit for a wide class of frame-independent systems years later, after interest in the Kruskal paper [Su, 1969]. Solitons and Lax equations have been shown to occur in a number of physical systems. We have already encountered their role in fluid mechanics, but the most devastating standing wave is a tsunami. In fact the origin of some tsunamis, where a small column of water is displaced by an earthquake, is almost an exact scaling-up of the experiments Russell used to reproduce his canal observations. When tides rise, tidal bores are solitons that ascend the rivers. Solitons have also found a very practical application in communication, where their shape-retaining properties help counteract dispersion. Furthermore, solitons occur in cloud waves, plasma, quantum magnetic flux ('fluxons') and electrical transmission lines, among others [Marin, 2009].

§1.2 Deriving the Korteweg-de Vries equation

The following two paragraphs are based on the work of [Miwa, 2012] (a streamlined version of the approach originally in [Lax, 1968]) and [Xiang, 2015], plus the references in specific sections.

Consider a one-dimensional Sturm-Liouville problem, expressed via real positive eigenvalues k² of the energy operator

$$H := \partial_x^2 + V(x), \qquad H\psi = k^2\psi. \tag{1.2}$$

The standard solutions can be built on the e^{kx} solution for V = 0 by a formal perturbation: ψ = exp(kx)(Σ_{j≥0} ψ_j(x)/k^j). We will end up doing something similar in chapter 3 during 'linearisation'. The standard solution starts with constant ψ₀, allowing the subsequent terms to be determined by iterative integration of ordinary differential equations:

$$\psi_j = \frac{1}{2}\int H\psi_{j-1}\,dx. \tag{1.3}$$
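The iteration (1.3) is easy to run symbolically. A small sketch with sympy; the sample potential V = x is a hypothetical choice, any potential whose iterates integrate in closed form works, and ψ₀ is normalised to 1:

```python
import sympy as sp

x = sp.symbols('x')
V = x  # hypothetical sample potential

def H(f):
    # energy operator H = d^2/dx^2 + V
    return sp.diff(f, x, 2) + V*f

# psi_0 constant (normalised to 1); psi_j = (1/2) * integral of H psi_{j-1}
psi = [sp.Integer(1)]
for _ in range(3):
    psi.append(sp.Rational(1, 2)*sp.integrate(H(psi[-1]), x))

print(psi[:3])  # psi_1 = x**2/4, psi_2 = x/4 + x**4/32
```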

To get a sense of things to come, we introduce time evolution through a differential operator G, i.e.

$$G = \sum_{j\ge 0} g_j(x)\,\partial_x^j, \qquad \frac{d\psi(x,t)}{dt} = G\psi(x,t). \tag{1.4}$$

The potential V(x,t) is now also time dependent. Here we explicitly call the symmetry variable t time, but it could be any symmetry of (1.2) that evolves according to some operator G. A specific alternative example is the translation x → x + a, which has corresponding operator G = ∂_x.


We assume ψ(x,t) is a solution that coincides at t = 0 with the previous ψ(x) = ψ(x, t = 0) as determined in (1.3). Having specified the evolution of ψ in time, we need to determine whether the S-L equation continues to hold. So differentiate (1.2) with respect to t. Under the assumption that time is a symmetry of the system, ∂_x∂_t = ∂_t∂_x, this gives

$$(\partial_t H + [H,G])\,\psi = 0, \quad\text{or}\quad \partial_t V(x,t)\,\psi = [G,H]\,\psi, \tag{1.5}$$

where the last equation can only be said to hold on the space spanned by the (formal) solutions ψ. However, since it holds for any choice of k, we conclude the Lax equation

$$\partial_t H + [H,G] = 0. \tag{1.6}$$

If the eigenvalues are time-dependent one gets (∂_tH − ∂_xG + [H,G])ψ = 0, which in some literature is also considered a Lax equation [Liao, 2018]; later we will establish similar formulas named zero curvature equations. The informal scheme for any H and symmetry generator G is then

$$H\psi = \lambda\psi \;\;\&\;\; \frac{\partial\psi}{\partial t} = G\psi \quad\text{imposes}\quad (\partial_t H + [H,G])\,\psi = 0. \tag{1.7}$$

We can derive from this Lax form (1.6) the KdV equation with the choice

$$G = \partial^3 + \frac{3V}{2}\,\partial + \frac{3}{4}\,\partial V.$$

Solving (1.6) involves expressing the commutator. Since the terms in which no derivative falls on V cancel, we only have to consider the terms where the partial derivatives have been applied to V. This gives

$$\Big[\partial^2 + V,\; \partial^3 + \tfrac{3V}{2}\partial + \tfrac34\partial V\Big] = -\partial^3 V - \tfrac{3V}{2}\partial V + \tfrac34\partial^3 V + \big[V, \tfrac34\partial V\big] = -\tfrac14\,\partial^3 V - \tfrac32\,V\partial V, \tag{1.8}$$

giving us by (1.6) the evolution equation

$$\frac{\partial V}{\partial t} = \frac{3}{2}\,V\frac{\partial V}{\partial x} + \frac{1}{4}\frac{\partial^3 V}{\partial x^3}. \tag{1.9}$$

This is the Korteweg-de Vries equation for wave height V. By rescaling the height V → hV, the space coordinate x → ax and the time t → bt, each of the three terms can be separately rescaled to this standardised form. Hereby we call (1.6) the Lax representation of the Korteweg-de Vries equation.
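The computation (1.8) can be double-checked by letting both operators act on an arbitrary test function. A sketch in sympy; the names V and phi are placeholders for arbitrary smooth functions:

```python
import sympy as sp

x = sp.symbols('x')
V = sp.Function('V')(x)
phi = sp.Function('phi')(x)  # arbitrary test function

H = lambda f: sp.diff(f, x, 2) + V*f
G = lambda f: sp.diff(f, x, 3) + sp.Rational(3, 2)*V*sp.diff(f, x) \
    + sp.Rational(3, 4)*sp.diff(V, x)*f

# [H, G] phi should reduce to the degree-zero operator of (1.8) applied to phi
commutator = sp.expand(H(G(phi)) - G(H(phi)))
claimed = sp.expand((-sp.Rational(1, 4)*sp.diff(V, x, 3)
                     - sp.Rational(3, 2)*V*sp.diff(V, x))*phi)
print(sp.expand(commutator - claimed))  # 0
```

Since phi is arbitrary, the vanishing of the difference confirms the operator identity, not just its action on one function.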

If we reverse the roles of G, H in the Lax equation, we need to solve instead

$$\partial G/\partial t = [H, G]. \tag{1.10}$$

With H given, we seek an appropriate G = ∂³ + A∂ + B for which (1.10) holds. Like before, the equation is first said to hold on the space spanned by the eigenfunctions ψ, and then by the spectrum we conclude the equation itself holds. Through this one can isolate the terms with


the same order of partial derivatives not applied to functions, i.e. ∂ ∘ f = ∂(f) + f∂, or for n ≥ 0

$$\partial^n f = \sum_{j= 0}^{n}\binom{n}{j}\,\partial^j(f)\,\partial^{\,n-j}. \tag{1.11}$$
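Formula (1.11) is the Leibniz rule for composing ∂ⁿ with multiplication by f. A quick sympy spot-check for arbitrary f, g and n = 3:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)
n = 3

# right-hand side of (1.11) applied to a test function g
rhs = sum(sp.binomial(n, j)*sp.diff(f, x, j)*sp.diff(g, x, n - j)
          for j in range(n + 1))
# left-hand side: (d^n o f) g = d^n (f g)
lhs = sp.diff(f*g, x, n)
print(sp.expand(lhs - rhs))  # 0
```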

You get, ignoring the indeterminate constants in A and B,

$$\partial_t G = \partial_t(A)\,\partial_x + \partial_t(B) = [\partial_x^2 + V,\; \partial_x^3 + A\partial_x + B] = \big(\partial_x^2 B - \partial_x^3 V - A\,\partial_x V\big) + \big(\partial_x^2 A + 2\partial_x B - 3\partial_x^2 V\big)\partial_x + \big(2\partial_x A - 3\partial_x V\big)\partial_x^2.$$

Gathering the ∂_x² gives 2∂_xA = 3∂_xV ⟹ A = (3/2)V. Gathering the ∂_x gives

$$\frac{3}{2}\partial_t V = \frac{3}{2}\partial_x^2 V + 2\partial_x B - 3\partial_x^2 V \;\implies\; \partial_x B = \frac{3}{4}(\partial_t + \partial_x^2)V.$$

Applying ∂_x to both sides of the remaining degree-zero equation ∂_tB = ∂_x²B − ∂_x³V − A∂_xV gives

$$\partial_x\partial_t B = \frac{3}{4}\big(\partial_t^2 + \partial_t\partial_x^2\big)V = \frac{3}{4}\partial_x^2(\partial_t + \partial_x^2)V - \partial_x^4 V - \frac{3}{2}\partial_x(V\partial_x V) \;\implies\; 3\,\partial_t^2 V = -\partial_x^4 V - 6\,\partial_x(V\partial_x V). \tag{1.12}$$

Equation (1.12) is the eponymous equation originally derived by Joseph Boussinesq in 1872. Every factor can again be independently rescaled through V → hV, x → ax and t → bt.

In deriving the Lax form we assumed time independence of the eigenvalues. From an equation of the Lax form (1.6) we can conversely conclude the stability of the eigenvalues of H, so long as a trace-like function tr is well behaved, i.e. cyclic and expressible in polynomials of the eigenvalues k². For n ∈ N,

$$\frac{d}{dt}\,\mathrm{tr}(H^n) = \sum_{j=0}^{n-1}\mathrm{tr}\big(H^j\,[H,G]\,H^{n-j-1}\big) = \sum_{j=0}^{n-1}\mathrm{tr}\big(H^{n-1}[H,G]\big) = 0,$$

where each tr(Hⁿ) is expressed in the eigenvalues k^{2n}; together they allow us to conclude ∂_t k² = 0 [Torrielli, 2016, 3.1].
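The cyclicity step can be illustrated numerically on finite matrices, a toy stand-in for a well-behaved trace: tr(H^{n−1}[H,G]) vanishes identically, which is what forces d/dt tr(Hⁿ) = 0 along a Lax flow.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6))
G = rng.standard_normal((6, 6))

# cyclicity: tr(H^{n-1}(HG - GH)) = tr(H^n G) - tr(H^n G) = 0
for n in range(1, 5):
    val = np.trace(np.linalg.matrix_power(H, n - 1) @ (H @ G - G @ H))
    assert abs(val) < 1e-8
print("tr(H^{n-1}[H,G]) = 0 for n = 1..4")
```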

Remark. The origin of integrability lies in Liouville's study of dynamical systems evolving through Hamiltonian equations of motion that could be solved by integration^iii on the given boundary condition. Hence, integrability. Being integrable is then the ability to express solutions through these methods, not solvability: the ability to actually calculate solutions, be it exactly or approximately. [Torrielli, 2016]


§1.3 The KdV & KP hierarchy

To get operators commuting with H = ∂² + V, we introduce negative powers ∂⁻¹ of the derivation ∂, where ∂∂⁻¹ = Id. Expand equation (1.11) by allowing negative order terms; for n ≥ 0

$$\partial^n f = \sum_{j=0}^{n}\binom{n}{j}\,\partial^j(f)\,\partial^{n-j} = \sum_{j\ge 0}\binom{n}{j}\,\partial^j(f)\,\partial^{n-j},$$

since for n ≥ 0 the binomial coefficients vanish for j > n; the second, formally infinite sum is the form that extends to negative n.

We wish to have ∂(∂⁻¹f) = f for all functions f. So assume ∂⁻¹f is expressible as Σ_{j≥0} F_{N−j}∂^{N−j} of degree N. We can compute the form from the given constraints:

$$f = \partial(\partial^{-1}f) = F_N\,\partial^{N+1} + \sum_{j\ge 0}\big(\partial F_{N-j} + F_{N-j-1}\big)\,\partial^{N-j},$$
$$\therefore\;\; N = -1,\quad F_{-1} = f \;\;\&\;\; \partial F_{N-j} = -F_{N-j-1},$$
$$\partial^{-1}f := \sum_{j\ge 0}(-1)^j\,\partial^j(f)\,\partial^{-j-1}. \tag{1.13}$$
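Definition (1.13) can be verified mechanically. A sketch of a tiny pseudo-differential calculus (orders as dictionary keys, coefficients as sympy expressions, everything truncated at a hypothetical depth J), checking that ∂ composed with the truncated series for ∂⁻¹f returns f up to the truncation tail:

```python
import sympy as sp
from collections import defaultdict

x = sp.symbols('x')
f = sp.Function('f')(x)
J = 5  # truncation depth of the formal series

def compose(A, B):
    """Compose two pseudo-differential operators given as {order: coefficient},
    using the extended Leibniz rule d^a o g = sum_j C(a,j) g^(j) d^(a-j)."""
    out = defaultdict(lambda: sp.Integer(0))
    for a, fa in A.items():
        for b, gb in B.items():
            for j in range(J + 2):
                out[a + b - j] += sp.binomial(a, j)*fa*sp.diff(gb, x, j)
    return dict(out)

# partial^{-1} f as the formal series (1.13), truncated at depth J
inv = {-j - 1: (-1)**j * sp.diff(f, x, j) for j in range(J + 1)}
d = {1: sp.Integer(1)}

result = compose(d, inv)
# all coefficients above the truncation tail cancel, except the order-0 one: f
for order, coeff in result.items():
    if order > -J - 1:
        expected = f if order == 0 else 0
        assert sp.expand(coeff - expected) == 0
print("d o (d^{-1} f) = f up to truncation")
```

The cancellation at each negative order is exactly the telescoping ∂F_{N−j} = −F_{N−j−1} above.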

This allows us to define the pseudo-differential operators of order n,

$$A = \sum_{j\ge 0} f_{n-j}\,\partial^{n-j} = \sum_{j=0}^{n} f_j\,\partial^j + \sum_{j>0} f_{-j}\,\partial^{-j}.$$

We split any such operator A into two parts,

$$\pi_+(A) = \sum_{j= 0}^{n} f_j\,\partial^{j} \quad\&\quad \pi_-(A) = \sum_{j>0} f_{-j}\,\partial^{-j}, \tag{1.14}$$

where the first expression is empty for negative n. The π₊(A) is just a regular differential operator.

With equation (1.6), we saw that the commutator [H,G] equalled the time derivative ∂_tH = ∂_tV. The latter is a function, whereas generally a commutator of differential operators like [H,G] has higher order terms. Let us try to generalise this: for given H = ∂² + V, determine the pseudo-differential operators G for which [G,H] is a function (i.e. a degree zero differential operator). This will produce the Korteweg-de Vries hierarchy.

So let us try to solve this for a degree N ≥ 1 pseudo-differential operator G:

$$\Big[\partial^2 + V,\; \sum_{i\le N} g_i\,\partial^i\Big]. \tag{1.15}$$

Expanding the commutator by repeated application of the Leibniz rule (1.11) and collecting powers of ∂ gives

$$\Big[\partial^2+V, \sum_{i\le N} g_i\partial^i\Big] = (2\partial g_N)\,\partial^{N+1} + (2\partial g_{N-1} + \partial^2 g_N)\,\partial^N + \sum_{M= 1}^{N-1}\Big(\partial^2 g_M + 2\partial g_{M-1} - \sum_{j=0}^{N-M-1} g_{N-j}\binom{N}{j}\,\partial^{N-M-j}V\Big)\partial^M + \partial^2 g_0 + \sum_{j=0}^{N-1} g_{N-j}\binom{N}{j}\,\partial^{N-j}V.$$

For this to be a function, the coefficients of all positive powers of ∂ must vanish, leaving only the degree zero part ∂²g₀ + Σ_{j=0}^{N−1} g_{N−j} (N choose j) ∂^{N−j}V.

The first equation to solve is ∂g_N = 0, so g_N is a constant; once normalised, g_N = 1. The next equation is ∂g_{N−1} = −½∂²g_N = 0, so g_{N−1} is constant, and without loss of generality we can assume it is zero modulo^iv C[∂]. Then by induction we find unique solutions {g_M}_{M≤N} modulo C[∂]: we calculate

$$2\,\partial g_{M-1} = -\partial^2 g_M + \sum_{j=0}^{N-M-1} g_{N-j}\binom{N}{j}\,\partial^{N-M-j}V,$$

and by integration (proving it is an integrable system) we determine g_{M−1} uniquely up to elements in C[∂].

Hence for any N ∈ N we have a unique solution G_N = Σ_{i≤N} g_i∂^i, up to normalisation and constants, whereby (1.15) is a degree zero function. Since the N-th power of the square root of H is also a solution of degree N, they must be equal up to C[∂]. In fact, since neither contains C[∂] terms outside the leading ∂^N, they are exactly equal: G_N = π₊(√H^N).

Lemma 1.3.1. Any H = ∂^m + Σ_{j<m} h_j∂^j, with m ≥ 1 and the {h_j} integrable, has an m-th root H^{1/m}.

Proof. One can derive the required relations by defining some G = ∂ + Σ_{j≥0} g_{−j}∂^{−j} and solving G^m = H. The notational challenge in this calculation is the multiplicity of certain terms; e.g. ∂(g₀)∂^{m−2} is obtainable in many different combinations of (∂ + g₀)^m, where we track where the ∂(g₀) term comes from: m − 1 times if it is from the first ∂; m − 2 times if it is from the second ∂; . . . ; once if it is from the second-to-last ∂. So the multiplicative factor is m(m−1)/2.

This is wasted effort for all terms, so rather we give each j-th power of ∂ a number of non-zero constants n^i_j, where i indexes the number of ways one could obtain that power

of ∂.^v In the above we calculated n¹_{m−2} = m(m−1)/2. The other term, due to g₀²∂^{m−2}, has multiplicity n²_{m−2} = (m choose 2), and g_{−1} has multiplicity n³_{m−2} = m. Let us compute:

$$G^m = \Big(\partial + \sum_{j\ge 0} g_{-j}\partial^{-j}\Big)^{\!m} = \partial^m + m\,g_0\,\partial^{m-1} + \big(n^1_{m-2}\partial(g_0) + n^2_{m-2}g_0^2 + n^3_{m-2}g_{-1}\big)\partial^{m-2} + \big(n^1_{m-3}\partial^2(g_0) + n^2_{m-3}g_0\partial(g_0) + n^3_{m-3}\partial(g_{-1}) + n^4_{m-3}g_0^3 + n^5_{m-3}g_{-2} + n^6_{m-3}g_{-1}g_0\big)\partial^{m-3} + \sum_{k>3}\Big(\sum_i n^i_{m-k}(\dots)\Big)\partial^{m-k}. \tag{1.16}$$

^iv Other ground fields are possible.

Define the degree of ∂^i g_{−j} as i + j + 1 (i, j ≥ 0), and let the degree of a product be additive: the degree 3 terms are ∂²g₀, g₀g_{−1}, ∂g_{−1} and g_{−2}. Then one sees that the factor of ∂^{m−k} is made up of all possible degree k terms with non-zero factors {n^i_{m−k}}_{i≥1}.

Most notably, this means g_{−a} only occurs in the equations for ∂^{m−b}, b ≥ a, and induction can be applied on the degree. Hence, so long as the {g_{−a}} are integrable, the system can be solved by induction and integration on the {g_{−b}}_{b<a}, producing a solution G^m = H.

Remark. Up to integration constants, the solution is unique.
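For m = 2 and H = ∂² + V, the first steps of this recursion read (a short worked example):

```latex
G = \partial + g_0 + g_{-1}\partial^{-1} + \dots, \qquad
G^2 = \partial^2 + 2g_0\,\partial + \big(\partial(g_0) + g_0^2 + 2g_{-1}\big) + O(\partial^{-1}).
```

Matching with H = ∂² + V gives g₀ = 0 and g₋₁ = ½V, i.e. √H = ∂ + ½V∂⁻¹ + . . . , the classic square root of the Schrödinger operator.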

The tools employed here, and in finding the pseudo-differential operator of degree k that commutes with any operator H fulfilling the lemma's condition, are one and the same.

Define for l ≥ 1,

$$G_l := \pi_+\big(\sqrt{H}^{\,l}\big).$$

Then we get for each such G_l a corresponding evolution variable x_l and the system of equations Hψ = k²ψ, ∂ψ/∂x_l = G_lψ. As before, compatibility of this system gives the compatibility equation, and we arrive at the Korteweg-de Vries hierarchy

$$\frac{\partial H}{\partial x_l} = [G_l, H], \tag{1.17}$$

where any specific l gives the l-th KdV equation.

Remark. Only the odd terms are worthwhile, since G_{2l} = π₊(√H^{2l}) = H^l, and the commutator is zero.
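As an illustration of G_l = π₊(√H^l), carried out to the first few orders only (a reconstruction sketch): for H = ∂² + V and l = 3, one recovers exactly the operator G used in §1.2.

```latex
\begin{aligned}
\sqrt{H} &= \partial + \tfrac12 V\,\partial^{-1} - \tfrac14 V_x\,\partial^{-2} + \dots,\\
G_3 = \pi_+\big(H\sqrt{H}\big)
 &= \pi_+\Big((\partial^2 + V)\big(\partial + \tfrac12 V\partial^{-1} - \tfrac14 V_x\partial^{-2} + \dots\big)\Big)\\
 &= \partial^3 + \tfrac12 V\,\partial + V_x - \tfrac14 V_x + V\,\partial
  = \partial^3 + \tfrac32 V\,\partial + \tfrac34 V_x.
\end{aligned}
```

The discarded π₋ part is what makes G₃ a genuine differential operator while [G₃, H] stays of degree zero.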

The most well-studied generalisation is the KP hierarchy. Consider any

$$H = \partial + \sum_{j\ge 1} f_{-j}\,\partial^{-j},$$

with the {f_{−j}} integrable. The eigenvalue equation in k is Hψ = kψ. By the previous procedure, for l ≥ 1, we can compute G_l := π₊(H^l). The compatibility in the evolution variable x_l gives the KP hierarchy

$$\frac{\partial H}{\partial x_l} = [G_l, H]. \tag{1.18}$$


In particular, we recover the KdV hierarchy if π₋(H²) = 0.

For both hierarchies, we need to establish that 'time' evolution along the variables x_l is compatible. That is to say, changing ψ along the different infinitesimal generators [G_l, H] and [G_m, H] of their respective symmetries commutes:

$$\frac{\partial}{\partial x_l}\Big(\frac{\partial}{\partial x_m}\psi\Big) = \frac{\partial}{\partial x_m}\Big(\frac{\partial}{\partial x_l}\psi\Big), \quad\text{or}\quad \frac{\partial}{\partial x_l}[G_m, H] = \frac{\partial}{\partial x_m}[G_l, H] \quad \forall\, l, m \ge 1. \tag{1.19}$$

Theorem 1.3.2 ([Miwa, 2012, p. 14]). For the KP hierarchy, (1.19) holds.

Proof. Since both sides are derivations, a polynomial function f in H respects the KP hierarchy equations:

$$\frac{\partial f(H)}{\partial x_l} = [G_l, f(H)].$$

Since both derivations ∂/∂x_l(·) and [G_l, ·] respect the decomposition into degrees of ∂, we can state more strongly

$$\frac{\partial f(\pi_\pm(H))}{\partial x_l} = [G_l, f(\pi_\pm H)],$$

and in particular

$$\frac{\partial \pi_+(H^m)}{\partial x_l} = \frac{\partial G_m}{\partial x_l} = \pi_+\Big(\frac{\partial H^m}{\partial x_l}\Big) = -\pi_+\big([H^m, \pi_+(H^l)]\big). \tag{1.20}$$

This means that when we apply ∂/∂x_l to [G_m, H] = −[H, π₊(H^m)] we can apply it to each argument, according to the defining property of derivations:

$$\frac{\partial [G_m, H]}{\partial x_l} = -\frac{\partial}{\partial x_l}[H, \pi_+(H^m)] = [[H, G_l], G_m] + [H, \pi_+([H^m, G_l])]. \tag{1.21}$$

To establish (1.19) we need to show the expansion commutes under exchange of m and l. For any pseudo-differential operators A, B which commute, [A,B] = 0,

$$[\pi_-(A), \pi_+(B)] = [A - \pi_+(A),\, B - \pi_-(B)] = 0 + [B, \pi_+(A)] - [A - \pi_+(A), \pi_-(B)],$$

and since the last bracket has strictly negative order, its π₊ part vanishes. This allows us to expand π₊([H^m, G_l]):

$$\pi_+([H^m, G_l]) = [G_m, G_l] + \pi_+([\pi_-H^m, G_l]) = [G_m, G_l] + \pi_+([H^l, G_m]).$$

With this substitution in (1.21), we can invoke the Jacobi identity [[X,Y],Z] + [[Z,X],Y] + [[Y,Z],X] = 0 with X = H, Y = G_l, Z = G_m to produce:

$$\frac{\partial [G_m, H]}{\partial x_l} = [[X,Y],Z] + [X,[Z,Y]] + [X, \pi_+([H^l, G_m])] = [X, \pi_+([H^l, G_m])] - [[Z,X],Y] = [H, \pi_+([H^l, G_m])] + [[H, G_m], G_l],$$

which is symmetric under the exchange l ↔ m, establishing (1.19).

CHAPTER 2

Algebra of lower triangular N × N matrices

The convention 0 ∈ N is used, and calculations are done in a commutative algebra R over a field k = R or C. Proofs in the context of R will often generalise to the algebra M_n(R) of n × n matrices, which can then be reinterpreted as a result in R itself.

A lot of definitions and notation are introduced in the first three paragraphs, some only relevant within their own section. A separate page of important definitions follows thereafter, in an effort to relieve the reader of the need to retread sections.

§2.1 Notation

Let A_{i,j} ∈ R, for an N × N matrix:

$$A = (A_{i,j}) = \begin{pmatrix} A_{0,0} & A_{0,1} & A_{0,2} & \dots \\ A_{1,0} & A_{1,1} & A_{1,2} & \dots \\ A_{2,0} & A_{2,1} & A_{2,2} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

Define I_{n→1} : M_N(M_n(R)) →~ M_N(R) through carrying each A_{i,j} ∈ M_n(R) to n × n entries of R: A_{i,j} gets placed in the block of (in + a, jn + b) entries, 0 ≤ a, b < n.

Given i, j ∈ N, define E_{i,j} by [E_{i,j}]_{n,m} = δ_{i,n}δ_{j,m}, with the Kronecker delta equal to one for identical indices and zero otherwise. Multiplication between them is straightforward: E_{i,j}E_{m,n} = δ_{j,m}E_{i,n}.
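The multiplication rule of the matrix units can be spot-checked numerically on a finite truncation (a sketch; the size N = 6 is arbitrary):

```python
import numpy as np

N = 6  # arbitrary finite truncation of the N x N setting

def E(i, j):
    # matrix unit E_{i,j}: single 1 at position (i, j)
    M = np.zeros((N, N), dtype=int)
    M[i, j] = 1
    return M

# E_{i,j} E_{m,n} = delta_{j,m} E_{i,n}
assert np.array_equal(E(1, 2) @ E(2, 4), E(1, 4))  # j == m
assert not np.any(E(1, 2) @ E(3, 4))               # j != m gives zero
print("matrix-unit rule verified on the truncation")
```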

For a matrix A, we call the m-th diagonal, m ∈ Z, the matrix of entries A_{i,j}E_{i,j} with j − i = m. Hence the first diagonal would be (A_{0,1}, A_{1,2}, A_{2,3}, . . .). When convenient, the term can also refer to that collection of entries as an N × N matrix with those entries on the 0-th diagonal and zero elsewhere; it will then be accompanied by a secondary term that places it at the right m-th diagonal, see the shift matrices Λ and Υ later on.

For any d : N → R we define the row vector d̆ := (d(0), d(1), . . .). A matrix A acts^i as d̆A(m) := Σ_j d(j)A_{j,m}, resulting in another N-row so long as the sum is finite; in these cases d̆A is well defined. Define V as the vector space of the N-rows.

Finite matrices M_n(R) can be seen as the finite R-linear maps R^n → R^n, acting either on rows or on vectors. However, the action of M_N(R) on rows cannot in general be interpreted as an R-linear map R^N → R^N. To make it well defined, there are five obvious options.

^i We still consider matrix multiplication as left-to-right, since that, not the row action, is our primary concern.


• Restrict to the vector space f ⊂ V := R^N, consisting of maps b : N → R with only finitely many non-zero entries. Any A ∈ M_N(R) is then some A : f → V through the row action, defined by (bA)_j = Σ_k b_k A_{k,j}, i.e. b̆A. This describes Hom_R(f, V). The fatal flaw is the lack of matrix multiplication, which cannot be seen as a composition of Hom_R(f, V) maps.

• If the vector space V is unchanged, the matrices instead need to be restricted. The lower triangular matrices are the class LT consisting of A ∈ M_N(R) such that the entries above some positive diagonal are zero, that is, ∃M ∈ N such that ∀m > M, A_{i,i+m} = 0. Informally, above a certain diagonal the matrix is zero. Composition here is well defined because an entry of a matrix product is the inner product of a row of the first matrix and a column of the second, and the first terminates after a finite number of entries. The result is also lower triangular. Visualised in figure 2.1(a). We will return to this class in a more thorough manner.

• The case of upper triangular matrices is exactly the same: ∃M ∈ N such that ∀m > M, A_{i+m,i} = 0. Informally, below a certain diagonal the matrix is zero. Matrix multiplication is well defined because each column of the second matrix has only finitely many non-zero terms. Visualised in figure 2.1(c).

• The unrestricted lower triangular matrices have rows i of finite length n_i. If {n_i − i}_{i∈N} were bounded, this is just the class of lower triangular matrices above; in particular the highest diagonal M mentioned would be the minimal bound of {n_i − i}_{i∈N}. The product AB is well defined, as any row of A has only finitely many non-zero entries: the corresponding row of AB is then a sum of rows of B determined by the finitely many non-zero entries in that row of A. For row lengths n_i, m_j of A, B respectively, the i-th row of AB has row length at most max_{j≤n_i}(m_j). This behaviour is much broader, as n_i as a function of i need no longer be linear in i. Visualised in figure 2.1(b).

• Likewise, the unrestricted upper triangular matrices have finite column lengths n_i. Visualised in figure 2.1(d).

Given a d : N → R, define diag[k](d) as the matrix with the entries of d on the k-th diagonal and zero elsewhere. In particular, (diag[k](d))_{i,j} = d(i)δ_{j−i,k} for non-negative k, or d(j)δ_{i−j,−k} for negative k. As a first use, consider the oft-recurring sub-algebra of the diagonal matrices:

Dia(R) := {diag[0](d) | d(k) ∈ R, ∀k ∈ N}.

In the case where the ring is R, but d : N → M_n(R), we use diag[k,n](d) for the diagonal M_N(M_n(R)) matrix reinterpreted as an M_N(R) matrix, i.e. diag[k,n](d) := I_{n→1} ∘ diag[k](d). For example, with n = 2 and

$$d_i = \begin{pmatrix} 4i & 4i+1 \\ 4i+2 & 4i+3 \end{pmatrix},$$


Figure 2.1: Illustration of matrix classes. (a) Lower triangular (LT): everything is 0 above the leading diagonal. (b) Unrestricted lower triangular: everything is 0 above the leading 'curve'. (c) Upper triangular: everything is 0 below the leading diagonal. (d) Unrestricted upper triangular: everything is 0 below the leading 'curve'.

$$\mathrm{diag}[0,2](d) = I_{2\to 1}\begin{pmatrix} \left(\begin{smallmatrix}0&1\\2&3\end{smallmatrix}\right) & 0 & 0 & \dots \\ 0 & \left(\begin{smallmatrix}4&5\\6&7\end{smallmatrix}\right) & 0 & \dots \\ 0 & 0 & \left(\begin{smallmatrix}8&9\\10&11\end{smallmatrix}\right) & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} 0&1&0&0&0&0&\dots\\ 2&3&0&0&0&0&\dots\\ 0&0&4&5&0&0&\dots\\ 0&0&6&7&0&0&\dots\\ 0&0&0&0&8&9&\dots\\ 0&0&0&0&10&11&\dots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{pmatrix}.$$
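The interleaving I_{2→1} can be reproduced numerically (a sketch with numpy, using only the first three blocks):

```python
import numpy as np

n, blocks = 2, 3
# d_i = [[4i, 4i+1], [4i+2, 4i+3]] as in the example above
d = [np.arange(4*i, 4*i + 4).reshape(n, n) for i in range(blocks)]

# I_{n->1} on a block diagonal: block d_i occupies rows/columns [n*i, n*i + n)
M = np.zeros((n*blocks, n*blocks), dtype=int)
for i, di in enumerate(d):
    M[n*i:n*i + n, n*i:n*i + n] = di
print(M)
```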


Any element A ∈ LT can be re-expressed as a sum of the diagonals {diag[i,n](d)}_{i<∞}. Furthermore, define

$$\Lambda := \sum_{i\ge 0} E_{i,i+1} = \begin{pmatrix} 0&1&0&0&\dots\\ 0&0&1&0&\dots\\ 0&0&0&1&\dots\\ \vdots&\vdots&\vdots&\vdots&\ddots \end{pmatrix}, \qquad \Upsilon := \Lambda^T = \sum_{i\ge 0} E_{i+1,i} = \begin{pmatrix} 0&0&0&\dots\\ 1&0&0&\dots\\ 0&1&0&\dots\\ 0&0&1&\dots\\ \vdots&\vdots&\vdots&\ddots \end{pmatrix}.$$

The multiplication between them is ΛΥ = Id = Σ_{i≥0} E_{i,i} and ΥΛ = Σ_{i≥1} E_{i,i} ≠ Id.

This allows us to clarify the interpretation above of the k-th diagonal d (n = 1), given as diag[k](d). To aid the notation, let 0_{i×j} be an i × j matrix of zeroes, i, j ∈ {N} ∪ N. For k ≥ 0, d ∈ Dia(R), we place d on the k-th diagonal via

$$\mathrm{diag}[k](d) := d\Lambda^k = \begin{pmatrix} 0_{\mathbb{N}\times k} & \begin{matrix} d(0)&0&0&\dots\\ 0&d(1)&0&\dots\\ 0&0&d(2)&\dots\\ \vdots&\vdots&\vdots&\ddots \end{matrix} \end{pmatrix}.$$

Likewise, Υ can be used for the negative diagonals, so long as care is given to the ordering:

$$\mathrm{diag}[-k](d) := \Upsilon^k d = \begin{pmatrix} 0_{k\times\mathbb{N}} \\ \begin{matrix} d(0)&0&0&\dots\\ 0&d(1)&0&\dots\\ 0&0&d(2)&\dots\\ \vdots&\vdots&\vdots&\ddots \end{matrix} \end{pmatrix}.$$
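The shift-matrix identities and the placement diag[k](d) = dΛᵏ can be checked on a finite truncation, where ΛΥ = Id only fails in the last truncated entry (a sketch):

```python
import numpy as np

N = 5  # finite truncation; in the full N x N setting Lambda Upsilon = Id exactly
Lam = np.eye(N, k=1)   # shift matrix Lambda: ones on the first diagonal
Ups = Lam.T            # Upsilon = Lambda^T

assert np.array_equal(Lam @ Ups, np.diag([1.]*(N - 1) + [0.]))  # Id up to truncation
assert np.array_equal(Ups @ Lam, np.diag([0.] + [1.]*(N - 1)))  # misses E_{0,0}

d = np.diag([10., 20., 30., 40., 50.])
# diag[2](d) = d Lambda^2 places d(i) at entry (i, i+2)
P = d @ np.linalg.matrix_power(Lam, 2)
assert P[0, 2] == 10 and P[1, 3] == 20 and P[2, 4] == 30
print("shift identities hold on the truncation")
```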

These help to write down the matrix decomposition as

$$A = \Big(\sum_{j\ge 0} d_j\Lambda^j\Big) + \sum_{j<0} \Upsilon^{-j}d_j = \sum_j \mathrm{diag}[j](d_j),$$

with the diagonal d_j : N → R called the j-th diagonal. The degree of a matrix is the highest value j for which the diagonal d_j is not identically zero, and ∞ if no such highest value exists. The zero element has degree −∞; however, it often needs to be treated separately. This gives us

LT(R) := {A ∈ M_N(R) | A has degree < ∞}.

Lemma 2.1.1. LT(R) is an R-algebra w.r.t. matrix multiplication.

Proof. The main obstacle is showing the product is well defined.

For all A, B ∈ LT(R) of degrees a, b, one has
\[
AB = \sum_{n,m,n',m'} A_{n,m}B_{n',m'}E_{n,m}E_{n',m'} = \sum_{n,m,m'} A_{n,m}B_{m,m'}E_{n,m'}. \tag{2.1}
\]
So $(AB)_{i,j} = \sum_{m}A_{i,m}B_{m,j}$. But A_{i,m} = 0 for all m − i > a, so the sum is finite and well-defined. Moreover (AB)_{i,j} = 0 for j − i > a + b: since A_{i,m} = 0 for m − i > a and B_{m,j} = 0 for j − m > b, a non-zero product requires m ≤ a + i and m ≥ j − b. Combined, the entry is zero if there is no m with a + i ≥ m ≥ j − b, i.e. if j − i > a + b. Now that the multiplication LT(R) × LT(R) → LT(R) is well-defined, the notation (2.1) demonstrates it is also R-bilinear, associative, and with unit $\sum_{j}E_{j,j} = \mathrm{Id}$.

Remark. A more straightforward proof is that LT ⊂ Hom(f, f ) etc.

Remark. Equation (2.1) and its discussion also make clear that deg(AB) ≤ deg(A) + deg(B) and deg(A^m) = m deg(A) for m ≥ 0.
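The degree bound of this remark can likewise be checked on finite truncations; a small sketch, in which the truncation size and the degrees a, b are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N, a, b = 10, 2, 1   # truncation size; degrees of A and B

def random_degree(deg):
    # lower triangular up to (and including) the deg-th superdiagonal
    M = rng.integers(-3, 4, (N, N)).astype(float)
    return np.tril(M, k=deg)

A = random_degree(a)
B = random_degree(b)
# deg(AB) <= deg(A) + deg(B): everything above the (a+b)-th superdiagonal vanishes
assert np.array_equal(np.triu(A @ B, k=a + b + 1), np.zeros((N, N)))
```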

Remark. Note that for the multiplication and associativity, the rightmost element B in AB can be a more general matrix in M_ℕ(R) ⊃ LT(R), provided A ∈ LT(R). However, the product need not lie in LT(R), so care is required.

With the transition to d(i) ∈ M_n(R), the roles of Λ, Υ above are replaced by Λ^n, Υ^n respectively for general n ≥ 1, and Dia(R) by
\[
\mathrm{Dia}_n(R) := I_{n\to1}\,\mathrm{Dia}(M_n(R)).
\]
The decomposition hereunder is
\[
A = \sum_{j}\operatorname{diag}_{[j,n]}(A_j) = \Big(\sum_{j\ge0} A_j\Lambda^{jn}\Big) + \Big(\sum_{j<0}\Upsilon^{-jn}A_j\Big). \tag{2.2}
\]

Definition 2.1.2. The A_j are now (n×n-block) diagonals, reinterpreted from M_ℕ(M_n(R)) as M_ℕ(R), and the degree indexes the highest non-trivial (n×n-block) diagonal.

The algebra LT(R) is unchanged, In→1LT(Mn(R)) = LT(R). Once the Lax equations are introduced in a few paragraphs, we always consider specific maximal commutative sub-algebras h ⊂ gln(R)—all terms consistently refer to such diagonals and there should be little if any conflation of the terms.

Accordingly, in the context of upper triangular matrices, elements have a lowest diagonal above which any terms are allowed. Again this is called the degree; circumstances will dictate which is intended.
\[
\mathrm{UT}(R) := \{A \in M_{\mathbb{N}}(R) \mid A^{T} \in \mathrm{LT}(R)\}.
\]

Lemma 2.1.3. UT(R) is an R-algebra w.r.t. matrix multiplication.

Let us now develop some useful tools and notations for multiplication in LT. Given any d : ℕ → R and k ∈ ℤ, define d^{[+k]}(i) := d(i + k). To allow negative values of k, d is extended to a map ℤ → R by zero outside ℕ. The order of these operations is of great importance. Namely, for k > 0, (d^{[+k]})^{[−k]}(i) is zero for i < k and equals d elsewhere: since d^{[+k]}(i) = d(i + k), one gets (d^{[+k]})^{[−k]}(i) = d^{[+k]}(i − k) = 0 whenever i − k < 0. Nonetheless the order of raising and lowering indices will often be suppressed. The same principles, here and in what follows, can be used for d : ℕ → M_n(R), since the brackets never specify with respect to which ring over k they are defined. It will almost always be M_n(R), though context is the biggest clue.

For any l, k > 0, Λ^{ln}Υ^{kn} equals Λ^{(l−k)n} if l ≥ k, and Υ^{(k−l)n} if l < k. For any d : ℕ → M_n(R) interpreted as a diagonal, using the shorthand 0 = 0_{n×n}, 1 = Id_{n×n}:
\[
[\Lambda^{ln}, d] = \operatorname{diag}_{[l,n]}\big(d^{[+l]} - d\big), \qquad
[\Upsilon^{ln}, d] = \operatorname{diag}_{[-l,n]}\big(d - d^{[+l]}\big),
\]
\[
\Lambda^{ln}d = d^{[+l]}\Lambda^{ln} \quad\&\quad \Upsilon^{ln}d^{[+l]} = d\,\Upsilon^{ln}. \tag{2.3}
\]
The commutator of Λ, Υ, for l, k ≥ 0, is
\[
[\Lambda^{l}, \Upsilon^{k}] = \sum_{i,j\ge0}\big(E_{j,j+l}E_{i+k,i} - E_{i+k,i}E_{j,j+l}\big)
= \operatorname{diag}_{[l-k]}(1) - \sum_{i\ge0}E_{i+k,i+l}
= \operatorname{diag}_{[l-k]}\Big(1 - \big(1^{[+m]}\big)^{[-m]}\Big), \quad m = \min(l,k), \tag{2.4}
\]
so the correction term is supported on the first min(l, k) entries.

Now, bringing it all together for diagonals d, e and integers l, k ≥ 0:
\[
[d\Lambda^{l}, \Upsilon^{k}e] = d\Lambda^{l}\Upsilon^{k}e - \Upsilon^{k}(ed)\Lambda^{l},
\]
which for l ≥ k equals
\[
d\,e^{[+(l-k)]}\Lambda^{l-k} - (ed)^{[-k]}\Lambda^{l-k} = \operatorname{diag}_{[l-k]}\big(d\,e^{[+(l-k)]} - (ed)^{[-k]}\big),
\]
and for l < k equals
\[
\Upsilon^{k-l}d^{[+(k-l)]}e - \Upsilon^{k-l}(ed)^{[-l]} = \operatorname{diag}_{[l-k]}\big(d^{[+(k-l)]}e - (ed)^{[-l]}\big).
\]

§2.2 Invertibility in LT(R)

Just like multiplication with ℕ × ℕ matrices, invertibility is not a straightforward property either. Surprisingly much from finite matrices carries over. We start from very general matrices in LT(R) and unrestricted lower triangular matrices to consider invertibility therein. The lack of structure on which cases allow invertibility forces simplifications, setting the stage for a few simple lemmas.

Decompose an unrestricted lower triangular matrix A into s_i × s_j blocks, i, j ≥ 0: A = (A_{i,j}^{s_i,s_j}), A_{i,j} ∈ M_{s_i×s_j}(R). For s_i = 1 one recovers the previous notation M_ℕ(R), and s_i = n gives M_ℕ(M_n(R)). In this notation, A can be written down as
\[
A = (A_{i,j}^{s_i,s_j}) = \begin{pmatrix}
A_{0,0}^{s_0,s_0} & A_{0,1}^{s_0,s_1} & A_{0,2}^{s_0,s_2} & \cdots\\
A_{1,0}^{s_1,s_0} & A_{1,1}^{s_1,s_1} & A_{1,2}^{s_1,s_2} & \cdots\\
A_{2,0}^{s_2,s_0} & A_{2,1}^{s_2,s_1} & A_{2,2}^{s_2,s_2} & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}.
\]

Remark. Note that even if A_{i,j}^{s_i,s_j} is lower triangular, the same matrix A interpreted as having R-entries can be unrestricted. Consider s_i = i and $(A_{i,i}^{s_i,s_i})_{a,b} = \delta_{a+b,\,i-1}$, i.e. a 'flipped' identity on each diagonal block, with 0 elsewhere ($A_{i,j}^{s_i,s_j} = 0_{s_i\times s_j}$ for i ≠ j). In this notation it could naively be considered a zero-degree matrix w.r.t. this decomposition. However, with R-entries, any diagonal k > 0 has a 1, due to all the non-zero blocks $A_{k+i,k+i}^{s_{k+i},s_{k+i}}$, i > 0. Nonetheless these sorts of lower triangular matrices have a well-defined product, as they have a highest 'diagonal' of s_i × s_j matrices.

First let us limit ourselves to matrices with zeroes above the {A_{i,i}^{s_i,s_i}}, an analogue of sorts to the zero diagonal in LT(R). In an effort to derive some properties for the desired invertible elements, we start with a few calculations in the finite matrices. Given A_{i,i}^{s_i,s_i} ∈ M_{s_i}(R)^*, consider two blocks of lower-triangular matrices:
\[
A\hat A = \begin{pmatrix}A_{0,0}^{s_0,s_0} & 0_{s_0\times s_1}\\ A_{1,0}^{s_1,s_0} & A_{1,1}^{s_1,s_1}\end{pmatrix}
\begin{pmatrix}\big(A_{0,0}^{s_0,s_0}\big)^{-1} & 0_{s_0\times s_1}\\ \hat A_{1,0}^{s_1,s_0} & \big(A_{1,1}^{s_1,s_1}\big)^{-1}\end{pmatrix}
= \begin{pmatrix}\mathrm{Id}&0\\0&\mathrm{Id}\end{pmatrix};
\]
\[
\hat A_{1,0}^{s_1,s_0} = -\big(A_{1,1}^{s_1,s_1}\big)^{-1}A_{1,0}^{s_1,s_0}\big(A_{0,0}^{s_0,s_0}\big)^{-1}
\]
solves the equation. For larger $\big(\sum_{i=0}^{N}s_i\big)\times\big(\sum_{j=0}^{N}s_j\big)$ matrices, consisting of N + 1 blocks of sizes {s_i}, we again start our solution with $\hat A_{i,i+j}^{s_i,s_{i+j}} = 0_{s_i\times s_{i+j}}$ for j > 0 and $\hat A_{i,i}^{s_i,s_i} = \big(A_{i,i}^{s_i,s_i}\big)^{-1}$.

The product (A\hat A)_{i,j} with i > j gives the equation
\[
0_{s_i\times s_j} = \sum_{k=0}^{N} A_{i,k}^{s_i,s_k}\hat A_{k,j}^{s_k,s_j}.
\]
Even in the ℕ × ℕ case this would be a sum of finitely many non-zero terms, as $A_{i,i+d}^{s_i,s_{i+d}} = \hat A_{j-d,j}^{s_{j-d},s_j} = 0$ for d > 0, so only the i ≥ k ≥ j contribute. The data required to construct $\hat A_{i,j}^{s_i,s_j}$ then is the given A, and the $\hat A_{k,j}^{s_k,s_j}$ for k < i. Under the assumption that these have been constructed, there is a unique solution:
\[
\hat A_{i,j}^{s_i,s_j} = -\big(A_{i,i}^{s_i,s_i}\big)^{-1}\sum_{k=j}^{i-1} A_{i,k}^{s_i,s_k}\hat A_{k,j}^{s_k,s_j}.
\]

Then by induction all entries of \hat A can be found in finitely many steps. This constructs the off-diagonal elements $\{\hat A_{i,j}^{s_i,s_j}\}_{i>j}$ for ℕ × ℕ matrices. With $\hat AA = A\hat A = \mathrm{Id}$ from the unique finite case, this \hat A is the unique left and right inverse in M_ℕ(R). Thus we have found the invertible elements of the form
\[
A = (A_{i,j}^{s_i,s_j}) = \begin{pmatrix}
A_{0,0}^{s_0,s_0} & 0_{s_0\times s_1} & 0_{s_0\times s_2} & \cdots\\
A_{1,0}^{s_1,s_0} & A_{1,1}^{s_1,s_1} & 0_{s_1\times s_2} & \cdots\\
A_{2,0}^{s_2,s_0} & A_{2,1}^{s_2,s_1} & A_{2,2}^{s_2,s_2} & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}, \quad\text{such that } A_{i,i}^{s_i,s_i} \in M_{s_i}(R)^*,
\]
with no constraints on the other terms. In the context of the Lax equations we will explore in this thesis, we only need a subset of these matrices. To that end define P_−, containing all degree-zero invertible lower triangular elements in M_ℕ(M_n(R)):
\[
A = (A_{i,j}) = \begin{pmatrix}
A_{0,0} & 0 & 0 & \cdots\\
A_{1,0} & A_{1,1} & 0 & \cdots\\
A_{2,0} & A_{2,1} & A_{2,2} & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\]
with A_{i,i} ∈ M_n(R)^* ensuring invertibility.

This makes P_− a well-behaved multiplicative group. We will also utilise the following subgroup:
\[
U_- := \{A \in P_- \mid \text{the leading diagonal is } \mathrm{Id},\ \text{i.e. } A_{i,i} = \mathrm{Id} \in M_n(R)\}. \tag{2.5}
\]

Remark. While large, this class is far from the entire class of left and right-invertible lower triangular matrices, but it contains all instances where everything below the zero-diagonal is unconstrained. Adaptability below the zero diagonal will be a recurring requirement, see the propositions in this chapter.

More explicitly, non-zero terms above the diagonal, e.g. $A_{i,i+1} \neq 0_{s_i\times s_{i+1}}$, put a restraint on the elements below the diagonal to keep things invertible; the rows need to remain orthogonal. A simple example hereof in the finite case M_2(M_2(R)) is:
\[
\begin{pmatrix}\mathrm{Id} & A_{0,1}\\ A_{1,0} & \mathrm{Id}\end{pmatrix}
= \begin{pmatrix}1&0&0&0\\0&1&1&0\\0&1&1&0\\0&0&0&1\end{pmatrix}.
\]
Since the second and third rows are equal, it is not invertible. Extended to M_ℕ(M_2(R)) with A_{i,i} = Id, and all other A_{i,j} = 0_{2×2} beyond (0,1) and (1,0), it would remain degenerate. Thus the group P_− is thorough if we require unrestrained behaviour below the diagonal.

This was degree 0; let us consider the case of arbitrary degree, where the leading diagonal of A is invertible: there is an m ∈ ℕ with either Υ^{mn}A ∈ P_− or AΛ^{mn} ∈ P_−. Finite matrices will again provide a convenient handle on the infinite case. From them we can conclude that an invertible leading diagonal is not equivalent to invertibility, but also what the exact condition is.

Any 2 × 2 block matrix with matrix entries A ∈ M_{n×n}(R)^*, B ∈ M_{n×m}(R), C ∈ M_{m×n}(R), D ∈ M_{m×m}(R),
\[
\begin{pmatrix}A & B\\ C & D\end{pmatrix},
\]
is invertible iff det(D − CA^{−1}B) ≠ 0 [Bernstein, 2009, prop. 2.8.3, p. 107], i.e. the rank of D − CA^{−1}B is m. In the example in the remark above (n = m = 2), we see that
\[
D - CA^{-1}B = \mathrm{Id} - \begin{pmatrix}0&1\\0&0\end{pmatrix}\mathrm{Id}\begin{pmatrix}0&0\\1&0\end{pmatrix} = \begin{pmatrix}0&0\\0&1\end{pmatrix},
\]
so the condition bears out: $\begin{pmatrix}\mathrm{Id} & A_{0,1}\\ A_{1,0} & \mathrm{Id}\end{pmatrix}$ is not invertible.

Consider a matrix A = (A_{i,j}) ∈ M_ℕ(M_n(R)) of degree l with leading diagonal of invertible elements: A_{i,j} = 0 for j − i > l and A_{i,i+l} ∈ M_n(R)^*, or in the previous notation s_i = n. We want to study when it is invertible. In the case of M_ℤ(M_n(R)) it would already be invertible; in our case the trivial counter-example is Λ^n, which has no left inverse [HPS12].

Define $(A_{i,j})_{i,j<2l} =: \begin{pmatrix}B_1 & B_2\\ B_3 & B_4\end{pmatrix} =: B$, with B_i ∈ M_l(M_n(R)) ≃ M_{nl}(R). As the diagonal of B_2 consists of the A_{i,i+l} ∈ M_n(R)^*, with zeroes above, it is invertible. After the columns are exchanged,
\[
\begin{pmatrix}B_1 & B_2\\ B_3 & B_4\end{pmatrix}\begin{pmatrix}0 & \mathrm{Id}\\ \mathrm{Id} & 0\end{pmatrix} = \begin{pmatrix}B_2 & B_1\\ B_4 & B_3\end{pmatrix},
\]
this is invertible iff B_2 and B_3 − B_4B_2^{−1}B_1 are.

However, this only considers the smaller 2l × 2l upper block of A. So for m ≥ 1 let
\[
(A_{i,j})_{i,j<(m+1)l} =: \begin{pmatrix}B_1(m) & B_2(m)\\ B_3(m) & B_4(m)\end{pmatrix} =: B(m),
\]
with B_2(m) the upper-right ml × ml block $((A_{i,j})_{i<ml,\ l\le j<(m+1)l})$ and B_3(m) the lower-left l × l block $((A_{i,j})_{ml\le i<(m+1)l,\ j<l})$, leaving B_1(m), B_4(m) to be the ml × l, l × ml blocks respectively. Again B_2(m) has the invertible elements A_{i,i+l} as its diagonal, with nothing above, so it is invertible. Then B(m) is invertible iff B_3(m) − B_4(m)(B_2(m))^{−1}B_1(m) is. So A is invertible iff all B_3(m) − B_4(m)(B_2(m))^{−1}B_1(m) are for sufficiently large m ('eventually true').

Concisely, the l-diagonal being invertible is insufficient; one also needs clear conditions on the B_3(m) − B_4(m)(B_2(m))^{−1}B_1(m). So if we want behaviour below the highest diagonal to be unconstrained, the degree has to be zero: we end up in P_−.

Remark. Still, this is not the most general form, where the $A_{i,j}^{s_i,s_j}$ are of odd shapes. The argument generalises, save for the notational tapestry it would weave.

To illustrate the following concepts and lemmas, consider the powers of
\[
\Delta := \Lambda^{n} - \mathrm{Id}, \qquad \Delta^{k} = \sum_{0\le j\le k}(-1)^{k-j}\binom{k}{j}\Lambda^{jn}.
\]
It is not invertible in LT; the upper triangular matrix $\sum_{j\ge0}-\Lambda^{nj}$, with $-\mathrm{Id}_{n\times n}$ on the non-negative diagonals, does work. To approximate the inverse, consider $\nabla := \sum_{j\le-1}\Upsilon^{-jn}$. The relations resemble those for Λ and Υ:
\[
\Delta\nabla = \mathrm{Id}, \qquad \nabla\Delta = \mathrm{Id} - \sum_{k\in n\mathbb{N}}E_{k,0}, \qquad [\Delta,\nabla] = \sum_{k\in n\mathbb{N}}E_{k,0}.
\]

Since the degree is less than zero, ∇^k is well-defined, and
\[
\nabla^{k} = \prod_{i=1}^{k}\Big(\sum_{k_i>0}\Upsilon^{k_in}\Big) = \Upsilon^{kn} + \text{lower order in } \Upsilon^{n}.
\]
For any A ∈ M_ℕ(R), we say it has finite ∇-degree m if
\[
A = \sum_{j\le m}a_j\Delta^{j} + \sum_{j>\max(0,-m)}\nabla^{j}a_{-j}.
\]

This gives the following result:

Lemma 2.2.1. Any A ∈ M_ℕ(R) is of degree m if, and only if, it is of ∇-degree m. Specifically, for A = Σ_{j≤m} diag_{[j,n]}(a_j) one has a unique set {α_j}_{j≤m} such that
\[
A = \sum_{j\le m}\alpha_j\Delta^{j} + \sum_{j>\max(0,-m)}\nabla^{j}\alpha_{-j}.
\]
Moreover a_m = α_m.

(24)

Lemma 2.2.2. An A ∈ LT of degree l > 0 (resp. l < 0) with leading diagonal consisting of invertible elements from M_n(R)^* has a right-inverse A_R^{−1} of order −l s.t. A_R^{−1}A = Id^{[+nl]} plus lower order (resp. a left-inverse A_L^{−1} of order −l s.t. AA_L^{−1} = Id^{[−ln]}).

Proof. For degree l > 0 write A as $(\Gamma\ \ \mathbb{A})$, with $\mathbb{A}$ invertible and Γ some element of M_{ℕ×nl}(R). Setting
\[
A_R^{-1} := \begin{pmatrix}0_{nl\times\mathbb{N}}\\ \mathbb{A}^{-1}\end{pmatrix}
\]
gives $AA_R^{-1} = \mathrm{Id}$ and
\[
A_R^{-1}A = \begin{pmatrix}0_{nl\times\mathbb{N}}\\ \mathbb{A}^{-1}\end{pmatrix}\begin{pmatrix}\Gamma & \mathbb{A}\end{pmatrix}
= \begin{pmatrix}0_{nl\times nl} & 0_{nl\times\mathbb{N}}\\ (\mathbb{A}^{-1}\Gamma)_{\mathbb{N}\times nl} & \mathrm{Id}\end{pmatrix}
= \mathrm{Id}^{[+nl]} + \text{lower order}.
\]
The negative orders l < 0 are similar, i.e.
\[
AA_L^{-1} = \begin{pmatrix}0_{nl\times\mathbb{N}}\\ \mathbb{A}\end{pmatrix}\begin{pmatrix}0_{\mathbb{N}\times nl} & \mathbb{A}^{-1}\end{pmatrix}
= \begin{pmatrix}0_{nl\times nl} & 0_{nl\times\mathbb{N}}\\ 0_{\mathbb{N}\times nl} & \mathrm{Id}\end{pmatrix}
= \mathrm{Id}^{[-nl]}.
\]

Theorem 2.2.3. Any Λ_0 ∈ LT of n×n-block degree 1, $\Lambda_0 = a_1\Lambda^{n} + \sum_{k\ge0}\Upsilon^{nk}a_{-k}$, with invertible leading n×n diagonal a_1, can be written as Λ^n conjugated with some P_0 ∈ P_−:
\[
\Lambda_0 = P_0\Lambda^{n}P_0^{-1}.
\]
Remark. When P_0 ∈ U_− ⊂ P_−, we call it U_0.

Proof. Let (Λ_0)_{i,j} =: l_{i,j} ∈ M_n(R); per the conditions, l_{i,i+1} ∈ M_n(R)^*. To construct a solution, write (P_0)_{i,j} =: b_{i,j} ∈ M_n(R) and (P_0^{−1})_{i,j} =: c_{i,j} ∈ M_n(R). With c_{i,i} = b_{i,i}^{−1}, the c_{i,j} are uniquely determined through the {b_{n,m}}_{n≤i, m≤j}. The result is
\[
\Lambda_0 =: \begin{pmatrix}
l_{0,0} & l_{0,1} & & \\
l_{1,0} & l_{1,1} & l_{1,2} & \\
l_{2,0} & l_{2,1} & l_{2,2} & l_{2,3}\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\overset{?}{=} P_0\Lambda^{n}P_0^{-1}
= \begin{pmatrix}
b_{0,0} & & & \\
b_{1,0} & b_{1,1} & & \\
b_{2,0} & b_{2,1} & b_{2,2} & \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\begin{pmatrix}
c_{1,0} & b_{1,1}^{-1} & & \\
c_{2,0} & c_{2,1} & b_{2,2}^{-1} & \\
c_{3,0} & c_{3,1} & c_{3,2} & \ddots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\]
\[
l_{i,j} \overset{?}{=} (P_0\Lambda^{n}P_0^{-1})_{i,j} = \sum_{k=j-1}^{i} b_{i,k}c_{k+1,j} \qquad \forall i,j \in \mathbb{N}. \tag{2.6}
\]
The goal is to choose the {b_{i,j}}_{i,j}, and thereby the {c_{i,j}}. Easiest is the invertible leading diagonal, $l_{i,i+1} = b_{i,i}b_{i+1,i+1}^{-1}$. Not only can it be solved, there is a degree of freedom in b_{0,0} ∈ M_n(R)^*.

We will now show we can solve any row i if the previous rows have been computed; for the induction we must show that, given {b_{l,k}}_{l≤i} (and thus {c_{l,k}}_{l≤i}), we can compute the row {b_{i+1,k}} (and thus {c_{i+1,k}}). The right-hand side of equation (2.6) is the product of the invertible b_{i,i} times c_{i+1,j}, together with the known other terms {b_{i,k}c_{k+1,j}}_{k<i}. We get
\[
c_{i+1,j} := b_{i,i}^{-1}\Big(l_{i,j} - \sum_{k=j}^{i} b_{i,k-1}c_{k,j}\Big).
\]
In turn c_{i+1,j} determines b_{i+1,j}, and induction can be applied. Any entry b_{i,j} is therefore computable in finitely many steps and is, up to b_{0,0} ∈ M_n(R)^*, unique.
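The first step of this induction, recovering the diagonal of P_0 from the leading diagonal via $l_{i,i+1} = b_{i,i}b_{i+1,i+1}^{-1}$, can be illustrated for n = 1 on a truncation. In the sketch below P is a random element of P_− (the choice of P and the truncation size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
Lam = np.eye(N, k=1)
# a random P0 in P_-: lower triangular with invertible diagonal
P = np.tril(rng.standard_normal((N, N)), k=-1) + np.diag(rng.uniform(1., 2., N))
L = P @ Lam @ np.linalg.inv(P)

# leading diagonal of L: l_{i,i+1} = b_{i,i} / b_{i+1,i+1}   (n = 1, scalars)
sup = np.diag(L, k=1)
b = np.empty(N)
b[0] = P[0, 0]                  # the free choice b_{0,0}
for i in range(N - 1):
    b[i + 1] = b[i] / sup[i]    # solve b_{i+1,i+1} from l_{i,i+1}
assert np.allclose(b, np.diag(P))
```

Starting from b_{0,0} = P_{0,0}, the recursion reproduces the full diagonal of P, matching the uniqueness statement up to that free choice.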

In the notation of Theorem 2.2.3 and the following remark, given U_0 ∈ U_−, define
\[
\Lambda_0 := U_0\Lambda^{n}U_0^{-1}, \tag{2.7}
\]
\[
\Upsilon_0 := U_0\Upsilon^{n}U_0^{-1}. \tag{2.8}
\]
Conjugating equation (2.4) by U_0 produces

\[
[\Lambda_0, \Upsilon_0] = U_0[\Lambda^{n}, \Upsilon^{n}]U_0^{-1} = U_0\Big(\sum_{i=0}^{n-1}E_{i,i}\Big)U_0^{-1} = \sum_{i=0}^{n-1}E_{i,i} = [\Lambda^{n}, \Upsilon^{n}].
\]

§2.3 Integrable hierarchies in LT(R)

With Λ, Λ_0, Υ, Υ_0 as above, the algebra LT can be split along degrees. Any A ∈ LT can be written, with a_j ∈ Dia_n(R), as
\[
A = \sum_{j\ge0}^{\deg A} a_j\Lambda^{nj} + \sum_{j>0}\Upsilon^{nj}a_{-j},
\]
where of course, in case the degree is negative, the last sum runs over j ≥ max(1, −deg A); the higher-indexed terms {a_j}_{j>deg A} are zero.

We want the same sort of expression of A w.r.t. Λ_0 and Υ_0, now with the diagonals in Dia_n^{U_0}(R) := U_0 Dia_n(R) U_0^{−1}. This can be seen as expressing the 'natural' decomposition of U_0LT(R)U_0^{−1} instead of that of LT(R)—even if they are the exact same algebras. If α_j = U_0a_jU_0^{−1} ∈ Dia_n^{U_0}(R), we get
\[
A = \sum_{j\ge0}^{\deg A} a_j\Lambda^{nj} + \sum_{j>0}\Upsilon^{nj}a_{-j}
\quad\text{and}\quad
U_0AU_0^{-1} = \sum_{j\ge0}^{\deg A}\alpha_j\Lambda_0^{j} + \sum_{j>0}\Upsilon_0^{j}\alpha_{-j}.
\]
The notation can be made a little neater by defining, for any d ∈ Dia_n^{U_0},
\[
\operatorname{diag}'_{[j,n]}(d) := \begin{cases} d\Lambda_0^{j} & \text{for } j \ge 0,\\ \Upsilon_0^{-j}d & \text{for } j < 0, \end{cases} \tag{2.9}
\]


which allows us the shorter
\[
A = \sum_{j\le\deg(A)}\operatorname{diag}'_{[j,n]}(\alpha_j).
\]
Any lower triangular A can be expressed for some particular a_j ∈ Dia_n, α_j ∈ Dia_n^{U_0} as
\[
A = \sum_{j\le\deg A}\operatorname{diag}_{[j,n]}(a_j) \quad\text{or}\quad A = \sum_{j\le\deg A}\operatorname{diag}'_{[j,n]}(\alpha_j). \tag{2.10}
\]

To separate any A as expressed above based on Λ, Υ, consider the projections {π_{n,∗0}}, ∗ ∈ {<, >, ≤, ≥}, e.g. $\pi_{n,\le0}(A) = \sum_{j\ge\max(0,-\deg A)}\Upsilon^{nj}a_{-j}$. Likewise, define the projections {π′_{n,∗0}} w.r.t. Λ_0, with ∗ ∈ {<, >, ≤, ≥} and $\pi'_{n,>0}(A) = \sum_{j>0}\alpha_j\Lambda_0^{j}$. Since the definition gives Dia_n^{U_0} ∋ α_j ∉ Dia_n, the composition π_{n,<0} ∘ π′_{n,>0} is not identically zero for non-trivial U_0. From this point onwards, any A expressed as $\sum\alpha_j\Lambda_0^{j} + \sum\Upsilon_0^{j}\alpha_{-j}$ will employ α_j ∈ Dia_n^{U_0}.

The algebra itself splits along these projections; let LT_{n,$0}(Λ_0) := π′_{n,$0}(LT) and LT_{n,$0} := π_{n,$0}(LT):
\[
\mathrm{LT}_{n,\le0}\oplus\mathrm{LT}_{n,>0} = \mathrm{LT} = \mathrm{LT}_{n,\ge0}\oplus\mathrm{LT}_{n,<0},
\]
\[
\mathrm{LT}_{n,\le0}(\Lambda_0)\oplus\mathrm{LT}_{n,>0}(\Lambda_0) = \mathrm{LT} = \mathrm{LT}_{n,\ge0}(\Lambda_0)\oplus\mathrm{LT}_{n,<0}(\Lambda_0).
\]
All the work above was to ensure LT_{n,$0}(Λ_0) = U_0LT_{n,$0}U_0^{−1}. This would have failed with α_j ∈ Dia_n for $ ∈ {>, ≥}.

Since deg(AB) ≤ deg(A) + deg(B), deg(A^n) = n deg(A) (the characteristic is zero), etc., the splits above are closed under multiplication and Lie brackets, making them Lie subalgebras of LT. The choice of U_0 only alters the complements of LT_{n,≤0}(Λ_0), LT_{n,<0}(Λ_0).

Now we will discuss for each of the decompositions introduced above the corresponding deformations of commutative Lie subalgebras and the equations that the generators of these algebras have to satisfy. We start with the description of the commutative algebras.

Let h be a maximal commutative subalgebra of gl_n(k), with k-basis {E_α}_{1≤α≤h}. These are subject to the relations, expressed in Einstein notation,
\[
E_\alpha E_\beta = \sum_{\gamma=1}^{h} h^{\gamma}_{\alpha\beta}E_\gamma = h^{\gamma}_{\alpha\beta}E_\gamma, \qquad \mathrm{Id} = i^{\alpha}E_\alpha.
\]
By commutativity of h, $h^{\gamma}_{\alpha\beta} = h^{\gamma}_{\beta\alpha}$. Examples of h generators {E_α}_α:

(I) E_i = E_{i,i} ∈ M_n(R), with relations E_iE_j = δ_{i,j}E_i, Σ_α E_α = Id.

(II) E_i = E_{i,i} − E_{i+1,i+1} for i < h, and E_h = Id.


(27)

(III) For 1 ≤ α ≤ n,
\[
E_\alpha := \begin{pmatrix}
0&1&0&\cdots&0\\
0&0&1&\ddots&\vdots\\
\vdots&&\ddots&\ddots&0\\
0&&&0&1\\
0&0&\cdots&0&0
\end{pmatrix}^{\alpha-1} \in M_n(k).
\]
The relations are $h^{\gamma}_{\alpha,\beta} = \delta_{\gamma-1,\alpha+\beta-2}$ and $i_\alpha = \delta_{\alpha,1}$ (since E_1 = Id).
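Reading the displayed matrix as the truncated (nilpotent) shift, the stated relations can be checked directly; a sketch with the arbitrary choice n = h = 5:

```python
import numpy as np

n, h = 5, 5
Nshift = np.eye(n, k=1)      # the n x n truncated shift
E = [np.linalg.matrix_power(Nshift, a - 1) for a in range(1, h + 1)]

assert np.array_equal(E[0], np.eye(n))   # E_1 = Id, so i_alpha = delta_{alpha,1}
for a in range(1, h + 1):
    for b in range(1, h + 1):
        prod = E[a - 1] @ E[b - 1]
        if a + b - 1 <= h:
            # E_a E_b = E_{a+b-1}: h^gamma = delta_{gamma-1, a+b-2}
            assert np.array_equal(prod, E[a + b - 2])
        else:
            assert np.array_equal(prod, np.zeros((n, n)))
```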

(IV) The dimension of h need not equal n. A good example showing a maximal commutative h ⊂ M_n(k) of dimension below n was introduced by Courter (1965, p. 227), with dim(h) = 13 < n = 14. It has as basis E_13 := Id together with twelve further generators E_1, …, E_12; the general element Σ_{α=1}^{12} e_αE_α is a 14×14 matrix, linear in the parameters e_1, …, e_12, whose non-zero entries surround a 12×12 zero block (Courter's paper gives the explicit arrangement). In general we can also have dim h > n.

(V) Any maximal commutative h ⊂ M_n(k) and g ∈ GL_n(k) give ghg^{−1}, another maximal commutative subalgebra.

(VI) For an example where dim h attains the maximal dimension, consider in M_{2n}(k) the matrices of the form, for any A ∈ M_n(k),
\[
\bar A := \begin{pmatrix}\mathrm{Id}_{n\times n} & A\\ 0_{n\times n} & \mathrm{Id}_{n\times n}\end{pmatrix}.
\]
The generators are the identity and $\bar E_{i,j} := \begin{pmatrix}0 & E_{i,j}\\ 0 & 0\end{pmatrix}$, with {E_{i,j}} the basis of M_n(k) and thus i, j ∈ {1, …, n}. Together they fulfil dim h = n² + 1.
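Example VI can be verified numerically; a sketch with the arbitrary choice n = 3 and random coefficients:

```python
import numpy as np

n = 3

def Ebar(i, j):
    """Block matrix (0 E_{i,j}; 0 0) in M_{2n}(k)."""
    M = np.zeros((2 * n, 2 * n))
    M[i, n + j] = 1.
    return M

basis = [np.eye(2 * n)] + [Ebar(i, j) for i in range(n) for j in range(n)]
assert len(basis) == n * n + 1          # dim h = n^2 + 1

rng = np.random.default_rng(2)
X = sum(c * B for c, B in zip(rng.standard_normal(len(basis)), basis))
Y = sum(c * B for c, B in zip(rng.standard_normal(len(basis)), basis))
assert np.allclose(X @ Y - Y @ X, 0)    # h is commutative
```

The products of the corner blocks all vanish, which is why arbitrary elements of the span commute.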

(28)

For the upper bound there exists the classical result dim h ≤ ⌊n²/4⌋ + 1 [Schur, 1905; Mirzakhani, 1998], achieved in example VI.

We have a natural embedding i_n of M_n(R) elements into the n×n-block diagonals Dia_n(R), by choosing all blocks equal to A ∈ M_n(R):
\[
i_n(A) := \begin{pmatrix}A&&&\\&A&&\\&&A&\\&&&\ddots\end{pmatrix}.
\]
The primary use is the elements
\[
F_\alpha := i_n(E_\alpha) = \begin{pmatrix}E_\alpha&&&\\&E_\alpha&&\\&&E_\alpha&\\&&&\ddots\end{pmatrix}
\quad\text{for } 1 \le \alpha \le h.
\]

This has led us to a commutative subalgebra h[Λ^n] of LT_{n,≥0}(R), defined as generated by Λ^n and the {F_α}_{1≤α≤h}. We will develop the notion of a Lax hierarchy as related to this subalgebra. Likewise, we will develop a strict hierarchy based on the following commutative subalgebra of LT_{n,>0}(R):
\[
h'[\Lambda^{n}] := k\left\{\begin{pmatrix}
0_{n\times n}&E_\alpha&&\\
&0_{n\times n}&E_\alpha&\\
&&0_{n\times n}&\ddots\\
&&&\ddots
\end{pmatrix}\right\}_{1\le\alpha\le h} = h[\Lambda^{n}]\Lambda^{n}.
\]

We consider the twisted versions of the Lie algebras h[Λ^n] and h′[Λ^n], obtained by conjugating the whole algebras by a constant element U_0 ∈ U_− as in Theorem 2.2.3. This gives the algebras
\[
h^{U_0}[\Lambda_0] := U_0h[\Lambda^{n}]U_0^{-1} \subset \mathrm{LT}_{n,\ge0}(R), \tag{2.11}
\]
\[
h'^{U_0}[\Lambda_0] := U_0h'[\Lambda^{n}]U_0^{-1} = h^{U_0}[\Lambda_0]\Lambda_0 \subset \mathrm{LT}_{n,>0}(R), \tag{2.12}
\]
\[
\mathcal{E}_\alpha := U_0F_\alpha U_0^{-1} \text{ the distinguished generators.} \tag{2.13}
\]

This brings us to the two main settings in which Lax Equations will be considered by further deforming with U− and P−.

Definition 2.3.1. The collection of L and the K_α constitutes a U_−-deformation: L := UΛ_0U^{−1} = UU_0Λ^{n}U_0^{−1}U^{−1}, U ∈ U_−. For the generators $\mathcal{E}_\alpha$ we get K_α := U$\mathcal{E}_\alpha$U^{−1}. Lastly, Y := UΥ_0U^{−1}.

In this setting, we continue to keep the relations
\[
K_\alpha K_\beta = U\mathcal{E}_\alpha\mathcal{E}_\beta U^{-1} = h^{\gamma}_{\alpha\beta}K_\gamma, \qquad \mathrm{Id}_{\mathbb{N}} = i^{\alpha}K_\alpha,
\]
and
\[
L - \Lambda_0 \in \mathrm{LT}_{n,\le0} \quad\&\quad K_\alpha - \mathcal{E}_\alpha \in \mathrm{LT}_{n,<0}.
\]

Definition 2.3.2. The M_α constitute a P_−-deformation: M_α := P$\mathcal{E}_\alpha$Λ_0P^{−1}, P ∈ P_−, and M := i^αM_α = Pi^α$\mathcal{E}_\alpha$Λ_0P^{−1} = PΛ_0P^{−1}. To complete expressions, W := PΥ_0P^{−1}.

Remark. M takes a back-seat to the M_α, as opposed to the first setting, where L and the K_α both secure first-class seats.

One of the first deformations will be by U_−(R) = {Id + A | A ∈ LT_{n,<0}} ∋ U_0. A deeper description can be found through the bijection
\[
\mathrm{LT}_{n,<0} \xrightarrow{\ \exp\ } U_- \xrightarrow{\ \log\ } \mathrm{LT}_{n,<0}.
\]
Here log ∘ exp(A) = A is a well-defined computation, since the infinite sums involved only contain terms of degree < 0, and hence every entry is finitely computable. This establishes an informal relation between the two, though the bijection requires commutativity to relate group actions in the group U_− and the algebra LT_{n,<0}.

We cannot establish a similar relation between LT_{n,≤0} and P_−: log has multiple values if our field is ℂ, and exp need not converge within the given R and is not injective, e.g. exp(A + 2πi Id) = exp(A). Nonetheless we informally associate the group P_− with the algebra LT_{n,≤0}.

The U_−, P_− will be used to deform the basic commutative algebra k[Λ_0], Λ_0 = U_0Λ^{n}U_0^{−1} ∈ M_ℕ(R). To create more complex cases, we return to the M_n(R).
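The finiteness behind log ∘ exp(A) = A is visible on truncations: a strictly lower triangular N×N matrix is nilpotent, so both series terminate. A sketch with the arbitrary choice N = 6:

```python
import numpy as np
from math import factorial

N = 6
rng = np.random.default_rng(3)
A = np.tril(rng.standard_normal((N, N)), k=-1)   # degree < 0: strictly lower

# A^N = 0, so exp and log are finite sums: every entry is finitely computable
expA = sum(np.linalg.matrix_power(A, k) / factorial(k) for k in range(N))
B = expA - np.eye(N)                 # exp(A) = Id + B lies in U_-
logA = sum((-1) ** (k + 1) * np.linalg.matrix_power(B, k) / k
           for k in range(1, N))
assert np.allclose(logA, A)          # log(exp(A)) = A
```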

2.3.1 Lax Equations

A Lax equation is an equality of two k-linear derivations of M_ℕ(R), ∂_γ(A^m) = [A^m, γ]. The guiding principle in finding such equations will be splitting LT: into LT_{n,<0} ⊕ LT_{n,≥0} for the U_−-deformation {K_α} & L, and into LT_{n,≤0} ⊕ LT_{n,>0} for the P_−-deformation {M_α}. Then we want our ∂_γ to maintain the decomposition outside the kernel where ∂_γ(·) = 0, so for any A ∈ Dia_n^{U_0}(R) there is some corresponding B ∈ Dia_n^{U_0}(R) with:

For L:
\[
\partial_\gamma(A(\Lambda_0)) = B(\Lambda_0), \qquad \partial_\gamma(\mathrm{LT}_{n,<0}(\Lambda_0)) \subset \mathrm{LT}_{n,<0}(\Lambda_0),
\]
\[
\partial_\gamma(L^{n+1}) = \sum_{j=0}^{n}L^{j}\partial_\gamma(L)L^{n-j}, \qquad
\partial_\gamma(Y^{n+1}) = \sum_{j=0}^{n}Y^{j}\partial_\gamma(Y)Y^{n-j}. \tag{2.14}
\]
For M:
\[
\partial_\gamma(A(\Lambda_0)) = B(\Lambda_0), \qquad \partial_\gamma(\mathrm{LT}_{n,\le0}(\Lambda_0)) \subset \mathrm{LT}_{n,\le0}(\Lambda_0),
\]
\[
\partial_\gamma(M^{n+1}) = \sum_{j=0}^{n}M^{j}\partial_\gamma(M)M^{n-j}, \qquad
\partial_\gamma(W^{n+1}) = \sum_{j=0}^{n}W^{j}\partial_\gamma(W)W^{n-j}. \tag{2.15}
\]

The derivations are then completely determined by their values on the degree 0, 1 and −1 elements, with special interest in Λ_0, Υ_0. As seen in how our U_0, U and γ are determined, manipulations are made with the intent to keep the highest term Λ in either M or L, whereas the twisting of the negative-degree terms is entirely unconstrained. The twisting is done in two steps (U_0 and then U or P), to allow varying behaviours at each step. This is explicitly expressed in the additional requirement $\partial_\gamma(\Lambda_0^{m}) = 0$.

In a way all these properties are inherited from the model this hopes to extend; see [HPP19; HPP15; Helminck, 2017a; Helminck, 2017b, etc.]. There the derivations {∂_γ} on the algebra of pseudodifferential operators, expressed as ℤ × ℤ lower triangular matrices corresponding to our LT, are inherited from the R-derivations ∂_γ : R → R, which are extended to work termwise on M_ℤ(R):
\[
\partial_\gamma[(A)_{i,j}] := \begin{pmatrix}
\ddots & \ddots & \ddots & \ddots\\
\ddots & \partial_\gamma A_{i,j} & \partial_\gamma A_{i,j+1} & \ddots\\
\ddots & \partial_\gamma A_{i+1,j} & \partial_\gamma A_{i+1,j+1} & \ddots\\
\ddots & \ddots & \ddots & \ddots
\end{pmatrix}. \tag{2.16}
\]

Example (h = R, n = 1 for convenience). To illustrate what lies beyond simply applying the derivatives entrywise as in (2.16), it is useful to show a derivation distinct from the per-term extension, applicable to any ring R and a collection of derivations {∂_γ : R → R}_{γ∈Γ}. Let us state the most obvious example, where R = k[[t_1, …, t_m]] for some field k, with the expected derivations $\partial_i := \partial_{t_i}$, 1 ≤ i ≤ m. Then for some diagonal matrix M ∈ M_ℕ(R) with entries M(j) ∈ R, consider the following class of derivations:
\[
(D_i(M))(j) := \partial_{i+j \bmod m}(M(j)).
\]
More compactly, the natural derivations are applied cyclically. In fact, consider any g : ℕ → k^m, $g(j) = \sum_{k=1}^{m}g(j)_k\partial_k$. This gives the broadest
\[
(D_g(M))(j) := \sum_{k=1}^{m}g(j)_k\partial_k(M(j)).
\]
So one gets a space of derivations on M_ℕ(R) of at least countably infinite dimension—significantly more than the finitely many one would get from the entrywise extension of the {∂_i}.

Back to the more general ring R with derivations {∂_γ : R → R}_{γ∈Γ}: this procedure gives the choice of which derivation to use on each entry. On A ∈ LT(Λ_0) it would work as follows,
\[
D_g(A) := \sum_{i\le\deg A}\operatorname{diag}'_{[i]}(D_g(a_i)).
\]
Given V ∈ U_−, any D_g as above gives rise to a $D_g^{V}(\cdot) := VD_g(\cdot)V^{-1}$. This no longer satisfies the Λ_0-degree conditions, though it stays diagonal w.r.t. VΛ_0V^{−1}.
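That each D_g is indeed a derivation (the Leibniz rule holds entrywise on diagonals) can be checked in a toy model of R = k[t_1, t_2]. The polynomial representation, the choice of g and the diagonals below are all arbitrary illustrations:

```python
# R = k[t1, t2] represented as dicts {(a, b): coeff}
def pmul(p, q):
    r = {}
    for (a1, b1), c1 in p.items():
        for (a2, b2), c2 in q.items():
            r[(a1 + a2, b1 + b2)] = r.get((a1 + a2, b1 + b2), 0.0) + c1 * c2
    return r

def padd(p, q):
    r = dict(p)
    for mon, c in q.items():
        r[mon] = r.get(mon, 0.0) + c
    return r

def partial(k, p):
    # formal partial derivative w.r.t. t_{k+1}
    r = {}
    for exps, c in p.items():
        if exps[k] > 0:
            e = list(exps); e[k] -= 1
            r[tuple(e)] = r.get(tuple(e), 0.0) + c * exps[k]
    return r

m = 2
g = [[1.0, 0.0], [0.0, 1.0], [2.0, 3.0], [1.0, 1.0]]   # a choice of g: N -> k^m

def Dg(M):
    # entry j of the diagonal gets the derivation sum_k g(j)_k d_k
    out = []
    for j, p in enumerate(M):
        term = {}
        for k in range(m):
            for mon, c in partial(k, p).items():
                term[mon] = term.get(mon, 0.0) + g[j][k] * c
        out.append(term)
    return out

M = [{(1, 0): 1.0}, {(2, 1): 2.0}, {(0, 2): 1.0, (1, 0): 3.0}, {(1, 1): 1.0}]
Q = [{(0, 1): 1.0}, {(1, 0): 1.0}, {(1, 1): 2.0}, {(2, 0): 1.0}]
MQ = [pmul(a, b) for a, b in zip(M, Q)]

# Leibniz: Dg(MQ) = Dg(M) Q + M Dg(Q), entrywise on the diagonals
lhs = Dg(MQ)
rhs = [padd(pmul(da, b), pmul(a, db))
       for a, b, da, db in zip(M, Q, Dg(M), Dg(Q))]

def norm(p):
    return {mon: c for mon, c in p.items() if abs(c) > 1e-12}

assert all(norm(a) == norm(b) for a, b in zip(lhs, rhs))
```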


The h^{U_0}[Λ_0]-hierarchy for {K_α}, L

Define for j ≥ 0, 1 ≤ α ≤ h:
\[
K_{j,0} := \pi'_{n,\ge0}(i^{\alpha}K_\alpha L^{j}) = \Lambda_0^{j} + \text{lower order in } \Lambda_0, \tag{2.17}
\]
\[
K_{j,\alpha} := \pi'_{n,\ge0}(K_\alpha L^{j}) = K_\alpha L^{j} - \pi'_{n,<0}(K_\alpha L^{j}) = \mathcal{E}_\alpha\Lambda_0^{j} + \text{l.o. in } \Lambda_0. \tag{2.18}
\]

Lemma 2.3.3. Consider any Q ∈ LT_{n,≥0}(Λ_0) such that [Q, K_αL^l] is of lesser degree than L^l, for all 1 ≤ α ≤ h, l ≥ 0. Then Q is an R-sum of the K_{j,β}.

Proof. In particular, the requirements on [Q, K_αL^l] imply that the degree l + deg(Q) term in [Q, K_αL^l], which equals that of [Q, L^l], is zero, and we will use $K_\alpha L^{l} = \mathcal{E}_\alpha\Lambda_0^{l} + \text{lower degree terms}$ to simplify. Any finite degree Q ∈ LT_{n,≥0}(Λ_0), deg Q = r, can be written as $\sum_{j=0}^{r}U_0q_jU_0^{-1}\Lambda_0^{j}$, q_j ∈ Dia_n. The requirement on Q's relations with L, for the highest order term, is as follows:
\[
[Q, L^{l}] = \big[U_0q_rU_0^{-1}\Lambda_0^{r} + \pi'_{n,<r}(Q),\ \Lambda_0^{l} + \dots\big]
= U_0[q_r\Lambda^{nr}, \Lambda^{nl}]U_0^{-1} + \sum_{j<l+r}\dots\Lambda_0^{j}
\]
\[
= U_0\big(q_r\Lambda^{n(l+r)} - \Lambda^{nl}q_r\Lambda^{nr}\big)U_0^{-1} + \dots
= U_0\big(q_r - q_r^{[+l]}\big)\Lambda^{n(l+r)}U_0^{-1} + \dots
\]
By the requirement, the order is less than l + r, so $q_r - q_r^{[+l]} = 0$ for all l ≥ 1; this implies $q_r = q_r^{[+l]}$, i.e. q_r is of the form
\[
q_r = \begin{pmatrix}
\tilde q_r & 0_{n\times n} & 0_{n\times n} & \cdots\\
0_{n\times n} & \tilde q_r & 0_{n\times n} & \cdots\\
0_{n\times n} & 0_{n\times n} & \tilde q_r & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}, \qquad \tilde q_r \in M_n(R).
\]
The α terms separately impose $q_r\Lambda^{nr}F_\alpha\Lambda^{nl} = F_\alpha\Lambda^{nl}q_r\Lambda^{nr}$. Since $F_\alpha\Lambda^{n} = \Lambda^{n}F_\alpha$, this reduces to $\tilde q_rE_\alpha = E_\alpha\tilde q_r^{[+l]} = E_\alpha\tilde q_r$. The only elements commuting with h in M_n(R) are from the R-span of h itself^{iii}, so $\tilde q_r \in Rh \subset M_n(R)$. Vice versa, any $\tilde q_r \in Rh$ has the properties to make the top degree of Q disappear if its highest term is $U_0i_n(\tilde q_r)U_0^{-1}\Lambda_0^{r}$:
\[
i_n(\tilde q_r) = i_n(\tilde q_r)^{[+l]}, \qquad \tilde q_rE_\alpha = E_\alpha\tilde q_r \quad 1 \le \alpha \le h.
\]
Now $Q' := Q - \sum_\beta c_\beta K_{r,\beta}$, where $\tilde q_r = \sum_\beta c_\beta E_\beta$, is of degree r − 1, and the same argument applies to find that Q is some element in the R-span of the K_{j,β}.

This establishes the form of the Q allowed for the commutator term in the Lax equation: they are sums of the centraliser elements K_{i,α} of the algebra h^{U_0}[Λ_0]. By linearity it suffices to look only at these generators.

^{iii} Else we could add one matrix to h which would commute with h, but by self-commutation this would contradict the maximality of h.

Definition 2.3.4. The Lax equations of the h^{U_0}[Λ_0]-hierarchy are the series of relations, i ≥ 0, α, β ∈ {1, …, h}:
\[
\partial_{i,\alpha}(L) = [K_{i,\alpha}, L] = [L, \pi'_{n,<0}(K_\alpha L^{i})], \tag{2.19}
\]
\[
\partial_{i,\alpha}(K_\beta) = [K_{i,\alpha}, K_\beta] = [K_\beta, \pi'_{n,<0}(K_\alpha L^{i})]. \tag{2.20}
\]
The data (R, {∂_{i,α}}_{i≥0, 1≤α≤h}) is called a setting for the h^{U_0}[Λ_0]-hierarchy. The matrices {K_α}, L form a solution of the h^{U_0}[Λ_0]-hierarchy.

As an equality of derivations, it follows that $\partial_{i,\alpha}(L^{m}) = [K_{i,\alpha}, L^{m}]$ and, crucially,
\[
\partial_{i,\alpha}(L^{m}K_\beta) = [K_{i,\alpha}, K_\beta L^{m}]. \tag{2.21}
\]

Lemma 2.3.5. For a setting (R, {∂_{i,α}}_{i≥0, 1≤α≤h}), the V ∈ U_− such that ∪_{i,α}∂_{i,α}(V) = {0} transform, by conjugation, the solutions {K_α}, L of the h^{U_0}[Λ_0]-hierarchy into solutions {VK_αV^{−1}}, VLV^{−1} of the h^{VU_0}[VΛ_0V^{−1}]-hierarchy.

Proof. First, for all appropriate i, α, β:
\[
\partial_{i,\alpha}(VLV^{-1}) = \partial_{i,\alpha}(V)LV^{-1} + V\partial_{i,\alpha}(L)V^{-1} + VL\partial_{i,\alpha}(V^{-1}) = V\partial_{i,\alpha}(L)V^{-1}
= V[K_{i,\alpha}, L]V^{-1} = [VK_{i,\alpha}V^{-1}, VLV^{-1}],
\]
\[
\partial_{i,\alpha}(VK_\beta V^{-1}) = \partial_{i,\alpha}(V)K_\beta V^{-1} + V\partial_{i,\alpha}(K_\beta)V^{-1} + VK_\beta\partial_{i,\alpha}(V^{-1}) = V\partial_{i,\alpha}(K_\beta)V^{-1}
= V[K_{i,\alpha}, K_\beta]V^{-1} = [VK_{i,\alpha}V^{-1}, VK_\beta V^{-1}].
\]
Then, for projections π′_{n,≥0} taken w.r.t. VU_0Λ^{n}U_0^{−1}V^{−1} instead of the usual π′_{n,≥0} w.r.t. U_0Λ^{n}U_0^{−1} = Λ_0,
\[
V\pi'_{n,\ge0}(K_{i,\alpha})V^{-1} = V\Big(\mathcal{E}_\alpha\Lambda_0^{i} + \sum_{j=0}^{i-1}k_j^{(i,\alpha)}\Lambda_0^{j}\Big)V^{-1} = \pi'_{n,\ge0}(VK_{i,\alpha}V^{-1}).
\]
This fits the settings where the new terms are simply conjugated with V, e.g. K′_α := VK_αV^{−1}, L′ := VLV^{−1}, etc., just as they were with U_0.

The strict h′^{U_0}[Λ_0]-hierarchy, in {M_α}

Recall the M_α are a P_−-deformation: M_α := P$\mathcal{E}_\alpha$Λ_0P^{−1}, P ∈ P_−, and M := i^αM_α = Pi^α$\mathcal{E}_\alpha$Λ_0P^{−1} = PΛ_0P^{−1}. To complete expressions, W = PU_0Υ^{n}U_0^{−1}P^{−1} = PΥ_0P^{−1}.

Define for j > 0, 1 ≤ α ≤ h:
\[
N_{j,\alpha} := \pi'_{n,>0}(M_\alpha M^{j-1}) = M_\alpha M^{j-1} - \pi'_{n,\le0}(M_\alpha M^{j-1}) = \mathcal{E}_\alpha\Lambda_0^{j} + \text{lower order}, \tag{2.22}
\]
\[
N_{j,0} := \pi'_{n,>0}(M^{j}) = M^{j} - \pi'_{n,\le0}(M^{j}) = \Lambda_0^{j} + \text{lower order}. \tag{2.23}
\]

Lemma 2.3.6. Any Q ∈ LT_{n,>0}(Λ_0) with the property that, for all l ≥ 1 and 1 ≤ α ≤ h, [Q, M_αM^{l−1}] is of lesser degree than M^{l}, is an R-sum of the N_{j,β}.

Proof. The major difference with Lemma 2.3.3 lies between M and L, where one now has to account for the Dia_n^{U_0} terms in the former. However, as the decomposition of Q will also reflect that, we can nonetheless conclude
\[
[Q, M^{l}] = (\dots)\big(q_{\deg Q} - q_{\deg Q}^{[+l]}\big)(\dots)\Lambda^{n(l+\deg Q)} + \text{lower order}.
\]
The rest is the same.

The Q allowed for the commutator term in the Lax equation are sums of the N_{i,α}. By linearity it suffices to look only at these generators.

Definition 2.3.7. The Lax equations of the strict h-hierarchy are the series of relations, i ≥ 1, α, β ∈ {1, …, h}:
\[
\partial_{i,\alpha}(M_\beta) = [N_{i,\alpha}, M_\beta] = [M_\beta, \pi'_{n,\le0}(M_\alpha M^{i-1})]. \tag{2.24}
\]
The data (R, {∂_{i,α}}_{i≥1, 1≤α≤h}) is called a setting for the strict h-hierarchy. The fitting P, U_0 and resulting N_{i,α} determine a solution.

The last identity shows that [N_{i,α}, M_β] is of degree ≤ 1, like ∂_{i,α}(M_β). By linearity, $i^{\beta}\partial_{i,\alpha}(M_\beta) = \partial_{i,\alpha}(M) = [N_{i,\alpha}, M]$. Both sides are derivations, so for m ≥ 1 these lead to the additional equalities $\partial_{i,\alpha}(M^{m}) = [N_{i,\alpha}, M^{m}]$ and
\[
\partial_{i,\alpha}(M_\beta M^{m-1}) = [N_{i,\alpha}, M_\beta M^{m-1}]. \tag{2.25}
\]

Lemma 2.3.8. For a setting (R, {∂_{i,α}}_{i≥1, 1≤α≤h}), the V ∈ U_− such that ∪_{i,α}∂_{i,α}(V) = {0} transform, by conjugation, the solutions {M_α} of the strict h′^{U_0}[Λ_0]-hierarchy into solutions {VM_αV^{−1}} of the strict h′^{VU_0}[VΛ_0V^{−1}]-hierarchy.

Lemma 2.3.9. For a setting (R, {∂_{i,α}}_{i≥1, 1≤α≤h}) with solutions N_{i,α}, we have, for any invertible V with ∪_{i,α}∂_{i,α}(V) = {0}, a bijection of solutions by dressing the M_α with V.

Proof. For any appropriate i, α, β:
\[
\partial_{i,\alpha}(VM_\beta V^{-1}) = \partial_{i,\alpha}(V)M_\beta V^{-1} + V\partial_{i,\alpha}(M_\beta)V^{-1} + VM_\beta\partial_{i,\alpha}(V^{-1}) = V\partial_{i,\alpha}(M_\beta)V^{-1}
= V[N_{i,\alpha}, M_\beta]V^{-1} = [VN_{i,\alpha}V^{-1}, VM_\beta V^{-1}].
\]
Define a new projection π′_{n,>0} w.r.t. VU_0Λ^{n}U_0^{−1}V^{−1}, rather than the usual π′_{n,>0} w.r.t. Λ_0 = U_0Λ^{n}U_0^{−1}. Then
\[
V\pi'_{n,>0}(N_{i,\alpha})V^{-1} = V\big(\mathcal{E}_\alpha\Lambda_0^{i} + \text{lower order}\big)V^{-1} = \pi'_{n,>0}(VN_{i,\alpha}V^{-1}).
\]
Both these steps work without alteration for M = i^βM_β:
\[
\partial_{i,\alpha}(VMV^{-1}) = [VN_{i,\alpha}V^{-1}, VMV^{-1}] \quad\&\quad V\pi'_{n,>0}(N_{i,0})V^{-1} = \pi'_{n,>0}(VN_{i,0}V^{-1}).
\]
This then is simply a solution {VN_{i,α}V^{−1}}, P, VU_0 in the same setting. The {∂_{i,α}} are
