Asymptotic properties of matrix differential operators

Citation for published version (APA):

Brands, J. J. A. M., & Hautus, M. L. J. (1980). Asymptotic properties of matrix differential operators. (Memorandum COSOR; Vol. 8017). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1980

Document Version:

Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)



Department of Mathematics

PROBABILITY THEORY, STATISTICS AND OPERATIONS RESEARCH GROUP

Memorandum COSOR 80-17

Asymptotic properties of matrix differential operators

by

J.J.A.M. Brands and M.L.J. Hautus

Eindhoven, The Netherlands, November 1980


ABSTRACT

For given matrices A(s) and B(s) whose entries are polynomials in s, the validity of the following implication is investigated:

    ∀y:  lim_{t→∞} A(D)y(t) = 0  ⇒  lim_{t→∞} B(D)y(t) = 0.

Here D denotes the differentiation operator and y stands for a sufficiently smooth vector valued function. Necessary and sufficient conditions on A(s) and B(s) for this implication to be true are given.

A similar result is obtained in connection with an implication of the form

    ∀y:  A(D)y(t) → 0,  B(D)y(t) → 0,  C(D)y(t) is bounded  ⇒  lim_{t→∞} E(D)y(t) = 0.

1. INTRODUCTION

In 1914 E. Landau published a paper [3] of which one of the results can be stated as follows: "If f is twice differentiable on (0,∞), lim_{x→∞} f(x) exists and f'' is bounded, then lim_{x→∞} f'(x) = 0." This work was inspired by a paper [2] of G.H. Hardy and J.E. Littlewood in which almost the same result can be found, the only difference being the extra condition that f'' is continuous. Still earlier, an analogous result was given in [1], published in 1911. There has been considerable interest since 1914 in quantitative results (i.e. results about order of growth and best constants), also initiated in [2] and [3]. For a survey see [4].

Our aim is to generalize the qualitative results, mentioned at the beginning of this section, to vector functions and linear differential operators. Specifically, we want to investigate questions of the following type: Let p, q and r be polynomials and D the differentiation operator d/dt. When is it true that, if for any sufficiently often differentiable function y we have p(D)y(t) → 0 (t → ∞) and q(D)y(t) is bounded, then r(D)y(t) → 0 (t → ∞)?

This problem will also be extended to the case where p, q and r are matrices with polynomial entries. A complete characterization is given of the polynomial matrices for which the above question can be answered in the affirmative. Using this criterion, one can for instance answer questions like: If y and z are functions on [0,∞) and y' - 3y - z'' → 0 (t → ∞), y'' - y → 0 (t → ∞) and z is bounded, does it follow that z'' → 0 (t → ∞)? (Here we assume that the derivatives mentioned exist.)

Our main results are given in Theorems 1 and 2. In Section 2 these theorems are formulated and some examples are given.

The results will also have significance for the theory of observers of linear systems, and this will be reported upon in a subsequent paper.

2. RESULTS

In this paper ℂ denotes the set of complex numbers, ℂ⁺ the set of complex numbers z with Re z > 0, σ the closure of ℂ⁺, and Π the set of complex numbers z with Re z = 0 (so that Π = σ \ ℂ⁺). We denote by ℂ[s] the ring of polynomials in s over ℂ, and by ℂ(s) the set of rational functions in s over ℂ, which is the quotient field of ℂ[s]. If Λ ⊆ ℂ then ℂ_Λ(s) denotes the set of rational functions in ℂ(s) which have no pole in Λ. If Λ = {a}, where a ∈ ℂ, we simply write ℂ_a(s). ℂ_∞(s) denotes the proper rational functions in ℂ(s), i.e. if p(s), q(s) ∈ ℂ[s] then p(s)/q(s) ∈ ℂ_∞(s) iff degree p(s) ≤ degree q(s). The elements of ℂ_σ(s) are called stable rational functions. ℂ_{σ,∞}(s) := ℂ_σ(s) ∩ ℂ_∞(s). Let W ⊆ ℂ(s) and r(s) ∈ ℂ(s). Then by r(s)W we denote the set of rational functions {r(s)w(s) | w(s) ∈ W}. For example, (s - α)ℂ_α(s) is the set of all rational functions with a zero in α. If S is some set then S^k denotes the set of k-column vectors with entries in S, and S^{k×ℓ} denotes the set of k × ℓ matrices with entries in S.

In Theorems 1 and 2 the symbol y exclusively denotes a function (0,∞) → ℂ^m, sufficiently smooth to allow for the differentiations. D denotes d/dt, D² denotes d²/dt², etc. All order symbols are with respect to t tending to infinity.

THEOREM 1. Let A ∈ ℂ^{n×m}[s] and B ∈ ℂ^{r×m}[s]. Then the following statements are equivalent.

(1₁) ∀y: A(D)y = o(1) ⇒ B(D)y = o(1).
(1₂) ∀y: A(D)y = O(1) ⇒ B(D)y = O(1).
(1₃) ∀y: A(D)y = o(1) ⇒ B(D)y = O(1).

(2) ∃M ∈ ℂ_{σ,∞}^{r×n}(s): MA = B.

(3) (i)  ∀α ∈ σ ∃M ∈ ℂ_α^{r×n}(s): MA = B.
    (ii) ∃M ∈ ℂ_∞^{r×n}(s): MA = B.

(4) (i)  ∀α ∈ σ ∀p ∈ ℂ^m(s): Ap ∈ ℂ_α^n(s) ⇒ Bp ∈ ℂ_α^r(s).
    (ii) ∀p ∈ ℂ^m(s): Ap ∈ ℂ_∞^n(s) ⇒ Bp ∈ ℂ_∞^r(s).

REMARK. We actually have (3)(i) ⇔ (4)(i) and (3)(ii) ⇔ (4)(ii).

REMARK. If rank A(s) = m (s ∈ σ) then (3)(i) holds trivially. If rank B(s) = m (s ∈ σ) then rank A(s) = m (s ∈ σ) is obviously necessary for (3)(i) to hold.

REMARK. Theorem 1 (and also Theorem 2) remains true if the matrices A, B, C and E are rational instead of polynomial and expressions such as A(D)y = o(1) are interpreted in distributional sense. For further details we refer to the beginnings of Sections 4 and 5.

In all examples illustrating Theorems 1 and 2 it is assumed, without further indication, that all occurring functions are sufficiently smooth, and all order symbols are meant for t → ∞.

EXAMPLE 1. Let pᵢ ∈ ℂ[s], i = 1,...,k. Then we have

    ∀y:  p₁(D)y = o(1), ..., p_k(D)y = o(1)  ⇒  y = o(1)

if and only if

(*)  for every α ∈ σ at least one pᵢ(α) ≠ 0,

or, equivalently, gcd(p₁(s),...,p_k(s)) has no zeros in σ.

This result is a consequence of Theorem 1 since, obviously, condition (3) is fulfilled iff (*) holds.
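The gcd criterion can be checked mechanically. The following sketch (sympy-based; the function name is ours) tests whether gcd(p₁,...,p_k) has all its zeros in the open left half-plane, i.e. none in σ:

```python
import sympy as sp

s = sp.symbols('s')

def forces_y_to_zero(polys):
    """Example 1 criterion: p_i(D)y -> 0 for all i forces y -> 0
    iff gcd(p_1,...,p_k) has no zeros with Re >= 0."""
    g = polys[0]
    for p in polys[1:]:
        g = sp.gcd(g, p)
    g = sp.Poly(g, s)
    if g.degree() <= 0:            # constant gcd: no zeros at all
        return True
    return all(sp.re(a) < 0 for a in g.nroots())

assert forces_y_to_zero([s - 1, s + 2])                          # coprime
assert forces_y_to_zero([s + 1])                                 # only zero is -1
assert not forces_y_to_zero([(s - 1)*(s + 2), (s - 1)*(s + 3)])  # common zero at 1
```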

EXAMPLE 2.

    y' - y = o(1),  y' - 3y - z'' = o(1),  z = o(1)  ⇒  y = o(1).

To prove this we introduce

    A(s) = [ s - 1    0  ]
           [   0      1  ]          B(s) = [1  0] ,    ỹ = (y, z)ᵀ .
           [ s - 3   -s² ] ,

Then the result can be stated as follows:

    A(D)ỹ = o(1)  ⇒  B(D)ỹ = o(1).

Since A(s) has rank 2 for all s ∈ σ, condition (3)(i) is obviously satisfied. Also (3)(ii) offers no difficulty.
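A polynomial multiplier M with MA = B can in fact be written down from the identity (s-1) - (s-3) = 2; a quick symbolic check (A and B as read from the display above; the specific M is our own choice, not from the memorandum):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[s - 1, 0], [0, 1], [s - 3, -s**2]])
B = sp.Matrix([[1, 0]])
# one polynomial solution of M A = B, built from (s-1) - (s-3) = 2
M = sp.Matrix([[sp.Rational(1, 2), -s**2/2, -sp.Rational(1, 2)]])
assert sp.simplify(M*A - B) == sp.zeros(1, 2)
```

Being polynomial, this M has no poles at all, which settles (3)(i) for every α ∈ σ; for (3)(ii) a proper multiplier has to be found separately.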

THEOREM 2. Let A ∈ ℂ^{n×m}[s], B ∈ ℂ^{k×m}[s], C ∈ ℂ^{ℓ×m}[s], E ∈ ℂ^{r×m}[s]. In the conditions below we will refer to the equation

(*)    MA + NB + LC = E.

Then the following statements are equivalent.

(1) ∀y: [A(D)y = 0, B(D)y = o(1), C(D)y = O(1)] ⇒ E(D)y = o(1).

(2) (i)  For every α ∈ Π equation (*) has a solution
         M ∈ ℂ_{σ,∞}^{r×n}(s), N ∈ ℂ_{σ,∞}^{r×k}(s), L ∈ ((s - α)/(s + 1)) ℂ_{σ,∞}^{r×ℓ}(s).
    (ii) Equation (*) has a solution
         M ∈ ℂ_{σ,∞}^{r×n}(s), N ∈ ℂ_{σ,∞}^{r×k}(s), L ∈ (1/(s + 1)) ℂ_{σ,∞}^{r×ℓ}(s).

(3) (i)  For every α ∈ ℂ⁺ equation (*) has a solution
         M ∈ ℂ_α^{r×n}(s), N ∈ ℂ_α^{r×k}(s), L ∈ ℂ_α^{r×ℓ}(s).
    (ii) For every α ∈ Π equation (*) has a solution
         M ∈ ℂ_α^{r×n}(s), N ∈ ℂ_α^{r×k}(s), L ∈ (s - α) ℂ_α^{r×ℓ}(s).
    (iii) Equation (*) has a solution
         M ∈ ℂ^{r×n}(s), N ∈ ℂ_∞^{r×k}(s), L ∈ (1/s) ℂ_∞^{r×ℓ}(s).

(4) (i)  ∀α ∈ ℂ⁺ ∀p ∈ ℂ^m(s): [Ap ∈ ℂ_α^n(s), Bp ∈ ℂ_α^k(s), Cp ∈ ℂ_α^ℓ(s)] ⇒ Ep ∈ ℂ_α^r(s).
    (ii) ∀α ∈ Π ∀p ∈ ℂ^m(s): [Ap ∈ ℂ_α^n(s), Bp ∈ ℂ_α^k(s), (s - α)Cp ∈ ℂ_α^ℓ(s)] ⇒ Ep ∈ ℂ_α^r(s).
    (iii) ∀p ∈ ℂ^m(s): [Ap = 0, Bp ∈ ℂ_∞^k(s), (1/s)Cp ∈ ℂ_∞^ℓ(s)] ⇒ Ep ∈ ℂ_∞^r(s).

REMARK. We actually have (3)(i) ⇔ (4)(i), (3)(ii) ⇔ (4)(ii) and (3)(iii) ⇔ (4)(iii).

EXAMPLE 1.

    y' - 3y - z'' = 0,  y'' - y = o(1),  z = O(1)  ⇒  z''' = o(1).

It is easily checked that conditions (3) of Theorem 2 are satisfied. We remark that the above statement is not true if y' - 3y - z'' = 0 is replaced by y' - 3y - z'' = o(1), for then condition (3)(iii) cannot be satisfied.

EXAMPLE 2. Take A = 0, E = I and

    B(s) = [ 1   0  -1 ]
           [ s  -1   0 ]          C(s) = [0  1  s] .
           [ 0   s  -1 ] ,

Equation (*) reduces to NB + LC = I. We observe that B⁻¹ exists if s ≠ ±1, since det B = 1 - s². We will check condition (3).

For (3)(i): If α ≠ 1 take N = B⁻¹ and L = 0 (the poles of B⁻¹ are ±1, and -1 ∉ ℂ⁺). If α = 1, then the stacked matrix

    [ C ]
    [ B ]

has maximal column rank for s = 1, hence has a left inverse [L N] with entries in ℂ₁(s), and LC + NB = I.

For (3)(ii) take N = B⁻¹ and L = 0.

For (3)(iii) take N = B⁻¹ and L = 0. Obviously B⁻¹ is proper.
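The claims in this example are easy to verify symbolically. A sketch (the B and C below are our reading of the poorly scanned display, chosen so that det B = 1 - s²):

```python
import sympy as sp

s = sp.symbols('s')
B = sp.Matrix([[1, 0, -1], [s, -1, 0], [0, s, -1]])
C = sp.Matrix([[0, 1, s]])
assert sp.expand(B.det()) == 1 - s**2

# B^{-1} is proper: numerator degree <= denominator degree in every entry
for e in B.inv():
    num, den = sp.fraction(sp.cancel(e))
    assert sp.degree(num, s) <= sp.degree(den, s)

# at s = 1 the stacked matrix [C; B] still has full column rank 3
assert C.col_join(B).subs(s, 1).rank() == 3
```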

EXAMPLE 3. Let n ≥ 2 and let p be a polynomial of degree n, say p(s) = a₀ + a₁s + ... + aₙsⁿ with aₙ ≠ 0. Let x₀, x₁, ..., xₙ be scalar functions on (0,∞). Then

    x₀ = o(1),
    xᵢ - x'ᵢ₋₁ = o(1)   (i = 1,...,n),
    a₀x₀ + a₁x₁ + ... + aₙxₙ = O(1)
    ⇒ x₁ = o(1).

PROOF. We shall apply Theorem 2 with A = 0,

    B(s) = [  1              ]
           [ -s   1          ]
           [      ⋱   ⋱      ]     ((n+1) × (n+1); blank entries are zero),
           [         -s   1  ]

    C(s) = [a₀  a₁  ...  aₙ],   E(s) = [0  1  0  ...  0].

For (3)(i) and (3)(ii) take N = EB⁻¹ and L = 0 in equation (*) of Theorem 2. For (3)(iii) take L = [s/p(s)] and N = (E - LC)B⁻¹. □

REMARK. The above premise implies also x₂ = o(1), ..., xₙ₋₁ = o(1). This can either be proved by induction using the above result or by a direct application of Theorem 2.
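For a concrete instance the multipliers given in the proof can be verified symbolically, including the properness demanded by (3)(iii). A sketch with n = 2 and p(s) = 1 + s² (our choice of p, not from the text):

```python
import sympy as sp

s = sp.symbols('s')
p = s**2 + 1                         # p(s) = a0 + a1 s + a2 s^2 with a0 = 1, a1 = 0, a2 = 1
B = sp.Matrix([[1, 0, 0], [-s, 1, 0], [0, -s, 1]])
C = sp.Matrix([[1, 0, 1]])           # [a0, a1, a2]
E = sp.Matrix([[0, 1, 0]])
L = sp.Matrix([[s/p]])               # the choice L = [s/p(s)] for (3)(iii)
N = sp.simplify((E - L*C) * B.inv())

# N B + L C = E, as required by equation (*) with A = 0
assert sp.simplify(N*B + L*C - E) == sp.zeros(1, 3)

# properness: every entry of N and of s*L is a proper rational function
for entry in list(N) + list(s*L):
    num, den = sp.fraction(sp.cancel(entry))
    assert sp.degree(num, s) <= sp.degree(den, s)
```

Note that s·L = s²/p(s) is proper precisely because degree p = n ≥ 2, which is where the hypothesis n ≥ 2 enters.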

A SPECIAL CASE. Let p be a polynomial with degree p ≥ 2. Then

    p(D)y = O(1),  y = o(1)  ⇒  y' = o(1)

(apply Example 3 with xᵢ = y^{(i)}).

3. PRELIMINARIES

In order to avoid troublesome details about differentiability and to make an algebraic treatment possible, we use the tool of distribution theory. We introduce some notations and recall a few definitions. (For details see for example [5].) By C we denote the set of functions f : ℝ → ℂ such that f(t) = 0 (t ≤ 0), f is continuous on (0,∞) and lim_{t↓0} f(t) exists. Let L denote the set of distributions u with supp u ⊆ [0,∞). C_b is the set of distributions u ∈ L such that u = v on (0,∞) for some bounded function v ∈ C, and C₀ is the set of distributions u ∈ L such that u = v on (0,∞) for some v ∈ C with the property v(t) = o(1) (t → ∞). Clearly C₀ ⊂ C_b ⊂ L. If u ∈ L and v ∈ L then the convolution u * v ∈ L will simply be written uv instead of u * v and, similarly, u² instead of u * u. Well-known distributions are 1 = δ and s = δ', with supp δ = supp δ' = {0}. For every u ∈ L we have u' = su. Let u ∈ L. Then supp u ⊆ {0} if and only if u is a polynomial in s. As a consequence, every u ∈ C_b can be written as u = p + v, where p is some polynomial in s and v is some bounded function in C. Also, every u ∈ C₀ is the sum of a polynomial in s and a function v ∈ C with the property v(t) = o(1) (t → ∞). Let a ∈ ℂ. Then (s - a)⁻¹ can be identified with the function f ∈ C defined by f(t) = e^{at} (t > 0), f(t) = 0 (t ≤ 0).

LEMMA 3.1. The following four statements are equivalent:

(1) ∀u ∈ L: (s - a)u ∈ C_b ⇒ u ∈ C_b.
(2) ∀u ∈ L: (s - a)u ∈ C₀ ⇒ u ∈ C₀.
(3) ∀u ∈ L: (s - a)u ∈ C₀ ⇒ u ∈ C_b.
(4) Re a < 0.

PROOF OF LEMMA 3.1. Obviously (1) ⇒ (3) and (2) ⇒ (3).

Proof of (4) ⇒ (1). Let (s - a)u ∈ C_b, say (s - a)u = p(s) + v, with p a polynomial in s and v ∈ C bounded. Define q(s) = (p(s) - p(a))/(s - a). Then

    (s - a)⁻¹ p(s) = q(s) + p(a)(s - a)⁻¹ ∈ C₀ .

Furthermore

    ((s - a)⁻¹ v)(t) = ∫₀ᵗ e^{a(t-τ)} v(τ) dτ = O(1)   (t → ∞).

Hence (s - a)⁻¹ v ∈ C_b, and so u ∈ C_b. □

Proof of (4) ⇒ (2). Suppose again that (s - a)u = p(s) + v, but now with v(t) = o(1) (t → ∞). A standard estimation procedure yields ((s - a)⁻¹ v)(t) = ∫₀ᵗ e^{a(t-τ)} v(τ) dτ = o(1) (t → ∞), so that u ∈ C₀. □

Proof of (3) ⇒ (4). Suppose that Re a ≥ 0. Let v ∈ C be defined by v(t) = (t + 1)⁻¹ exp(it Im a) (t > 0). Clearly v ∈ C₀, and

    |((s - a)⁻¹ v)(t)| = e^{t Re a} ∫₀ᵗ (τ + 1)⁻¹ e^{-τ Re a} dτ ≥ ∫₀ᵗ (τ + 1)⁻¹ dτ ,

which is unbounded. Hence (s - a)⁻¹ v ∉ C_b, contradicting (3). □
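The integral representation ((s - a)⁻¹v)(t) = ∫₀ᵗ e^{a(t-τ)} v(τ) dτ behind this proof can be sampled numerically; a sketch (grid sizes and the helper name are our own choices):

```python
import numpy as np

def resolvent(a, v, T=200.0, n=400000):
    """Left-endpoint Riemann sum for ((s-a)^{-1} v)(t) = e^{at} * int_0^t e^{-a tau} v(tau) dtau."""
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    I = np.cumsum(np.exp(-a*t) * v(t)) * dt
    return t, np.exp(a*t) * I

# Re a < 0 and v = o(1): the output decays, as in statement (2)
_, y = resolvent(-1.0, lambda t: 1.0/(1.0 + t))
assert abs(y[-1]) < 0.01

# Re a = 0 with the v used in the proof of (3) => (4): |output| grows like log t
_, y = resolvent(1j, lambda t: np.exp(1j*t)/(1.0 + t))
assert abs(y[-1]) > 4.0
```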

REMARK. The proof of (3) ⇒ (4), just given, has the following form: a function u, analytic on (0,∞), is exhibited which proves ¬(4) ⇒ ¬(3). Therefore Lemma 3.1 remains valid if ∀_{u ∈ L} is replaced by ∀_{u analytic on (0,∞)}.

LEMMA 3.2. The following two statements are equivalent.

(1) ∀u ∈ C_b: (s - a)u ∈ C₀ ⇒ u ∈ C₀.
(2) Re a ≠ 0.

PROOF OF LEMMA 3.2.

Proof of (2) ⇒ (1). If Re a < 0 then (1) is a consequence of Lemma 3.1. Now we suppose that Re a > 0. Let u ∈ C_b be such that (s - a)u ∈ C₀, i.e. (s - a)u = p + v with p a polynomial in s and v ∈ C such that v(t) = o(1) (t → ∞). Hence u = q(s) + b(s - a)⁻¹ + (s - a)⁻¹v, where b = p(a) and q(s) = (s - a)⁻¹(p(s) - p(a)) ∈ ℂ[s]. Since q(s) ∈ C₀ and u ∈ C_b we must have b(s - a)⁻¹ + (s - a)⁻¹v ∈ C_b, i.e.

    e^{at} [ b + ∫₀ᵗ e^{-aτ} v(τ) dτ ]

must be bounded. It follows that b = -∫₀^∞ e^{-aτ} v(τ) dτ, whence, on (0,∞),

    u(t) = -e^{at} ∫ₜ^∞ e^{-aτ} v(τ) dτ = o(1)   (t → ∞),

so that u ∈ C₀.

Proof of (1) ⇒ (2). If Re a = 0 then u := (s - a)⁻¹ ∈ C_b and (s - a)u = 1 ∈ C₀, but u ∉ C₀. □

REMARK. The distribution u = (s - a)⁻¹, representing an analytic function on (0,∞), proves that ¬(2) ⇒ ¬(1). Therefore Lemma 3.2 remains valid if ∀_{u ∈ C_b} is replaced by ∀_{u analytic and bounded on (0,∞)}.

LEMMA 3.3.

(i)  ∀u ∈ C₀: s²u ∈ C_b ⇒ su ∈ C₀.
(ii) ∀u ∈ C_b: s²u ∈ C₀ ⇒ su ∈ C₀.

PROOF OF LEMMA 3.3. In both cases we can write u = p(s) + f and s²u = q(s) + g, where p(s) and q(s) are polynomials and f and g are functions in C. Let q(s) = q₀ + q₁s + ... + qₙsⁿ. Then it follows that

    f(t) = q₀t + q₁ + ∫₀ᵗ (t - τ) g(τ) dτ   (t > 0).

Hence f is twice continuously differentiable, and Lemma 3.3 can be reformulated in terms of twice differentiable functions. We shall prove the following result: Let y be a twice differentiable complex-valued function on (0,∞), and K and M positive non-increasing functions on (0,∞) such that K(t)M(t) = o(1) (t → ∞), |y(t)| ≤ K(t) and |ÿ(t)| ≤ M(t) on (0,∞). Then

    |ẏ(t)| ≤ 2 (K(t)M(t))^{1/2} = o(1)   (t → ∞).

Proof. By Taylor's theorem there exists for every h > 0, t > 0 a number θ ∈ (0,1) such that

    ẏ(t) = (y(t + h) - y(t))/h - ½ h ÿ(t + θh) .

Substituting h = 2 (K(t)/M(t))^{1/2} and taking absolute values, we obtain |ẏ(t)| ≤ 2 (K(t)M(t))^{1/2} = o(1) (t → ∞). □

REMARK. We obtain Landau's result if we take K(t) = o(1) (t → ∞) and M(t) = O(1) (t → ∞).
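The bound |ẏ(t)| ≤ 2(K(t)M(t))^{1/2} can be sanity-checked numerically. A sketch with y(t) = sin t/(t + 1) and hand-computed non-increasing envelopes K and M (this particular example is our own choice, not from the text):

```python
import numpy as np

t = np.linspace(0.0, 50.0, 5001)
# y(t) = sin t / (t+1); its first and second derivatives in closed form
y1 = np.cos(t)/(t + 1) - np.sin(t)/(t + 1)**2
y2 = -np.sin(t)/(t + 1) - 2*np.cos(t)/(t + 1)**2 + 2*np.sin(t)/(t + 1)**3

K = 1.0/(t + 1)     # |y(t)|  <= K(t), positive, non-increasing
M = 5.0/(t + 1)     # |y''(t)| <= M(t), positive, non-increasing

assert np.all(np.abs(y2) <= M + 1e-12)                      # M really dominates y''
assert np.all(np.abs(y1) <= 2.0*np.sqrt(K*M) + 1e-12)       # the lemma's bound
```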

LEMMA 3.4. Let r(s) ∈ ℂ(s). Then r(s) ∈ C_b if and only if every pole a of r(s) has the property that either Re a < 0, or Re a = 0 and the order of a is one.

PROOF OF LEMMA 3.4. Let a₁, ..., aₙ be the poles of r(s) of orders k₁, ..., kₙ, respectively. Then r(s) = p(s) + r̃(s) for some p(s) ∈ ℂ[s] and

    r̃(s) = Σᵢ₌₁ⁿ Σⱼ₌₁^{kᵢ} c(i,j) (s - aᵢ)⁻ʲ ,   with c(i,kᵢ) ≠ 0  (i = 1,...,n).

Hence

    (r̃(s))(t) = Σᵢ₌₁ⁿ e^{aᵢt} Σⱼ₌₁^{kᵢ} c(i,j) t^{j-1}/(j-1)!   (t > 0).

If all poles have the property mentioned in the lemma then clearly r̃(s) is a bounded function on (0,∞).

Now suppose r(s) ∈ C_b. Let ρ := max{Re(aᵢ) | i = 1,...,n} and ν := max{kᵢ - 1 | Re(aᵢ) = ρ}. Let i be such that Re(aᵢ) = ρ and kᵢ - 1 = ν. Then we have

    F(T) := T⁻¹ ∫₁ᵀ t^{-ν} e^{-aᵢt} (r̃(s))(t) dt = c(i,ν+1)/ν! + o(1)   (T → ∞),

where c(i,ν+1) ≠ 0. The supposition that either ρ > 0, or ρ = 0 and ν > 0, leads to

    |F(T)| ≤ T⁻¹ ∫₁ᵀ t^{-ν} e^{-ρt} |(r̃(s))(t)| dt = o(1)   (T → ∞),

a contradiction. □
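Lemma 3.4 turns boundedness of (r(s))(t) into a pure pole test, which is easy to implement. A sketch (function name ours; `sympy.roots` is assumed to find the poles in closed form):

```python
import sympy as sp

s = sp.symbols('s')

def in_Cb(r):
    """Lemma 3.4: r(s) in C_b iff every pole a has Re a < 0,
    or Re a = 0 with order one."""
    num, den = sp.fraction(sp.cancel(r))
    for a, k in sp.roots(sp.Poly(den, s)).items():
        re = sp.re(a)
        if re > 0 or (re == 0 and k > 1):
            return False
    return True

assert in_Cb(1/(s + 1) + 1/s)      # e^{-t} + 1: simple pole on the axis, bounded
assert not in_Cb(1/s**2)           # t: double pole at 0, unbounded
assert not in_Cb(1/(s - 1))        # e^{t}: pole in the right half-plane
```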

LEMMA 3.5. Let r(s) ∈ ℂ(s). Then r(s) ∈ C₀ if and only if r(s) ∈ ℂ_σ(s).

PROOF OF LEMMA 3.5. We proceed as in the proof of Lemma 3.4, ascribing the same meaning to the various symbols. If r(s) ∈ ℂ_σ(s) (i.e. Re(aᵢ) < 0 for i = 1,...,n) then trivially (r̃(s))(t) = o(1) (t → ∞), whence r(s) ∈ C₀. Now suppose that r(s) ∈ C₀. Then of course (r̃(s))(t) = o(1) (t → ∞). Let i be such that Re(aᵢ) = ρ and kᵢ - 1 = ν. Then trivially F(T) = c(i,ν+1)/ν! + o(1) (T → ∞). The supposition that ρ ≥ 0 leads (by standard methods) to F(T) = o(1) (T → ∞), a contradiction. □

LEMMA 3.6. Let p(s) ∈ ℂ[s] and q(s) ∈ ℂ[s]. Then the following four statements are equivalent.

(1) ∀u ∈ L: p(s)u ∈ C_b ⇒ q(s)u ∈ C_b.
(2) ∀u ∈ L: p(s)u ∈ C₀ ⇒ q(s)u ∈ C₀.
(3) ∀u ∈ L: p(s)u ∈ C₀ ⇒ q(s)u ∈ C_b.
(4) r(s) := q(s)(p(s))⁻¹ ∈ ℂ_{σ,∞}(s).

PROOF OF LEMMA 3.6. Obviously (1) ⇒ (3) and (2) ⇒ (3).

Proof of (4) ⇒ (1) ∧ (2). Let a₁, ..., aₙ be the poles of r(s) of orders k₁, ..., kₙ respectively; by (4), Re a₁ < 0, ..., Re aₙ < 0. Then

    r(s) = c + Σᵢ₌₁ⁿ Σⱼ₌₁^{kᵢ} cᵢⱼ (s - aᵢ)⁻ʲ .

Since q(s)u = r(s)(p(s)u), (1) and (2) follow by application of Lemma 3.1.

Proof of (3) ⇒ (4). Suppose that (3) holds. Then degree p(s) ≥ degree q(s). For assume that degree p(s) < degree q(s) =: n. Let v ∈ C be such that v(t) = t^{-n+1/2} e^{it²} (t ≥ 1). Then sᵏv ∈ C₀ for k = 0,1,...,n-1, and sⁿv ∉ C_b. Hence p(s)v ∈ C₀ but q(s)v ∉ C_b, a contradiction.

Without loss of generality we may assume that gcd(p(s), q(s)) = 1. For suppose that p(s) = p₁(s)d(s) and q(s) = q₁(s)d(s) with gcd(p₁(s), q₁(s)) = 1. Putting d(s)u = v in (3) we obtain the equivalent form

    ∀v ∈ L: p₁(s)v ∈ C₀ ⇒ q₁(s)v ∈ C_b.

We henceforth assume that gcd(p(s), q(s)) = 1. Suppose that p(a) = 0 with Re a ≥ 0. We consider the two cases Re a > 0 and Re a = 0. First we assume Re a > 0. Let u ∈ C be such that u(t) = e^{at} (t > 0). Then (p(s)u)(t) = 0 (t > 0). Hence p(s)u ∈ C₀. But q(s)u ∉ C_b, since q(a) ≠ 0. Next, we assume that Re a = 0. Then we choose u ∈ C such that u(t) = e^{at} log(t + 1) (t > 0). Writing p(s) = p₁(s)(s - a) we have (p(s)u)(t) = (p₁(s)(s - a)u)(t) = (p₁(s)ũ)(t), where ũ(t) = (t + 1)⁻¹ e^{at} (t > 0). Clearly (p₁(s)ũ)(t) → 0 (t → ∞). Hence p(s)u ∈ C₀. But (q(s)u)(t) ~ q(a) e^{at} log(t + 1) (t → ∞), whence q(s)u ∉ C_b. So p(s) has no zeros with Re a ≥ 0, which together with degree p ≥ degree q proves (4). □

REMARK. For the same reasons as explained in the remark after the proof of Lemma 3.1, we may conclude that Lemma 3.6 also holds if ∀_{u ∈ L} is replaced by ∀_{u analytic on (0,∞)}.

REMARK. Lemma 3.6 remains true if stated for rational functions p and q. For, let p = p₁/p₂ and q = q₁/q₂, where p₁, p₂, q₁, q₂ are polynomials; then the substitution u = p₂(s)q₂(s)v reduces the rational case to the polynomial case.

REMARK. Several times we shall refer to Lemma 3.6 while we use in fact the following matrix-vector version, the proof of which is obvious.

Let M(s) ∈ ℂ^{n×m}(s). Then the following four statements are equivalent.

(1) ∀u ∈ C_b^m: M(s)u ∈ C_b^n.
(2) ∀u ∈ C₀^m: M(s)u ∈ C₀^n.
(3) ∀u ∈ C₀^m: M(s)u ∈ C_b^n.
(4) M(s) ∈ ℂ_{σ,∞}^{n×m}(s).
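Membership in ℂ_{σ,∞}(s) — the condition appearing in statement (4) — amounts to a properness check plus a pole-location check, entry by entry. A sketch for the scalar case (function name ours):

```python
import sympy as sp

s = sp.symbols('s')

def in_C_sigma_inf(q, p):
    """Lemma 3.6 criterion: r = q/p is proper and all its poles have Re < 0."""
    num, den = sp.fraction(sp.cancel(q/p))
    if sp.degree(num, s) > sp.degree(den, s):
        return False                          # not proper
    return all(sp.re(a) < 0 for a in sp.roots(sp.Poly(den, s)))

assert in_C_sigma_inf(s, s**2 + 2*s + 1)      # s/(s+1)^2: proper, stable
assert not in_C_sigma_inf(s, s**2)            # reduces to 1/s: pole on the axis
assert not in_C_sigma_inf(s**2, s + 1)        # improper
```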

The following elementary result in algebra will be instrumental for the proofs of our main results.

LEMMA 3.7. Let R be a principal ideal domain (PID) which is not a field, and let Q be the quotient field of R. Let A be an n × m matrix with entries in R, i.e. A ∈ R^{n×m}, and let b ∈ R^n. Then the two following statements are equivalent.

(1) ∃x ∈ R^m: Ax = b.
(2) ∀u ∈ Q^n: Aᵀu ∈ R^m ⇒ bᵀu ∈ R.

PROOF OF LEMMA 3.7. Suppose (1) holds. Let x ∈ R^m be a solution of Ax = b and let u ∈ Q^n. Then xᵀAᵀu = bᵀu. If Aᵀu ∈ R^m then obviously bᵀu ∈ R. Hence (1) implies (2).

Now we suppose that (2) holds. We write A in its Smith form A = UDV, where U ∈ R^{n×n} and V ∈ R^{m×m} are invertible over R, and D ∈ R^{n×m} has entries d_{ij} satisfying d_{ij} = 0 for i ≠ j. (Special divisibility properties of the entries d_{ii} of D are not mentioned since they are irrelevant for our proof.) Observe that U⁻¹b =: c ∈ R^n. Furthermore, observing that Uᵀ maps Q^n onto Q^n, we have that (2) is equivalent to the statement

(*) ∀w ∈ Q^n: Dᵀw ∈ R^m ⇒ cᵀw ∈ R.

Let dᵢ (i = 1,...,n) be defined by dᵢ = d_{ii} (the diagonal entries of D) for 1 ≤ i ≤ k := min{n,m}, and dᵢ = 0 for i > k. If dᵢ ≠ 0 then we apply (*) with wᵀ := (0,...,0,dᵢ⁻¹,0,...,0), where dᵢ⁻¹ is the i-th component. We obtain that cᵢ = rᵢdᵢ for some rᵢ ∈ R. If dᵢ = 0 then we apply (*) with wᵀ = (0,...,0,wᵢ,0,...,0), wᵢ ∈ Q arbitrary. It follows that cᵢwᵢ ∈ R for all wᵢ ∈ Q, hence cᵢ = 0 (here it is used that R is not a field). So in either case we have cᵢ = rᵢdᵢ for some rᵢ ∈ R. Let y = (r₁, r₂, ..., r_m)ᵀ ∈ R^m, with rᵢ = 0 for those i with dᵢ = 0. We define x := V⁻¹y. Then x ∈ R^m, and Ax = UDVx = UDy = Uc = b. Hence (2) implies (1). □
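Over R = ℤ the key step of the proof — the componentwise divisibility test on the Smith form — reads as follows (a sketch; the function name is ours):

```python
def solvable_diag(d, c):
    """For D = diag(d_1,...,d_n) in Smith form, Dx = c is solvable over Z
    iff d_i divides c_i, with c_i = 0 forced wherever d_i = 0 -- exactly
    the relations c_i = r_i d_i extracted in the proof via w = d_i^{-1} e_i."""
    return all(ci == 0 if di == 0 else ci % di == 0 for di, ci in zip(d, c))

assert solvable_diag([2, 3], [4, 9])        # x = (2, 3)
assert not solvable_diag([2, 3], [1, 0])    # 2 does not divide 1
assert not solvable_diag([2, 0], [4, 5])    # zero divisor row forces c_2 = 0
```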

REMARK. In fact we shall mostly use the following matrix version of Lemma 3.7, the proof of which is obvious.

Let R be a PID and Q the quotient field of R. Let A ∈ R^{n×m} and B ∈ R^{r×m}. Then the following two statements are equivalent.

(1) ∃M ∈ R^{r×n}: MA = B.
(2) ∀u ∈ Q^m: Au ∈ R^n ⇒ Bu ∈ R^r.

Let R be a commutative integral domain with unity and Q the quotient field of R. Let S ⊆ R have the following properties:

(i) if a ∈ S and b ∈ S then ab ∈ S (S is a multiplicative subset);
(ii) 1 ∈ S, 0 ∉ S.

Then R_S := {p/q ∈ Q | p ∈ R, q ∈ S} is called the localization of R with respect to S.

LEMMA 3.8 (see [8], p. 58). If R is a PID and S a multiplicative subset of R with 1 ∈ S and 0 ∉ S, then R_S is also a PID.

PROOF OF LEMMA 3.8. Let I_S be an ideal in R_S. We define I := {a ∈ R | a/s ∈ I_S for some s ∈ S}. It is easily seen that I is an ideal. Since R is a PID we have I = xR for some x ∈ R. Hence I_S = xR_S. □

APPLICATIONS OF LEMMA 3.8. It is well known that ℂ[s] is a PID. Let Λ ⊆ ℂ, Λ ≠ ∅, and S := {p(s) ∈ ℂ[s] | p(s) ≠ 0 for all s ∈ Λ}. Clearly, S is a multiplicative subset of ℂ[s] with 1 ∈ S and 0 ∉ S. Hence ℂ_Λ(s) = (ℂ[s])_S is a PID. In particular ℂ_σ(s) is a PID.

Let B := {w ∈ ℂ | |w - ½| ≤ ½}. Then ℂ_{σ,∞}(s) is isomorphic to ℂ_B(s) by the isomorphism r(s) ↦ r(1/(s + 1)); note that the map s ↦ 1/(s + 1) carries σ ∪ {∞} onto B. Hence ℂ_{σ,∞}(s) is a PID. By a similar method one can prove that ℂ_∞(s) is a PID.

4. PROOF OF THEOREM 1

We shall prove a slightly modified version of Theorem 1, in which (1₁), (1₂) and (1₃) are changed into

(1₁)' ∀u ∈ L^m: A(s)u ∈ C₀^n ⇒ B(s)u ∈ C₀^r,

and (1₂) and (1₃) analogously. After the proof we shall explain why there is no loss of generality in doing so.

Furthermore, this modified Theorem 1 remains valid if A and B are rational, since this case can be reduced to the polynomial case by the substitution v = (q(s))⁻¹u in (1₁), (1₂) and (1₃), where q ∈ ℂ[s] is such that q(s)A(s) and q(s)B(s) are polynomial.

The proof of Theorem 1 consists of two main parts. In the first part we prove (1₁) ⇔ (2) ⇔ (3) ⇔ (4) and (2) ⇒ (1₂). In the second part we prove (1₃) ⇒ (4). This is sufficient since trivially (1₁) ⇒ (1₃) and (1₂) ⇒ (1₃).

First part of the proof. The proof of (2) ⇒ (3) is trivial.

Proof of (2) ⇒ (1₁) ∧ (1₂). Let A(s)u =: v ∈ C₀^n. Then B(s)u = M(s)A(s)u = M(s)v ∈ C₀^r by Lemma 3.6 (matrix-vector version), since M(s) ∈ ℂ_{σ,∞}^{r×n}(s). The proof of (2) ⇒ (1₂) is similar.

Proof of (4) ⇒ (2). Let p(s) ∈ ℂ^m(s) be such that A(s)p(s) ∈ ℂ_{σ,∞}^n(s). Then A(s)p(s) ∈ ℂ_α^n(s) for all α ∈ σ, whence B(s)p(s) ∈ ℂ_α^r(s) for all α ∈ σ. It follows that B(s)p(s) ∈ ℂ_σ^r(s). Also A(s)p(s) ∈ ℂ_∞^n(s), whence B(s)p(s) ∈ ℂ_∞^r(s). We conclude that B(s)p(s) ∈ ℂ_{σ,∞}^r(s). Hence (2) follows by an application of Lemma 3.7 (the matrix version, with R = ℂ_{σ,∞}(s)).

Proof of (3) ⇒ (4). Let α ∈ σ. Then, by (3)(i), there is a matrix M(s) ∈ ℂ_α^{r×n}(s) such that M(s)A(s) = B(s). Applying Lemma 3.7 (the matrix version) we deduce (4)(i). The proof of (3)(ii) ⇒ (4)(ii) is similar.

Proof of (1₁) ⇒ (4)(i). Let α ∈ σ. Let p(s) ∈ ℂ^m(s) be such that A(s)p(s) ∈ ℂ_α^n(s). We choose q(s) ∈ ℂ[s] in such a way that A(s)p(s)q(s) ∈ ℂ^n[s] and q(α) ≠ 0. Let u := p(s)q(s). Clearly A(s)u ∈ C₀^n. Then, by (1₁), B(s)u = B(s)p(s)q(s) ∈ C₀^r. Then, by Lemma 3.5, we have B(s)p(s)q(s) ∈ ℂ_σ^r(s). It follows that B(s)p(s) ∈ ℂ_α^r(s), since q(α) ≠ 0.

Proof of (1₁) ⇒ (4)(ii). First we prove that (1₁) implies

    ∀p(s) ∈ ℂ^m[s]: degree B(s)p(s) ≤ degree A(s)p(s).

Suppose that degree A(s)p(s) = k. Let y_k(t) := (t + 1)^{-k-1/2} e^{i(t+1)²}. Then d^ℓy_k/dt^ℓ = o(1) (t → ∞) if ℓ ≤ k, and d^ℓy_k/dt^ℓ is unbounded if ℓ > k. We define y = p y_k. Then A(s)y = A(s)p(s) y_k = o(1). Suppose that degree B(s)p(s) > k. Then clearly B(s)y = B(s)p(s) y_k is unbounded, in contradiction with (1₁).

Now suppose that p(s) ∈ ℂ^m(s) is such that A(s)p(s) ∈ ℂ_∞^n(s). Let p₁(s) ∈ ℂ^m[s] and p₂(s) ∈ ℂ[s] be such that p(s) = p₁(s)/p₂(s). Then degree B(s)p₁(s) ≤ degree A(s)p₁(s) ≤ degree p₂(s). Hence B(s)p(s) ∈ ℂ_∞^r(s).
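The growth claims for the test functions y_k(t) = (t + 1)^{-k-1/2} e^{i(t+1)²} can be probed numerically; a sketch for k = 2, comparing magnitudes of the derivatives at two large times:

```python
import sympy as sp

t = sp.symbols('t')
k = 2
y = (t + 1)**(-k - sp.Rational(1, 2)) * sp.exp(sp.I*(t + 1)**2)
for ell in range(k + 2):
    # |d^ell y / dt^ell| behaves like 2^ell (t+1)^(ell - k - 1/2)
    f = sp.lambdify(t, sp.Abs(sp.diff(y, t, ell)), 'numpy')
    if ell <= k:
        assert f(1.0e6) < f(1.0e4)   # o(1): still decaying
    else:
        assert f(1.0e6) > f(1.0e4)   # unbounded growth for ell > k
```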

Second part of the proof.

LEMMA 4.1. Let p(s) = (p₁(s),...,p_m(s))ᵀ ∈ ℂ^m[s]. We define q(s) = (s + 1)^ℓ gcd p(s), where gcd p(s) = gcd(p₁(s),...,p_m(s)) and ℓ = degree p(s) - degree gcd p(s). Then

    ∀u ∈ L: p(s)u ∈ C₀^m ⇔ q(s)u ∈ C₀.

PROOF OF LEMMA 4.1. Apply twice the equivalence (1₁) ⇔ (2). □

Proof of (1₃) ⇒ (4)(i). Let α ∈ σ. Let p(s) ∈ ℂ^m(s) be such that A(s)p(s) ∈ ℂ_α^n(s). We choose q(s) ∈ ℂ[s] such that q(α) ≠ 0 and p̃(s) := A(s)p(s)q(s) ∈ ℂ^n[s]. We define q̃(s) := (s + 1)^k gcd p̃(s), where k is such that degree q̃(s) = degree p̃(s). Then, by the foregoing Lemma 4.1, (1₃) is equivalent with

    ∀u ∈ L: q̃(s)u ∈ C₀ ⇒ B(s)p(s)q(s)u ∈ C_b^r.

By Lemma 3.6 it follows that B(s)p(s)q(s)/q̃(s) ∈ ℂ_{σ,∞}^r(s). Since q(α) ≠ 0 we have B(s)p(s) ∈ ℂ_α^r(s).

Proof of (1₃) ⇒ (4)(ii). Replace (1₁) by (1₃) in the proof of (1₁) ⇒ (4)(ii).

Now we can give the explanation promised at the beginning of this section. The only place where there could be loss of generality is in the proof of (1₃) ⇒ (4). In the proof of (1₃) ⇒ (4)(i) we appeal to Lemma 3.6, which remains valid if ∀_{u ∈ L} is replaced by ∀_{u analytic on (0,∞)} (see the remark after the proof of Lemma 3.6). In the proof of (1₃) ⇒ (4)(ii) we use (1₃) only for functions analytic on (0,∞).

5. PROOF OF THEOREM 2

Again, as in the proof of Theorem 1, there is no loss of generality in proving a slightly modified version of Theorem 2, in which (1) is changed as follows:

(1)' ∀u ∈ L^m: [A(s)u ∈ ℂ^n[s], B(s)u ∈ C₀^k, C(s)u ∈ C_b^ℓ] ⇒ E(s)u ∈ C₀^r.

Moreover, for similar reasons as given in the beginning of Section 4, the modified Theorem 2 remains true if A, B, C and E are rational.

Proof of (3) ⇒ (4). Let α ∈ ℂ⁺. By (3)(i) there exist matrices M(s) ∈ ℂ_α^{r×n}(s), N(s) ∈ ℂ_α^{r×k}(s) and L(s) ∈ ℂ_α^{r×ℓ}(s) such that M(s)A(s) + N(s)B(s) + L(s)C(s) = E(s). Applying Lemma 3.7 (the matrix version) we deduce (4)(i).

Let α ∈ Π. Then, by (3)(ii), there are matrices M(s) ∈ ℂ_α^{r×n}(s), N(s) ∈ ℂ_α^{r×k}(s) and L(s) ∈ (s - α)ℂ_α^{r×ℓ}(s) such that M(s)A(s) + N(s)B(s) + L(s)C(s) = E(s). Defining L̃(s) = (s - α)⁻¹L(s) and C̃(s) = (s - α)C(s) we get

    M(s)A(s) + N(s)B(s) + L̃(s)C̃(s) = E(s),

and (4)(ii) follows by an application of Lemma 3.7.

Now we turn to the proof of (3)(iii) ⇒ (4)(iii). Without loss of generality we may assume that A(s), B(s), C(s) and E(s) are matrices over ℂ_∞(s). (This can be achieved simply by dividing the matrices by a sufficiently large power of s + 1.) By (3)(iii) there are matrices M(s) ∈ ℂ^{r×n}(s), N(s) ∈ ℂ_∞^{r×k}(s) and L(s) ∈ s⁻¹ℂ_∞^{r×ℓ}(s) such that M(s)A(s) + N(s)B(s) + L(s)C(s) = E(s). Let j ∈ ℕ be such that (s + 1)⁻ʲM(s) =: M̃(s) ∈ ℂ_∞^{r×n}(s). Introducing L̃(s) = sL(s) and C̃(s) = s⁻¹C(s) we have

    (s + 1)ʲ M̃(s)A(s) + N(s)B(s) + L̃(s)C̃(s) = E(s).

By an application of Lemma 3.7 we obtain

    ∀p(s) ∈ ℂ^m(s): [(s + 1)ʲ A(s)p(s) ∈ ℂ_∞^n(s), B(s)p(s) ∈ ℂ_∞^k(s), C̃(s)p(s) ∈ ℂ_∞^ℓ(s)] ⇒ E(s)p(s) ∈ ℂ_∞^r(s).

It follows that (4)(iii) holds.

Proof of (1) ⇒ (4)(i). Let α ∈ ℂ⁺. Let p(s) ∈ ℂ^m(s) be such that A(s)p(s) ∈ ℂ_α^n(s), B(s)p(s) ∈ ℂ_α^k(s) and C(s)p(s) ∈ ℂ_α^ℓ(s). We choose q(s) ∈ ℂ[s] such that A(s)p(s)q(s), B(s)p(s)q(s) and C(s)p(s)q(s) are vectors over ℂ[s] and q(α) ≠ 0. Then, with u := p(s)q(s), we have (A(s)u)(t) = 0 (t > 0), B(s)u ∈ C₀^k and C(s)u ∈ C_b^ℓ. Hence E(s)u ∈ C₀^r. By Lemma 3.5 we conclude that E(s)p(s)q(s) ∈ ℂ_σ^r(s). Since q(α) ≠ 0 it follows that E(s)p(s) ∈ ℂ_α^r(s).

Proof of (1) ⇒ (4)(ii). Let α ∈ Π. Let p(s) ∈ ℂ^m(s) be such that A(s)p(s) ∈ ℂ_α^n(s), B(s)p(s) ∈ ℂ_α^k(s) and (s - α)C(s)p(s) ∈ ℂ_α^ℓ(s). Again we choose q(s) ∈ ℂ[s] such that A(s)p(s)q(s), B(s)p(s)q(s) and (s - α)C(s)p(s)q(s) are vectors over ℂ[s], and q(α) ≠ 0. Put u := p(s)q(s). Then (A(s)u)(t) = 0 (t > 0), B(s)u ∈ C₀^k and, by Lemma 3.4, C(s)u ∈ (s - α)⁻¹ℂ[s]^ℓ ⊆ C_b^ℓ. Hence E(s)u ∈ C₀^r. By Lemma 3.5 we have E(s)p(s)q(s) ∈ ℂ_σ^r(s). Hence E(s)p(s) ∈ ℂ_α^r(s), since q(α) ≠ 0.

Proof of (1) ⇒ (4)(iii). Let p(s) ∈ ℂ^m(s) be such that A(s)p(s) = 0, B(s)p(s) ∈ ℂ_∞^k(s) and s⁻¹C(s)p(s) ∈ ℂ_∞^ℓ(s). We have to prove that E(s)p(s) ∈ ℂ_∞^r(s). First we show that

    ∀q(s) ∈ ℂ^m[s]: A(s)q(s) = 0 ⇒ degree E(s)q(s) ≤ max{degree B(s)q(s), degree C(s)q(s) - 1}.

Let q(s) ∈ ℂ^m[s] with A(s)q(s) = 0, and put ν := 1 + max{degree B(s)q(s), degree C(s)q(s) - 1}. Let y_ν(t) = (t + 1)^{-ν} e^{i(t+1)²} (t ≥ 0). Then

    d^j y_ν/dt^j = o(1) (t → ∞) if j < ν,  and  d^ν y_ν/dt^ν = O(1) but ≠ o(1) (t → ∞).

Let y = q(s)y_ν. Then A(s)y = 0, B(s)y ∈ C₀ and C(s)y ∈ C_b. Since E(s)y ∈ C₀ by (1), it follows that degree E(s)q(s) < ν.

Now let p₁(s) ∈ ℂ^m[s] and p₂(s) ∈ ℂ[s] be such that p(s) = p₁(s)/p₂(s). Then

    degree E(s)p₁(s) ≤ max{degree B(s)p₁(s), degree C(s)p₁(s) - 1} ≤ degree p₂(s).

The last inequality is a consequence of the assumptions on p(s). Hence E(s)p(s) ∈ ℂ_∞^r(s).

To complete the proof of Theorem 2 we only have to show that (4) ⇒ (2) and (2) ⇒ (1). However, in trying to prove these implications directly, great difficulties are met. Therefore we first prove Theorem 2 for the special case A = 0 (denoted by Theorem 2 (A = 0)). The proof of the original Theorem 2 is then completed as follows. First we show that condition (4) is equivalent with a condition (4_ν) which has the same form as (4) and in which A does not occur. By Theorem 2 (A = 0) this condition (4_ν) is equivalent with corresponding statements (1_ν) and (2_ν). The proof is finished by showing that (1_ν) ⇔ (1) and (2_ν) ⇒ (2). Finally a figure is given which clarifies the situation.

(22)

Proof of Theorem 2 (A = 0)

After the foregoing we only have to show that (4) (A

=

0) • (2) (A = 0) Clnd (2) (A = 0) • (1) (A = 0). Several lemmas which are referred to in the sequel of the proof, can be found at the end of this section.

Proof of (4)(A = 0) ⇒ (2)(A = 0). Let α ∈ Π. We want to show that there exist matrices N(s) ∈ ℂ^{r×k}_{σ,∞}(s) and L(s) ∈ ℂ^{r×ℓ}_{σ,∞}(s) such that

N(s)B(s) + ((s − α)/(s + 1)) L(s)C(s) = E(s).

Define C̃(s) := ((s − α)/(s + 1)) C(s). By Lemma 3.7 it is equivalent to prove that

∀ p(s) ∈ ℂ^m(s):  B(s)p(s) ∈ ℂ^k_{σ,∞}(s) ∧ C̃(s)p(s) ∈ ℂ^ℓ_{σ,∞}(s)  ⇒  E(s)p(s) ∈ ℂ^r_{σ,∞}(s).

Let p(s) ∈ ℂ^m(s) be such that the premise is satisfied and let β ∈ ℂ^+. Then B(s)p(s) ∈ ℂ^k_β(s) and C(s)p(s) ∈ ℂ^ℓ_β(s). It follows from (4)(i)(A = 0) that E(s)p(s) ∈ ℂ^r_β(s). If we suppose β ∈ Π, then B(s)p(s) ∈ ℂ^k_β(s) and (s − α)C(s)p(s) ∈ ℂ^ℓ_β(s); hence, by (4)(ii)(A = 0), we have E(s)p(s) ∈ ℂ^r_β(s). Also, B(s)p(s) ∈ ℂ^k_∞(s) and (1/s) C(s)p(s) ∈ ℂ^ℓ_∞(s), whence, by (4)(iii)(A = 0), E(s)p(s) ∈ ℂ^r_∞(s). It follows that E(s)p(s) ∈ ℂ^r_{σ,∞}(s). So we can conclude that (4)(A = 0) ⇒ (2)(i)(A = 0). The proof of (4)(A = 0) ⇒ (2)(ii)(A = 0) is similar.
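An identity of the form N(s)B(s) + ((s − α)/(s + 1))L(s)C(s) = E(s) can be made concrete in the scalar case. The following sketch uses hypothetical data, not taken from the paper (B(s) = s − 1, C(s) = s + 2, E(s) = 1, α = 0), and verifies one such identity with exact rational arithmetic; the chosen N(s) is proper with its only pole at s = −1, and L(s) is constant, so both are free of poles in the closed right half-plane.

```python
from fractions import Fraction

# Hypothetical scalar data (not from the paper): check that
#   N(s)B(s) + ((s - a)/(s + 1)) L(s)C(s) = E(s)
# holds identically, by evaluating at many exact rational points.
a = Fraction(0)                                        # alpha on Pi (here 0)

def B(s): return s - 1
def C(s): return s + 2
def E(s): return Fraction(1)

def N(s): return -(Fraction(2, 3) * s + 1) / (s + 1)   # proper, pole only at s = -1
def L(s): return Fraction(2, 3)                        # constant, hence pole-free

for k in range(2, 40):                                 # sample points avoiding s = -1
    s = Fraction(k, 7)
    lhs = N(s) * B(s) + (s - a) / (s + 1) * L(s) * C(s)
    assert lhs == E(s), (s, lhs)
```

A rational identity of bounded degree that holds at sufficiently many distinct points holds identically, so the sampling check above is conclusive for this toy example.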

Proof of (2)(A = 0) ⇒ (1)(A = 0). First we want to show the existence of a matrix N(s) ∈ ℂ^{r×k}(s) such that N(s)B(s) = E(s). By a well-known theorem in the theory of linear algebra it is equivalent to prove that

∀ p(s) ∈ ℂ^m(s):  B(s)p(s) = 0  ⇒  E(s)p(s) = 0.

Let p(s) ∈ ℂ^m(s) be such that B(s)p(s) = 0. By (2)(i)(A = 0) we have, for every α ∈ Π,

∃ N_α(s) ∈ ℂ^{r×k}_{σ,∞}(s)  ∃ L_α(s) ∈ ℂ^{r×ℓ}_{σ,∞}(s):  N_α(s)B(s) + ((s − α)/(s + 1)) L_α(s)C(s) = E(s).

We multiply on the right with p(s). Clearly, for each α ∈ Π which is not a pole of p, we have E(α)p(α) = 0. It follows that E(s)p(s) = 0.

Let N(s) ∈ ℂ^{r×k}(s) be such that N(s)B(s) = E(s). We denote by α_1, α_2, …, α_ν the poles of N(s) on Π and their orders by ℓ_1, ℓ_2, …, ℓ_ν. We define

ψ(s) := ∏_{i=1}^{ν} φ_i(s),  where φ_i(s) := ((s − α_i)/(s + 1))^{ℓ_i},

and φ(s) := (s + 1)^{−μ} ψ(s), where μ ≥ 0 is taken so large that Ñ(s) := φ(s)N(s) ∈ ℂ^{r×k}_{Π,∞}(s). Let u ∈ L^m be such that B(s)u ∈ C_0^k and C(s)u ∈ C_b^ℓ. We put z := φ(s)u and w := (s + 1)^μ z = ψ(s)u. Since B(s)u ∈ C_0^k and C(s)u ∈ C_b^ℓ, we have, by (2)(i)(A = 0) or (2)(ii)(A = 0), and applying Theorem 1, E(s)u ∈ C_b^r. Then, by Lemma 3.6, we have E(s)z = φ(s)E(s)u ∈ C_b^r. Therefore, Ñ(s)B(s)u = N(s)B(s)z = E(s)z ∈ C_b^r. Furthermore, B(s)u ∈ C_0^k. It follows from Lemma 5.2 and Ñ(s) ∈ ℂ^{r×k}_{Π,∞}(s) that E(s)z ∈ C_0^r. Since ψ(s) ∈ ℂ_{σ,∞}(s), we have B(s)w = ψ(s)B(s)u ∈ C_0^k, and C(s)w = ψ(s)C(s)u ∈ C_b^ℓ. By (2)(ii)(A = 0) there exist N_0(s) ∈ ℂ^{r×k}_{σ,∞}(s), L_0(s) ∈ ℂ^{r×ℓ}_{σ,∞}(s) such that N_0(s)B(s) + (s + 1)^{−1} L_0(s)C(s) = E(s). Then

(*)  N_0(s)B(s)w + (s + 1)^{−1} L_0(s)C(s)w = E(s)w.

Clearly, (s + 1)^{−μ} N_0(s)B(s)w ∈ C_0^r, and (s + 1)^{−μ} E(s)w = E(s)z ∈ C_0^r. Multiplying (*) by (s + 1)^{−μ} we see that (s + 1)^{−μ−1} L_0(s)C(s)w ∈ C_0^r. Also, L_0(s)C(s)w ∈ C_b^r. By Lemma 5.3 it follows that (s + 1)^{−1} L_0(s)C(s)w ∈ C_0^r. Using this result in (*), observing that N_0(s)B(s)w ∈ C_0^r, we see that E(s)w ∈ C_0^r.

Now we use (2)(i)(A = 0) for α = α_1. We can write N_1(s)B(s) + ((s − α_1)/(s + 1)) L_1(s)C(s) = E(s), where N_1(s) ∈ ℂ^{r×k}_{σ,∞}(s) and L_1(s) ∈ ℂ^{r×ℓ}_{σ,∞}(s). Clearly

(**)  (φ_1(s))^{−1} ψ(s) N_1(s)B(s)u + ((s − α_1)/(s + 1)) (φ_1(s))^{−1} ψ(s) L_1(s)C(s)u = (φ_1(s))^{−1} ψ(s) E(s)u.

Since the first term on the left-hand side belongs to C_0^r, and the right-hand side belongs to C_b^r, we have that v := (φ_1(s))^{−1} ψ(s) L_1(s)C(s)u ∈ C_b^r. Multiplying (**) from the left with φ_1(s) we obtain

N_1(s)B(s)w + ((s − α_1)/(s + 1))^{ℓ_1+1} v = E(s)w.

Since N_1(s)B(s)w ∈ C_0^r and E(s)w ∈ C_0^r, we have ((s − α_1)/(s + 1))^{ℓ_1+1} v ∈ C_0^r. We already know that v ∈ C_b^r. Applying Lemma 5.4 we obtain ((s − α_1)/(s + 1)) v ∈ C_0^r. Using this we can eliminate successively all factors φ_i(s), and we obtain E(s)u ∈ C_0^r. Thus we have proved Theorem 2 (A = 0).
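The role of the factors φ_i(s) can be illustrated in miniature. With hypothetical data, not taken from the paper — N(s) = 1/s, which has a single pole α_1 = 0 of order ℓ_1 = 1 on Π — the factor φ_1(s) = ((s − α_1)/(s + 1))^{ℓ_1} = s/(s + 1) cancels that pole: φ_1(s)N(s) = 1/(s + 1) is proper and pole-free on the imaginary axis.

```python
from fractions import Fraction

def N(s):       return 1 / s          # hypothetical N with pole alpha_1 = 0 of order 1
def phi1(s):    return s / (s + 1)    # phi_1(s) = ((s - alpha_1)/(s + 1))^1
def N_tilde(s): return 1 / (s + 1)    # claimed value of phi_1(s) * N(s) after cancellation

for k in range(1, 30):
    s = Fraction(k, 5)
    assert phi1(s) * N(s) == N_tilde(s)   # identity away from the cancelled pole

# N_tilde is finite at the former pole s = 0, while N is undefined there:
assert N_tilde(Fraction(0)) == 1
```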

Completion of the proof of Theorem 2.

Lemma 5.1 enables us to transform condition (4) of Theorem 2 into an equivalent set of conditions which have the same form as (4), in which A = 0 and B is replaced by B̃(s), a matrix to be specified below. Denoting C̃(s) := s^{−1} C(s), condition (4)(iii) reads

∀ p(s) ∈ ℂ^m(s):  A(s)p(s) = 0 ∧ B(s)p(s) ∈ ℂ^k_∞(s) ∧ C̃(s)p(s) ∈ ℂ^ℓ_∞(s)  ⇒  E(s)p(s) ∈ ℂ^r_∞(s).

By Lemma 5.1 there is a ν ∈ ℕ such that (4)(iii) is equivalent with

∀ p(s) ∈ ℂ^m(s):  (s + 1)^ν A(s)p(s) ∈ ℂ^n_∞(s) ∧ B(s)p(s) ∈ ℂ^k_∞(s) ∧ C̃(s)p(s) ∈ ℂ^ℓ_∞(s)  ⇒  E(s)p(s) ∈ ℂ^r_∞(s).

(4)(i) and (ii) do not change by replacing A(s) by (s + 1)^ν A(s). Hence, taking

B̃(s) := [ (s + 1)^ν A(s) ]
         [      B(s)      ]   ∈ ℂ^{(n+k)×m}[s],

we get an equivalent form of (4), in which A does not occur. Let us call this condition (4_ν). By Theorem 2 (A = 0) we have that (4_ν) is equivalent to corresponding statements (1_ν) and (2_ν). For the completion of the proof of Theorem 2 we have to prove only (1_ν) ⇒ (1) and (2_ν) ⇒ (2). Statement (1_ν) reads

(1_ν)  ∀ u ∈ L^m:  (s + 1)^ν A(s)u ∈ C_0^n ∧ B(s)u ∈ C_0^k ∧ C(s)u ∈ C_b^ℓ  ⇒  E(s)u ∈ C_0^r.

Trivially, (1_ν) ⇒ (1).

In (2_ν)(i) and (2_ν)(ii) it is stated that there exist matrices N(s) ∈ ℂ^{r×(n+k)}_{σ,∞}(s) and L(s) ∈ ℂ^{r×ℓ}_{σ,∞}(s) such that

N(s) B̃(s) + L(s)C̃(s) = E(s),

where C̃(s) = ((s − α)/(s + 1)) C(s) in case (2_ν)(i), and C̃(s) = (s + 1)^{−1} C(s) in case (2_ν)(ii). Splitting N(s) as N(s) = [N_1(s)  N_2(s)], with N_1(s) ∈ ℂ^{r×n}_{σ,∞}(s), N_2(s) ∈ ℂ^{r×k}_{σ,∞}(s), and defining M(s) := (s + 1)^ν N_1(s), we obtain (2)(i) and (2)(ii).

The following figure will clarify the course of things in the proof of Theorem 2.

LEMMA 5.1. Let P(s) ∈ ℂ^{n×m}[s], Q(s) ∈ ℂ^{k×m}[s], and R(s) ∈ ℂ^{ℓ×m}[s]. Then the following statements are equivalent.

(i)  ∀ p(s) ∈ ℂ^m(s):  P(s)p(s) = 0 ∧ Q(s)p(s) ∈ ℂ^k_∞(s)  ⇒  R(s)p(s) ∈ ℂ^ℓ_∞(s).

(ii)  ∃ ν ∈ ℕ  ∀ p(s) ∈ ℂ^m(s):  (s + 1)^ν P(s)p(s) ∈ ℂ^n_∞(s) ∧ Q(s)p(s) ∈ ℂ^k_∞(s)  ⇒  R(s)p(s) ∈ ℂ^ℓ_∞(s).

PROOF OF LEMMA 5.1. The implication (ii) ⇒ (i) is trivial. The proof of (i) ⇒ (ii) proceeds as follows. Let V(s) ∈ ℂ^{m×r}[s] be such that the columns of V(s) form a basis of the null-space of P(s). The matrix V(s) is characterized by the property: P(s)p(s) = 0 if and only if p(s) = V(s)q(s) for some q(s) ∈ ℂ^r(s). Hence, statement (i) is equivalent to

∀ q(s) ∈ ℂ^r(s):  Q(s)V(s)q(s) ∈ ℂ^k_∞(s)  ⇒  R(s)V(s)q(s) ∈ ℂ^ℓ_∞(s).

By Lemma 3.7 we conclude that there exists a matrix N(s) ∈ ℂ^{ℓ×k}_∞(s) such that N(s)Q(s)V(s) = R(s)V(s), i.e. (N(s)Q(s) − R(s))V(s) = 0. It follows that N(s)Q(s) − R(s) = M̃(s)P(s) for some matrix M̃(s) ∈ ℂ^{ℓ×n}(s). Choose ν ∈ ℕ such that M(s) := (s + 1)^{−ν} M̃(s) ∈ ℂ^{ℓ×n}_∞(s). Then M(s)(s + 1)^ν P(s) + N(s)Q(s) = R(s). Now (ii) follows by an application of Lemma 3.7.
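A minimal instance of the null-space construction used in the proof, with hypothetical data not taken from the paper: for P(s) = [1  s], the single column V(s) = (−s, 1)^T is a polynomial basis of the null-space, and every p(s) = V(s)q(s) with rational q(s) satisfies P(s)p(s) = 0.

```python
from fractions import Fraction

# Hypothetical example: P(s) = [1, s] is 1 x 2; V(s) = (-s, 1)^T spans its null-space.
def P(s): return (Fraction(1), s)
def V(s): return (-s, Fraction(1))
def q(s): return (s + 2) / (s * s + 1)     # an arbitrary rational test function

for k in range(1, 25):
    s = Fraction(k, 3)
    p1, p2 = P(s)
    v1, v2 = V(s)
    assert p1 * v1 + p2 * v2 == 0          # P(s) V(s) = 0
    # p(s) = V(s) q(s) is again a solution of P(s) p(s) = 0:
    assert p1 * (v1 * q(s)) + p2 * (v2 * q(s)) == 0
```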

LEMMA 5.2. Let r(s) ∈ ℂ^r_{Π,∞}(s). Then

∀ u ∈ C_0:  r(s)u ∈ C_b^r  ⇒  r(s)u ∈ C_0^r.

PROOF OF LEMMA 5.2. We use induction with respect to the number n(r) of poles of r(s) in ℂ^+. If n(r) = 0, the result follows immediately from Theorem 1. In the general case, let α ∈ ℂ^+ be a pole and define

r_1(s) := ((s − α)/(s + 1)) r(s).

Then r_1(s)u ∈ C_b^r and u ∈ C_0. Since n(r_1) = n(r) − 1, the induction hypothesis yields r_1(s)u ∈ C_0^r. Now, if we define v := (1/(s + 1)) r(s)u, then v ∈ C_b^r and (s − α)v = r_1(s)u ∈ C_0^r. Hence, by Lemma 3.2, we have v ∈ C_0^r. It follows that

r(s)u = r_1(s)u + (α + 1)v ∈ C_0^r.
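The last step of the proof rests on the partial-fraction identity ((s − α)/(s + 1)) + (α + 1)/(s + 1) = 1, which splits r(s)u into r_1(s)u plus (α + 1) times (1/(s + 1)) r(s)u. A quick exact check (the value α = 2 is an arbitrary sample, not from the paper):

```python
from fractions import Fraction

a = Fraction(2)      # sample pole alpha in the right half-plane
for k in range(1, 40):
    s = Fraction(k, 11)
    # ((s - a)/(s + 1)) + (a + 1)/(s + 1) == 1, hence termwise
    # r(s)u = r_1(s)u + (a + 1) * (1/(s + 1)) r(s)u.
    assert (s - a) / (s + 1) + (a + 1) / (s + 1) == 1
```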

LEMMA 5.3. Let n be a positive integer. Then

∀ u ∈ C_0:  (s + 1)^n u ∈ C_b  ⇒  (s + 1)^{n−1} u ∈ C_0.

PROOF OF LEMMA 5.3. We proceed by induction. For n = 1 the statement is trivial. Suppose the result has been proved for n − 1. If u ∈ C_0 and (s + 1)^n u ∈ C_b, it follows from Lemma 3.6 that s^k u ∈ C_b for k = 0, …, n. In particular, (s + 1)^{n−1} u ∈ C_b. By the induction hypothesis, this implies w := (s + 1)^{n−2} u ∈ C_0. Now, since s^2 w ∈ C_b and w ∈ C_0, it follows from Lemma 3.3 that sw ∈ C_0, and hence (s + 1)w = (s + 1)^{n−1} u ∈ C_0. □
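The step "s²w ∈ C_b and w ∈ C_0 imply sw ∈ C_0" (Lemma 3.3) is a Landau-type derivative bound, with s acting as d/dt. A numerical illustration with the hypothetical sample u(t) = sin t / t, not taken from the paper: u → 0 and u″ stays bounded, and indeed the supremum of |u′| over windows [T, T + 10] shrinks as T grows.

```python
import math

def u_prime(t):
    # derivative of u(t) = sin t / t
    return math.cos(t) / t - math.sin(t) / t ** 2

def window_sup(T, width=10.0, step=0.01):
    # numerical supremum of |u'| over [T, T + width]
    n = int(width / step)
    return max(abs(u_prime(T + i * step)) for i in range(n + 1))

sups = [window_sup(T) for T in (10.0, 100.0, 1000.0)]
# u -> 0 with u'' bounded forces u' -> 0: the window suprema decrease (roughly like 1/T).
assert sups[0] > sups[1] > sups[2]
assert sups[2] < 1e-2
```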

LEMMA 5.4. Let n be a positive integer and α ∈ Π. Then

∀ u ∈ C:  ((s − α)/(s + 1))^n u ∈ C_0 ∧ u ∈ C_b  ⇒  ((s − α)/(s + 1)) u ∈ C_0.

PROOF OF LEMMA 5.4. By Lemma 3.5 we have that Lemma 5.4 is valid for u a polynomial in s. So we have to consider only the case that u is a bounded function in C. Let φ_α ∈ C be defined by φ_α(t) = e^{−αt} (t > 0). We define v := φ_α ∘ u, where ∘ denotes pointwise multiplication rather than convolution, i.e. v(t) = e^{−αt} u(t) (t > 0). It follows that sv = φ_α ∘ (s − α)u and s^n v = φ_α ∘ (s − α)^n u. Hence, we have v ∈ C_b and (s/(s + 1))^n v ∈ C_0. We put w := (s + 1)^{−n} v. Then s^n w ∈ C_0 and (s + 1)^n w ∈ C_b. By Theorem 1 we have that s^i w ∈ C_b for i = 0, 1, …, n. Let y := s^{n−2} w. Then y ∈ C_b and s^2 y ∈ C_0. By Lemma 3.3 we obtain sy ∈ C_0. Hence s^{n−1} w ∈ C_0. Iterating, we obtain s^i w ∈ C_0 for i = 1, …, n. Hence s(s + 1)^{n−1} w ∈ C_0, that is, (s/(s + 1)) v ∈ C_0. It follows that ((s − α)/(s + 1)) u ∈ C_0. □
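The key manipulation in the proof — sv = φ_α ∘ (s − α)u for v(t) = e^{−αt}u(t), with s acting as d/dt — is the product rule. A numerical spot-check with hypothetical sample data not taken from the paper (u(t) = sin t and α = i, a point of the imaginary axis Π):

```python
import cmath

alpha = 1j                          # a sample point of the imaginary axis Pi

def u(t):  return cmath.sin(t)      # hypothetical bounded sample function
def du(t): return cmath.cos(t)      # its derivative, i.e. s u

def v(t):  return cmath.exp(-alpha * t) * u(t)

h = 1e-6
for t in (0.5, 1.7, 3.9, 10.2):
    # central finite difference approximating (s v)(t) = v'(t)
    sv = (v(t + h) - v(t - h)) / (2 * h)
    # product rule: v' = e^{-alpha t} (u' - alpha u), i.e. phi_alpha o (s - alpha) u
    rhs = cmath.exp(-alpha * t) * (du(t) - alpha * u(t))
    assert abs(sv - rhs) < 1e-6
```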

