Realization of the Markov parameter sequences using the singular value decomposition of the Hankel matrix

Citation for published version (APA):
Hajdasinski, A. K., & Damen, A. A. H. (1979). Realization of the Markov parameter sequences using the singular value decomposition of the Hankel matrix. (EUT report. E, Fac. of Electrical Engineering; Vol. 79-E-095). Technische Hogeschool Eindhoven.

Published: 01/01/1979. Document version: Publisher's PDF, also known as Version of Record.



EINDHOVEN UNIVERSITY OF TECHNOLOGY

Department of Electrical Engineering, Eindhoven, The Netherlands

REALIZATION OF THE MARKOV PARAMETER SEQUENCES USING THE SINGULAR VALUE DECOMPOSITION OF THE HANKEL MATRIX

by

A. K. Hajdasinski and A. A. H. Damen

TH-Report 79-E-095
ISBN 90-6144-095-5
Eindhoven, May 1979


CONTENTS

Abstract

1. Introduction
   1.1 Remarks about the desired type of identification
   1.2 The degree of complexity of the model: the order definition of a MIMO-system
   1.3 Gauss-Markov estimation of Markov parameters

2. Description and properties of the Ho-Kalman algorithm

3. Description and properties of the Singular Value Decomposition
   3.1 Existence of the S.V.D.
   3.2 The least squares fit on a matrix
   3.3 Some properties of the S.V.D.

4. Derivation of the realization algorithm using the S.V.D.
   4.1 The noise free case
   4.2 Estimation of the realization for the noisy case
       4.2.1 Estimation of the system order n_0
       4.2.2 Estimation of the Hankel matrix of rank n_0
       4.2.3 Estimation of the shifted Hankel matrix

5. Results of the simulation examples

6. Conclusions

Appendix: Some remarks about the limitations of S.V.D. realization

References


Abstract

Identification of multi input/multi output systems is the topic of this study. A crucial problem is the degree of complexity of the model used to estimate the original system. In mathematical terms this is defined by the order or the dimension of the system, and these important notions for multivariable systems are redefined here. Once these notions have been strictly defined and Markov parameters have been estimated from input/output sequences, it is shown that a clear estimate of the system dimension can be obtained by means of a singular value decomposition of the Hankel matrix, which is built up from all estimated Markov parameters. With the results of that same singular value decomposition it is possible to derive a realization, which turns out to improve on the Ho-Kalman algorithm in the case of additive independent noise on the output signals of the system. Especially for low order systems with long Markov parameter sequences the presented algorithm seems to be preferable: it offers the possibility to incorporate all estimated Markov parameters into the estimation of a realization of the system to be identified. In the noise-free case both algorithms are equivalent.

Addresses of the authors:

A. K. Hajdasinski,
Central Mining and Designing Office,
Plac Grunwaldzki 10/8,
KATOWICE, Poland

A. A. H. Damen,
Group Measurement and Control,
Department of Electrical Engineering,
Eindhoven University of Technology,
P.O. Box 513,
5600 MB EINDHOVEN, The Netherlands

1. INTRODUCTION

1.1. Remarks about the desired type of identification

In this study we are solely interested in the identification of multi input / multi output (MIMO) relations in the mathematical sense. No physical interpretation whatsoever will be pursued, and the characteristics of the models will dominantly be determined by our limited mathematical ability, which defines characteristics such as linearity, independence of disturbances, finite dimensionality etc. Nevertheless the first aim is application to real processes, which puts strong restrictions on possible models and methods. This last assertion needs some explanation, which we will attempt to give here.

Departing from the need of a practically usable relation between input and output signals, the first step is in general a choice of some model of the process, which at least seems to be justifiable on physical grounds and mathematical manageability. This model demarcates or constitutes a structure of the desired relation and describes it qualitatively, yet the quantification still has to be made, and here two steps can be distinguished:

1. Once a fundamental structure of a model has been defined or constructed, a measure of the degree of complexity has to be found. Refinement of the model is limited due to finite memories, but especially and more importantly due to the degree of observability, i.e. the information contained in the measured signals puts several limits on the degree of complexity to be used in the model.

Physical processes possess by nature (almost) infinite complexity, but in most cases part of it is defined as noise, which is itself nonsignificant but disturbs the observation of the relevant part of the system. This division into a significant and a nonsignificant part is crucial, and as no effective a priori knowledge concerning it is available, only pragmatic solutions can be used, which we allow ourselves to apply here as well. Due to the missing decision tools or criteria for this division in the general mathematical approach, it is provisionally irrelevant to demand extensive mathematical proofs and theoretical performance analyses. Therefore, as we now seem able to supply a practically relevant algorithm, we merely confined ourselves to practical tests by means of model-to-model adjustments.

The degree of complexity described above corresponds to mathematical terms such as dimension and order. These notions, which are rather complex for MIMO-systems, will be defined and elucidated in the next paragraph.

2. Once the degree of complexity has been determined, or its mathematical equivalent, the order estimation, has been performed, a set of parameters may be defined which uniquely describes the model of that special order. This set of parameters may then be identified in the sense that a residual (some error between process and model output) is minimized. However, various sets of parameters are possible, and one set may be transformed into another set. The set with the minimal number of parameters, given a certain order of the model, can be denoted as fundamental, because all other sets necessarily show interdependence of the different parameters.

The interrelations between the different parameter sets will be commented upon in the next paragraph, especially the "induced sequence" M_k, denoted as Markov parameters, which can be considered as the multidimensional impulse response on the one hand, and the "realization" A, B and C, generally known as the system, input and output matrix respectively, on the other hand.

1.2. The degree of complexity of the model: the order definition of MIMO-systems

The structure of the models under discussion is defined by the following adjectives:

- linear
- multivariable (MIMO: multi input / multi output)
- time invariant
- finite dimensional
- discrete

Once this structure has been given, the complexity has to be limited. This is known as the model order determination or the system order estimation problem. While for SISO systems the notion of the system order is very well defined and extensively worked out, for MIMO-systems the term "order" causes a number of misunderstandings and ambiguities. However, the order definition of the MIMO-system is even more important than for SISO-systems. A reasonable reduction of the state space dimension, closely related with the order, is extremely important for the sake of modelling and computational simplicity.

In the sequel of this report an attempt will be made towards unification and a precise definition of the multivariable dynamical system order. Very useful here will turn out to be the so-called H-model of the linear multivariable dynamical system, but some other definitions must also be recalled in order to clarify all possibly equivocal passages. The equivalence of the different types of models used for identification of the multivariable system will also be pointed out. Seeking this equivalence was a necessary condition for the development of realization theory, i.e. methods of finding the state space description given a transfer matrix or input/output data.

Definition 1: For the multivariable, linear, dynamical system having p inputs u_1(k) ... u_p(k) and q outputs y_1(k) ... y_q(k), the q×p matrix K(z), called the transfer matrix (considered as a rational matrix in the argument z), is defined by the following condition:

    y(z) = K(z) u(z)

where y(z) = [y_1(z) ... y_q(z)]^T, u(z) = [u_1(z) ... u_p(z)]^T, and y(z), u(z) are the "z" transforms of y(k) and u(k) under zero initial conditions. [4][11][12]

Definition 2: The characteristic polynomial W(z) of the strictly proper or proper transfer matrix K(z) is defined as the least common denominator of all minors in K(z), having the coefficient of the greatest power of z equal to one. [11][12]

Definition 3: The degree ∂{K(z)} of the strictly proper or proper transfer matrix K(z) is defined as the degree of its characteristic polynomial. (Practically it is the smallest number of shifting elements necessary to model the dynamics of this system.) [11]

Definition 4: For the multivariable, linear, time invariant, dynamical system, the state of the system at an arbitrary time instant k = k_0 is defined as a minimal set of numbers x_1(k_0), x_2(k_0), ..., x_n(k_0), the knowledge of which, together with the knowledge of the system model and of the inputs for k ≥ k_0, is sufficient for the determination of the system behaviour for k ≥ k_0. The vector

    x(k_0) = [x_1(k_0) ... x_n(k_0)]^T

is called the state vector, and its members x_1(k_0) ... x_n(k_0) are called state variables. [11][12]

Definition 5: The set of difference equations

    x(k+1) = A x(k) + B u(k)

where x(k) is an (n×1) state vector and u(k) is a (p×1) input vector, is called the state equation, while the set

    y(k) = C x(k)

where y(k) is a (q×1) output vector, is called the output equation.

Definition 6: The number n of state variables in the state equation is defined as the dimension of the state vector or the state space, and is also denoted as the dimension of the complete system.

Definition 7: The triplet of matrices {A, B, C} is defined as the realization of the dynamical, linear, time invariant, multivariable system.

Definition 8: Any polynomial

    f(z) = z^k + C_1 z^{k-1} + ... + C_{k-2} z^2 + C_{k-1} z + C_k

for which

    f(A) = A^k + C_1 A^{k-1} + ... + C_{k-2} A^2 + C_{k-1} A + C_k I = 0

holds is called an annihilating polynomial of the A matrix.

Lemma 1: The characteristic polynomial W_A(z) of the A matrix is one of the annihilating polynomials for A (according to Cayley-Hamilton).

Definition 9: The polynomial f(z) of the smallest nonzero degree k fulfilling definition 8 is called the minimal polynomial of the A matrix. [9][11][12]

Definition 10: The matrix coefficient M_k = C A^k B for k = 0, 1, 2, ... is referred to as the k-th Markov parameter of the system defined by the realization {A, B, C} (i.e. the inverse "z" transform of the transfer matrix). [9][11]
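As a minimal numerical sketch of definition 10, the Markov parameters can be generated directly from a realization {A, B, C}; the matrices below are made up purely for illustration:

```python
import numpy as np

def markov_parameters(A, B, C, count):
    """Return the Markov parameters M_k = C A^k B for k = 0 .. count-1."""
    params = []
    Ak = np.eye(A.shape[0])          # A^0
    for _ in range(count):
        params.append(C @ Ak @ B)    # M_k = C A^k B (each is a q x p matrix)
        Ak = Ak @ A
    return params

# A hypothetical 2-state, single input, single output example:
A = np.array([[0.5, 1.0],
              [0.0, 0.3]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

M = markov_parameters(A, B, C, 4)
print([m.item() for m in M])         # [1.0, 1.5, 1.05, 0.615]
```

For a single input/single output system these numbers are simply the impulse response samples, in accordance with the interpretation of the Markov parameters as the multidimensional impulse response.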

Definition 11: The following description of the multivariable dynamical system is referred to as the H-model of this system [7][8][12]:

    y_k = T_k u_k + H_k ξ,    with u(i) = 0 for i < 0

where

    y_k = [y(0); y(1); y(2); ...],    u_k = [u(0); u(1); u(2); ...]

are block vectors of output and input samples, ξ is a properly dimensioned block vector containing the initial conditions,

    H_k = [M_0  M_1  ...  M_{k-1};
           M_1  M_2  ...  M_k;
           M_2  M_3  ...  M_{k+1};
           ...              ]

is the generalized Hankel matrix, T_k is the generalized Toeplitz matrix of the Markov parameters, and M_k for k = 0, 1, 2, ... are the Markov parameters of the considered system.

Now it is necessary to present two theorems that are fundamental for further considerations.

Theorem 1: The sequence of Markov parameters {M_k} for k = 0, 1, 2, 3, ... has a finite dimensional realization {A, B, C} if and only if there are an integer r and constants a_i such that:

    M_{r+j} = Σ_{i=1}^{r} a_i M_{r+j-i}    for all j > 0

where r is the degree of the minimal polynomial of the state matrix A (assuming we consider only minimal realizations).

Remark: Theorem 1 is called the realizability criterion and r is called the realizability index.

Theorem 2: If the Markov parameter sequence {M_k} for k = 0, 1, 2, ... has a finite dimensional realization {A, B, C} with realizability index r, then the minimal dimension n_0 of the state space (also of the realization) fulfils

    rank H_r = n_0

where n_0 is the minimal state-space dimension, each M_k is a q×p matrix, n_0 ≤ r · min(p, q), and

    H_r = [M_0      M_1  ...  M_{r-1};
           M_1      M_2  ...  M_r;
           ...
           M_{r-1}  M_r  ...  M_{2r-2}]

is the Hankel matrix (see definition 11). (The proof of this theorem is given in [7][9][12].)

Remark: From the linear dependence of the Markov parameters it follows that

    rank H_{r+N} = rank H_r = n_0    for all N ≥ 0.
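Theorem 2 can be checked numerically. The sketch below, for a made-up one-state system, builds the block Hankel matrix from a list of Markov parameters and confirms that the rank stays at n_0 as r grows:

```python
import numpy as np

def block_hankel(markov, r):
    """Assemble H_r: block (i, j) holds the Markov parameter M_{i+j}."""
    q, p = markov[0].shape
    H = np.zeros((r * q, r * p))
    for i in range(r):
        for j in range(r):
            H[i*q:(i+1)*q, j*p:(j+1)*p] = markov[i + j]
    return H

# Hypothetical 1-state system x(k+1) = 0.5 x(k) + u(k), y(k) = x(k):
# M_k = 0.5**k, so by theorem 2 every H_r has rank n_0 = 1.
markov = [np.array([[0.5 ** k]]) for k in range(8)]
for r in (2, 3, 4):
    print(r, np.linalg.matrix_rank(block_hankel(markov, r)))   # rank 1 each time
```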

With the aid of these 11 definitions and two theorems it is now possible to generalize the meaning of the system "order" for the different types of multivariable system descriptions. Before this is done formally, it can be of some use to present a simple scheme showing the relations between the three defined types of models: the transfer matrix, the state equations and the H-model (see fig. 1).

From this scheme we learn that while there is a straightforward way to get the transfer matrix K(z) from the state space description, the reverse procedure, viz. realization theory, is a lot more complicated. On the contrary, knowing the Markov parameters it is equally easy to get any required form of description. For the sake of modelling, Markov parameters can be derived as easily from the state space description as from the transfer matrix. Obviously Markov parameters are also used in the H-model.

fig. 1. Interdependence of the different types of models: the state space description {A, B, C}, the transfer matrix K(z), the Markov parameters and the H-model, linked by the Ho-Kalman realization and other realizations.

Considering all pro's and contra's it seems desirable to express all structural invariants of multivariable dynamical systems in terms of Markov parameters.

Definition 12: The order of the multivariable system will be defined as the minimal number of Markov parameters necessary and sufficient to reconstruct the entire realizable sequence of Markov parameters according to theorem 1. In other words, the multivariable system order is equal to the realizability index r.

Alternatively, for the state space description, the multivariable system order can be defined as the degree of the minimal polynomial of the state matrix A. This follows directly from the proof of theorem 1. For the transfer matrix description, however, the order definition in the general case is not possible.

Definition 13: The dimension of the multivariable dynamical system is defined as the number n_0 equal to the rank of the Hankel matrix H_r for this system, where r is the realizability index (order of the system).

Alternatively, for the state space description it is the dimension of the state matrix A. And again, for the transfer matrix description there does not exist a unique definition of the system dimension. Only in the case when all poles in the elements of the transfer matrix are either different or equal and common can the dimension be determined as the degree of this matrix (see definition 3).

To illustrate this, the following example shows the case of equal but distinct, or noncommon, poles. It is sufficient to consider a diagonal transfer matrix K(z):

    K(z) = diag( 1/[(z-0.8)(z-0.25)(z+0.5)] ,  1/[(z-0.8)(z-0.6)(z+0.01)] )

The dimension of the system is n_0 = 6, while the degree of K(z) is ∂{K(z)} = 5, because the equal poles in z = 0.8 are noncommon: they refer to different state variables.

Resuming: in the general case, forecasting the system dimension may turn out to be a very difficult task. In practical cases, however, it will seldom happen that distinct poles have exactly the same value, i.e. are equal. Nevertheless, when the poles are given only up to a certain accuracy, for example numerically evaluated poles, it may be difficult to decide whether they are really distinct or not. This problem is one of the drawbacks of the transfer matrix description, and one more argument for the state space and Hankel descriptions, where this ambiguity never arises.

Theorems 1 and 2, together with definitions 12 and 13, are essential for the estimation of the system order. As was pointed out, the complexity definition for the multivariable dynamic system can be most flexible and most general when using the Markov parameter description. The task of the complexity identification will thus be to determine the order r and the (minimal) dimension n_0 of the system being considered.

Such a posing of the problem is possible only for strictly conceptual systems having both finite order and finite dimension. In such a case looking for exact r and n_0 makes sense. However, in real system identification problems one cannot search for exact r and n_0, because usually those are systems of infinite order and dimension. The only goal which we can aim at is to find a reasonably simple approximation of the real system (i.e. estimates of r and n_0).

As an illustration the following example can be given:

Example 1: Consider a transfer matrix K(z) whose entries are built from the factors (z-2), (z+1)^2 and (z+4) (the individual entries are not legible in this copy). From the sequence of Markov parameters M_0, M_1, M_2, ... one finds

    rank H_2 = rank H_3 = 3

while the degree of K(z) is ∂{K(z)} = 2 = r (there are no distinct poles in K(z)).

So the dimension of the system is n_0 = 3. One of the possible realizations is a triplet {A, B, C} with a 3×3 state matrix A (the numerical entries are not legible in this copy). The characteristic polynomial of A is

    W_A(z) = (z+1)^3

but the minimal polynomial of A is

    m_A(z) = (z+1)^2

as can easily be checked, so the order of the system is indeed r = 2.

1.3. Gauss-Markov estimation of Markov parameters

This subject has been broadly treated in [8]; here only the main features of the method will be presented. Starting with definition 11 of the H-model it is straightforward to derive the following equation:

    Y = M̄_k S_m + M̄_∞ S_∞    (1)

where

    Y = [y(l)  y(l+1)  ...  y(l+m)]    (2)

with y(l) the l-th measurement of the output vector and y(l+m) the (l+m)-th measurement of the output vector,

    M̄_k = [M(0)  M(1)  ...  M(k)]    (3)

is a block vector containing the first k+1 Markov parameters of the multivariable system,

    M̄_∞ = [M(k+1)  M(k+2)  ...]    (4)

is a block vector containing the remaining Markov parameters of the considered system,

    S_m = [u(l-1)    u(l)    ...  u(l-1+m);
           u(l-2)    u(l-1)  ...  u(l-2+m);
           ...
           u(l-1-k)  u(l-k)  ...  u(l-1-k+m)]

is the finite dimensional matrix of the input samples, and

    S_∞ = [u(l-k-2)  ...  u(l-2-k+m);
           u(l-k-3)  ...  u(l-3-k+m);
           ...
           u(0)      0    ...       0]    (5)

is the finite dimensional matrix of the input initial conditions (with zeros appearing because u(i) = 0 for i < 0).

Relation (1) represents the real system equation, while the model equation is assumed to be:

    Ŷ = N S_m    (6)

N stands for the estimate of M̄_k and Ŷ for the estimate of Y.

Considering that the multivariable dynamical system is corrupted by the multivariable noise E, relation (1) may be written as:

    Y = M̄_k S_m + M̄_∞ S_∞ + E    (7)

where E is defined as the matrix of the noise samples:

    E = [e(l)  e(l+1)  ...  e(l+m)]    (8)

fig. 2. The block diagram of the multivariable dynamical system described in terms of Markov parameters (white noise passed through a colouring noise filter gives E, which is added to the system output).

The multivariable noise E is considered to be the output of a colouring filter driven by the white multivariable noise Ξ. It is assumed that {ξ(i)} and {u(j)} are mutually uncorrelated for all i and j, and that the expected value E{ξ(i)} = 0.

The estimate of the first k+1 Markov parameters is found by minimizing the following loss function:

    V_W = tr[(Y − N S_m) W (Y − N S_m)^T]    (9)

where W is a nonnegative weighting matrix. The solution of this problem results in the following expression for the Markov parameter estimate:

    N = Y W S_m^T (S_m W S_m^T)^{-1}    (10)

Expressing N in terms of E:

    N = M̄_k + (M̄_∞ S_∞ + E) W S_m^T (S_m W S_m^T)^{-1}    (11)

The conditions under which N is an asymptotically unbiased estimate of M̄_k are discussed in [5][8].

Usually, due to the decreasing nature of the {M_k} sequence for stable systems, for k and m great enough the term with M̄_∞ S_∞ can be neglected, and the bias part E W S_m^T (S_m W S_m^T)^{-1} asymptotically vanishes when assuming E{ξ(j)} = 0 and no correlation between the samples of E and S_m. Because those are the only cases to be handled, the following expression is assumed to deliver the asymptotically unbiased estimate N:

    N = Y W S_m^T (S_m W S_m^T)^{-1}    (12)

Choosing as the weighting matrix W = R^{-1}, where

    R = E{E^T E}    (13)

is the covariance matrix of the composite noise, it can be shown that this choice minimizes the estimation error covariance, as the inequality

    ‖E{(M̄_k − N)(M̄_k − N)^T}‖_W ≥ ‖E{(M̄_k − N)(M̄_k − N)^T}‖_{R^{-1}}    (14)

can be proved for all W ≠ R^{-1}. At last

    N = Y R^{-1} S_m^T (S_m R^{-1} S_m^T)^{-1}    (15)

and this estimation is equivalent to the Gauss-Markov procedure for the single input/single output system [3][4].

There remains only to find an appropriate*) estimate of the R matrix. This task is completed with the aid of the composite noise notion and, again, realization theory. The estimate achieved in this way is further called the Gauss-Markov Estimate with Realization of the covariance matrix (named further G.M.R.E.).

*) Appropriate in the sense that the profit of using R instead of the identity matrix is not overcompensated in the negative sense by the deviations in the estimate of R.

This estimate has been introduced in [8], where further properties of the G.M.R.E. estimation are also discussed. From this moment on, for the sake of the continuation of the report, it can be assumed that we have the G.M.R.E. estimates of the Markov parameters of the identified system.
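A minimal sketch of the least squares estimate (12) for a hypothetical SISO system, taking W = I (i.e. white output noise, so the full G.M.R.E. weighting is not needed); all numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SISO system: Markov parameters M_k = 0.8**k for k = 0..3.
true_M = 0.8 ** np.arange(4)

m = 200
u = rng.standard_normal(m + 4)                   # input record

# S_m: column j holds the input samples that produce output sample j,
# S[i, j] = u(j + 3 - i), so y(j) = sum_i M_i u(j + 3 - i).
S = np.array([[u[j + 3 - i] for j in range(m)] for i in range(4)])
y = true_M @ S + 0.05 * rng.standard_normal(m)   # relation (7) with white noise E

W = np.eye(m)                                    # white noise: W = I suffices
N = y @ W @ S.T @ np.linalg.inv(S @ W @ S.T)     # relation (12)
print(np.round(N, 2))
```

With enough samples the estimate N approaches the true sequence [1, 0.8, 0.64, 0.512]; with coloured noise, W = R^{-1} as in (15) would be used instead.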

2. DESCRIPTION AND PROPERTIES OF THE HO-KALMAN ALGORITHM

The original version of the Ho-Kalman algorithm has been derived for noise-free systems. Since the time it was first published [9], a number of algorithms have been proposed based upon the Ho-Kalman version, attempting to give solutions for cases where the Markov parameters were estimated from operating records. However, none has yet been proposed that gives a satisfactorily simple approximation of the realization without attacking this problem with high complexity. Some facts about the realization algorithm of B. L. Ho and R. E. Kalman [9][12] are reviewed below.

Theorem 3: [9][12] For an arbitrary, finite dimensional, linear, dynamical system, given its input-output map, the canonical realization exists in the following form. Let

    p = the number of inputs
    q = the number of outputs
    n = the dimension of the realization

and define the following selection matrices:

    E_k^l = [I_k  0]               (k×l matrix)  if k < l
    E_k^l = [I_l; 0]  (I_l stacked on zeros, k×l matrix)  if k > l
    E_k^k = I_k                    (k×k matrix)

where I_k and 0 are the unit and zero matrices respectively.

1. Choose r such that the relation (see theorem 1)

       M_{r+j} = Σ_{i=1}^{r} a_i M_{r+j-i}    for all j ≥ 0

   holds.

2. Find a nonsingular matrix P (qr × qr) and a nonsingular matrix Q (pr × pr) such that:

       P H_r Q = [I_n  0; 0  0] = (E_n^{qr})^T E_n^{pr}

   where H_r is the Hankel matrix for the considered system:

       H_r = [M_0      M_1  ...  M_{r-1};
              M_1      M_2  ...  M_r;
              ...
              M_{r-1}  M_r  ...  M_{2r-2}]

3. A canonical realization of the considered system is given by:

       A = E_n^{qr} P (σH_r) Q (E_n^{pr})^T
       B = E_n^{qr} P H_r (E_p^{pr})^T
       C = E_q^{qr} H_r Q (E_n^{pr})^T

   where σH_r is the shifted Hankel matrix:

       σH_r = [M_1  M_2  ...  M_r;
               M_2  M_3  ...  M_{r+1};
               ...
               M_r  ...       M_{2r-1}]

The proof of this realization theorem can be found in reference [11]. There the following theorem, which will be useful later, can also be found.

Theorem 4: The Penrose-Moore pseudoinverse H_r^+ of the Hankel matrix H_r is given by:

    H_r^+ = Q (E_n^{pr})^T E_n^{qr} P

Because for the noisy case the realizability criterion (theorems 1 and 3) will never be fulfilled (the linear dependence is lost when the Markov parameters are estimated), it is necessary to optimize the multivariable system model order r another way. Some solutions of this problem are suggested in [5][7].

Also, the Ho-Kalman algorithm does not fit the noisy case: by using not all of the estimated Markov parameters (only the number which comes out of the order test, 2r-1, see ref. [7]), it truncates the information contained in all the noisy data.*)

Supposing the estimate r̂ of the system order r has already been obtained, it is seen that the P and Q matrices are to be evaluated based on the H_r̂ matrix, which is already truncated to the qr̂ × pr̂ dimension. Thus if L > 2r̂-1 Markov parameters have been estimated, while only 2r̂-1 of them are needed to produce H_r̂ and σH_r̂, then the information contained in the remaining L-2r̂+1 Markov parameters is lost.

*) Remember that for the noise free case the order r and the dimension n_0 have to fulfil the following conditions:

    1)  det{H_r} = 0  (det{H_r H_r^T} = 0 if p < q),  and  M_{r+j} = Σ_{i=1}^{r} a_i M_{r+j-i}  for all j ≥ 0

    2)  rank{H_r} = rank{H_{r+N}} = n_0  for all N > 0

    3)  r ≥ n_0 / min(q, p)

In the noisy case n_0 can be estimated from 2), while r̂ will be the smallest integer that satisfies 3) and 1) approximately, as the determinant will not be exactly zero due to the noise.

3. DESCRIPTION AND PROPERTIES OF THE SINGULAR VALUE DECOMPOSITION

The Singular Value Decomposition (S.V.D.) will be introduced in a very compact form by means of a few of the most important theorems and definitions. More rigorous and formal material dealing with this subject is to be found in [2], [6], [10].

3.1. Existence of the S.V.D.

Theorem 5: For any m × n matrix A the S.V.D. exists, given by:

    A = U D V^T

where

    U is an m × ρ matrix consisting of ρ orthonormal columns u_j, so U^T U = I_ρ;
    ρ is the rank of the matrix A;
    D is the ρ × ρ diagonal matrix D = diag(σ_1, σ_2, ..., σ_ρ) with σ_1 ≥ σ_2 ≥ ... ≥ σ_ρ > 0;
    V is an n × ρ matrix consisting of ρ orthonormal columns v_j, so V^T V = I_ρ.

The σ_j are called singular values.

The proof of theorem 5 can be found in many references, among others [2], [6], [10].
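The compact form of theorem 5 can be reproduced with numpy's `full_matrices=False` option; the rank-2 test matrix below is made up for illustration:

```python
import numpy as np

# A 4x3 rank-2 test matrix (sum of two outer products, so rho = 2 by construction)
a = np.outer([1., 2., 3., 4.], [1., 0., 1.]) + np.outer([0., 1., 0., 1.], [0., 1., 1.])

U, s, Vt = np.linalg.svd(a, full_matrices=False)
rho = int(np.sum(s > 1e-10 * s[0]))              # numerical rank
U, s, Vt = U[:, :rho], s[:rho], Vt[:rho, :]      # keep the rho orthonormal columns

print(rho)                                        # -> 2
print(np.allclose(U.T @ U, np.eye(rho)))          # U^T U = I_rho  -> True
print(np.allclose(U @ np.diag(s) @ Vt, a))        # A = U D V^T    -> True
```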

3.2. The least squares fit on a matrix

For the sake of the realization algorithm it will be desirable to limit the rank of a Hankel matrix. This must be done with a minimal effect on this matrix. Expressing this problem in terms of m × n (rectangular) matrices: if A is the original m × n matrix with rank{A} = ρ, the task will be to find the m × n matrix B with rank{B} < ρ in such a way that the Euclidean norm of A − B is minimal. The norm of this type is given by the definition:

Definition 16: The norm of a matrix A will be defined as:

    ‖A‖ = [tr(A^T A)]^{1/2}

In other words it is necessary to find a B with a limited rank and a minimized norm of the difference between A and B. [2][10]

Theorem 6: [6][10] Given the S.V.D. of an m × n matrix A:

    A = U D V^T,    D = diag(σ_1, σ_2, ..., σ_ρ),    σ_1 ≥ σ_2 ≥ σ_3 ≥ ... ≥ σ_ρ > 0

the m × n matrix B of rank k ≤ ρ such that ‖A − B‖ = min is given by:

    B = U [D_k  0; 0  0] V^T = U_k D_k V_k^T

where U_k contains the first k columns of U, V_k contains the first k columns of V, and D_k = diag(σ_1, σ_2, ..., σ_k).

Remark: So the B matrix is found by setting the smallest ρ−k singular values to zero.

It is also possible to evaluate the error made during such a fit: [10]

    absolute error:  ‖A − B‖ = ‖E‖ = [ Σ_{j=k+1}^{ρ} σ_j² ]^{1/2}

    relative error:  ‖A − B‖ / ‖A‖ = [ Σ_{j=k+1}^{ρ} σ_j² / Σ_{j=1}^{ρ} σ_j² ]^{1/2}

This theorem will be the basic tool for the system order determination and the approximation of the Hankel matrix.
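The truncation of theorem 6 and its error formulas can be verified on a made-up random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
B = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # zero the smallest rho-k singular values

abs_err = np.linalg.norm(A - B, 'fro')
rel_err = abs_err / np.linalg.norm(A, 'fro')

# Theorem 6 error formulas:
print(np.isclose(abs_err, np.sqrt(np.sum(s[k:] ** 2))))                   # -> True
print(np.isclose(rel_err, np.sqrt(np.sum(s[k:] ** 2) / np.sum(s ** 2))))  # -> True
```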

3.3. Some properties of the S.V.D. [10]

Property 1: If the S.V.D. of A is given by A = U D V^T, the pseudoinverse A^+ of A will be [2][10]:

    A^+ = V D^{-1} U^T

and the singular values of A^+ are σ_ρ^{-1} ≥ σ_{ρ-1}^{-1} ≥ ... ≥ σ_1^{-1}.

Property 2: From the S.V.D. and the orthonormality of the columns of V it follows that

    tr(A^T A) = Σ_{j=1}^{ρ} σ_j²

4. DERIVATION OF THE REALIZATION ALGORITHM USING THE S.V.D.

It will be demonstrated that in the noise free case the S.V.D. delivers an exact realization algorithm which is equivalent to the Ho-Kalman algorithm but intuitively simpler, saving some computational effort. For the noisy case the S.V.D. will deliver, for a chosen r, the best approximation in the least squares sense of the realization.

4.1. The noise free case

The whole procedure is based upon the S.V.D. of the Hankel matrix H_k. Having exact Markov parameters it is always possible to find k ≥ r, the system order. For such a case, referring to theorems 2, 3, 4, 5 and property 1, we have:

    H_k = U D V^T    (16)

where, according to theorem 2, the rank will be n = n_0 and D is an n_0 × n_0 diagonal matrix. The Ho-Kalman algorithm requires

    P H_k Q = (E_n^{qk})^T E_n^{pk}    (17)

while (16) and the orthonormality of the columns of U and V give

    (D^{-1} U^T) H_k V = I_n    (18)

Comparison of (17) and (18) may lead to the following equivalence (infinitely many possibilities are available, however):

    E_n^{qk} P = D^{-1} U^T    (19)

    Q (E_n^{pk})^T = V    (20)

With the aid of (19) and (20) the Ho-Kalman algorithm equations can be rewritten in the following way:

    A = D^{-1} U^T (σH_k) V    (21)

    B = D^{-1} U^T H_k (E_p^{pk})^T = V^T (E_p^{pk})^T    (22)

    C = E_q^{qk} H_k V = E_q^{qk} U D    (23)

(28)

where<.r.1.!." is the :llready known sl!i[t(~d lIankL~1 matrix. It rL'm:l.lns only to

be shown that the realization (21), (22), (23) fulfils requirements of.

"

th .. Ilo-Krllmim theorem, namely that

    P H_k Q = [ I_n          0_n^{pk-n}      ]
              [ 0_{qk-n}^n   0_{qk-n}^{pk-n} ]                   (24)

Let

    H_k = U_{qk}^n D_n (V_{pk}^n)^T

and consequently choose

    P = [ D_n^{-1} (U_{qk}^n)^T ]        Q = [ V_{pk}^n   V_{pk}^{pk-n} ]
        [ (U_{qk}^{qk-n})^T    ]

where U_{qk}^n = U and D_n = D. Here (U_{qk}^{qk-n})^T can be chosen in such a way that all its qk-n rows are orthonormal (in the qk-dimensional space, qk-n extra orthonormal columns for U_{qk} can theoretically be found*), and V_{pk}^n = V, where again V_{pk}^{pk-n} can be chosen based upon the fact that in the pk-dimensional space pk-n extra orthonormal columns for V can theoretically be found.

Remembering that the columns in U and V are orthonormal:

*) Note that this is only necessary for the proof; in actual practice the remaining space of U_{qk} need not be defined in detail.


    P H_k Q = [ D_n^{-1} (U_{qk}^n)^T ] U_{qk}^n D_n (V_{pk}^n)^T [ V_{pk}^n   V_{pk}^{pk-n} ]
              [ (U_{qk}^{qk-n})^T     ]

            = [ I_n          0_n^{pk-n}      ]
              [ 0_{qk-n}^n   0_{qk-n}^{pk-n} ]

This completes the derivation of the realization algorithm, which can be used for further consideration. Actually this shows, especially for the noise free case, that P and Q contain too many degrees of freedom, while U_{qk}^n and V_{pk}^n contain strictly the sufficient parameters necessary to construct the realization from the Hankel matrix.
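The noise free algorithm (21)-(23) can be sketched compactly in numpy. The helper below is our own illustration (the function name and the 2 × 2 test system are assumptions, the latter chosen to match Example 2 of section 5, with B = C = I): it builds H_k and σH_k from exact Markov parameters, takes the S.V.D., and forms A, B, C:

```python
import numpy as np

def svd_realization(markov, n, p, q, k):
    """Equations (21)-(23): markov = [M_0, ..., M_{2k-1}] of (q x p) blocks,
    n = system order. Returns the realization (A, B, C)."""
    H  = np.block([[markov[i + j]     for j in range(k)] for i in range(k)])
    sH = np.block([[markov[i + j + 1] for j in range(k)] for i in range(k)])
    U, s, Vt = np.linalg.svd(H)
    Un, Dn, Vn = U[:, :n], np.diag(s[:n]), Vt[:n, :].T
    A = np.linalg.inv(Dn) @ Un.T @ sH @ Vn   # (21)
    B = Vn.T[:, :p]                          # (22): V^T E_p^k
    C = (Un @ Dn)[:q, :]                     # (23): E_q^k U D
    return A, B, C

# Exact Markov parameters M_i = C A0^i B of a known 2nd order system
# (here with B = C = I, so M_i = A0^i).
A0 = np.array([[0.8, 0.2], [0.0, 0.6]])
k = 4
markov = [np.linalg.matrix_power(A0, i) for i in range(2 * k)]

A, B, C = svd_realization(markov, n=2, p=2, q=2, k=k)
# The computed realization reproduces every Markov parameter exactly
# (up to roundoff), as the derivation above guarantees.
for i in range(2 * k):
    assert np.allclose(C @ np.linalg.matrix_power(A, i) @ B, markov[i])
```

The recovered A is similar to A0, so its eigenvalues are again 0.8 and 0.6.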

4.2. Estimation of the realization for the noisy case

In the noisy case an easy test can be performed to decide which singular values are substantial and which can be neglected, by comparing their rate of decrease.

4.2.1. Estimation of the system order n_0

It is assumed that 2k Markov parameters have been estimated. From those Markov parameters it is possible to construct the Hankel and the shifted Hankel matrices H_k and σH_k. Performing the S.V.D., a vector of singular values

    σ_1 ≥ σ_2 ≥ ... ≥ σ_s ≥ 0,   where s = k · min(p, q),

is also found. Comparison of the singular values gives a solution to the order test: we determine n_0 as the dimension of the realization, and consequently we omit the smallest s - n_0 singular values.

A criterion deciding which singular values are sufficiently small is very problem-dependent. But in all investigated cases (model to model) there always existed a very sudden change in the singular values for an increasing position index. This situation is illustrated in fig. 3.


Fig. 3. Behaviour of the singular values for the ideal and the non-ideal noisy (estimated) system.

This property gives a very easy way to find a lower order estimate of the state space dimension for the system being considered.
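One simple way to automate this "sudden change" test — our heuristic sketch, not the report's prescribed criterion — is to place n_0 at the largest ratio of consecutive singular values:

```python
import numpy as np

def order_test(H):
    """Estimate n0 as the position of the largest relative drop between
    consecutive singular values (a heuristic; assumes no exact zeros)."""
    s = np.linalg.svd(H, compute_uv=False)
    ratios = s[:-1] / s[1:]
    return int(np.argmax(ratios)) + 1, s

# Hankel matrix of a 2nd order system (M_i = A0^i), slightly perturbed.
rng = np.random.default_rng(2)
A0 = np.array([[0.8, 0.2], [0.0, 0.6]])
markov = [np.linalg.matrix_power(A0, i) for i in range(8)]
H = np.block([[markov[i + j] for j in range(4)] for i in range(4)])
H_noisy = H + 1e-3 * rng.standard_normal(H.shape)

n0, s = order_test(H_noisy)
assert n0 == 2    # the drop between sigma_2 and sigma_3 dominates
```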

4.2.2. Estimation of the Hankel matrix of rank n_0

Knowing n_0 and D from the S.V.D. of H_k, according to Theorem 6 the best least squares estimate of H_k of rank n_0 can be found, where

    n_0 ≤ min(qk, pk)                                            (25)

Thus Ĥ_k, the demanded estimate of H_k, is found as:

    Ĥ_k = U_n D_n V_n^T   (where n = n_0)                        (26)

and Ĥ_k is also (qk × pk).

Ĥ_k is no longer a block symmetric matrix, due to the neglect of the (qk - n_0) × (pk - n_0) part of the full rank matrix D. As will be seen more clearly in the next section, Ĥ_k will approximate the Hankel matrix of a vari-linear system. However, if the order test was performed correctly, the nonsymmetric deviation will be rather small.


Because the Markov parameters have been found as a consistent L.S. estimate (also efficient) of the ideal ones, the S.V.D. appears as one more filtration of the noisy data in the L.S. sense. The approximation of the realization takes the following form:

    Â = D_n^{-1} U_n^T (σĤ_k) V_n                                              (27)

    B̂ = D_n^{-1} U_n^T Ĥ_k E_p^k = D_n^{-1} U_n^T U_n D_n V_n^T E_p^k = V_n^T E_p^k   (28)

    Ĉ = E_q^k Ĥ_k V_n = E_q^k U_n D_n V_n^T V_n = E_q^k U_n D_n                 (29)

There still remains one more problem to solve: the estimation of the shifted Hankel matrix σH_k. To solve this it is necessary to take a deeper look into the structure of Ĥ_k and σĤ_k.

4.2.3. Estimation of the shifted Hankel matrix

Let us recall once again the structure of the H_k and σH_k matrices:

    H_k = [ M_0      M_1      M_2   ...  M_{k-1}  ]
          [ M_1      M_2      M_3   ...  M_k      ]
          [ ...                                   ]
          [ M_{k-1}  M_k      ...        M_{2k-2} ]              (30)

    σH_k = [ M_1     M_2      ...   M_k      ]
           [ M_2     M_3      ...   M_{k+1}  ]
           [ ...                             ]
           [ M_k     M_{k+1}  ...   M_{2k-1} ]                   (31)

From equations (30) and (31) it is seen that only one element in σH_k differs from the elements of H_k, and this is the last Markov parameter estimated for the dynamical system, M_{2k-1}.


However, after the least squares fit we have:

    Ĥ_k = [ μ_0^{11}      μ_1^{12}      μ_2^{13}      ...  μ_{k-1}^{1k}  ]
          [ μ_1^{21}      μ_2^{22}      μ_3^{23}      ...  μ_k^{2k}      ]
          [ μ_2^{31}      μ_3^{32}      μ_4^{33}      ...  μ_{k+1}^{3k}  ]
          [ ...                                                          ]
          [ μ_{k-1}^{k1}  μ_k^{k2}      μ_{k+1}^{k3}  ...  μ_{2k-2}^{kk} ]    (32)

where for μ_j^{il}: i, l = 1, 2, ..., k and j = 0, 1, ..., 2k-2, which means that the block symmetry property in the Ĥ_k matrix is lost. It seems as if the Ĥ_k matrix were the Hankel matrix of a vari-linear system. Taking this into consideration, two structures of the σĤ_k matrix can be proposed:

1.

    σĤ_{1k} = [ μ_1^{12}   μ_2^{13}     ...  μ_{k-1}^{1k}   μ_k^{2k}      ]
              [ μ_2^{22}   μ_3^{23}     ...  μ_k^{2k}       μ_{k+1}^{3k}  ]
              [ μ_3^{32}   μ_4^{33}     ...  μ_{k+1}^{3k}   ...           ]
              [ ...                                         μ_{2k-2}^{kk} ]
              [ μ_k^{k2}   μ_{k+1}^{k3} ...  μ_{2k-2}^{kk}  μ_{2k-1}^{kk} ]   (33)

The last column of the shifted matrix is constructed by copying elements of the last column of Ĥ_k (shifted up by one block), which is due to the assumption about the vari-linear nature of the system described by Ĥ_k. Making such a choice we possibly commit the smallest inaccuracy in the σĤ_k estimation.


Also it is proposed to take for μ_{2k-1}^{kk} the corresponding real value of the estimate M_{2k-1}. As has been experimentally checked, the realization (27), (28) and (29) is not very sensitive to changes of μ_{2k-1}^{kk}. However, M_{2k-1} is taken from a different matrix space, which has a realization of higher order than the estimated one, as it incorporates the noise.

2.

    σĤ_{2k} = [ μ_1^{21}      μ_2^{22}      μ_3^{23}      ...  μ_k^{2k}      ]
              [ μ_2^{31}      μ_3^{32}      μ_4^{33}      ...  μ_{k+1}^{3k}  ]
              [ ...                                                          ]
              [ μ_{k-1}^{k1}  μ_k^{k2}      μ_{k+1}^{k3}  ...  μ_{2k-2}^{kk} ]
              [ μ_k^{k2}      μ_{k+1}^{k3}  ...  μ_{2k-2}^{kk}   M_{2k-1}    ]

The last row of the shifted matrix is constructed by copying elements of the last row of Ĥ_k, which again is due to the assumption about a vari-linear nature of the system described by Ĥ_k. And again it is proposed to take for μ_{2k-1}^{kk} the corresponding real value of the estimate M_{2k-1}.

There are many other possible solutions for the σĤ_k matrix estimation. However, having in mind also the simplicity of the algorithm, which in the case of multivariable and multidimensional systems can be even more important than a slight sophistication of the formalism, one of the two above mentioned methods is proposed as a solution. In the numerical experiments the realization algorithm incorporating the σĤ_{1k} shifted Hankel matrix was chosen.
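The construction of σĤ_{1k} in (33) amounts to shifting Ĥ_k one block column to the left, reusing its last block column (moved up by one block row) as the new last column, and filling the lower right corner with the estimate M_{2k-1}. A sketch (our own function, in numpy):

```python
import numpy as np

def shifted_hankel_1(H_hat, M_last, p, q, k):
    """Structure 1, eq. (33): build sigma-H_1k from the (qk x pk) estimate
    H_hat; M_last is the (q x p) estimate of M_{2k-1}."""
    sH = np.empty_like(H_hat)
    sH[:, :p * (k - 1)] = H_hat[:, p:]                        # drop 1st block column
    sH[:q * (k - 1), p * (k - 1):] = H_hat[q:, p * (k - 1):]  # last column, shifted up
    sH[q * (k - 1):, p * (k - 1):] = M_last                   # lower right corner
    return sH

# Sanity check: on an exact (block symmetric) Hankel matrix the construction
# reproduces the exact shifted Hankel matrix.
A0 = np.array([[0.8, 0.2], [0.0, 0.6]])
k = 4
markov = [np.linalg.matrix_power(A0, i) for i in range(2 * k)]
H  = np.block([[markov[i + j]     for j in range(k)] for i in range(k)])
sH = np.block([[markov[i + j + 1] for j in range(k)] for i in range(k)])
assert np.allclose(shifted_hankel_1(H, markov[2 * k - 1], 2, 2, k), sH)
```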

Now that the complete algorithm for the noisy case has been explained and its benefits will be illustrated by means of a number of examples, some remarks should be made about the drawbacks as well, and more especially about the heuristic property of the algorithm, i.e. the impossibility until now to prove the algorithm in some mathematical sense. These remarks are made in the appendix in order not to disturb the progress of the explanation of the algorithm.


5. RESULTS OF THE SIMULATION - EXAMPLES

Example 2.

Consider the two input, two output system given by the following block diagram (fig. 4).

Fig. 4. The transfer function model of the identified system.

The transfer function is:

    K_o(z) = [ 1/(z - 0.8)    0.2/((z - 0.8)(z - 0.6)) ]
             [ 0.0            1/(z - 0.6)              ]    *)

The fact that K_o(z) = K_ξ(z) means that this system is equivalent to a system with an additive white input noise, if we omit initial conditions.

The realization of the K_o(z) transfer function is given by:

    A = [ 0.8  0.2 ]    B = [ 1.0  0.0 ]    C = [ 1.0  0.0 ]
        [ 0.0  0.6 ]        [ 0.0  1.0 ]        [ 0.0  1.0 ]

and the Markov parameters are:

    M_0 = [ 1.0  0.0 ]    M_1 = [ 0.8  0.2 ]    M_2 = [ 0.64  0.28 ]
          [ 0.0  1.0 ]          [ 0.0  0.6 ]          [ 0.0   0.36 ]

    M_3 = [ 0.512  0.296 ]    M_4 = [ 0.4096  0.28   ]    ...
          [ 0.0    0.216 ]          [ 0.0     0.1296 ]

*) The poles may be distinct or common. As we neglect the initial conditions ...


Eigenvalues of the state matrix A are 0.8 and 0.6.

Case a: In order to show how the modified Ho-Kalman algorithm works for the modelling of ideal systems, the exact Markov parameters were taken as input to the algorithm. The singular value decomposition of the H_4 delivered the following singular values:

    σ_1 = 2.5701194
    σ_2 = 1.4734138
    σ_3 = 8.6947604 · 10^{-12} ≈ 0
    σ_4 = 1.5247013 · 10^{-12} ≈ 0
    σ_5 = 1.5134382 · 10^{-12} ≈ 0

It is quite clear that the system dimension is n_0 = 2. The estimate Ĥ_4 used for the realization is exactly (i.e. within the accuracy of the computer) equal to H_4, such that the system is exactly realizable. Also σĤ_4 can be evaluated exactly.

    Â = [ 0.834575   -0.068194 ]    B̂ = [ 0.568066   -0.259609 ]
        [-0.011896    0.565417 ]        [-0.288067    0.772964 ]

    Ĉ = [ 1.504170   -0.505194 ]
        [-0.560574    1.105450 ]

This realization generates exactly the ideal Markov parameters, i.e.

    M_i = Ĉ Â^i B̂ = C A^i B.

Eigenvalues of the Â matrix are:

    0.79999... ≈ 0.8
    0.59999... ≈ 0.6


Case b: An additive coloured noise sequence is generated at the output of the system. The input is generated as white noise having a rectangular density function between (-1, 1), while the noise filter input is generated as white gaussian noise with standard deviation 0.1. A sequence of 10 Markov parameters was estimated in 6 runs based on 100 samples in every run, and averaged over those 6 runs. The Markov parameters estimated this way with the G.M.R.E. method are:

    M_0 = [ 1.010  0.019 ]    M_1 = [ 0.746  0.239 ]    M_2 = [ 0.637  0.260 ]
          [-0.005  0.995 ]          [ 0.024  0.558 ]          [ 0.009  0.385 ]

    M_3 = [ 0.481  0.294 ]    M_4 = [ 0.439  0.315 ]    M_5 = [ 0.362  0.243 ]
          [ 0.030  0.201 ]          [ 0.001  0.084 ]          [ 0.030  0.304 ]

    M_6 = [ 0.315  0.188 ]    M_7 = [ 0.237  0.168 ]    M_8 = [ 0.102  0.14  ]
          [ 0.036  0.038 ]          [ 0.032  0.019 ]          [ 0.072  0.034 ]

    M_9 = [ 0.027  0.083 ]
          [ 0.083  0.018 ]

The S.V.D. of the H_4 delivered the following singular values:

    σ_1 = 2.591692    σ_5 = 0.146380
    σ_2 = 1.473426    σ_6 = 0.088079
    σ_3 = 0.231209    σ_7 = 0.039385
    σ_4 = 0.199619    σ_8 = 0.021803

Fig. 5. Singular values of the H_4 matrix.


Again it is obvious that the dimension n_0 = 2 will give the best approximation. The estimate Ĥ_4 used for the realization is no longer equal to H_4, but the deviation is really very small.

As shown before, the overall relative error will be:

    R = ||Ĥ - H|| / ||H||
      = ( trace{ U (D_k - D_n) V^T V (D_k - D_n) U^T } / trace{ U D_k V^T V D_k U^T } )^{1/2}
      = ( Σ_{i=3}^{8} σ_i² / Σ_{i=1}^{8} σ_i² )^{1/2} ≈ 0.12

So the relative error in each element of H will be in the region of 12%. This overall relative error of 12% is accomplished by an error of about 4% for the big numbers (≈ 1) and by an error of up to about 100% for the small numbers (≈ 0.01). As the estimates of the Markov parameters have been given before, the differences between H and Ĥ can be checked, given the following found Ĥ_4:

    Ĥ_4 =
    [  0.975  0.024 |  0.766  0.222 |  0.638  0.272 |  0.520  0.268 ]
    [ -0.002  0.990 |  0.025  0.600 |  0.021  0.408 |  0.039  0.276 ]
    [  0.766  0.244 |  0.607  0.300 |  0.505  0.298 |  0.417  0.282 ]
    [  0.014  0.599 |  0.027  0.366 |  0.023  0.251 |  0.032  0.171 ]
    [  0.637  0.268 |  0.507  0.298 |  0.423  0.281 |  0.350  0.257 ]
    [  0.003  0.409 |  0.014  0.248 |  0.011  0.170 |  0.018  0.115 ]
    [  0.517  0.293 |  0.413  0.288 |  0.344  0.260 |  0.287  0.230 ]
    [ -0.000  0.284 |  0.007  0.172 |  0.006  0.117 |  0.011  0.080 ]
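The overall relative error quoted above follows directly from the listed singular values; a quick check of the arithmetic:

```python
import math

# Singular values of H_4 reported for case b.
s = [2.591692, 1.473426, 0.231209, 0.199619,
     0.146380, 0.088079, 0.039385, 0.021803]

# R = sqrt( sum_{i=3..8} sigma_i^2 / sum_{i=1..8} sigma_i^2 )
R = math.sqrt(sum(x * x for x in s[2:]) / sum(x * x for x in s))
assert abs(R - 0.12) < 0.005    # R is about 0.118, i.e. roughly 12%
```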

The matrix Ĥ_4 has "almost" a block symmetric structure, and the assumption of a slight vari-linearity of the system modelled by Ĥ_4, made for the sake of the σĤ_4 reconstruction, seems to be quite reasonable.

The second order approximation of the realization is found using the simplest method for the estimation of σĤ_4, i.e. filling in M_7 for the lacking element of σĤ_4. The approximation of the realization takes the form:

    Â = [ 0.847094   0.053035 ]    B̂ = [-0.539030  -0.299207 ]    Ĉ = [-1.45438    0.55351 ]
        [-0.106906   0.576929 ]        [ 0.345924  -0.742041 ]        [-0.677517  -1.06114 ]


This realization generates a very good sequence of Markov parameter estimates, being very close to the original ones (for the first five Markov parameters):

    Ideal Markov            Markov parameters generated   Markov parameters generated
    parameters {M_i}        via Ho-Kalman realization     via S.V.D. realization
                            {M̄_i}                         {M̂_i}

    M_0 = [1.0  0.0]        M̄_0 = [ 1.0      0.00151]     M̂_0 = [ 0.97543   0.024434]
          [0.0  1.0]              [-0.0041   1.0    ]           [-0.001871  0.99013 ]

    M_1 = [0.8  0.2]        M̄_1 = [ 0.77918   0.20118]    M̂_1 = [0.7797    0.20660 ]
          [0.0  0.6]              [-0.005519  0.59000]          [0.024006  0.618721]

    M_2 = [0.64  0.28]      M̄_2 = [ 0.6066    0.27492]    M̂_2 = [0.62817  0.28213]
          [0.0   0.36]            [-0.005671  0.34747]          [0.03511  0.3915 ]

    M_3 = [0.512  0.296]    M̄_3 = [ 0.47206    0.28393]   M̂_3 = [0.50902   0.29902]
          [0.0    0.216]          [-0.0052270  0.20416]         [0.003813  0.25172]

    M_4 = [0.4096  0.28  ]  M̄_4 = [ 0.36716    0.26230]   M̂_4 = [0.41430   0.28718 ]
          [0.0     0.1296]        [-0.0045473  0.11957]         [0.036940  0.164871]

The minimal polynomial coefficients for the Ho-Kalman and S.V.D. realizations, compared with the ideal ones, are:


    Minimal polynomial      Minimal polynomial       Minimal polynomial
    coefficient,            coefficient,             coefficient,
    ideal system            Ho-Kalman realization    S.V.D. realization

    a_1 = -0.48             ā_1 = -0.46              â_1 = -0.49
    a_2 =  1.4              ā_2 =  1.37              â_2 =  1.42

Comparison of the eigenvalues of the state matrices of the Ho-Kalman realization and the S.V.D. realization shows the following:

    Ho-Kalman realization:   λ_1 = 0.777,    λ_2 = 0.593
    S.V.D. realization:      λ_1 = 0.7906,   λ_2 = 0.6035

while the ideal eigenvalues are λ_1 = 0.8, λ_2 = 0.6.

It also turns out to be a very interesting experience to observe the norms of the following matrices:

    ||M_k - M̄_k||_Ho    and    ||M_k - M̂_k||_S.V.D.

where

    M_k = (M_0  M_1  M_2  ...  M_k)^T
    M̄_k = (M̄_0  M̄_1  M̄_2  ...  M̄_k)^T      (Ho-Kalman realization)
    M̂_k = (M̂_0  M̂_1  M̂_2  ...  M̂_k)^T      (S.V.D. realization)


For k = 2, which is the sufficient index to construct the Ho-Kalman approximation of the realization, there is:

    ||M_2 - M̄_2||_Ho = 0.0015,   thus for k = 2:   ||M_2 - M̄_2||_Ho < ||M_2 - M̂_2||_S.V.D.

For k = 4:

    ||M_4 - M̄_4||_Ho = 0.0216,   ||M_4 - M̂_4||_S.V.D. = 0.02149,
    so  ||M_4 - M̄_4||_Ho ≈ ||M_4 - M̂_4||_S.V.D.

but for k > 4, for example k = 10:

    ||M_10 - M̄_10||_Ho = 0.1976,   ||M_10 - M̂_10||_S.V.D. = 0.03544,
    so  ||M_10 - M̄_10||_Ho > ||M_10 - M̂_10||_S.V.D.

This induces the conclusion that, while the Ho-Kalman approximation of the realization gives a slightly better modelling of the system in the transient state, the S.V.D. approximation gives an almost equally good approximation in the transient and the steady state, which means that it gives a better overall fit. This phenomenon is caused by the fact that the S.V.D. incorporates more of the available information contained in the noisy data than the Ho-Kalman algorithm.

Case c: An additive coloured noise is generated at the output of the system. The input is generated as white noise having a rectangular density function between (-1, 1), while the noise filter input is generated as white gaussian noise with standard deviation 0.5.


The Markov parameters estimated under such conditions with the G.M.R.E. method are (10 Markov parameters, 100 samples):

    M_0 = [ 1.11    0.2  ]    M_1 = [ 0.439   0.465]    M_2 = [ 0.439   0.144]
          [-0.112   1.09 ]          [ 0.0601  0.490]          [-0.0247  0.571]

    M_3 = [ 0.101   0.22 ]    M_4 = [ 0.294   0.32  ]   M_5 = [ 0.377   0.044]
          [ 0.110   0.189]          [ 0.0318 -0.102 ]         [ 0.073  -0.191]

    M_6 = [ 0.622  -0.022 ]   M_7 = [ 0.677   0.124 ]   M_8 = [ 0.342   0.32 ]
          [ 0.0912 -0.0362]         [ 0.0423 -0.0287]         [ 0.235   0.070]

    M_9 = [ 0.298   0.268 ]
          [ 0.277   0.0038]

The S.V.D. of the H_4 delivered the following singular values:

    σ_1 = 2.0950862
    σ_2 = 1.3087709
    σ_3 = 6.8433805 · 10^{-1}
    σ_4 = 5.5023061 · 10^{-1}
    σ_5 = 5.0606823 · 10^{-1}
    σ_6 = 2.5954151 · 10^{-1}
    σ_7 = 1.326579  · 10^{-1}
    σ_8 = 3.036047  · 10^{-2}

Fig. 6. Singular values of the H_4 matrix.


The result of the dimension test is less pronounced in this case, but still it is possible to decide for n_0 = 2. The estimate Ĥ_4 used for the realization certainly suffers a deviation from the block symmetry property, but also this deviation is rather small considering the noise level. Again the second order approximation of the realization is found using the simplest method of the σĤ_4 estimation.

The approximation of the realization takes the form:

    Â = [ 0.741028   -0.0183221 ]    B̂ = [-0.432248  -0.56872  ]    Ĉ = [-1.12064   -1.01055  ]
        [-0.110177    0.33448   ]        [ 0.662446  -0.609601 ]        [ 0.776194  -0.865162 ]

Considering the noise level of 50%, this realization is also a very good approximation of the original one, having eigenvalues of Â:

    λ_1 = 0.74593   (ideal: 0.8)
    λ_2 = 0.32957   (ideal: 0.6)

Example 3

The following system, having the structure presented in fig. 4, was simulated under the following conditions:

    K_o(z) = C (zI - A)^{-1} B

where

    A = [ 0.8  0    0   ]    B = [-1   1 ]    C = [ 1   0  -1 ]
        [ 0    0.4  0   ]        [ 1   1 ]        [ 0   1   0 ]
        [ 0    0    0.2 ]        [ 0   1 ]

and

    K_ξ(z) = C_ξ (zI - A_ξ)^{-1} B_ξ,   where   A_ξ = [ 0.1  0   ]
                                                      [ 0    0.7 ]


The Markov parameters for this system are:

    M_0 = C B = [-1.0  0.0]    M_1 = C A B = [-0.8  0.6]    M_2 = [-0.64  0.6 ]
                [ 1.0  1.0]                  [ 0.4  0.4]          [ 0.16  0.16]

    M_4 = [-0.4096  0.408 ]    and so on.
          [ 0.0256  0.0256]

The intensity of the simulated noise was 10% of the output signal amplitude. Eigenvalues of the A matrix: λ_1 = 0.8, λ_2 = 0.4, λ_3 = 0.2.

Estimates of the Markov parameters derived via the G.M.R.E. method are:

    M̂_0 = [-0.994  -0.003]    M̂_1 = [-0.788  0.586]    M̂_2 = [-0.611  0.57 ]
          [ 1.0     1.0  ]          [ 0.403  0.397]          [ 0.162  0.166]

    M̂_3 = [-0.469  0.471]    M̂_4 = [-0.347  0.376]    M̂_5 = [-0.246  0.27 ]    ...
          [ 0.069  0.068]          [ 0.028  0.028]          [ 0.015  0.010]

Singular values of the H_4 matrix for such a set of Markov parameters are:

    σ_1 = 2.7706702
    σ_2 = 1.7184590
    σ_3 = 3.582625  · 10^{-1}
    σ_4 = 3.7090322 · 10^{-2}
    σ_5 = 8.1052389 · 10^{-3}
    σ_6 = 3.819628  · 10^{-3}
    σ_7 = 2.842466  · 10^{-3}
    σ_8 = 1.0787837 · 10^{-3}


Fig. 7. Singular values of the H_4 matrix.

In this case the testing of the ratios of consecutive singular values, σ_i/σ_{i+1}, is more pronounced:

Fig. 8. Ratios of the singular values of the H_4 matrix.


After approximating H_4 with the Ĥ_4 calculated from the S.V.D. and applying the realization algorithm, a third order approximation (Â, B̂, Ĉ) of the realization is obtained. The eigenvalues of the Â matrix are:

    λ_1 = 0.7325,   λ_2 = 0.4075,   λ_3 = 0.2875

which again is a very good estimate of the original ones.

Example 4

This case deals with the noise free, two input / two output system of the first order, i.e. having dimension 1. This example is meant to show that in such a case the S.V.D. aided realization algorithm is also capable of delivering correct results.

[Block diagram: inputs U_1(z), U_2(z); transfer function K(z); outputs Y_1(z), Y_2(z).]

    K(z) = [ -1.0/(z - 0.5)    1.0/(z - 0.5) ]
           [  2.0/(z - 0.5)   -2.0/(z - 0.5) ]

A realization of this system can be:

    A = 0.5,    B = (-1   1),    C = [ 1 ]
                                     [-2 ]

The Markov parameters are:

    M_0 = [-1   1]    M_1 = [-0.5   0.5]    M_2 = [-0.25   0.25]    with  M_{i+1} = 0.5 M_i
          [ 2  -2]          [ 1.0  -1.0]          [ 0.5   -0.5 ]

The singular values of the H_4 are:

    σ_1 = 4.1999
    σ_2 = 1.4551915 · 10^{-11} ≈ 0

and σ_i = 0 for i = 3, 4, ..., 8. This gives an exact answer: n_0 = 1.

The realization computed from the Ĥ_4 is:

    Â = 0.5,    B̂ = (-0.613572   0.613572) = 0.613572 · (-1.0   1.0),    Ĉ = [ 1.6298 ]
                                                                              [-3.2596 ]

Then, via the similarity relation for T = 0.613572, we have:

    Â = T A T^{-1} = 0.613572 · 0.5 · 1.6298 = 0.5

    B̂ = T B = 0.613572 · (-1.0   1.0) = (-0.613572   0.613572)

    Ĉ = C T^{-1} = 1.6298 · [ 1 ] = [ 1.6298 ]
                            [-2 ]   [-3.2596 ]

so the computed realization is similar to the original one.

(47)

(,. CONCLUS TONS

Along this report there has been made an attempt to unify and redefine some notions important for the multivariable systems theory. Also another version of the realization algorithm has been proposed. This algorithm, called the Singular Value Decomposition Realization, shows some important and jnteresting properties:

I. For the noise free case the dimension test is an intrinsic part of the S.V.Il. realization.

2. For the no~sy case the S.V.Il. clgorithm delivers an excellent and convincing dimension test, which can be implemented

automatically.

3. For the noise free case the S.V.D. realization ~s equivalent to the Ho-Kalman realization (see 4.1.) and in the noisy case the equivalency exists as well, if the minimal amount of Markov parameters is used to constitute a Hankel matrix

H.

4. For the noisy case the S.V.Il. algorithm lead" to an

approximate realization, which is hased upon all estimated Markov parameters improving an overall fit of the model to

a g~ven system.

The whole procedure illustrating the identification of the multivariable

system in terms of the Markov parameters may be represented in the


    System
      → C.M.R.E. method → {M̂_i}
      → Hankel matrix H_k → S.V.D. → order test
      → estimate Ĥ_k → S.V.D.R. (realization, uses only Ĥ_k) → {M̂_i}
      in parallel: Ho-Kalman realization (uses only H_k of minimal size) → {M̄_i},
                   and other realizations based on H_k
      → comparison → conclusions
