BEHAVIORS DEFINED BY RATIONAL FUNCTIONS

Jan C. Willems, University of Leuven, B-3001 Leuven, Belgium
Jan.Willems@esat.kuleuven.be, www.esat.kuleuven.be/~jwillems

Yutaka Yamamoto, Kyoto University, Kyoto 606-8501, Japan
yy@i.kyoto-u.ac.jp, www-ics.acs.i.kyoto-u.ac.jp/~yy

Abstract— In this article, behaviors defined by ‘differential equations’ involving matrices of rational functions are introduced. Conditions in terms of controllability, stabilizability, and observability for the existence of rational representations that are prime over various rings are derived. Elimination of latent variables, image-like representations of controllable systems, and the structure of the rational annihilators of a behavior are discussed.

Index Terms— Behaviors, rational functions, controllability, stabilizability, observability, annihilators.

I. INTRODUCTION

In the behavioral approach, a system is viewed as a family of time trajectories, called the behavior of the system. Of course, this family must be specified somehow. The usual approach is to specify it as the set of solutions of a system of differential equations. However, specifications involving integral equations (as convolutions) and transfer functions are also common. In these situations it is not always clear how the behavior is actually defined. The present article deals with representations of behaviors that are specified in terms of matrices of rational functions (which in a sense may be thought of as transfer functions).

Until now, the behavioral theory of linear time-invariant differential systems has been dominated by polynomial matrix representations, and a rather complete theory, including control, H∞-theory, etc., has been developed starting from such representations. Unfortunately, contrary to more conventional approaches, representations using rational functions have been neglected (other than in the context of input/output representations of controllable systems). In fact, only recently has the basic idea of how to define a behavior in terms of rational functions been introduced, in [10], for discrete-time systems. In this paper, we deal with continuous-time systems.

A few words about the notation and nomenclature used.

We use standard symbols for the sets R, N, Z, etc. C denotes the complex numbers, and C_+ the closed right half of the complex plane, C_+ := {s ∈ C | Re(s) ≥ 0}. We use R^n, R^{n×m}, etc. for vectors and matrices. When the number of rows or columns is immaterial (but finite), we use the notation R^•, R^{•×•}, etc. Of course, when we then add or multiply vectors or matrices, we assume that the dimensions are compatible. Matrices of polynomials and rational functions play an important role in this paper. Some of the properties which we will use are collected in the appendix for easy reference. C^∞(R, R^n) denotes the set of infinitely differentiable functions from R to R^n. The notation rank, dim, rowdim, coldim, det, ker, im, degree, etc. is self-explanatory. The notation diag(M_1, M_2, · · · , M_n) denotes the block matrix with the matrices M_1, M_2, · · · , M_n on the block diagonal and zeros elsewhere, and row(M_1, M_2, · · · , M_n) denotes the block matrix obtained by stacking them next to each other. I denotes the identity matrix, and 0 the zero matrix. When we want to emphasize the dimension, we write I_n and 0_{n1×n2}.

II. REVIEW: POLYNOMIAL REPRESENTATIONS

A dynamical system is a triple Σ = (T, W, B), with T ⊆ R the time-set, W the signal space, and B ⊆ W^T the behavior. Hence a behavior is just a family of functions of time, mappings from T to W. In this article, we deal exclusively with continuous-time systems, T = R, with a finite-dimensional signal space, W = R^w. Moreover, we assume throughout that our systems are (i) linear (meaning that B is a linear subspace of (R^w)^R), (ii) time-invariant (meaning that B = σ^t(B) for all t ∈ R, where the shift σ^t is defined by (σ^t f)(t′) := f(t′ + t)), and (iii) differential. This means that the behavior consists of the set of solutions of a system of differential equations. We now describe property (iii) more precisely in the linear time-invariant case.

Each of the behaviors B ⊆ (R^w)^R which we consider is the solution set of a system of linear constant-coefficient differential equations. In other words, there exists a polynomial matrix R ∈ R[ξ]^{•×w} such that B is the solution set of

R(d/dt) w = 0.    (R)

We need to make precise when we call w : R → R^w a solution of (R). Since generality in this respect is not germane to our aims, we shall deal only with infinitely differentiable solutions. Hence (R) defines the dynamical system Σ = (R, R^w, B) with

B = {w ∈ C^∞(R, R^w) | (R) is satisfied}.

Note that we may as well denote this as B = ker R(d/dt), since B is actually the kernel of the differential operator R(d/dt) : C^∞(R, R^{coldim(R)}) → C^∞(R, R^{rowdim(R)}).

We denote the set of linear time-invariant differential systems, or their behaviors, by L^•, and by L^w when the number of variables is w. Note that a behavior B ∈ L^w is defined in terms of the representation (R), as B = ker R(d/dt), with R some polynomial matrix in R[ξ]^{•×w}. The analogous discrete-time system can be defined without involving a representation. Indeed, if B ⊆ (R^w)^Z is linear, shift-invariant, and closed in the topology of pointwise convergence, then there exists an R ∈ R[ξ]^{•×•} such that B = ker R(σ). Hence, in this case, the representation as the kernel of a difference operator is deduced from properties of the behavior. Unfortunately, we know of no continuous-time analogue of this result (see [8, page 279] for some remarks concerning this point).

III. RATIONAL REPRESENTATIONS

The aim of this article is to discuss other representations of L^•, namely representations by means of matrices of rational functions. These play a very important role in the field, in the context of robust stability, system topologies, the parametrization of all stabilizing controllers, etc.

Let G ∈ R(ξ)^{•×•}, and consider the system of ‘differential equations’

G(d/dt) w = 0.    (G)

Since G is a matrix of rational functions, it is not clear when w : R → R^w is a solution of (G). This is not a question of smoothness, but a matter of giving a meaning to the equality, since G(d/dt) is not a differential operator. We do this as follows (see the appendix for the nomenclature used).

Definition 1: Let (P, Q) be a left co-prime matrix factorization of G = P^{-1} Q over R[ξ]. Define

[[w : R → R^w is a solution of (G)]] :⇔ [[Q(d/dt) w = 0]].

We denote the set of solutions of (G) by ker G(d/dt). Whence (G) defines the system Σ = (R, R^w, ker G(d/dt)) = (R, R^w, ker Q(d/dt)) ∈ L^•.

For example, for G ∈ R(ξ), G = q/p with p, q ∈ R[ξ] co-prime, the set of solutions of (q/p)(d/dt) w = 0 is defined to be equal to that of the differential equation q(d/dt) w = 0. In this case ker G(d/dt) is finite dimensional, with dimension equal to degree(q). Our interest is mainly in the case that G is ‘wide’: more columns than rows. For example, the behavior of

(q_1/p_1)(d/dt) w_1 + (q_2/p_2)(d/dt) w_2 = 0,

with p_1, q_1 ∈ R[ξ] and p_2, q_2 ∈ R[ξ] both co-prime, is equal to the set of solutions of

p′_2 q_1(d/dt) w_1 + p′_1 q_2(d/dt) w_2 = 0,

where p_1 = d p′_1, p_2 = d p′_2, d ∈ R[ξ], and p′_1, p′_2 co-prime. This implies that, because of common factors, the behavior of G_1(d/dt) w_1 = G_2(d/dt) w_2 with G_1, G_2 ∈ R(ξ), G_1 ≠ 0, is not necessarily equal to the behavior of w_1 = (G_1^{-1} G_2)(d/dt) w_2. Since the representations (R) are merely a subset of the representations (G), matrices of rational functions form a representation class of L^• that is more redundant than the polynomial matrices.
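Definition 1 reduces a rational representation to a polynomial one through a left co-prime factorization, and for a single row of rational functions this amounts to clearing the least common denominator, as in the two-term example above. A minimal SymPy sketch (the function name and the sample fractions are our own illustration, not from the paper):

```python
import sympy as sp

xi = sp.symbols('xi')

def left_coprime_row(G):
    """Left co-prime factorization G = P^{-1} Q over R[xi] of a 1 x n row
    of rational functions: P is the lcm of the denominators, Q_i = P * G_i."""
    dens = [sp.fraction(sp.cancel(g))[1] for g in G]
    P = sp.lcm(dens)
    Q = [sp.cancel(P * g) for g in G]
    # co-primeness: P and the entries of Q have no common root
    assert sp.gcd(Q + [P]) == 1
    return sp.expand(P), Q

# G = [1/(xi+1), 1/((xi+1)(xi+2))]: the common denominator is (xi+1)(xi+2)
P, Q = left_coprime_row([1/(xi + 1), 1/((xi + 1)*(xi + 2))])
print(P)   # xi**2 + 3*xi + 2
print(Q)   # [xi + 2, 1]
```

By Definition 1, the behavior of this G(d/dt) w = 0 is then ker Q(d/dt), i.e. the solutions of (d/dt + 2) w_1 + w_2 = 0.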

Definition 1 may evoke some scepticism, since the denominator P of the co-prime factorization G = P^{-1} Q does not enter into the specification of the solution set, other than through the co-primeness requirement on P, Q. We now mention two other views which support Definition 1.

1. Factor G as G = R + F with R ∈ R[ξ]^{•×•} and F ∈ R(ξ)^{•×•} strictly proper. Let F(s) = C(Is − A)^{-1} B be a controllable (in the usual sense) realization of F. Consider the system

(d/dt) x = Ax + Bw,   0 = R(d/dt) w + Cx.    (LS)

This defines a set of w-trajectories w : R → R^w. It equals ker G(d/dt). More precisely, the w-behavior of (LS), i.e.

{w ∈ C^∞(R, R^w) | ∃ x ∈ C^∞(R, R^•) such that (LS) holds},

is equal to ker G(d/dt).

2. Consider the (unique) controllable input/output system w ↦ y with transfer function G. Now consider its zero dynamics, i.e. the ‘inputs’ corresponding to the ‘output’ y = 0. This set of ‘inputs’ equals ker G(d/dt).

That B ∈ L^• admits a representation as the kernel of a polynomial matrix in d/dt is a matter of definition. However, representations using the ring of proper stable rational functions (see the appendix for definitions) play a very important role. We state this representability in the next proposition.

Proposition 2: Let B ∈ L^•. There exists G ∈ R(ξ)_S^{•×•} such that B = ker G(d/dt).

Proof: Assume that B = ker R(d/dt) with R ∈ R[ξ]^{•×•} of full row rank (such a representation always exists; see [5, theorem 2.5.25]). Let λ ∈ R, λ > 0, be such that rank(R(−λ)) = rank(R), and let n ∈ N be such that R(ξ)/(ξ + λ)^n is proper. Now take G(ξ) = R(ξ)/(ξ + λ)^n. The factorization G = P^{-1} R with P(ξ) = (ξ + λ)^n I_{rowdim(R)} is left co-prime. ∎
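The construction in the proof of Proposition 2 can be carried out explicitly in the scalar case. A sketch using SymPy (the helper name, the choice λ = 2, and the sample R are our own, not from the paper):

```python
import sympy as sp

xi = sp.symbols('xi')

def stable_proper_G(R, lam):
    """Proposition 2, scalar case: given R in R[xi] and lam > 0 with
    R(-lam) != 0, return G = R/(xi+lam)^n in R(xi)_S with ker G(d/dt) = ker R(d/dt)."""
    assert lam > 0 and R.subs(xi, -lam) != 0
    n = sp.degree(R, xi)                  # smallest n making G proper
    P = sp.expand((xi + lam)**n)
    assert sp.gcd(P, sp.expand(R)) == 1   # the factorization G = P^{-1} R is co-prime
    return R / P

G = stable_proper_G(xi**2 - 1, 2)
num, den = sp.fraction(sp.cancel(G))
print(sp.degree(den, xi) - sp.degree(num, xi))   # 0: G is proper (even bi-proper)
```

The only pole of this G is at −2, so G indeed lies in R(ξ)_S, while its kernel is still that of R(d/dt).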

IV. CONTROLLABILITY AND STABILIZABILITY

In this section, we relate controllability and stabilizability of a system to properties of its rational representations. We first recall the behavioral definitions of these notions.

Definition 3: The time-invariant system Σ = (R, R^w, B) is said to be controllable if for all w_1, w_2 ∈ B there exist T ≥ 0 and w ∈ B such that w(t) = w_1(t) for t < 0 and w(t) = w_2(t − T) for t ≥ T (see figure 1).

[Figure 1. Controllability]

It is said to be stabilizable if for all w ∈ B there exists w′ ∈ B such that w′(t) = w(t) for t < 0 and w′(t) → 0 for t → ∞ (see figure 2).

[Figure 2. Stabilizability]

Observe that for B ∈ L^•, controllability ⇒ stabilizability. Denote the controllable elements of L^• by L^•_contr, and those of L^w by L^w_contr; denote the stabilizable elements of L^• by L^•_stab, and those of L^w by L^w_stab. It is easy to derive tests for controllability and stabilizability in terms of kernel representations.

Proposition 4: (G) defines a controllable system iff G has no zeros, and a stabilizable one iff G has no zeros in C_+.

Proof: Factor G in terms of the Smith-McMillan form (see the appendix for the notation) as G = (ΠU^{-1})^{-1} ZV. By the definition of ker G(d/dt), ker G(d/dt) = ker ZV(d/dt). The system ZV(d/dt) w = 0 is known to be controllable iff all the ζ_k’s are equal to 1 [5, theorem 5.2.10], and stabilizable iff all the ζ_k’s are Hurwitz [5, theorem 5.2.30]. ∎
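The zero-location test of Proposition 4 is easy to mechanize: for a full-row-rank polynomial matrix, the zeros are the roots of the gcd of its maximal (row-size) minors. A SymPy sketch (the helper and the two sample matrices are ours, not from the paper):

```python
import itertools

import sympy as sp

xi = sp.symbols('xi')

def zeros_of(R):
    """Zeros of a full-row-rank polynomial matrix R: the roots of the
    gcd of its maximal (row-size) minors."""
    n, m = R.shape
    minors = [R[:, list(cols)].det()
              for cols in itertools.combinations(range(m), n)]
    g = sp.Poly(sp.gcd(minors), xi)
    return sp.roots(g) if g.degree() > 0 else {}

# R1 has no zeros: the behavior is controllable (Proposition 4)
R1 = sp.Matrix([[xi, -(xi + 1)]])
print(zeros_of(R1))        # {}
# R2 has a zero at +1, which lies in C_+: not stabilizable
R2 = sp.Matrix([[xi*(xi - 1), (xi - 1)*(xi + 1)]])
print(zeros_of(R2))        # {1: 1}
```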

The following result links controllability and stabilizability of systems in L^• to the existence of left prime representations over the rings R[ξ] and R(ξ)_S, respectively.

Theorem 5:
1) B ∈ L^• is controllable iff it admits a representation (R) with R ∈ R[ξ]^{•×•} left prime over R[ξ].
2) B ∈ L^• iff it admits a representation (G) with G left prime over R(ξ)_P.
3) B ∈ L^• is stabilizable iff it admits a representation (G) with G ∈ R(ξ)_S^{•×•} left prime over R(ξ)_S.

Proof:
1): Each B ∈ L^• admits a representation (R) with R of full row rank [5, theorem 2.5.25]. This representation is controllable iff R(λ) ∈ C^{•×•} has full row rank for all λ ∈ C (see [5, theorem 5.2.10]), equivalently iff R is left prime over R[ξ].
2): ‘if’: by definition. The proof of the ‘only if’ part is analogous to that of the ‘only if’ part of 3). Just take S in that proof such that it does not contain any zeros of R.
3) ‘if’: G left prime over R(ξ)_S implies that it has no zeros in C_+. Now apply Proposition 4.

3) ‘only if’: As a preamble to the general case, assume first that B ∈ L^w is described by a scalar equation

r_1(d/dt) w_1 + r_2(d/dt) w_2 + · · · + r_w(d/dt) w_w = 0

with r_1, r_2, . . . , r_w ∈ R[ξ]. Since B is stabilizable, r_1, r_2, . . . , r_w have no common roots in C_+. Take p ∈ R[ξ] Hurwitz, co-prime with r_1 r_2 · · · r_w, and with degree(p) = max({degree(r_1), degree(r_2), . . . , degree(r_w)}). Then

(r_1/p)(d/dt) w_1 + (r_2/p)(d/dt) w_2 + · · · + (r_w/p)(d/dt) w_w = 0

is a representation of B that is left prime over R(ξ)_S.
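The scalar step above can be made concrete: divide each coefficient polynomial by a Hurwitz p of maximal degree sharing no root with the r_i. A SymPy sketch (our own choice p = (ξ+1)^d, which is admissible only when no r_i vanishes at −1):

```python
import sympy as sp

xi = sp.symbols('xi')

def stable_row(r):
    """Divide r_1 w_1 + ... + r_w w_w = 0 through by a Hurwitz p with
    degree(p) = max degree(r_i), giving a row over R(xi)_S."""
    d = max(sp.degree(ri, xi) for ri in r)
    p = (xi + 1)**d                                 # Hurwitz: its only root is -1
    assert all(ri.subs(xi, -1) != 0 for ri in r)    # p co-prime with each r_i
    return [sp.cancel(ri / p) for ri in r]

row = stable_row([xi - 1, xi**2 + 1])
# every entry is proper and has its poles only at -1
for f in row:
    num, den = sp.fraction(f)
    print(sp.degree(den, xi) >= sp.degree(num, xi))   # True
```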

In order to prove the general case, we first establish the following lemma.

Lemma 6: Consider R ∈ R[ξ]^{n×n} with det(R) ≠ 0. Let S ⊂ C be symmetric w.r.t. the real axis and with a non-empty intersection with the real axis. There exists P ∈ R[ξ]^{n×n} such that
1) det(P) ≠ 0,
2) det(P) has all its roots in S,
3) P^{-1} R ∈ R(ξ)^{n×n} is bi-proper.

Proof: The proof goes by induction on n. The case n = 1 is straightforward. Assume that n ≥ 2. Note that by taking (R, P) ↦ (UR, UP), we can depart from a suitable form for R obtained by pre-multiplying by a U ∈ U_{R[ξ]}. Assume therefore (e.g. using the Hermite form, see [2]) that R is of the form

R = [ R_11  R_12 ]
    [ 0     R_22 ]

with R_11 and R_22 square of dimension < n. Assume, by the induction hypothesis, that P_11 satisfies the conditions of the lemma w.r.t. R_11, and P_22 w.r.t. R_22. We now prove the lemma by taking P conformably,

P = [ P_11  P_12 ]
    [ 0     P_22 ].

We will choose P_12 such that P satisfies the conditions of the lemma w.r.t. R. Note that

P^{-1} = [ P_11^{-1}   −P_11^{-1} P_12 P_22^{-1} ]
         [ 0           P_22^{-1}                 ].

Hence

P^{-1} R = [ P_11^{-1} R_11   P_11^{-1} R_12 − P_11^{-1} P_12 P_22^{-1} R_22 ]
           [ 0                P_22^{-1} R_22                                 ].

Rewriting yields

P^{-1} R = [ P_11^{-1} R_11  0 ] [ I   R_11^{-1} R_12 R_22^{-1} P_22 − R_11^{-1} P_12 ] [ I  0              ]
           [ 0               I ] [ 0   I                                              ] [ 0  P_22^{-1} R_22 ].

Let R_11^{-1} R_12 R_22^{-1} P_22 = M + N, with M ∈ R[ξ]^{•×•} the polynomial part and N ∈ R(ξ)^{•×•} the strictly proper part. Choose P_12 = R_11 M. Then

P^{-1} R = [ P_11^{-1} R_11  0 ] [ I  N ] [ I  0              ]
           [ 0               I ] [ 0  I ] [ 0  P_22^{-1} R_22 ].

P^{-1} R satisfies the conditions of the lemma. ∎

We now return to the proof of Theorem 5. It is well known [5, theorem 3.3.22] that, up to a permutation of the columns, we can assume that R is of the form R = [R_1  R_2], with R_1 square, det(R_1) ≠ 0, and R_1^{-1} R_2 proper. Assume, for ease of exposition, that this permutation has been carried out. Choose P as in Lemma 6. Then P^{-1} R = [P^{-1} R_1  P^{-1} R_2]. Note that P^{-1} R_2 is proper, since P^{-1} R_2 = P^{-1} R_1 R_1^{-1} R_2.

We now choose S so as to obtain that (i) P^{-1} R is left prime over R(ξ)_S, and (ii) P^{-1} R(d/dt) w = 0 is a rational representation of B. This is obtained by taking any S such that S ∩ C_+ = ∅ and such that none of the zeros of R are contained in S. Note that, by stabilizability, R has no zeros in C_+. To prove that P^{-1} R satisfies (i), note that P^{-1} R ∈ R(ξ)^{•×•} is proper, has no poles (the zeros of P) and no zeros (the zeros of R) in C_+, and has a bi-proper submatrix (P^{-1} R_1). This implies, by Proposition 17, that P^{-1} R is left prime over R(ξ)_S. To prove that it satisfies (ii), note that P and R are left co-prime, since they have full row rank and the λ ∈ C where P(λ) drops rank are disjoint from those where R(λ) does. ∎


The above theorem spells out exactly what the condition is for the existence of a kernel representation that is left prime over R(ξ)_S: stabilizability. It is of interest to compare this result with the classical results obtained by Vidyasagar in his book [7] (which builds on a series of earlier results, for example [3], [11], [1]). In these publications, the aim is to obtain a representation of a system that is given as a transfer function to start with,

y = F(d/dt) u,   w = (u, y),    (F)

where F ∈ R(ξ)^{p×m}. This is a special case of (G), and, since [I_p  −F] has no zeros, this system is controllable (by Proposition 4), and therefore stabilizable. Thus, by Theorem 5, it also admits a representation G_1(d/dt) y = G_2(d/dt) u with G_1, G_2 ∈ R(ξ)_S^{•×•}, left co-prime over R(ξ)_S. This is an important, classical result. However, Theorem 5 implies that, since we are in the controllable case, there exists such a representation with the additional property that [G_1  G_2] has no zeros.

The main difference between our result and the classical left co-prime factorization results over R(ξ)_S is that we also preserve the non-controllable part, whereas in the classical approach all stabilizable systems with the same transfer function are identified. By taking a trajectory-based definition, rather than a transfer-function-based definition, the behavioral point of view is able to keep careful track of all trajectories, also the non-controllable ones. Loosely speaking, left co-prime factorizations over R(ξ)_S manage to avoid unstable pole-zero cancellations. Our approach avoids introducing common poles and zeros, as well as pole-zero cancellations, altogether. Since the whole issue of co-prime factorizations started from a need to deal with pole-zero cancellations [11], we feel that our trajectory-based mode of thinking offers a useful point of view.

At this point, we could go through the whole theory of behaviors and cast the results and algorithms in the context of rational representations, or cast the theory of co-prime factorizations over R(ξ)_S^{•×•} in the behavioral setting. We give the salient facts, without proofs, concerning three further topics: elimination of latent variables, image representations, and the structure of the rational annihilators of a behavior.

V. LATENT VARIABLES

Until now, we have dealt with representations involving the variables w only. However, many models, e.g. first-principles models obtained by interconnection, and state models, include auxiliary variables in addition to the variables the model aims at. We call the latter manifest variables, and the auxiliary variables latent variables. In the context of rational models, this leads to the model class

R(d/dt) w = M(d/dt) ℓ    (RM)

with R, M ∈ R(ξ)^{•×•}. Since we have reduced the behavior of the system of ‘differential equations’ (RM), involving rational functions, to one involving only polynomials, the elimination theorem [5, theorem 6.2.2] remains valid. Consequently, the manifest behavior of (RM), defined as

{w ∈ C^∞(R, R^w) | ∃ ℓ ∈ C^∞(R, R^•) such that (RM) holds},

belongs to L^•.

Definition 7: The latent variable representation (RM) is said to be observable if, whenever (w, ℓ_1) and (w, ℓ_2) satisfy (RM), then ℓ_1 = ℓ_2. It is said to be detectable if, whenever (w, ℓ_1) and (w, ℓ_2) satisfy (RM), then ℓ_1(t) − ℓ_2(t) → 0 as t → ∞.

The following proposition is readily obtained.

Proposition 8: (RM) is observable iff M has full column rank and has no zeros. It is detectable iff M has full column rank and has no zeros in C_+.

VI. IMAGE-LIKE REPRESENTATIONS

Consider now the following special cases of (R) and (G):

w = M(d/dt) ℓ    (M)

with M ∈ R[ξ]^{•×•}, and

w = H(d/dt) ℓ    (H)

with H ∈ R(ξ)^{•×•}. Note that the manifest behavior of (M) is the image of the differential operator M(d/dt). This representation is hence called an image representation of its manifest behavior. It is not appropriate, however, to call (H) an image representation of its manifest behavior. Indeed, for a given ℓ ∈ C^∞(R, R^•), there are, whenever H has poles, many corresponding solutions w: H(d/dt) is not a map. But in the observable case, (H) defines a map from w to ℓ. Hence, if H has full column rank and no zeros, (H) defines a map from w to ℓ. If it has no poles, it is a map from ℓ to w. Nevertheless, the well-known relation between controllability and these representations remains valid.

Theorem 9: The following are equivalent for B ∈ L^•.
1) B is controllable,
2) B admits an image representation (M),
3) B admits an observable image representation (M),
4) B admits an image representation (M) with M ∈ R[ξ]^{•×•} right prime over R[ξ],
5) B admits a representation (H) with H ∈ R(ξ)^{•×•},
6) B admits a representation (H) with H ∈ R(ξ)_S^{•×•} right prime over R(ξ)_S,
7) B admits an observable representation (H) with H ∈ R(ξ)_S^{•×•} right prime over R(ξ)_S.

VII. THE ANNIHILATORS

In this section, we study the polynomial vectors and vectors of rational functions that annihilate an element of L^w. We shall see that the polynomial annihilators form a module over R[ξ], and that the rational annihilators of a controllable system form a vector space over R(ξ).

Obviously, for n ∈ R[ξ]^w and w ∈ C^∞(R, R^w), the statements n^⊤(d/dt) w = 0 and hence, with B ∈ L^w, n^⊤(d/dt) B = 0, are well-defined. However, since we have given a meaning to (G), these statements are also well-defined for n ∈ R(ξ)^w.

Definition 10:
(i) [[n ∈ R[ξ]^w is a polynomial annihilator of B ∈ L^w]] :⇔ [[n^⊤(d/dt) B = 0]].
(ii) [[n ∈ R(ξ)^w is a rational annihilator of B ∈ L^w]] :⇔ [[n^⊤(d/dt) B = 0]].

Denote the sets of polynomial and of rational annihilators of B ∈ L^w by B_{R[ξ]} and B_{R(ξ)}, respectively. The structure of these annihilators is given in the following proposition.

Proposition 11:
1) Let B ∈ L^w. Then B_{R[ξ]} is an R[ξ]-submodule of R[ξ]^w.
2) Let B ∈ L^w_contr. Then B_{R(ξ)} is a linear R(ξ)-subspace of R(ξ)^w.

The question arises to what extent these annihilators uniquely determine the system itself.

Theorem 12:
1) Denote the set of R[ξ]-submodules of R[ξ]^w by M^w. There is a bijective correspondence between L^w and M^w, given by

B ∈ L^w ↦ B_{R[ξ]} ∈ M^w,
M ∈ M^w ↦ {w ∈ C^∞(R, R^w) | n^⊤(d/dt) w = 0 ∀ n ∈ M}.

2) Denote the set of linear R(ξ)-subspaces of R(ξ)^w by L^w. There is a bijective correspondence between L^w_contr and L^w, given by

B ∈ L^w_contr ↦ B_{R(ξ)} ∈ L^w,
L ∈ L^w ↦ {w ∈ C^∞(R, R^w) | n^⊤(d/dt) w = 0 ∀ n ∈ L}.

This theorem shows a precise sense in which a controllable linear system can be identified with a finite-dimensional linear subspace.

We now introduce the controllable part of a system, and relate it to the annihilators. In [4], the existence of a maximal controllable subsystem has been established from a sophisticated point of view. We content ourselves with the following pedestrian definition.

Definition 13: Let B ∈ L^•. The controllable part of B is defined as

B_contr := {w ∈ B | ∀ t_0, t_1 ∈ R, t_0 ≤ t_1, ∃ w′ ∈ B of compact support such that w(t) = w′(t) for t_0 ≤ t ≤ t_1}.

It is easy to see that B_contr ∈ L^•_contr.

Consider the system B ∈ L^• and its rational annihilators B_{R(ξ)}. In general, this is not a vector space over R(ξ). But its polynomial elements, B_{R[ξ]}, always form a module over R[ξ], and this module determines B uniquely. Therefore, B_{R(ξ)} determines B uniquely as well. Moreover, B_{R(ξ)} forms a vector space over R(ξ) iff B is controllable. More generally, the R(ξ)-span of B_{R(ξ)} is exactly (B_contr)_{R(ξ)}. Therefore the R(ξ)-spans of the rational annihilators of two systems are the same iff the systems have the same controllable part. Of course, other properties of systems can be deduced from these annihilators, for instance stabilizability (see Theorem 5).

VIII. CONCLUSIONS

The set of solutions of the system of ‘differential equations’ G(d/dt) w = 0, with G a matrix of rational functions, can be defined very concretely in terms of a left co-prime factorization of G. This implies that G(d/dt) w = 0 defines a linear shift-invariant differential behavior. This definition brings the behavioral theory of systems and the theory of representations using proper stable rational functions in line with each other.

IX. APPENDIX

R[ξ] denotes the set of polynomials with real coefficients in the indeterminate ξ, and R(ξ) denotes the set of real rational functions in the indeterminate ξ. R[ξ] is a ring, and R[ξ]^n is a module over R[ξ]. R(ξ) is a field, and R(ξ)^n is an n-dimensional vector space over R(ξ).

The polynomials p_1, p_2 ∈ R[ξ] are said to be co-prime if they have no roots in common. A polynomial p ∈ R[ξ] is said to be Hurwitz if it has no roots in C_+.

We now review some salient facts regarding co-prime factorizations. For general rings, see [7]. In this appendix, we deal concretely with three rings that all have R(ξ) as their field of fractions:
1) the ring R[ξ] of polynomials,
2) the ring R(ξ)_P of proper rational functions, and
3) the ring R(ξ)_S of stable proper rational functions.
Informally, this means (i) all poles at ∞, (ii) no poles at ∞, and (iii) only finite stable poles, respectively. We now review some salient facts regarding (matrices over) these rings.

An element U ∈ R[ξ]^{n×n} is said to be unimodular over R[ξ] if it has an inverse in R[ξ]^{n×n}. This is the case iff det(U) is equal to a non-zero constant. We denote the R[ξ]-unimodular elements of R[ξ]^{•×•} by U_{R[ξ]}.

M ∈ R(ξ)^{n1×n2} can be brought into a simple canonical form, called the Smith-McMillan form, using pre- and post-multiplication by elements from U_{R[ξ]}, so by pre- and post-multiplication by polynomial matrices.

Proposition 14: Let M ∈ R(ξ)^{n1×n2}. There exist U, V ∈ U_{R[ξ]} such that

M = U [ diag(ζ_1/π_1, ζ_2/π_2, · · · , ζ_r/π_r)   0_{r×(n2−r)}       ] V
      [ 0_{(n1−r)×r}                              0_{(n1−r)×(n2−r)} ]

with ζ_1, ζ_2, · · · , ζ_r, π_1, π_2, · · · , π_r non-zero elements of R[ξ], the pairs ζ_k, π_k co-prime for k = 1, 2, . . . , r, ζ_{k−1} a factor of ζ_k and π_k a factor of π_{k−1}, for k = 2, · · · , r. Of course, r = rank(M).

The roots of the π_k’s (hence of π_1, if we disregard multiplicity issues) are called the poles of M, and those of the ζ_k’s (hence of ζ_r) the zeros of M. More about the significance of the poles and the zeros may be found in [2], [6]. When M ∈ R[ξ]^{•×•}, the π_k’s are absent (they are equal to 1). We then speak of the Smith form.
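For a scalar rational function the Smith-McMillan data is simply the co-prime pair ζ/π, so poles and zeros can be read off after cancellation. A SymPy sketch (the helper and the sample function are ours):

```python
import sympy as sp

xi = sp.symbols('xi')

def poles_and_zeros(f):
    """Poles and zeros of a scalar f in R(xi): roots of the denominator
    and numerator of the cancelled (co-prime) form f = zeta/pi."""
    zeta, pi_ = sp.fraction(sp.cancel(f))
    poles = sp.roots(sp.Poly(pi_, xi))
    zeros = sp.roots(sp.Poly(zeta, xi))
    return poles, zeros

poles, zeros = poles_and_zeros((xi - 1)/(xi**2 + 3*xi + 2))
print(poles)   # simple poles at -1 and -2
print(zeros)   # a simple zero at +1
```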

M ∈ R[ξ]^{n1×n2} is said to be left prime over R[ξ] if for every factorization M = F M′ with F ∈ R[ξ]^{n1×n1} and M′ ∈ R[ξ]^{n1×n2}, F is unimodular over R[ξ].

Proposition 15: Let M ∈ R[ξ]^{n1×n2}. The following are equivalent.
1) M is left prime over R[ξ],
2) rank(M(λ)) = n1 ∀ λ ∈ C,
3) ∃ N ∈ R[ξ]^{n2×n1} such that MN = I_{n1},
4) M is of full row rank and it has no zeros.

The polynomial matrices M_1, M_2, . . . , M_n ∈ R[ξ]^{n×•} are said to be left co-prime over R[ξ] if the matrix M formed by them, M = row(M_1, M_2, . . . , M_n), is left prime over R[ξ].

The pair (P, Q) is said to be a left factorization over R[ξ] of the matrix of rational functions M ∈ R(ξ)^{n1×n2} if
(i) P ∈ R[ξ]^{n1×n1} and Q ∈ R[ξ]^{n1×n2},
(ii) det(P) ≠ 0, and
(iii) M = P^{-1} Q.
It is said to be a left co-prime factorization of M over R[ξ] if, in addition,
(iv) P and Q are left co-prime over R[ξ].

The existence of such a left co-prime factorization is readily deduced from the Smith-McMillan form. Take P = Π U^{-1}, Q = Z V, with Π = diag(π_1, π_2, . . . , π_r, 1, 1, . . . , 1) and Z = diag(ζ_1, ζ_2, . . . , ζ_r, 0, 0, . . . , 0). It is easy to see that a left co-prime factorization of M ∈ R(ξ)^{n1×n2} over R[ξ] is unique up to pre-multiplication by a unimodular element U ∈ U_{R[ξ]}.

The relative degree of f ∈ R(ξ), f = n/d with n, d ∈ R[ξ], is defined as the degree of the denominator d minus the degree of the numerator n. The rational function f ∈ R(ξ) is said to be proper if the relative degree is ≥ 0, strictly proper if it is > 0, and bi-proper if it is = 0. Denote

R(ξ)_P := {f ∈ R(ξ) | f is proper}.

R(ξ)_P is a ring, in fact a proper Euclidean domain.

M ∈ R(ξ)^{n1×n2} is said to be proper if each of its elements is proper. M ∈ R(ξ)^{n×n} is said to be bi-proper if det(M) ≠ 0 and M, M^{-1} are both proper. U ∈ R(ξ)_P^{n×n} is said to be unimodular over R(ξ)_P if it has an inverse in R(ξ)_P^{n×n}. There holds: [[U ∈ R(ξ)_P^{n×n} is unimodular over R(ξ)_P]] ⇔ [[it is bi-proper]] ⇔ [[det(U) is bi-proper]]. We denote the unimodular elements of R(ξ)_P^{•×•} by U_{R(ξ)_P}.

M ∈ R(ξ)^{n1×n2} is said to be left prime over R(ξ)_P if every factorization M = F M′ with F ∈ R(ξ)_P^{n1×n1}, M′ ∈ R(ξ)_P^{n1×n2} is such that F is unimodular over R(ξ)_P. The algebraic structure of R(ξ)_P leads to the following proposition.

Proposition 16: Let M ∈ R(ξ)^{n1×n2}. The following are equivalent.
1) M ∈ R(ξ)_P^{n1×n2} and M is left prime over R(ξ)_P,
2) M is proper, and it has an n1 × n1 submatrix that is bi-proper,
3) M ∈ R(ξ)_P^{n1×n2} and ∃ N ∈ R(ξ)_P^{n2×n1} such that MN = I_{n1}.

The matrices of rational functions M_1, M_2, . . . , M_n ∈ R(ξ)_P^{n×•} are said to be left co-prime over R(ξ)_P if the matrix M formed by them, M = row(M_1, M_2, . . . , M_n), is left prime over R(ξ)_P.

Define

R(ξ)_S := {f ∈ R(ξ) | f proper, no poles in C_+}.

Other stability domains are of interest, but we stick with the usual ‘Hurwitz’ domain for the sake of concreteness. It is easy to see that R(ξ)_S is a ring. R(ξ) is its field of fractions. In [7, page 10], it is proven that R(ξ)_S is a proper Euclidean domain.

An element U ∈ R(ξ)_S^{n×n} is said to be unimodular over R(ξ)_S if it has an inverse in R(ξ)_S^{n×n}. This is the case iff det(U) is bi-proper and miniphase (:⇔ it has no poles and no zeros in C_+). We denote the unimodular elements of R(ξ)_S^{•×•} by U_{R(ξ)_S}.
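In the scalar case this unimodularity test (bi-proper and miniphase) is easy to implement. A SymPy sketch (the helper is ours; recall C_+ is the closed right half plane):

```python
import sympy as sp

xi = sp.symbols('xi')

def is_unimodular_over_RS(u):
    """Scalar u is unimodular over R(xi)_S iff u is bi-proper and
    miniphase: no poles and no zeros with Re >= 0."""
    num, den = sp.fraction(sp.cancel(u))
    if sp.degree(num, xi) != sp.degree(den, xi):
        return False                       # not bi-proper
    critical = list(sp.roots(sp.Poly(num, xi))) + list(sp.roots(sp.Poly(den, xi)))
    return all(sp.re(r) < 0 for r in critical)

print(is_unimodular_over_RS((xi + 1)/(xi + 2)))   # True
print(is_unimodular_over_RS((xi - 1)/(xi + 2)))   # False: zero at +1 in C_+
```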

M ∈ R(ξ)^{n1×n2} is said to be left prime over R(ξ)_S if every factorization M = F M′ with F ∈ R(ξ)_S^{n1×n1}, M′ ∈ R(ξ)_S^{n1×n2} is such that F is unimodular over R(ξ)_S. The algebraic structure of R(ξ)_S leads to the following proposition.

Proposition 17: Let M ∈ R(ξ)^{n1×n2}. The following are equivalent.
1) M ∈ R(ξ)_S^{n1×n2} and M is left prime over R(ξ)_S,
2) M has no poles and no zeros in C_+, it is proper, and it has an n1 × n1 submatrix that is bi-proper,
3) M ∈ R(ξ)_S^{n1×n2} and ∃ N ∈ R(ξ)_S^{n2×n1} such that MN = I_{n1}.

The matrices of rational functions M_1, M_2, . . . , M_n ∈ R(ξ)_S^{n×•} are said to be left co-prime over R(ξ)_S if the matrix M formed by them, M = row(M_1, M_2, . . . , M_n), is left prime over R(ξ)_S.

Right (co-)prime, right (co-)prime factorizations, etc., are defined in complete analogy with their left counterparts.

X. ACKNOWLEDGMENTS

This research is supported by the Belgian Federal Government under the DWTC program Interuniversity Attraction Poles, Phase V, 2002–2006, Dynamical Systems and Control: Computation, Identification and Modelling, by the KUL Concerted Research Action (GOA) MEFISTO-666, and by several grants and projects from IWT-Flanders and the Flemish Fund for Scientific Research.

REFERENCES

[1] C.A. Desoer, R.W. Liu, J. Murray, and R. Saeks, Feedback system design: The fractional representation approach to analysis and synthesis, IEEE Transactions on Automatic Control, volume 25, pages 399–412, 1980.

[2] T. Kailath, Linear Systems, Prentice Hall, 1980.

[3] V. Kučera, Stability of discrete linear feedback systems, paper 44.1, Proceedings of the 6-th IFAC Congress, Boston, Massachusetts, USA, 1975.

[4] W. Bian, M. French, and H. Pillai, An intrinsic behavioral approach to the gap metric, Proceedings of the 44-th IEEE CDC, Seville, Spain, pages 1553–1558, 2005.

[5] J.W. Polderman and J.C. Willems, Introduction to Mathematical Systems Theory: A Behavioral Approach, Springer-Verlag, 1998.

[6] H.H. Rosenbrock, State-Space and Multivariable Theory, Wiley, 1970.

[7] M. Vidyasagar, Control System Synthesis, The MIT Press, 1985.

[8] J.C. Willems, Paradigms and puzzles in the theory of dynamical systems, IEEE Transactions on Automatic Control, volume 36, pages 259–294, 1991.

[9] J.C. Willems, On interconnections, control and feedback, IEEE Transactions on Automatic Control, volume 42, pages 326–339, 1997.

[10] J.C. Willems, Thoughts on system identification, in Control of Uncertain Systems: Modelling, Approximation and Design (edited by B.A. Francis, M.C. Smith and J.C. Willems), Springer Verlag Lecture Notes on Control and Information Systems, volume 329, 2006.

[11] D.C. Youla, J.J. Bongiorno, and H.A. Jabr, Modern Wiener-Hopf design of optimal controllers, Part I: The single-input case, IEEE Transactions on Automatic Control, volume 21, pages 3–14, 1976.
