
On polynomial and rational quadratic differential forms

K. Takaba∗  H.L. Trentelman∗∗  J.C. Willems∗∗∗

∗ Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan (e-mail: takaba@amp.i.kyoto-u.ac.jp)
∗∗ Mathematics Department, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands (e-mail: h.l.trentelman@math.rug.nl)
∗∗∗ ESAT-SISTA, K.U. Leuven, Kasteelpark Arenberg 10, Leuven B-3001, Belgium (e-mail: Jan.Willems@esat.kuleuven.be)

Abstract: In system and control theory, we often encounter the situation of investigating functionals which represent Lyapunov functions, energy storage, performance measures, etc. In particular, if the system is linear time-invariant, the functionals can be quadratic forms of the system variables and their derivatives. Such a quadratic form is called a quadratic differential form (QDF), which was introduced in the context of behavioral system theory by Willems and Trentelman (1998). The contribution of this keynote paper is twofold. Firstly, this paper illustrates the basic features of QDF's in terms of polynomial matrices. We also discuss dissipativity and Lyapunov stability on the basis of QDF's. The second contribution is to present a new and more general formulation of QDF's in terms of rational functions rather than polynomials. A QDF defined by rational functions is called a rational QDF. Unlike polynomial QDF's, a rational QDF defines a set of values of a quadratic functional. It is shown that several features of polynomial QDF's (nonnegativity, average nonnegativity, etc.) can be generalized to the case of rational QDF's.

Keywords: behavior, linear systems, polynomial methods, quadratic differential form, rational representation

1. INTRODUCTION

In analysis and synthesis of dynamical systems, we often encounter the situation of investigating some functionals which represent Lyapunov functions, energy supply, energy storage, performance measures, and so forth. In particular, if the system under consideration is linear time-invariant, the functionals can be quadratic forms of the system variables and their derivatives. Such a quadratic form is called a quadratic differential form (QDF) which was introduced in the context of the behavioral system theory by Willems and Trentelman (1998).

In the behavioral framework, QDF's have been playing a crucial role in many aspects of system and control theory: Lyapunov stability (Willems and Trentelman 1998, Peeters and Rapisarda 2001, Kojima and Takaba 2005), dissipation theory (Willems and Trentelman 1998, Willems and Trentelman 2002, Kaneko and Fujii 2000, 2003), linear quadratic optimal control (Willems 1993, Willems and Valcher 2005), H∞ control (Trentelman and Willems 1999, Willems and Trentelman 2002, Belur and Trentelman 2002) and stability analysis of uncertain or nonlinear interconnections (Pendharkar and Pillai 2007, Takaba 2005, Willems and Takaba 2007).

This keynote paper discusses the basic features of QDF’s and their generalization.

We first introduce the definition of a QDF in terms of polynomial matrices in Section 2. Note that a QDF has a one-to-one correspondence with a two-variable polynomial matrix, whose indeterminates represent the differentiations of the variables. Hence, in terms of polynomials, we will illustrate the basic features of QDF's, including nonnegativity, average nonnegativity, half-line nonnegativity and so forth, which are closely related to dissipation theory and Lyapunov stability.

In Section 3, as a generalization of the 'polynomial' QDF's described above, we will present a new formulation of a QDF in terms of rational functions, and will examine the basic features of this type of QDF. We call such a QDF defined by rational functions a rational QDF. Note that the need for such rational QDF's arises, for example, in the stability analysis of interconnected or feedback systems via rational multipliers or integral quadratic constraints (e.g. Megretski and Rantzer 1997, Iwasaki and Hara 1998). It will be shown that several features of polynomial QDF's, such as nonnegativity and average nonnegativity, can be generalized to the case of rational QDF's.

Some basics on a linear differential system and its representation by a rational function in the behavioral framework are given in Appendix A.


Notations:

R: the field of real numbers
C: the field of complex numbers
iR := {λ ∈ C | λ = iω, ω ∈ R}
R[ξ]: the ring of polynomials
R[ζ, η]: the ring of two-variable polynomials
R(ξ): the ring of rational functions
R(ζ, η): the ring of two-variable rational functions
R^{p×q}: p × q real matrices
R^{p×q}[ξ]: p × q polynomial matrices
R^{p×q}[ζ, η]: p × q two-variable polynomial matrices
R^{p×q}(ξ): p × q rational matrices
R^{p×q}(ζ, η): p × q two-variable rational matrices
R^{p×p}_s: p × p symmetric real matrices
R^{p×p}_s[ζ, η]: p × p symmetric two-variable polynomial matrices
R^{p×p}_s(ζ, η): p × p symmetric two-variable rational matrices
C^∞(R, R^w): infinitely differentiable functions from R to R^w

D: compact support functions

Note that we will often use “•” to denote irrelevant dimensions of a vector or a matrix.

2. QUADRATIC DIFFERENTIAL FORMS IN THE POLYNOMIAL SETTING

We here give the original definition of a QDF in terms of polynomial matrices, and review the basic features of QDF’s from Willems and Trentelman (1998), Rapisarda and Willems (2004), Kaneko and Fujii (2004).

2.1 Definitions and basic calculus

A quadratic differential form (QDF) Q_Φ is defined as a quadratic form of w : R → R^w and its derivatives. Namely,

$$Q_Φ : C^∞(R, R^{\mathtt w}) → C^∞(R, R), \qquad Q_Φ(w)(t) = \sum_{i=0}^{k}\sum_{j=0}^{k}\Big(\frac{d^i w}{dt^i}\Big)^{\!ᵀ} Φ_{ij}\Big(\frac{d^j w}{dt^j}\Big),$$

where Φ_{ij} ∈ R^{w×w} and Φ_{ji} = Φᵀ_{ij} (i, j = 0, 1, . . . , k). Throughout this paper, we will assume that all variables are C^∞ functions of time t for simplicity of discussion. Also, we often drop the time dependence for ease of notation. We can associate Q_Φ with a symmetric two-variable polynomial matrix

$$Φ(ζ, η) = \sum_{i=0}^{k}\sum_{j=0}^{k} ζ^i η^j Φ_{ij} ∈ R^{w×w}_s[ζ, η].$$

By "symmetric", we mean Φ(ζ, η) = Φᵀ(η, ζ). Notice that the indeterminates ζ and η correspond to the differentiations on wᵀ and w, respectively.

Furthermore, we define the coefficient matrix of Φ(ζ, η) as

$$\tilde Φ := \begin{pmatrix} Φ_{00} & Φ_{01} & \cdots & Φ_{0k} \\ Φ_{10} & Φ_{11} & \cdots & Φ_{1k} \\ \vdots & \vdots & \ddots & \vdots \\ Φ_{k0} & Φ_{k1} & \cdots & Φ_{kk} \end{pmatrix} ∈ R^{w(k+1)×w(k+1)}_s.$$

It is clearly seen that there is a one-to-one correspondence among Q_Φ, Φ(ζ, η) and Φ̃.

We consider the factorization of Φ ∈ R^{w×w}_s[ζ, η] in the form of

$$Φ(ζ, η) = Mᵀ(ζ)ΣM(η), \qquad M ∈ R^{•×w}[ξ], \quad Σ = \mathrm{diag}(I_{r_+}, −I_{r_-}) ∈ R^{•×•}.$$

Note that a constant matrix of the form diag(I_{r₊}, −I_{r₋}) is called a signature matrix. If M(ξ) has minimal row number among such factorizations of Φ(ζ, η), then we say that the factorization is a symmetric canonical factorization, or simply a canonical factorization. The canonical factorization of Φ(ζ, η) can be obtained by the following procedure. Let v = rank Φ̃, and factor Φ̃ as Φ̃ = M̃ᵀΣM̃, M̃ ∈ R^{v×w(k+1)}, Σ ∈ R^{v×v}_s, where Σ is a signature matrix. Then, a canonical factorization of Φ(ζ, η) is obtained by

$$Φ(ζ, η) = Mᵀ(ζ)ΣM(η), \qquad M(ξ) = \tilde M \begin{pmatrix} I_{\mathtt w} \\ ξ I_{\mathtt w} \\ \vdots \\ ξ^k I_{\mathtt w} \end{pmatrix}.$$

Of course, the canonical factorization of a two-variable polynomial matrix is not unique. If we obtain two different canonical factorizations

$$Φ(ζ, η) = M_1ᵀ(ζ)ΣM_1(η) = M_2ᵀ(ζ)ΣM_2(η),$$

then there exists a nonsingular matrix U ∈ R^{v×v} such that Σ = UᵀΣU and M₁(ξ) = U M₂(ξ). Therefore, the canonical factorization is unique modulo a pseudo-unitary transformation U (p. 1709 in Willems and Trentelman 1998).
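As a quick computational illustration (ours, not part of the paper), the procedure above can be carried out numerically from the coefficient matrix alone. The NumPy sketch below factors the Φ̃ of Example 1 below via a symmetric eigendecomposition; the tolerance and the eigendecomposition route are our own choices, and the resulting signature matrix agrees with Σ = diag(1, 1, −1) only up to the ordering of the ± blocks.

```python
import numpy as np

# Coefficient matrix of Example 1: Q_Phi(w) = w1'^2 + 2*w1*w2',
# variable ordering (w1, w2, dw1/dt, dw2/dt), so k = 1 and w = 2
Phi_tilde = np.array([[0., 0., 0., 1.],
                      [0., 0., 0., 0.],
                      [0., 0., 1., 0.],
                      [1., 0., 0., 0.]])

# Symmetric eigendecomposition Phi_tilde = Q diag(lam) Q^T
lam, Q = np.linalg.eigh(Phi_tilde)

# Keep the nonzero eigenvalues: their number is v = rank(Phi_tilde)
keep = np.abs(lam) > 1e-9
lam, Q = lam[keep], Q[:, keep]

# M_tilde and the signature matrix Sigma of a canonical factorization
M_tilde = np.diag(np.sqrt(np.abs(lam))) @ Q.T      # v x w(k+1)
Sigma   = np.diag(np.sign(lam))                    # diag(+1/-1) entries, up to ordering

# Check Phi_tilde = M_tilde^T Sigma M_tilde
assert np.allclose(M_tilde.T @ Sigma @ M_tilde, Phi_tilde)
print("v = rank =", M_tilde.shape[0])   # 3, as in Example 1
print("signature =", np.diag(Sigma))
```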

Example 1. We consider the QDF Q_Φ(w) = ẇ₁² + 2w₁ẇ₂, where "˙" denotes time differentiation. Since

$$Q_Φ(w) = \begin{pmatrix} w_1 \\ w_2\end{pmatrix}^{\!ᵀ}\!\begin{pmatrix} 0&0\\0&0\end{pmatrix}\!\begin{pmatrix} w_1 \\ w_2\end{pmatrix} + \begin{pmatrix} \dot w_1 \\ \dot w_2\end{pmatrix}^{\!ᵀ}\!\begin{pmatrix} 0&0\\1&0\end{pmatrix}\!\begin{pmatrix} w_1 \\ w_2\end{pmatrix} + \begin{pmatrix} w_1 \\ w_2\end{pmatrix}^{\!ᵀ}\!\begin{pmatrix} 0&1\\0&0\end{pmatrix}\!\begin{pmatrix} \dot w_1 \\ \dot w_2\end{pmatrix} + \begin{pmatrix} \dot w_1 \\ \dot w_2\end{pmatrix}^{\!ᵀ}\!\begin{pmatrix} 1&0\\0&0\end{pmatrix}\!\begin{pmatrix} \dot w_1 \\ \dot w_2\end{pmatrix},$$

this QDF is induced by the polynomial matrix

$$Φ(ζ, η) = \begin{pmatrix} 0&0\\0&0\end{pmatrix} + ζ\begin{pmatrix} 0&0\\1&0\end{pmatrix} + η\begin{pmatrix} 0&1\\0&0\end{pmatrix} + ζη\begin{pmatrix} 1&0\\0&0\end{pmatrix} = \begin{pmatrix} ζη & η \\ ζ & 0\end{pmatrix}.$$

Moreover, the coefficient matrix Φ̃ is

$$\tilde Φ = \begin{pmatrix} 0&0&0&1\\0&0&0&0\\0&0&1&0\\1&0&0&0\end{pmatrix}.$$

Since rank Φ̃ = 3, a canonical factorization of Φ(ζ, η) is given by

$$M(ξ) = \begin{pmatrix} \tfrac{1}{\sqrt 2} & \tfrac{1}{\sqrt 2}ξ \\ −ξ & 0 \\ \tfrac{1}{\sqrt 2} & −\tfrac{1}{\sqrt 2}ξ \end{pmatrix}, \qquad Σ = \begin{pmatrix} 1&0&0\\0&1&0\\0&0&−1\end{pmatrix}.$$

One of the important properties of QDF’s is the fact that the derivative of a QDF is also a QDF. Namely, there holds

$$\frac{d}{dt} Q_Φ(w)(t) = Q_Ψ(w)(t) \ \ ∀t ∈ R, ∀w ∈ C^∞(R, R^{\mathtt w}) \quad ⇔ \quad (ζ + η)Φ(ζ, η) = Ψ(ζ, η).$$

We hereafter use the notation

$$\dot Φ(ζ, η) := (ζ + η)Φ(ζ, η).$$
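This derivative rule can be checked mechanically. The following SymPy sketch (ours, not from the paper) evaluates the QDF induced by the Φ of Example 1 along an arbitrary smooth trajectory and verifies that its time derivative equals the QDF induced by (ζ + η)Φ; the helper `qdf` and the degree bound k = 3 are ad hoc choices.

```python
import sympy as sp

t = sp.symbols('t')
zeta, eta = sp.symbols('zeta eta')
w1, w2 = sp.Function('w1')(t), sp.Function('w2')(t)
w = sp.Matrix([w1, w2])

# Phi(zeta, eta) of Example 1 and Psi = (zeta + eta) * Phi
Phi = sp.Matrix([[zeta*eta, eta], [zeta, 0]])
Psi = sp.expand((zeta + eta) * Phi)

def qdf(P, w, t, k=3):
    """Evaluate Q_P(w)(t) = sum_{i,j<=k} (d^i w/dt^i)^T P_ij (d^j w/dt^j)."""
    derivs = [w] + [sp.diff(w, t, n) for n in range(1, k + 1)]
    total = 0
    for i in range(k + 1):
        for j in range(k + 1):
            # P_ij: matrix of coefficients of zeta^i * eta^j in P(zeta, eta)
            Pij = P.applyfunc(lambda p: sp.expand(p).coeff(zeta, i).coeff(eta, j))
            total += (derivs[i].T * Pij * derivs[j])[0, 0]
    return sp.simplify(total)

# d/dt Q_Phi(w) should equal Q_Psi(w) for every trajectory w
lhs = sp.diff(qdf(Phi, w, t), t)
rhs = qdf(Psi, w, t)
assert sp.simplify(lhs - rhs) == 0
```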


Example 1 (continued). Again, consider Q_Φ(w) = ẇ₁² + 2w₁ẇ₂. Direct calculation shows that

$$\frac{d}{dt} Q_Φ(w) = 2\dot w_1 \ddot w_1 + 2\dot w_1 \dot w_2 + 2 w_1 \ddot w_2$$

is induced by

$$\dot Φ(ζ, η) = (ζ + η)\begin{pmatrix} ζη & η \\ ζ & 0\end{pmatrix} = \begin{pmatrix} ζ^2η + ζη^2 & ζη + η^2 \\ ζη + ζ^2 & 0\end{pmatrix}.$$

2.2 Nonnegativity

Definition 1. Let Φ ∈ R^{w×w}_s[ζ, η] be given. A QDF Q_Φ, or Φ(ζ, η), is said to be nonnegative, denoted by Φ ≥ 0, if

$$Q_Φ(w)(t) ≥ 0 \quad ∀t ∈ R, ∀w ∈ C^∞(R, R^{\mathtt w}).$$

Furthermore, Q_Φ (or Φ(ζ, η)) is called positive, denoted by Φ > 0, if it is nonnegative and Q_Φ(w)(t) = 0 ∀t implies w(t) = 0 ∀t. The nonpositivity and negativity of a QDF are defined in the same way.

Proposition 1. Let Φ ∈ R^{w×w}_s[ζ, η] be given.

(i) The following are equivalent.
  (a) Φ ≥ 0 (nonnegative).
  (b) There exists D ∈ R^{•×w}[ξ] such that
      $$Φ(ζ, η) = Dᵀ(ζ)D(η).$$   (1)
  (c) Φ̃ ≥ 0.

(ii) The following are equivalent.
  (a) Φ > 0 (positive).
  (b) There exists D ∈ R^{•×w}[ξ] such that (1) holds, and D(λ) has full column rank for all λ ∈ C.

Note also that Φ̃ > 0 implies the positivity of Q_Φ, but the converse is not true.
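Proposition 1 (i)(c) turns nonnegativity into an ordinary positive-semidefiniteness test on the coefficient matrix. Here is a minimal NumPy sketch (ours; the tolerance is arbitrary), applied to the coefficient matrix of Example 2 below:

```python
import numpy as np

def is_nonnegative_qdf(Phi_tilde, tol=1e-9):
    """A QDF is nonnegative iff its coefficient matrix is positive semidefinite
    (Proposition 1 (i))."""
    eigvals = np.linalg.eigvalsh(Phi_tilde)
    return bool(eigvals.min() >= -tol)

# Coefficient matrix of Example 2: Q_Phi(w) = w1'^2 + w1^2 + w2^2,
# variable ordering (w1, w2, dw1/dt, dw2/dt)
Phi_tilde = np.diag([1.0, 1.0, 1.0, 0.0])
print(is_nonnegative_qdf(Phi_tilde))   # True: Phi >= 0 (in fact Q_Phi is positive)
```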

Example 2. Consider the QDF

$$Q_Φ(w) = \dot w_1^2 + w_1^2 + w_2^2,$$

which is induced by

$$Φ(ζ, η) = \begin{pmatrix} 1 + ζη & 0 \\ 0 & 1\end{pmatrix}.$$

It is obvious that Q_Φ is positive. We have

$$\tilde Φ = \begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&0\end{pmatrix}.$$

This Φ̃ is nonnegative definite, but not positive definite. Now, Φ(ζ, η) is factored as Φ(ζ, η) = Dᵀ(ζ)D(η) with

$$D(ξ) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ ξ & 0\end{pmatrix}.$$

Actually, this D(ξ) is a canonical factor of Φ(ζ, η), and D(λ) has full column rank for all λ ∈ C.

2.3 Average nonnegativity and dissipation inequality

We often need to consider integrals of QDF's in several applications such as LQ optimal control, H∞ control, etc. In particular, we are interested in the boundedness or nonnegativity of such integrals.

Definition 2. Let Φ ∈ R^{w×w}_s[ζ, η] be given.

(i) A QDF Q_Φ, or Φ(ζ, η), is said to be average nonnegative, denoted by ∫ Q_Φ ≥ 0, if

$$\int_{-\infty}^{\infty} Q_Φ(w)(t)\,dt ≥ 0 \quad ∀w ∈ C^∞(R, R^{\mathtt w}) ∩ \mathfrak D.$$   (2)

(ii) A QDF Q_Φ, or Φ(ζ, η), is said to be half-line nonnegative, denoted by ∫_t Q_Φ ≥ 0, if

$$\int_{-\infty}^{0} Q_Φ(w)(t)\,dt ≥ 0 \quad ∀w ∈ C^∞(R, R^{\mathtt w}) ∩ \mathfrak D.$$   (3)

The next proposition characterizes necessary and sufficient conditions for the average nonnegativity of QΦ.

Proposition 2. The following are equivalent for Φ ∈ R^{w×w}_s[ζ, η].

(i) ∫ Q_Φ ≥ 0 (average nonnegative).
(ii) Φ(λ̄, λ) ≥ 0 ∀λ ∈ iR.
(iii) There exists Ψ ∈ R^{w×w}_s[ζ, η] such that

$$\frac{d}{dt} Q_Ψ(w)(t) ≤ Q_Φ(w)(t) \quad ∀t ∈ R, ∀w ∈ C^∞(R, R^{\mathtt w}).$$   (4)

(iv) There exist Ψ ∈ R^{w×w}_s[ζ, η] and F ∈ R^{•×w}[ξ] such that

$$(ζ + η)Ψ(ζ, η) + Fᵀ(ζ)F(η) = Φ(ζ, η).$$   (5)

The inequality (4) is closely related to the dissipativity of a linear differential system. In fact, the inequality (4) is referred to as a dissipation inequality, in which Q_Φ and Q_Ψ are called a supply rate and a storage function, respectively. The reason for these names is explained as follows. Let a controllable system B ∈ L^v be given by the image representation v = M(d/dt)w, M ∈ R^{v×w}[ξ], namely

$$\mathcal B = \{ v ∈ C^∞(R, R^{\mathtt v}) \mid ∃ w ∈ C^∞(R, R^{\mathtt w}) \text{ s.t. } v = M(\tfrac{d}{dt})w \}.$$

See Appendix A for the details about linear differential systems. Consider the quadratic form s(t) = vᵀ(t)Σv(t), Σ ∈ R^{v×v}_s. Using the image representation, s(t) can be expressed as a QDF of the latent variable w as

$$s = Q_Φ(w) = [M(\tfrac{d}{dt})w]^{ᵀ} Σ [M(\tfrac{d}{dt})w],$$   (6)

where Φ(ζ, η) = Mᵀ(ζ)ΣM(η). If we view s(t) = Q_Φ(w)(t) as the rate of energy supply into the system B and Q_Ψ(w)(t) as the energy stored in B, then (4) claims that the rate of increase of the stored energy Q_Ψ(w) never exceeds the rate of energy supply Q_Φ, namely, the system B dissipates the energy.

If Q_Φ is average nonnegative, a storage function can be obtained by the following procedure. First, we perform the polynomial spectral factorization

$$Φ(−ξ, ξ) = Fᵀ(−ξ)F(ξ).$$   (7)

This factorization is possible whenever condition (ii) in the above proposition is satisfied. Then, one of the storage functions for Q_Φ is induced by

$$Ψ(ζ, η) = \frac{Φ(ζ, η) − Fᵀ(ζ)F(η)}{ζ + η}.$$

Clearly, the resulting Ψ(ζ, η) and F(ξ) satisfy (5). It is seen from (5) that there holds

$$\frac{d}{dt} Q_Ψ(w)(t) + Q_Δ(w)(t) = Q_Φ(w)(t) \quad ∀t ∈ R, ∀w ∈ C^∞(R, R^{\mathtt w}),$$   (8)


where we have defined Δ(ζ, η) = Fᵀ(ζ)F(η), and Q_Δ is called a dissipation rate. Note that Δ ≥ 0 holds by its definition and Proposition 1 (i). It is also immediate from (8) that

$$\int_{-\infty}^{\infty} Q_Δ(w)(t)\,dt = \int_{-\infty}^{\infty} Q_Φ(w)(t)\,dt \quad ∀w ∈ C^∞(R, R^{\mathtt w}) ∩ \mathfrak D.$$   (9)

Hence, as another characterization of the average nonnegativity (dissipativity) of Q_Φ, we conclude that ∫ Q_Φ ≥ 0 is equivalent to the existence of a dissipation rate Q_Δ satisfying Δ ≥ 0 and (9).

As we have seen, a storage function or a dissipation rate is obtained by performing the spectral factorization (7). This shows that the storage functions for Q_Φ are characterized by the choice of the spectral factor F(ξ). We introduce the Hurwitz and anti-Hurwitz spectral factorizations as

$$Φ(−ξ, ξ) = Hᵀ(−ξ)H(ξ), \quad H(ξ): \text{Hurwitz},$$   (10)
$$Φ(−ξ, ξ) = Aᵀ(−ξ)A(ξ), \quad A(ξ): \text{anti-Hurwitz}.$$   (11)

Then, the minimal and maximal storage functions are characterized in the next proposition.

Proposition 3. Assume that Q_Φ is average nonnegative for a given Φ ∈ R^{w×w}_s[ζ, η]. Then, there exist Ψ₋, Ψ₊ ∈ R^{w×w}_s[ζ, η] such that

$$Ψ_- ≤ Ψ ≤ Ψ_+$$   (12)

holds for any Ψ ∈ R^{w×w}_s[ζ, η] which induces a storage function for Q_Φ. If Φ(λ̄, λ) > 0 for all λ ∈ iR, such Ψ₋(ζ, η) and Ψ₊(ζ, η) are given by

$$Ψ_-(ζ, η) = \frac{Φ(ζ, η) − Hᵀ(ζ)H(η)}{ζ + η},$$   (13)
$$Ψ_+(ζ, η) = \frac{Φ(ζ, η) − Aᵀ(ζ)A(η)}{ζ + η},$$   (14)

where H(ξ) and A(ξ) are the Hurwitz and anti-Hurwitz spectral factors defined in (10), (11).
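To make (13) and (14) concrete, here is a small SymPy sketch (our illustration, not from the paper) for the scalar QDF Q_Φ(w) = w² + ẇ², i.e. Φ(ζ, η) = 1 + ζη, whose Hurwitz and anti-Hurwitz spectral factors can be read off by hand. It returns Ψ₋ = −1 and Ψ₊ = +1, so every storage function Q_Ψ(w) for this supply rate lies between −w² and +w².

```python
import sympy as sp

zeta, eta, xi = sp.symbols('zeta eta xi')

# Scalar example: Phi(zeta, eta) = 1 + zeta*eta, so Phi(-xi, xi) = 1 - xi^2
Phi = 1 + zeta*eta
H = 1 + xi          # Hurwitz factor (root at -1)
A = 1 - xi          # anti-Hurwitz factor (root at +1)
assert sp.expand(H.subs(xi, -xi)*H - Phi.subs({zeta: -xi, eta: xi})) == 0
assert sp.expand(A.subs(xi, -xi)*A - Phi.subs({zeta: -xi, eta: xi})) == 0

def storage(F):
    """Psi(zeta, eta) = (Phi(zeta, eta) - F(zeta)F(eta)) / (zeta + eta), cf. (13), (14)."""
    num = sp.expand(Phi - F.subs(xi, zeta)*F.subs(xi, eta))
    Psi, rem = sp.div(num, zeta + eta, zeta)
    assert rem == 0           # (zeta + eta) divides the numerator exactly
    return sp.expand(Psi)

Psi_minus = storage(H)   # -> -1, i.e. Q_{Psi-}(w) = -w^2
Psi_plus  = storage(A)   # -> +1, i.e. Q_{Psi+}(w) = +w^2
print(Psi_minus, Psi_plus)
```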

Fig. 1. RLC electrical network (Example 3). [Figure omitted: circuit with resistor R, inductor L, capacitor C, port voltage v and current i.]

Example 3. We consider a simple electrical network consisting of a resistor R, an inductor L and a capacitor C (Fig. 1). Let V(t) and I(t) be the port voltage and current at time t, respectively. We also define q(t) as the electric charge in the capacitor. Using fundamental laws of electrical elements and circuits, the dynamics of this system is described by

$$\frac{dV}{dt} = \frac{1}{C} I + R\frac{dI}{dt} + L\frac{d^2 I}{dt^2}.$$   (15)

Then, an image representation of this system is given by

$$\begin{pmatrix} V \\ I \end{pmatrix} = \begin{pmatrix} R\frac{d}{dt} + L\frac{d^2}{dt^2} + \frac{1}{C} \\[2pt] \frac{d}{dt} \end{pmatrix} q,$$

where the electric charge q serves as a latent variable. As is well known, the power (supply rate) into the electrical network is the product V·I, which is equivalently described by the QDF

$$VI = Q_Φ(q), \qquad Φ(ζ, η) = \frac12 \begin{pmatrix} \frac1C + Rζ + Lζ^2 & ζ \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \frac1C + Rη + Lη^2 \\ η \end{pmatrix}.$$

Define

$$Q_Ψ(q) = \frac{1}{2C} q^2 + \frac{L}{2}\Big(\frac{dq}{dt}\Big)^{\!2} = \frac{1}{2C} q^2 + \frac{L}{2} I^2,$$

where Ψ(ζ, η) = 1/(2C) + (L/2)ζη. Then, Q_Ψ(q) represents the energy stored in the system, namely the sum of the energies stored in the capacitor and the inductor. This Q_Ψ(q) satisfies the dissipation inequality, because

$$\frac{d}{dt} Q_Ψ(q) = \frac{1}{C} qI + LI\frac{dI}{dt} = \Big(\frac1C q + L\frac{dI}{dt}\Big) I = (V − RI)I = Q_Φ(q) − RI^2.$$

Hence, we conclude that this system is dissipative with respect to the supply rate Q_Φ(q) = VI, with the dissipation rate Q_Δ(q) = RI² and the storage function Q_Ψ(q). In fact, direct calculation proves that

$$(ζ + η)Ψ(ζ, η) + Δ(ζ, η) = Φ(ζ, η), \qquad Δ(ζ, η) = Rζη.$$   (16)
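As a sanity check (ours, not in the paper), the dissipation equality (16) for Example 3 can be verified symbolically while keeping R, L, C symbolic:

```python
import sympy as sp

zeta, eta = sp.symbols('zeta eta')
R, L, C = sp.symbols('R L C', positive=True)

# Scalar polynomial operators of Example 3 (latent variable q)
M1 = 1/C + R*zeta + L*zeta**2          # maps q to V
M2 = zeta                               # maps q to I
Phi = sp.expand(sp.Rational(1, 2)*(M1*M2.subs(zeta, eta) + M2*M1.subs(zeta, eta)))

Psi   = 1/(2*C) + sp.Rational(1, 2)*L*zeta*eta   # stored energy
Delta = R*zeta*eta                               # dissipation rate

# Dissipation equality (16): (zeta + eta)*Psi + Delta == Phi
assert sp.expand((zeta + eta)*Psi + Delta - Phi) == 0
```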

Remark 1. If R = 0 holds, then we have (d/dt)Q_Ψ(q)(t) = Q_Φ(q)(t) for all t ∈ R and q ∈ C^∞(R, R). This implies that the system preserves the supplied energy without dissipating it. Hence such a system is called lossless. A nonnegative storage function often plays an important role in many issues arising in system and control theory, such as stability analysis. Note that Example 3 is an example of a system which admits a nonnegative storage function.

The next proposition relates the half-line nonnegativity in Definition 2 (ii) and the existence of a nonnegative storage function.

Proposition 4. Let Φ ∈ R^{w×w}_s[ζ, η] be given. The following are equivalent.

(i) Q_Φ is average nonnegative and admits a nonnegative storage function.
(ii) ∫_t Q_Φ ≥ 0 (half-line nonnegative).

Remark 2. The frequency domain condition for half-line nonnegativity is more complicated than that for average nonnegativity. In fact, Q_Φ is half-line nonnegative if and only if

• Φ(λ̄, λ) ≥ 0 ∀λ ∈ iR;
• the Pick matrix T_Φ associated with Φ(ζ, η) is nonnegative definite.

The Pick matrix is a Hermitian matrix constructed in terms of the stable zeros of the spectral density matrix Φ(−ξ, ξ). For the details, see Section 9 in Willems and Trentelman (1998).


2.4 QDF along behavior, and Lyapunov stability

In the analysis and control of linear dynamical systems, we often need to study QDF's whose variables are produced from the systems. More precisely, we consider the properties of Q_Φ(w) when w belongs to a certain linear differential behavior B ∈ L^w.

Definition 3. Let a behavior B ∈ L^w be given. Consider two QDF's Q_{Φ₁} and Q_{Φ₂} induced by Φ₁, Φ₂ ∈ R^{w×w}_s[ζ, η]. We say that Q_{Φ₁} and Q_{Φ₂} are equivalent over B, denoted by Φ₁ =_B Φ₂, if

$$Q_{Φ_1}(w)(t) = Q_{Φ_2}(w)(t) \quad ∀t ∈ R, ∀w ∈ \mathcal B.$$

Let B be represented by the image representation w = M(d/dt)ℓ, M ∈ R^{w×l}[ξ]. Then, we easily see that

$$Q_Φ(w)(t) = Q_Ψ(ℓ)(t) \quad ∀t ∈ R,$$

where we have defined Ψ(ζ, η) = Mᵀ(ζ)Φ(ζ, η)M(η). Since ℓ is free in the image representation, the trajectory of Q_Φ(w)(t) over B is completely characterized by Q_Ψ(ℓ). We thus obtain the following result.

Proposition 5. Let B ∈ L^w be represented by the image representation w = M(d/dt)ℓ, M ∈ R^{w×l}[ξ]. Then, Φ₁ =_B Φ₂ if and only if

$$Mᵀ(ζ)Φ_1(ζ, η)M(η) = Mᵀ(ζ)Φ_2(ζ, η)M(η).$$

Next, assume that B is represented by a kernel representation.

Proposition 6. Let B ∈ L^w be represented by the kernel representation R(d/dt)w = 0, R ∈ R^{p×w}[ξ]. Then, the following are equivalent.

(i) Φ₁ =_B Φ₂.
(ii) There exists X ∈ R^{p×w}[ζ, η] such that

$$Φ_1(ζ, η) = Φ_2(ζ, η) + Xᵀ(η, ζ)R(η) + Rᵀ(ζ)X(ζ, η).$$

The next proposition characterizes the nonnegativity and positivity of a QDF along the behavior.

Proposition 7. Let Φ ∈ R^{w×w}_s[ζ, η] be given, and let B ∈ L^w be represented by the kernel representation R(d/dt)w = 0, R ∈ R^{p×w}[ξ].

(i) Φ ≥_B 0 holds if and only if there exist X ∈ R^{p×w}[ζ, η] and D ∈ R^{•×w}[ξ] satisfying

$$Φ(ζ, η) = Xᵀ(η, ζ)R(η) + Rᵀ(ζ)X(ζ, η) + Dᵀ(ζ)D(η).$$   (17)

(ii) Φ >_B 0 holds if and only if there exist X ∈ R^{p×w}[ζ, η] and D ∈ R^{•×w}[ξ] satisfying (17), with R(ξ) and D(ξ) being right coprime, i.e.,

$$\begin{pmatrix} R(λ) \\ D(λ)\end{pmatrix} \text{ has full column rank for all } λ ∈ C.$$

The above result plays an important role in the Lyapunov stability analysis of a linear differential system. Recall that a linear differential behavior B is said to be asymptotically stable if

$$w(t) → 0 \ (\text{as } t → +∞) \quad ∀w ∈ \mathcal B.$$

Proposition 8. Let an autonomous behavior B ∈ L^w be given by the kernel representation R(d/dt)w = 0, R ∈ R^{p×w}[ξ]. Then, the following are equivalent.

(i) B is asymptotically stable.
(ii) There exists Ψ ∈ R^{w×w}_s[ζ, η] such that Ψ ≥_B 0 and Ψ̇ <_B 0.
(iii) There exist Ψ ∈ R^{w×w}_s[ζ, η], X, Y ∈ R^{p×w}[ζ, η] and D, E ∈ R^{•×w}[ξ] such that

$$Ψ(ζ, η) = Xᵀ(η, ζ)R(η) + Rᵀ(ζ)X(ζ, η) + Dᵀ(ζ)D(η),$$   (18)
$$(ζ + η)Ψ(ζ, η) = Yᵀ(η, ζ)R(η) + Rᵀ(ζ)Y(ζ, η) − Eᵀ(ζ)E(η),$$   (19)

where R(ξ) and E(ξ) are right coprime.

The condition (ii) in the above proposition is equivalent to the existence of a QDF Q_Ψ satisfying

$$Q_Ψ(w)(t) ≥ 0 \ \ \&\ \ \frac{d}{dt}Q_Ψ(w)(t) < 0 \quad ∀t ∈ R, ∀w ∈ \mathcal B.$$

If we regard Q_Ψ(w)(t) as the energy stored in the system at time t, these inequalities mean that the energy monotonically decays to its steady-state value. For this reason, the QDF Q_Ψ is called a Lyapunov function of B. Note also that the equivalence (ii)⇔(iii) immediately follows from Proposition 7.

Example 4. Consider an unforced mechanical system described by the following equation of motion

$$m\ddot x + d\dot x + kx = 0,$$

where m, d and k represent the mass, damping coefficient and spring constant, respectively. Namely,

$$\mathcal B = \{ x ∈ C^∞(R, R) \mid R(\tfrac{d}{dt})x = 0 \}, \qquad R(ξ) = mξ^2 + dξ + k.$$

Then, the QDF

$$Q_Ψ(x) = \tfrac12(kx^2 + m\dot x^2), \qquad Ψ(ζ, η) = \tfrac12(k + mζη),$$

is a Lyapunov function that proves the asymptotic stability of B. In fact, (18) and (19) are satisfied for

$$X(ζ, η) = 0, \quad Y(ζ, η) = \tfrac12 η, \quad D(ξ) = \tfrac{1}{\sqrt 2}\begin{pmatrix} \sqrt k \\ \sqrt m\, ξ\end{pmatrix}, \quad E(ξ) = \sqrt d\, ξ.$$

In addition,

$$\begin{pmatrix} R(λ) \\ E(λ)\end{pmatrix} = \begin{pmatrix} mλ^2 + dλ + k \\ \sqrt d\, λ\end{pmatrix}$$

has full column rank for all λ ∈ C. Hence, the asymptotic stability of B is proven by Proposition 8 (i)⇔(iii). As is easily seen from this example, the main advantage of the QDF-based stability condition is that the asymptotic stability can be checked by using the high-order differential equation directly, without converting it into a state-space form.
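A quick SymPy check of (18) and (19) for Example 4 (our sketch; the factors X = 0 and D as listed above are one consistent choice, kept symbolic in m, d, k):

```python
import sympy as sp

zeta, eta, xi = sp.symbols('zeta eta xi')
m, d, k = sp.symbols('m d k', positive=True)

R   = m*xi**2 + d*xi + k                  # kernel representation of the behavior
Psi = sp.Rational(1, 2)*(k + m*zeta*eta)  # candidate Lyapunov function

# Factors used in (18) and (19)
X = sp.S(0)
Y = sp.Rational(1, 2)*eta                 # Y(zeta, eta)
D = sp.Matrix([sp.sqrt(k/2), sp.sqrt(m/2)*xi])
E = sp.sqrt(d)*xi

def swap(p):
    """p(eta, zeta) obtained from p(zeta, eta)."""
    return p.subs({zeta: eta, eta: zeta}, simultaneous=True)

# (18): Psi = X(eta,zeta)^T R(eta) + R(zeta)^T X(zeta,eta) + D(zeta)^T D(eta)
eq18 = swap(X)*R.subs(xi, eta) + R.subs(xi, zeta)*X \
       + (D.subs(xi, zeta).T * D.subs(xi, eta))[0, 0]
assert sp.simplify(eq18 - Psi) == 0

# (19): (zeta+eta)*Psi = Y(eta,zeta)^T R(eta) + R(zeta)^T Y(zeta,eta) - E(zeta)^T E(eta)
eq19 = swap(Y)*R.subs(xi, eta) + R.subs(xi, zeta)*Y - E.subs(xi, zeta)*E.subs(xi, eta)
assert sp.simplify(eq19 - (zeta + eta)*Psi) == 0
```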

3. RATIONAL QUADRATIC DIFFERENTIAL FORMS

The rest of this paper is devoted to more general QDF's defined by rational functions, which we call rational QDF's. Quadratic constraints on rational transfer functions, such as bounded realness, positive realness, integral quadratic constraints (IQCs) and quadratic separators (see e.g. Megretski and Rantzer 1997, Iwasaki and Hara 1998), have been playing a crucial role in the analysis of linear dynamical systems (especially, the stability analysis of feedback systems). Also, by using state-space representations, the well-known Kalman-Yakubovich-Popov (KYP) lemma clarified the connection between such constraints and dissipativity. The purpose of this section is to introduce the notion of rational QDF's as the foundation for studying the above quadratic constraints on rational functions from the viewpoint of the behavioral approach.

We now formulate a rational QDF induced by a symmetric two-variable rational matrix Φ ∈ R^{w×w}_s(ζ, η) rather than a two-variable polynomial matrix. Similarly to the polynomial case, by the term "symmetric" we mean Φ(ζ, η) = Φᵀ(η, ζ).

Definition 4. A symmetric two-variable rational matrix Φ ∈ R^{w×w}_s(ζ, η) is said to be factorizable if there exist a rational matrix G ∈ R^{•×w}(ξ) and a signature matrix Σ ∈ R^{•×•}_s such that

$$Φ(ζ, η) = Gᵀ(ζ)ΣG(η).$$   (20)

It is straightforward to show that Φ(ζ, η) is factorizable iff the least common multiple of the denominators of all entries of Φ(ζ, η) is factored as ϕ(ζ)ϕ(η), namely, Φ(ζ, η) is expressed as

$$Φ(ζ, η) = \frac{Π(ζ, η)}{ϕ(ζ)ϕ(η)}, \qquad Π ∈ R^{w×w}_s[ζ, η], \ ϕ ∈ R[ξ].$$   (21)

Of course, the factorization in (20) is not unique. We say that the factorization Φ(ζ, η) = Gᵀ(ζ)ΣG(η) is canonical if row(G) ≤ row(G′) for any other G′ ∈ R^{•×w}(ξ) and Σ′ = diag(I_{r₊}, −I_{r₋}) ∈ R^{•×•}_s satisfying Φ(ζ, η) = G′ᵀ(ζ)Σ′G′(η), where row(·) denotes the number of rows of a (rational) matrix.

We hereafter make the following assumption.

Assumption 1. The two-variable rational matrix Φ ∈ R^{w×w}_s(ζ, η) is symmetric, factorizable and admits a canonical factorization in the form of (20) with v := row(G).

We are now in a position to define a rational QDF.

Definition 5. Under Assumption 1, the rational QDF with respect to Φ is the set defined by

$$Q^{\mathrm r}_Φ(w) := \{ s ∈ C^∞(R, R) \mid ∃ v ∈ C^∞(R, R^{\mathtt v}) \text{ s.t. } s = vᵀΣv \ \&\ v = G(\tfrac{d}{dt})w \}.$$

Note that we put the superscript "r" in order to distinguish the rational QDF from polynomial QDF's.

The rational QDF is well-defined in the sense that it is uniquely defined regardless of the choice of the canonical factorization. The reason for this uniqueness is as follows. Suppose that Φ ∈ R^{w×w}_s(ζ, η) is expressed as in (21). Moreover, assume that Φ(ζ, η) admits two different canonical factorizations

$$Φ(ζ, η) = G_1ᵀ(ζ)ΣG_1(η) = G_2ᵀ(ζ)ΣG_2(η).$$

By the definition of the canonical factorization, we must have row(G₁) = row(G₂) =: v. Then, it is obvious that ϕ(ξ)Gᵢ(ξ) (i = 1, 2) is a polynomial matrix in R^{v×w}[ξ] and induces a canonical factorization of the polynomial matrix Π(ζ, η) in the polynomial matrix sense. (Otherwise, it can be shown that Gᵢ(ξ) is not a canonical factor of Φ(ζ, η).) As discussed in Subsection 2.1, there exists a nonsingular constant matrix U such that ϕ(ξ)G₁(ξ) = U ϕ(ξ)G₂(ξ) and UᵀΣU = Σ. Hence, we have G₁(ξ) = U G₂(ξ). As a result, we see that the rational QDF's induced from two different canonical factors are identical, because there holds

$$\{ s ∈ C^∞(R, R) \mid ∃ v_1 \text{ s.t. } s = v_1ᵀΣv_1 \ \&\ v_1 = G_1(\tfrac{d}{dt})w \}
= \{ s \mid ∃ v_1 \text{ s.t. } s = v_1ᵀΣv_1 \ \&\ v_1 = U G_2(\tfrac{d}{dt})w \}
= \{ s \mid ∃ v_2 \text{ s.t. } s = v_2ᵀUᵀΣU v_2 \ \&\ v_2 = G_2(\tfrac{d}{dt})w \}
= \{ s \mid ∃ v_2 \text{ s.t. } s = v_2ᵀΣv_2 \ \&\ v_2 = G_2(\tfrac{d}{dt})w \},$$

where the leftmost and rightmost sets are the rational QDF's induced by G₁(ξ) and G₂(ξ), respectively.

Example 5. We again consider the electrical network in Fig. 1. We see from (15) that a rational representation of this system is given by

$$\begin{pmatrix} V \\ I\end{pmatrix} = \begin{pmatrix} 1 \\ H(\tfrac{d}{dt})\end{pmatrix} V, \qquad H(ξ) = \frac{ξ}{\frac1C + Rξ + Lξ^2}.$$   (22)

As we have seen earlier, the supply rate is given by

$$s = VI = \begin{pmatrix} V & I\end{pmatrix}\begin{pmatrix} 0 & 1/2 \\ 1/2 & 0\end{pmatrix}\begin{pmatrix} V \\ I\end{pmatrix}.$$   (23)

This expression of s is associated with the rational matrix

$$Φ(ζ, η) = \frac12\begin{pmatrix} 1 & H(ζ)\end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}\begin{pmatrix} 1 \\ H(η)\end{pmatrix} = \frac12\big(H(ζ) + H(η)\big).$$

One of the canonical factorizations of Φ(ζ, η) is given by

$$G(ξ) = \frac12\begin{pmatrix} 1 + H(ξ) \\ 1 − H(ξ)\end{pmatrix}, \qquad Σ = \begin{pmatrix} 1 & 0 \\ 0 & −1\end{pmatrix}.$$

Therefore, the supply rate s can be expressed as the rational QDF with

$$s = vᵀ\begin{pmatrix} 1 & 0 \\ 0 & −1\end{pmatrix} v, \qquad v = \frac12\begin{pmatrix} V + I \\ V − I\end{pmatrix}, \qquad w = V.$$

Proposition 9. For two symmetric rational matrices Φ₁, Φ₂ ∈ R^{w×w}_s(ζ, η), we have

$$Q^{\mathrm r}_{Φ_1+Φ_2}(w) ⊆ Q^{\mathrm r}_{Φ_1}(w) + Q^{\mathrm r}_{Φ_2}(w) \quad ∀w ∈ C^∞(R, R^{\mathtt w}).$$

Proof. We introduce the canonical factorizations of Φ₁, Φ₂ and Φ₁ + Φ₂ as

$$Φ_1(ζ, η) = G_1ᵀ(ζ)Σ_1G_1(η), \quad Φ_2(ζ, η) = G_2ᵀ(ζ)Σ_2G_2(η), \quad Φ_1(ζ, η) + Φ_2(ζ, η) = Gᵀ(ζ)ΣG(η).$$

Since the rows of G(ξ) are linearly independent over R, there exists a nonsingular square matrix K such that

$$\begin{pmatrix} G_1(ξ) \\ G_2(ξ)\end{pmatrix} = K\begin{pmatrix} G(ξ) \\ 0\end{pmatrix}, \qquad \begin{pmatrix} I & 0\end{pmatrix} Kᵀ \begin{pmatrix} Σ_1 & 0 \\ 0 & Σ_2\end{pmatrix} K \begin{pmatrix} I \\ 0\end{pmatrix} = Σ.$$   (24)

Assume that s ∈ C^∞(R, R) belongs to Q^r_{Φ₁+Φ₂}(w). Then there exists a v ∈ C^∞(R, R^•) such that s = vᵀΣv and v = G(d/dt)w. Define

$$\begin{pmatrix} v_1 \\ v_2\end{pmatrix} = K\begin{pmatrix} v \\ 0\end{pmatrix}.$$

It then follows from (24) that

$$\begin{pmatrix} v_1 \\ v_2\end{pmatrix} = \begin{pmatrix} G_1(\tfrac{d}{dt}) \\ G_2(\tfrac{d}{dt})\end{pmatrix} w, \qquad vᵀΣv = v_1ᵀΣ_1v_1 + v_2ᵀΣ_2v_2.$$

This implies that v₁ᵀΣ₁v₁ ∈ Q^r_{Φ₁}(w) and v₂ᵀΣ₂v₂ ∈ Q^r_{Φ₂}(w), and hence s ∈ Q^r_{Φ₁}(w) + Q^r_{Φ₂}(w). That is, Q^r_{Φ₁+Φ₂}(w) ⊆ Q^r_{Φ₁}(w) + Q^r_{Φ₂}(w) holds.

Let a right coprime factorization of G(ξ) in (20) over R[ξ] be given by

$$G(ξ) = N(ξ)D^{-1}(ξ), \qquad N ∈ R^{v×w}[ξ], \ D ∈ R^{w×w}[ξ].$$   (25)

Then, we see from Appendix A that the pair (v, w) ∈ C^∞(R, R^{v+w}) is a solution of v = G(d/dt)w iff there exists ℓ ∈ C^∞(R, R^w) satisfying

$$v = N(\tfrac{d}{dt})ℓ \quad \&\quad w = D(\tfrac{d}{dt})ℓ.$$   (26)

Since s = vᵀΣv is rewritten as s = [N(d/dt)ℓ]ᵀΣ[N(d/dt)ℓ] = Q_{NᵀΣN}(ℓ), we have

$$Q^{\mathrm r}_Φ(w) = \{ Q_{NᵀΣN}(ℓ) \mid w = D(\tfrac{d}{dt})ℓ, \ ℓ ∈ C^∞(R, R^{\mathtt w}) \}.$$   (27)

Therefore, the rational QDF Q^r_Φ inherits the properties (e.g. nonnegativity, average nonnegativity, etc.) of the polynomial QDF Q_{NᵀΣN}, as discussed below.

In the same way as in the polynomial case, we introduce the notation

$$\dot Φ(ζ, η) := (ζ + η)Φ(ζ, η).$$

Proposition 10. Let a symmetric rational matrix Φ ∈ R^{w×w}_s(ζ, η) be given. Then, under Assumption 1, we obtain

$$Q^{\mathrm r}_{\dot Φ}(w) = \{ r ∈ C^∞(R, R) \mid ∃ s ∈ Q^{\mathrm r}_Φ(w) \text{ s.t. } r = \tfrac{d}{dt}s \}.$$   (28)

Proof: Let a right coprime factorization be given by (25). We also define

$$Θ(ζ, η) = Dᵀ(ζ)Φ(ζ, η)D(η) = Nᵀ(ζ)ΣN(η).$$

Then, Θ(ζ, η) is a symmetric polynomial matrix in R^{w×w}[ζ, η]. Moreover, let M ∈ R^{v×w}[ξ] and Σ ∈ R^{v×v} induce a canonical factorization

$$(ζ + η)Θ(ζ, η) = Mᵀ(ζ)ΣM(η).$$

Then, there holds

$$\frac{d}{dt}\Big[ (N(\tfrac{d}{dt})ℓ)ᵀ Σ N(\tfrac{d}{dt})ℓ \Big] = (M(\tfrac{d}{dt})ℓ)ᵀ Σ (M(\tfrac{d}{dt})ℓ) \quad ∀ℓ ∈ C^∞(R, R^{\mathtt w}).$$   (29)

Note that, by the definition of the canonical factorization, it can be shown that the pair (M, D) is right coprime. We also see that H(ξ) := M(ξ)D⁻¹(ξ) is a canonical factor of Φ̇(ζ, η). As a result, we get

$$Q^{\mathrm r}_{\dot Φ}(w) = \{ r ∈ C^∞(R, R) \mid ∃ z \text{ s.t. } r = zᵀΣz \ \&\ z = H(\tfrac{d}{dt})w \}
= \{ Q_{MᵀΣM}(ℓ) \mid w = D(\tfrac{d}{dt})ℓ, ℓ ∈ C^∞(R, R^{\mathtt w}) \}
= \{ \tfrac{d}{dt}Q_{NᵀΣN}(ℓ) \mid w = D(\tfrac{d}{dt})ℓ, ℓ ∈ C^∞(R, R^{\mathtt w}) \}
= \{ \tfrac{d}{dt}s \mid s = Q_{NᵀΣN}(ℓ) \ \&\ w = D(\tfrac{d}{dt})ℓ \ \&\ ℓ ∈ C^∞(R, R^{\mathtt w}) \}
= \{ \tfrac{d}{dt}s \mid s ∈ Q^{\mathrm r}_Φ(w) \}.$$

This completes the proof.

3.1 Nonnegativity of rational QDF

Definition 6. Under Assumption 1, Q^r_Φ is said to be nonnegative, denoted by Φ ≥ 0, if

$$[[\ s(t) ≥ 0 \ ∀t ∈ R, ∀s ∈ Q^{\mathrm r}_Φ(w)\ ]] \quad ∀w ∈ C^∞(R, R^{\mathtt w}).$$

The nonpositivity of Q^r_Φ is also defined in the same way.

Proposition 11. Under Assumption 1,

$$Φ ≥ 0 \quad ⇔ \quad ∃ K ∈ R^{•×w}(ξ) \text{ s.t. } Φ(ζ, η) = Kᵀ(ζ)K(η).$$   (30)

Proof:
(⇐) Obvious, because we can assume without loss of generality that Φ(ζ, η) = Kᵀ(ζ)K(η) is a canonical factorization. Even if this is not the case, we can easily show that the canonical factorization is given in the form of Φ(ζ, η) = Gᵀ(ζ)G(η), Σ = I_v.

(⇒) Under Assumption 1, we again introduce the right coprime factorization (25). It then follows from (27) and Definition 6 that

$$Φ ≥ 0 \ ⇔\ s(t) = Q_{NᵀΣN}(ℓ)(t) ≥ 0 \ ∀t ∈ R, \ ∀ℓ \text{ s.t. } w = D(\tfrac{d}{dt})ℓ, \ ∀w ∈ C^∞(R, R^{\mathtt w}).$$

As shown in Proposition A.2 (i), we see that w ∈ C^∞(R, R^w) ⇔ ℓ ∈ C^∞(R, R^w) for the solutions of w = D(d/dt)ℓ. Therefore, we have

$$Φ ≥ 0 \ ⇔\ Q_{NᵀΣN}(ℓ)(t) ≥ 0 \ ∀t ∈ R, ∀ℓ ∈ C^∞(R, R^{\mathtt w})
\ ⇔\ Nᵀ(ζ)ΣN(η) ≥ 0 \text{ (in the sense of polynomial QDF's)}
\ ⇔\ ∃ F ∈ R^{•×w}[ξ] \text{ s.t. } Nᵀ(ζ)ΣN(η) = Fᵀ(ζ)F(η) \ \text{(by Proposition 1 (i))}.$$

As a result, by defining K(ξ) = F(ξ)D⁻¹(ξ), we obtain Φ(ζ, η) = Kᵀ(ζ)K(η). This completes the proof.

Also, the inequality between two rational QDF's is defined as follows:

$$Φ_1 − Φ_2 ≥ 0 \ :⇔\ s(t) ≥ 0 \ ∀t ∈ R, ∀s ∈ Q^{\mathrm r}_{Φ_1−Φ_2}(w), ∀w ∈ C^∞(R, R^{\mathtt w}).$$

Note that Φ₁ − Φ₂ ≥ 0 does not imply s₁(t) ≥ s₂(t) ∀t ∈ R, ∀sᵢ ∈ Q^r_{Φᵢ}(w) (i = 1, 2), ∀w ∈ C^∞(R, R^w).

3.2 Average nonnegativity and dissipation inequality

In order to define average nonnegativity, we introduce the set Q̂^r_Φ(w) as

$$\hat Q^{\mathrm r}_Φ(w) := \{ s ∈ C^∞(R, R) \mid ∃ v ∈ C^∞(R, R^{\mathtt v}) ∩ \mathfrak D \text{ s.t. } s = vᵀΣv \ \&\ v = G(\tfrac{d}{dt})w \}.$$

Clearly, Q̂^r_Φ(w) is a subset of Q^r_Φ(w), and any s ∈ Q̂^r_Φ(w) has compact support, guaranteeing the existence of the integral ∫_{−∞}^{∞} s(t)dt.

Definition 7. Under Assumption 1, Q^r_Φ is called average nonnegative, denoted by ∫ Q^r_Φ ≥ 0, if there holds

$$\int_{-\infty}^{\infty} s(t)\,dt ≥ 0 \quad ∀s ∈ \hat Q^{\mathrm r}_Φ(w), ∀w ∈ C^∞(R, R^{\mathtt w}) ∩ \mathfrak D.$$   (31)

The average nonpositivity is also defined in the same way. It may be noted that the integral inequality in (31) can be interpreted as a kind of integral quadratic constraint (Megretski and Rantzer 1997) in the time domain setting.

Lemma 1. Under Assumption 1, Q^r_Φ is average nonnegative if and only if there exist a factorizable Ψ ∈ R^{w×w}_s(ζ, η) and an F ∈ R^{•×w}(ξ) such that

$$(ζ + η)Ψ(ζ, η) + Fᵀ(ζ)F(η) = Φ(ζ, η).$$   (32)


Proof:
(⇒) In view of (26) and Proposition A.2 (ii), (v, w) has compact support iff so does ℓ. It thus follows that

$$\int Q^{\mathrm r}_Φ ≥ 0
\ ⇔\ \int_{-\infty}^{\infty} vᵀ(t)Σv(t)\,dt ≥ 0 \ ∀v ∈ C^∞(R, R^{\mathtt v}) ∩ \mathfrak D \text{ s.t. } v = G(\tfrac{d}{dt})w, \ ∀w ∈ C^∞(R, R^{\mathtt w}) ∩ \mathfrak D
\ ⇔\ \int_{-\infty}^{\infty} vᵀ(t)Σv(t)\,dt ≥ 0, \ v = N(\tfrac{d}{dt})ℓ, \ ∀ℓ ∈ C^∞(R, R^{\mathtt w}) ∩ \mathfrak D
\ ⇔\ \int Q_{Nᵀ(ζ)ΣN(η)} ≥ 0 \text{ (in the sense of polynomial QDF's)}
\ ⇔\ ∃ \hat Ψ ∈ R^{w×w}_s[ζ, η], \hat F ∈ R^{•×w}[ξ] \text{ such that}$$

$$(ζ + η)\hat Ψ(ζ, η) + \hat Fᵀ(ζ)\hat F(η) = Nᵀ(ζ)ΣN(η)$$   (33)

(by Proposition 2 (i)⇔(iv)). Pre- and post-multiplying (33) by D⁻ᵀ(ζ) and D⁻¹(η), respectively, we obtain (32) with

$$Ψ(ζ, η) := D^{-ᵀ}(ζ)\hat Ψ(ζ, η)D^{-1}(η), \qquad F(ξ) := \hat F(ξ)D^{-1}(ξ).$$

It is clear from this definition that Ψ(ζ, η) is factorizable.

(⇐) Since Φ(ζ, η) = Gᵀ(ζ)ΣG(η) and G(ξ) = N(ξ)D⁻¹(ξ), it follows from (32) that

$$(ζ + η)Dᵀ(ζ)Ψ(ζ, η)D(η) + (F(ζ)D(ζ))ᵀF(η)D(η) = Nᵀ(ζ)ΣN(η).$$   (34)

To prove the sufficiency, we first show that we can always find a solution of (32) such that both Dᵀ(ζ)Ψ(ζ, η)D(η) and F(η)D(η) are polynomial matrices. Suppose that this is not the case. It follows from (34) that

$$Nᵀ(−ξ)ΣN(ξ) = (F(−ξ)D(−ξ))ᵀF(ξ)D(ξ).$$

Since the left-hand side is a polynomial matrix, F(ξ)D(ξ) must be factored as

$$F(ξ)D(ξ) = U(ξ)W(ξ),$$

where W(ξ) is a polynomial matrix and U(ξ) is a unitary rational matrix, namely Uᵀ(−ξ)U(ξ) = I. Then, we re-define F(ξ) and Ψ(ζ, η) as

$$F(ξ) ← W(ξ)D^{-1}(ξ),$$
$$Ψ(ζ, η) ← Ψ(ζ, η) + \frac{1}{ζ + η}\Big[ Fᵀ(ζ)F(η) − D^{-ᵀ}(ζ)Wᵀ(ζ)W(η)D^{-1}(η) \Big].$$

Note that this new Ψ(ζ, η) is still factorizable because Fᵀ(ζ)F(η) − D⁻ᵀ(ζ)Wᵀ(ζ)W(η)D⁻¹(η) has ζ + η as a factor of its numerator. Furthermore, since (ζ + η)Dᵀ(ζ)Ψ(ζ, η)D(η) is a polynomial matrix from (34), we conclude that Dᵀ(ζ)Ψ(ζ, η)D(η) is a polynomial matrix. Thus, we hereafter assume that both Dᵀ(ζ)Ψ(ζ, η)D(η) and F(η)D(η) are polynomial matrices.

Next, we define

$$Ξ(ζ, η) = Dᵀ(ζ)Ψ(ζ, η)D(η),$$   (35)
$$Δ(ζ, η) = (F(ζ)D(ζ))ᵀF(η)D(η).$$   (36)

Since Ξ(ζ, η) and Δ(ζ, η) are polynomial matrices, the polynomial QDF's Q_Ξ and Q_Δ satisfy

$$Q_{NᵀΣN}(ℓ) = \frac{d}{dt}Q_Ξ(ℓ) + Q_Δ(ℓ).$$   (37)

Hence, for every ℓ ∈ C^∞(R, R^w) ∩ D, integrating (37) from −∞ to ∞ yields

$$\int_{-\infty}^{\infty} Q_{NᵀΣN}(ℓ)\,dt = \int_{-\infty}^{\infty} Q_Δ(ℓ)\,dt + Q_Ξ(ℓ)(∞) − Q_Ξ(ℓ)(−∞) = \int_{-\infty}^{\infty} Q_Δ(ℓ)\,dt ≥ 0.$$   (38)

In the last inequality, we have used the fact that Δ ≥ 0. By Proposition A.2, for the solutions of v = G(d/dt)w, v and w have compact support iff ℓ ∈ D. Hence, (38) implies that Q^r_Φ is average nonnegative.

Remark 3. We give a remark regarding Q_Ξ and Q_Δ introduced in the proof of sufficiency. Since Q_Δ(ℓ)(t) ≥ 0 holds for all t ∈ R and for all ℓ ∈ C^∞(R, R^w), (37) reduces to a version of the dissipation inequality

$$Q_{NᵀΣN}(ℓ)(t) ≥ \frac{d}{dt}Q_Ξ(ℓ)(t) \quad ∀t ∈ R, ∀ℓ ∈ C^∞(R, R^{\mathtt w}) \text{ s.t. } w = D(\tfrac{d}{dt})ℓ.$$

In this case, Q_Ξ and Q_Δ respectively serve as a storage function and a dissipation rate for the QDF Q^r_Φ represented by a rational matrix in R^{w×w}_s(ζ, η).

The next lemma provides a frequency domain condition for average nonnegativity.

Lemma 2. Under Assumption 1, there exist a Ψ ∈ R^{w×w}_s(ζ, η) and an F ∈ R^{•×w}(ξ) satisfying (32) if and only if

$$Φ(\barλ, λ) ≥ 0 \quad ∀λ ∈ iR \setminus \{\text{poles of } G(ξ)\}.$$   (39)

Proof: (⇒) It readily follows from (32) that

$$Φ(\barλ, λ) = (F(λ))^{*}F(λ) ≥ 0 \quad ∀λ ∈ iR \setminus \{\text{poles of } G(ξ)\}.$$

(⇐) Let G(ξ) = N(ξ)D⁻¹(ξ) be a right coprime factorization over R[ξ]. Since det D(λ) ≠ 0 holds for all λ ∈ iR \ {poles of G(ξ)}, and since N(λ) is continuous in λ ∈ C, we obtain

$$Φ(\barλ, λ) ≥ 0 \ ∀λ ∈ iR \setminus \{\text{poles of } G(ξ)\} \quad ⇔ \quad Nᵀ(\barλ)ΣN(λ) ≥ 0 \ ∀λ ∈ iR.$$

The latter condition is equivalent to the existence of polynomial matrices Ψ̂ ∈ R^{w×w}_s[ζ, η] and F̂ ∈ R^{•×w}[ξ] such that

$$(ζ + η)\hat Ψ(ζ, η) + \hat Fᵀ(ζ)\hat F(η) = Nᵀ(ζ)ΣN(η)$$

(see Proposition 2 (ii)⇔(iv)). Thus, we obtain (32) by defining Ψ(ζ, η) = D⁻ᵀ(ζ)Ψ̂(ζ, η)D⁻¹(η) and F(ξ) = F̂(ξ)D⁻¹(ξ), and by pre- and post-multiplying the above equation by D⁻ᵀ(ζ) and D⁻¹(η), respectively.

Summarizing the discussions in Lemmas 1 and 2, we obtain the following theorem.

Theorem 1. Under Assumption 1, the following statements are equivalent.

(i) ∫ Q^r_Φ ≥ 0 (average nonnegative).
(ii) There exists a symmetric and factorizable rational matrix Ψ ∈ R^{w×w}_s(ζ, η) such that

$$\dot Ψ − Φ ≤ 0.$$   (40)

(iii) Φ(λ̄, λ) ≥ 0 holds for all λ ∈ iR \ {poles of G(ξ)}.
(iv) There exist a symmetric and factorizable rational matrix Ψ ∈ R^{w×w}_s(ζ, η) and a rational matrix F ∈ R^{•×w}(ξ) such that

$$Φ(ζ, η) = (ζ + η)Ψ(ζ, η) + Fᵀ(ζ)F(η).$$   (41)

Moreover, among the Ψ's satisfying (40) or (41), there exists one for which Dᵀ(ζ)Ψ(ζ, η)D(η) is a polynomial matrix for the right coprime factors (N, D) of G(ξ).

Proof: It is easily seen from Proposition 11 that Ψ̇ − Φ ≤ 0 holds iff there exists F ∈ R^{•×w}(ξ) satisfying

$$Φ(ζ, η) − (ζ + η)Ψ(ζ, η) = Fᵀ(ζ)F(η).$$

Therefore, (i)⇔(ii) and (ii)⇔(iii) immediately follow from Lemmas 1 and 2, respectively. The existence of a Ψ(ζ, η) for which Ψ̇ − Φ ≤ 0 and Dᵀ(ζ)Ψ(ζ, η)D(η) is a polynomial matrix is also clear from the proof of the sufficiency of Lemma 1.

Remark 4. The equivalence between (iii) and (iv), namely Lemma 2, claims that the para-Hermitian rational matrix Φ(−ξ, ξ) ∈ R^{w×w}(ξ) admits a spectral factorization

$$Φ(−ξ, ξ) = Fᵀ(−ξ)F(ξ), \qquad F ∈ R^{w×w}(ξ),$$

if and only if condition (iii) is satisfied (this condition was proved by Youla (1961)). Hence, the above result provides a behavioral interpretation of Youla's spectral factorizability condition.

Example 5 (continued). Since

$$Φ(−ξ, ξ) = \frac{−Rξ^2}{(\frac1C − Rξ + Lξ^2)(\frac1C + Rξ + Lξ^2)},$$

we have

$$Φ(\barλ, λ) = R\left| \frac{λ}{\frac1C + Rλ + Lλ^2} \right|^2 ≥ 0 \quad ∀λ ∈ iR.$$

Hence, the spectral factor of Φ(−ξ, ξ) exists and is given by

$$F(ξ) = \frac{\sqrt R\, ξ}{\frac1C + Rξ + Lξ^2}.$$

It follows that

$$Ψ(ζ, η) = \frac{Φ(ζ, η) − Fᵀ(ζ)F(η)}{ζ + η} = \frac{\frac{1}{2C} + \frac12 Lζη}{(\frac1C + Rζ + Lζ^2)(\frac1C + Rη + Lη^2)}.$$

Since the right coprime factors of G(ξ) are given by

$$D(ξ) = \frac1C + Rξ + Lξ^2, \qquad N(ξ) = \frac12\begin{pmatrix} D(ξ) + ξ \\ D(ξ) − ξ\end{pmatrix} = \frac12\begin{pmatrix} \frac1C + (R+1)ξ + Lξ^2 \\ \frac1C + (R−1)ξ + Lξ^2\end{pmatrix},$$

the polynomial matrices in (35), (36) are given by

$$Ξ(ζ, η) = Dᵀ(ζ)Ψ(ζ, η)D(η) = \frac{1}{2C} + \frac12 Lζη, \qquad Δ(ζ, η) = Dᵀ(ζ)Fᵀ(ζ)F(η)D(η) = Rζη.$$

In addition, the observable (polynomial) image representation of v = G(d/dt)w is given by

$$\begin{pmatrix} v \\ w\end{pmatrix} = \begin{pmatrix} \frac12(V + I) \\ \frac12(V − I) \\ V\end{pmatrix} = \begin{pmatrix} N(\tfrac{d}{dt}) \\ D(\tfrac{d}{dt})\end{pmatrix} q.$$

Note that, in this case, the latent variable is exactly the electric charge q. Hence, Ξ(ζ, η) and Δ(ζ, η) recover the storage function and the dissipation rate in Example 3 as follows:

$$Q_Ξ(q) = \frac{1}{2C}q^2 + \frac12 LI^2, \qquad Q_Δ(q) = RI^2, \qquad I = \frac{dq}{dt}.$$
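Condition (iii) of Theorem 1 can also be checked numerically by sweeping the imaginary axis. A small NumPy sketch (ours; the component values and the frequency grid are arbitrary choices) for the Φ of Example 5:

```python
import numpy as np

# Numerical check of Theorem 1 (iii) for Example 5
R, L, C = 2.0, 0.5, 1e-3

def Phi_spec(omega):
    """Phi(conj(lambda), lambda) evaluated at lambda = i*omega."""
    lam = 1j * omega
    H = lam / (1.0/C + R*lam + L*lam**2)
    return 0.5 * (np.conj(H) + H)        # = Re H(i*omega)

omegas = np.logspace(-2, 4, 2000)
vals = np.real([Phi_spec(om) for om in omegas])
print(vals.min() >= 0)                   # True: the supply rate V*I is average nonnegative
```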

Next, we consider the notion of half-line nonnegativity.

Definition 8. Under Assumption 1, Q^r_Φ is called half-line nonnegative, denoted by ∫_t Q^r_Φ ≥ 0, if there holds

$$\int_{-\infty}^{0} s(t)\,dt ≥ 0 \quad ∀s ∈ \hat Q^{\mathrm r}_Φ(w), ∀w ∈ C^∞(R, R^{\mathtt w}) ∩ \mathfrak D.$$   (42)

The half-line nonpositivity is also defined in the same way.

Theorem 2. Under Assumption 1, the following statements are equivalent.

(i) ∫_t Q^r_Φ ≥ 0 (half-line nonnegative).
(ii) There exists a symmetric and factorizable rational matrix Ψ ∈ R^{w×w}_s(ζ, η) such that

$$\dot Ψ − Φ ≤ 0 \quad \&\quad Ψ ≥ 0.$$   (43)

(iii) There exist a symmetric and factorizable Ψ ∈ R^{w×w}_s(ζ, η) and rational matrices F, K ∈ R^{•×w}(ξ) satisfying (41) and Ψ(ζ, η) = Kᵀ(ζ)K(η).

Moreover, among the Ψ's satisfying (41) or (43), there exists one for which Dᵀ(ζ)Ψ(ζ, η)D(η) is a polynomial matrix for the right coprime factors (N, D) of G(ξ).

We omit the proof, since the theorem can be proved in almost the same way as Theorem 1. In particular, it turns out from (27) and Propositions A.1, A.2 that the half-line nonnegativity of the rational QDF is equivalently translated to that of a polynomial QDF:

$$\int_t Q^{\mathrm r}_Φ ≥ 0 \quad ⇔ \quad \int_t Q_{Nᵀ(ζ)ΣN(η)} ≥ 0.$$

Therefore, we have the same observation as in Remark 3 that, for Ψ(ζ, η) satisfying the conditions in Theorem 2, the polynomial QDF Q_Ξ(ℓ), Ξ(ζ, η) = Dᵀ(ζ)Ψ(ζ, η)D(η), serves as a "nonnegative" storage function for the system defined by v = G(d/dt)w.

4. CONCLUDING REMARKS

In this paper, we first illustrated the basic features of a QDF, including its definition, calculus, nonnegativity, average nonnegativity, etc. We also clarified the relation between QDF's and the dissipativity and Lyapunov stability of a linear differential system. Although we have concentrated only on QDF's defined by ordinary differential equations in continuous time, it should be noted that there have been a number of research works on discrete-time QDF's (Kaneko and Fujii 2000, 2003, Kojima and Takaba 2005), and on QDF's defined by partial differential equations or multivariable polynomials (Pillai and Shankar 1999, Pillai and Willems 2002, Napp Avelli and Trentelman 2007).

Secondly, we presented a new and more general formulation of a QDF in terms of rational functions rather than polynomials. It turned out that the rational QDF defines a set, unlike the polynomial QDF. We also showed that the notions of nonnegativity, average nonnegativity and half-line nonnegativity can be generalized for the rational QDF, and derived their necessary and sufficient conditions. It remains as a future topic to study the rational QDF along a given behavior and its applications to Lyapunov stability and dissipation theory. It should be noted that the rational QDF has many important applications in system and control theory: for example, the dissipation theory for linear systems defined by rational representations, the stability analysis of interconnected systems with rational multipliers, and so forth.

The first author would like to acknowledge that this work is supported by JSPS Grant-in-Aid for Scientific Research (C) No. 18560431.

REFERENCES

M.N. Belur and H.L. Trentelman, Algorithmic issues in the synthesis of dissipative systems, Mathematical and Computer Modelling of Dynamical Systems, vol. 8, no. 4, pp. 407–428, 2002.

M.N. Belur and H.L. Trentelman, The strict dissipativity synthesis problem and the rank of the coupling QDF, Syst. Contr. Letters, vol. 51, pp. 247–258, 2004.

T. Iwasaki and S. Hara, Well-posedness of feedback systems: insights into exact robustness analysis and approximate computations, IEEE Trans. Automat. Contr., vol. 43, no. 5, pp. 619–630, 1998.

O. Kaneko and T. Fujii, Discrete-time average positivity and spectral factorization in a behavioral framework, Syst. Contr. Letters, vol. 39, pp. 31–44, 2000.

O. Kaneko and T. Fujii, When is a storage function a state function in discrete time?, SIAM J. Contr. Optimiz., vol. 42, no. 4, pp. 1374–1394, 2003.

O. Kaneko and T. Fujii, The behavioral approach to dissipation theory of dynamical systems: Based on QDF's, Syst. Contr. Inform., vol. 48, no. 5, 2004 (in Japanese).

C. Kojima and K. Takaba, A generalized Lyapunov stability theorem for discrete-time systems based on quadratic difference forms, Proc. 44th IEEE Conf. Decis. Contr. and Eur. Contr. Conf. (CDC-ECC 2005), pp. 2911–2916, 2005.

A. Megretski and A. Rantzer, System analysis via integral quadratic constraints, IEEE Trans. Automat. Contr., vol. 42, no. 6, pp. 819-830, 1997.

D. Napp Avelli and H.L. Trentelman, Algorithms for multidimensional spectral factorization and sum of squares, Linear Algebra and its Applications, to appear, 2007.

R. Peeters and P. Rapisarda, A two-variable approach to solve the polynomial Lyapunov equation, Syst. Contr. Letters, vol. 42, pp. 117–126, 2001.

I. Pendharkar and H. Pillai, Systems with sector bound nonlinearities: A behavioral approach, Syst. Contr. Letters, to appear, 2007.

H. Pillai and S. Shankar, A behavioural approach to control of distributed systems, SIAM J. Contr. Optimiz., vol. 37, pp. 388–408, 1999.

H. Pillai and J.C. Willems, Lossless and dissipative distributed systems, SIAM J. Contr. Optimiz., vol. 40, pp. 1406–1430, 2002.

J.W. Polderman and J.C. Willems, Introduction to Mathematical Systems Theory: A Behavioral Approach, Springer-Verlag, 1998.

P. Rapisarda and J.C. Willems, An introduction to quadratic differential forms, Proc. of 16th Symp. Mathematical Theory of Networks and Systems (MTNS 2004), Leuven, 2004.

K. Takaba, Robust stability analysis of uncertain interconnection in the behavioral framework, Proc. of 16th IFAC World Congress, Prague, 2005.

H.L. Trentelman and J.C. Willems, H∞ control in a behavioral context: The full information case, IEEE Trans. Automat. Contr., vol. 44, no. 3, pp. 521–536, 1999.

H.L. Trentelman and J.C. Willems, Dissipative linear differential systems and the state space H∞ control problem, Int. J. of Robust and Nonlinear Control, vol. 10, pp. 1039–1057, 2000.

J.C. Willems, LQ-control: a behavioral approach, Proc. of 32nd IEEE Conf. on Decis. and Contr., San Antonio, pp. 3664–3668, 1993.

J.C. Willems and H.L. Trentelman, On quadratic differential forms, SIAM J. Contr. Optimiz., vol. 36, pp. 1703–1749, 1998.

J.C. Willems and H.L. Trentelman, Synthesis of dissipative systems using quadratic differential forms: Parts I and II, IEEE Trans. Automat. Contr., vol. 47, pp. 53–86, 2002.

J.C. Willems and M.E. Valcher, Linear-quadratic control and quadratic differential forms, Proc. of 16th IFAC World Congress, Prague, Paper code: Mo-A15-TO/1, 2005.

J.C. Willems and K. Takaba, Dissipativity and stability of interconnections, Int. J. of Robust and Nonlinear Control, vol. 17, pp. 563–586, 2007.

J.C. Willems and Y. Yamamoto, Behaviors defined by rational functions, Linear Algebra and its Applications, vol. 425, pp. 226–241, 2007.

D.C. Youla, On the factorization of rational matrices, IRE Trans. Inform. Theory, vol. 7, pp. 172–189, 1961.

Appendix A. LINEAR DIFFERENTIAL SYSTEMS

Representation by polynomial matrices

In the behavioral approach, a dynamical system is characterized by the triple Σ = (T, W, B), where T is the time axis and W is the signal space where the system variables take their values at each time instant. The set B is the behavior, namely, the set of all possible trajectories which meet the dynamic laws of the system. Throughout this paper, we will identify a dynamical system with its behavior for ease of notation.

We are mainly interested in a linear differential system, that is, a system described by a differential-algebraic equation with constant coefficients. Typically, such an equation is given by

$$R_0 w + R_1\frac{d}{dt}w + \cdots + R_L\frac{d^L}{dt^L}w = 0,$$

or, in shorthand notation,

$$R\Big(\frac{d}{dt}\Big)w = 0, \qquad \text{where } R(ξ) = R_0 + R_1ξ + \cdots + R_Lξ^L ∈ R^{p×w}[ξ].$$

We call the variable w : R → R^w a manifest variable. Then, the behavior is defined by

$$\mathcal B = \Big\{ w ∈ C^∞(R, R^{\mathtt w}) \ \Big|\ R\Big(\frac{d}{dt}\Big)w = 0 \Big\}.$$

In short, we denote the behavior as B = ker R(d/dt). For the obvious reason, the above representation is called a kernel representation. We define L^w as the set of such linear time-invariant differential behaviors with w variables. For simplicity of discussion, we assume that the behaviors are defined in the class of C^∞-functions.

Recall that there is more than one polynomial matrix which induces a kernel representation of B. A polynomial matrix R(ξ) satisfying B = ker R(d/dt) is said to be minimal if the number of rows of R(ξ) is less than or equal to that of any other polynomial matrix which induces a kernel representation of B.

A system B is called controllable if, for any w₁, w₂ ∈ B, there exist a w ∈ B and a positive constant T such that w(t) = w₁(t) (t ≤ 0) and w(t) = w₂(t − T) (t ≥ T). The family of controllable linear time-invariant differential systems is denoted by L^w_cont. When a kernel representation of B is induced by R(ξ), B is controllable iff rank R(λ) is constant for all λ ∈ C. If R(ξ) induces a minimal kernel representation of a controllable system, then R(λ) has full row rank for all λ ∈ C.
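The rank test for controllability can be mechanized. The following SymPy sketch (ours; the function name and the two toy kernel representations are our own choices) checks whether the rank of R(λ) can drop by looking at the gcd of the generic-rank minors of R(ξ):

```python
import sympy as sp
from itertools import combinations

lam = sp.symbols('lambda')

def is_controllable(R):
    """R(d/dt)w = 0 is controllable iff rank R(mu) is the same for every mu in C,
    i.e. the generic-rank minors of R have no common root."""
    r = R.rank()                                   # generic rank over R(lambda)
    if r == 0:
        return True
    minors = [R[list(rows), list(cols)].det()
              for rows in combinations(range(R.rows), r)
              for cols in combinations(range(R.cols), r)]
    g = minors[0]
    for m in minors[1:]:
        g = sp.gcd(g, m)
    return sp.degree(g, lam) == 0                  # constant gcd <=> no rank drop

R1 = sp.Matrix([[lam + 1, 1]])           # controllable: rank is 1 for every lambda
R2 = sp.Matrix([[lam + 1, lam + 1]])     # not controllable: rank drops at lambda = -1
print(is_controllable(R1), is_controllable(R2))   # True False
```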

If R(λ) has full row rank for all λ ∈ C, there exists a polynomial matrix M ∈ R^{w×m}[ξ] such that R(ξ)M(ξ) = 0. In this case, B can be rewritten as

$$\mathcal B = \Big\{ w ∈ C^∞(R, R^{\mathtt w}) \ \Big|\ ∃ ℓ \text{ s.t. } w = M\Big(\frac{d}{dt}\Big)ℓ \Big\},$$

or, in short, B = im M(d/dt), where ℓ : R → R^m is an auxiliary variable called a latent variable. The representation w = M(d/dt)ℓ is called an image representation. The image representation is said to be observable if M(d/dt)ℓ = 0 implies ℓ = 0. The behavior B = im M(d/dt) is observable if and only if M(λ) has full column rank for any λ ∈ C.

Suppose that R ∈ R^{p×w}[ξ] induces a minimal kernel representation of B ∈ L^w. Then, there exists a nonsingular permutation matrix Π such that

$$R(ξ)Π^{-1} = \begin{pmatrix} Q(ξ) & −P(ξ)\end{pmatrix}, \quad \det P ≠ 0, \quad Πw = \begin{pmatrix} u \\ y\end{pmatrix}, \quad u : R → R^{\mathtt m}, \ y : R → R^{\mathtt p}, \ {\mathtt p} + {\mathtt m} = {\mathtt w}.$$

Then, u and y serve as the input and output of B, respectively, and the transfer function from u to y is defined by

$$G(ξ) = P^{-1}(ξ)Q(ξ).$$

For the obvious reason, the above partition is called the input/output (I/O) partition of B. It should be noted that the choice of inputs and outputs is not unique, and is not given a priori. The dimensions of u and y (namely, m and p) are invariant for any choice of inputs and outputs and for any representation of B. We refer to these dimensions as the input and output cardinalities of B, and denote them by m(B) and p(B), respectively. It should also be noted that the system B ∈ L^w is autonomous if and only if m(B) = 0 and p(B) = w.

A system B is said to be asymptotically stable if w(t) → 0 (t → ∞) holds for all w ∈ B. Clearly, B must be autonomous in order to be asymptotically stable. The behavior B = ker R(d/dt) is asymptotically stable if and only if R(λ) has full column rank for all λ ∈ C₊. In the case where R(ξ) is square, B is asymptotically stable iff R(ξ) is Hurwitz, namely det R(ξ) = 0 has all its roots in Re ξ < 0.
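For a square R(ξ), the stability test thus reduces to locating the roots of det R(ξ). A minimal NumPy sketch (ours, with arbitrary positive parameter values) for the scalar R of Example 4:

```python
import numpy as np

# Hurwitz test for a square polynomial kernel representation:
# det R(xi) = m*xi^2 + d*xi + k, with sample parameter values
m, d, k = 1.0, 0.4, 2.0
roots = np.roots([m, d, k])
print(np.all(roots.real < 0))   # True: B = ker R(d/dt) is asymptotically stable
```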

Representation by rational matrices

We next consider the linear system represented by

$$v = G\Big(\frac{d}{dt}\Big)w, \qquad G ∈ R^{v×w}(ξ).$$   (A.1)

For more general rational representations, such as G(d/dt)w = 0 and G(d/dt)w = H(d/dt)ℓ, the reader is referred to Willems and Yamamoto (2007). We introduce left and right coprime factorizations of G(ξ) over R[ξ] as

$$G(ξ) = X^{-1}(ξ)Y(ξ) = N(ξ)D^{-1}(ξ),$$   (A.2)
$$X ∈ R^{v×v}[ξ], \ Y ∈ R^{v×w}[ξ], \ N ∈ R^{v×w}[ξ], \ D ∈ R^{w×w}[ξ].$$

Then, along the lines of Willems and Yamamoto (2007), the solution set of (A.1) is defined as follows.

Definition 9. [[ (v, w) : R → R^{v+w} is a solution of v = G(d/dt)w ]] :⇔ [[ (v, w) satisfies X(d/dt)v = Y(d/dt)w ]].

Proposition A.1. Let G ∈ R^{v×w}(ξ) be given. Then,

$$[[\ (v, w) \text{ satisfies (A.1)}\ ]] \quad ⇔ \quad [[\ ∃ ℓ \text{ s.t. } v = N(\tfrac{d}{dt})ℓ, \ w = D(\tfrac{d}{dt})ℓ\ ]].$$

Proof: Recall that X(d/dt)v = Y(d/dt)w is a kernel representation of the behavior of (v, w) in terms of the "polynomial" matrix (X(ξ)  Y(ξ)). Hence, the proposition is obvious from the definition of the coprime factors and the standard results on kernel and image (polynomial) representations (see e.g. Section 6.6 in Polderman and Willems (1998)).

Proposition A.2. Consider the solution of the differential equation

$$\begin{pmatrix} v \\ w\end{pmatrix} = \begin{pmatrix} N(\tfrac{d}{dt}) \\ D(\tfrac{d}{dt})\end{pmatrix} ℓ$$   (A.3)

associated with the rational representation v = G(d/dt)w.

(i) w ∈ C^∞(R, R^w) ⇔ ℓ ∈ C^∞(R, R^w).
(ii) (v, w) ∈ C^∞(R, R^{v+w}) has compact support iff ℓ ∈ C^∞(R, R^w) has compact support.

Proof: (i) Immediate. (ii) By the right coprimeness,

$$\begin{pmatrix} N(λ) \\ D(λ)\end{pmatrix}$$

has full column rank for all λ ∈ C. In other words, ℓ is observable from (v, w). The observability proves the statement in (ii).

Consider the situation where w is arbitrarily given and v is determined from (A.1). Since det X ≠ 0 and det D ≠ 0, w is free in (A.1). Hence, (A.1) defines a linear differential behavior B_v ∈ L^v:

$$\mathcal B_v = \{ v ∈ C^∞(R, R^{\mathtt v}) \mid ∃ w ∈ C^∞(R, R^{\mathtt w}) \text{ s.t. } v = G(\tfrac{d}{dt})w \}.$$

Indeed, the above propositions claim that B_v is represented by an image representation as

$$\mathcal B_v = \{ v = N(\tfrac{d}{dt})ℓ \mid ℓ ∈ C^∞(R, R^{\mathtt w}) \}.$$

Note that, in general, G(d/dt) does not define a map from w to v. This is because there are a number of ℓ's satisfying w = D(d/dt)ℓ for a fixed w. Of course, if G(ξ) is a polynomial matrix, then G(d/dt) defines a differential map. The above propositions claim that, though G(d/dt) is not a map, G(d/dt)C^∞(R, R^w) is a linear differential behavior, which is characterized as the image space of the differential operator N(d/dt).
