
Delin Chu* and Bart De Moor†

Department of Electrical Engineering
Katholieke Universiteit Leuven
Kardinaal Mercierlaan 94, B-3001 Leuven, Belgium

August 13, 1998

*Delin.Chu@esat.kuleuven.ac.be
†Bart.Demoor@esat.kuleuven.ac.be

Abstract

Recently, Chu, Funderlic and Golub [SIAM J. Matrix Anal. Appl., 18:1082-1092, 1997] presented a variational formulation for the quotient singular value decomposition (QSVD) of two matrices $A \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times m}$ which generalizes the one for the ordinary singular value decomposition (OSVD) and characterizes the role of the two orthogonal matrices in the QSVD. In this paper, we give an alternative derivation of this variational formulation and extend it to establish an analogous variational formulation for the restricted singular value decomposition (RSVD) of matrix triplets $A \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{n \times l}$, $C \in \mathbb{R}^{p \times m}$, which provides new understanding of the orthogonal matrices appearing in this decomposition.

Keywords: OSVD, QSVD, RSVD, Generalized Singular Value, Variational Formulation, Stationary Value, Stationary Point.

AMS subject classification: 65F15, 65H15.

1 Introduction

The ordinary singular value decomposition (OSVD) of a given matrix $A \in \mathbb{R}^{n \times m}$ is

\[
U^T A V = \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix}, \tag{1}
\]

where the block columns have widths $r_a$, $m-r_a$ and the block rows have heights $r_a$, $n-r_a$, with

\[
U = \begin{bmatrix} U_1 & U_2 \end{bmatrix}, \qquad V = \begin{bmatrix} V_1 & V_2 \end{bmatrix}
\]

($U_1$ with $r_a$ columns, $U_2$ with $n-r_a$ columns; $V_1$ with $r_a$ columns, $V_2$ with $m-r_a$ columns), and

\[
\Sigma = \operatorname{diag}\{\sigma_1, \ldots, \sigma_{r_a}\}, \qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{r_a} > 0, \qquad r_a = \operatorname{rank}(A),
\]

where $U, V$ are orthogonal matrices. The $\sigma_1, \ldots, \sigma_{r_a}$ are the non-trivial singular values of $A$, and the columns of $U_1$ and $V_1$ are, respectively, the non-trivial left and right singular vectors of $A$. In this paper, $\|\cdot\|$ denotes the 2-norm of a vector. The following theorem is well-known [4]:


Theorem 1. Given $A \in \mathbb{R}^{n \times m}$ with OSVD (1).

(a) Consider the optimization problem

\[
\max_{y = Ax,\; y \neq 0} \frac{\|y\|}{\|x\|}. \tag{2}
\]

Then the non-trivial singular values $\sigma_1, \ldots, \sigma_{r_a}$ of $A$ are precisely the stationary values, i.e., the functional evaluations at the stationary points, of (2). Moreover, if the stationary points of (2) corresponding to the stationary values $\sigma_1, \ldots, \sigma_{r_a}$ are $\begin{bmatrix} x_1 \\ y_1 \end{bmatrix}, \ldots, \begin{bmatrix} x_{r_a} \\ y_{r_a} \end{bmatrix}$, then

\[
V_1 = \begin{bmatrix} \frac{x_1}{\|x_1\|} & \cdots & \frac{x_{r_a}}{\|x_{r_a}\|} \end{bmatrix}.
\]

Moreover, if $n = m = r_a$, then

\[
U_1 = \begin{bmatrix} \frac{y_1}{\|y_1\|} & \cdots & \frac{y_{r_a}}{\|y_{r_a}\|} \end{bmatrix}.
\]

(b) Consider the dual optimization problem

\[
\max_{y^T = x^T A,\; y \neq 0} \frac{\|y\|}{\|x\|}. \tag{3}
\]

Then the non-trivial singular values $\sigma_1, \ldots, \sigma_{r_a}$ of $A$ are precisely the stationary values of (3). Moreover, if the stationary points of (3) corresponding to the stationary values $\sigma_1, \ldots, \sigma_{r_a}$ are $\begin{bmatrix} x_1 \\ y_1 \end{bmatrix}, \ldots, \begin{bmatrix} x_{r_a} \\ y_{r_a} \end{bmatrix}$, then

\[
U_1 = \begin{bmatrix} \frac{x_1}{\|x_1\|} & \cdots & \frac{x_{r_a}}{\|x_{r_a}\|} \end{bmatrix}.
\]

Moreover, if $n = m = r_a$, then

\[
V_1 = \begin{bmatrix} \frac{y_1}{\|y_1\|} & \cdots & \frac{y_{r_a}}{\|y_{r_a}\|} \end{bmatrix}.
\]
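As a quick numerical illustration (ours, not from the original paper), part (a) can be checked with NumPy: the right singular vectors of $A$ are stationary points of (2), and the functional values there are exactly the singular values.

```python
# Minimal sketch (NumPy only), checking Theorem 1(a) numerically.
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 4
A = rng.standard_normal((n, m))

U, sigma, Vt = np.linalg.svd(A)   # OSVD (1); sigma sorted descending

# At the stationary point x = v_i (i-th right singular vector, y = A v_i),
# the functional ||y|| / ||x|| of problem (2) equals sigma_i.
for i, s in enumerate(sigma):
    v = Vt[i]
    assert np.isclose(np.linalg.norm(A @ v) / np.linalg.norm(v), s)
```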

Recently, in Chu, Funderlic and Golub [1], Theorem 1 was generalized to the quotient singular value decomposition (QSVD) [3, 5, 6, 7, 8, 9, 10, 11, 13, 14] of two matrices $A \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times m}$, based on the relationship between the QSVD of the two matrices $A$, $C$ and the eigendecomposition of the matrix pencil $(A^T A, C^T C)$.

The purposes of this paper are twofold. Firstly, we present an alternative derivation of the variational formulation in [1], based directly on the QSVD of the two matrices $A$, $C$. Then we extend this result to the restricted singular value decomposition (RSVD) [8, 9, 10, 11, 15, 16] of matrix triplets and obtain an analogous variational formulation which provides new understanding of the orthogonal matrices appearing in this decomposition.

In order to prove our main results, we will establish two condensed forms based on orthogonal matrix transformations. From these condensed forms the QSVD of two matrices and the RSVD of matrix triplets can be obtained, and the variational formulations for the QSVD and the RSVD can be proved directly.

In this paper, we use the following notation:

• $S_1(M)$ denotes a matrix with orthogonal columns spanning the right nullspace of a matrix $M$;

• $T_1(M)$ denotes a matrix with orthogonal columns spanning the right nullspace of a matrix $M^T$;

• $M^{\perp}$ denotes the orthogonal complement of the space spanned by the columns of $M$;

• unless noted otherwise, we do not distinguish between a matrix with orthogonal columns and the space spanned by its columns.

We also use the following notation for any given matrices $A, B, C$ with compatible sizes:

\[
r_a = \operatorname{rank}(A), \quad r_b = \operatorname{rank}(B), \quad r_c = \operatorname{rank}(C), \quad
r_{ab} = \operatorname{rank}\begin{bmatrix} A & B \end{bmatrix}, \quad
r_{ac} = \operatorname{rank}\begin{bmatrix} A \\ C \end{bmatrix}, \quad
r_{abc} = \operatorname{rank}\begin{bmatrix} A & B \\ C & 0 \end{bmatrix},
\]

\[
k_1 = r_{abc} - r_b - r_c, \quad k_2 = r_{ab} + r_c - r_{abc}, \quad
k_3 = r_{ac} + r_b - r_{abc}, \quad k_4 = r_a + r_{abc} - r_{ab} - r_{ac}.
\]
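These quantities are easy to tabulate numerically; the following sketch (ours; a numerical rank depends on the tolerance used by `numpy.linalg.matrix_rank`) collects them for a given triplet.

```python
# Sketch: the rank quantities and k_1, ..., k_4 defined above, computed
# with NumPy's default rank tolerance.
import numpy as np

def rsvd_ranks(A, B, C):
    rank = np.linalg.matrix_rank
    r_a, r_b, r_c = rank(A), rank(B), rank(C)
    r_ab = rank(np.hstack([A, B]))                 # rank [A  B]
    r_ac = rank(np.vstack([A, C]))                 # rank [A; C]
    r_abc = rank(np.block([[A, B],
                           [C, np.zeros((C.shape[0], B.shape[1]))]]))
    return {"r_a": r_a, "r_b": r_b, "r_c": r_c,
            "r_ab": r_ab, "r_ac": r_ac, "r_abc": r_abc,
            "k1": r_abc - r_b - r_c,
            "k2": r_ab + r_c - r_abc,
            "k3": r_ac + r_b - r_abc,
            "k4": r_a + r_abc - r_ab - r_ac}
```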

2 A Variational Formulation for QSVD

Several generalizations of the OSVD have been proposed and analysed. A well-known one is the generalized SVD introduced by Paige and Saunders in [5], which De Moor and Golub [11] proposed to rename the QSVD. Another is the RSVD, introduced in its explicit form by Zha [16] and further developed and discussed by De Moor and Golub [8].

In this section we give an alternative proof of the variational formulation for the QSVD of [1], based directly on the QSVD itself. First we present a condensed form from which the QSVD of two matrices can be derived.

Lemma 2. Given matrices $A \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times m}$. Then there exist three orthogonal matrices $U_a \in \mathbb{R}^{n \times n}$, $W \in \mathbb{R}^{m \times m}$, $V_c \in \mathbb{R}^{p \times p}$ such that

\[
U_a^T A W =
\begin{bmatrix}
A_{11} & A_{12} & 0 & 0 \\
0 & A_{22} & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},
\qquad
V_c^T C W =
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & C_{22} & 0 & 0 \\
C_{31} & C_{32} & C_{33} & 0
\end{bmatrix},
\tag{4}
\]

where both block column partitions have widths $r_{ac}-r_c$, $r_a+r_c-r_{ac}$, $r_{ac}-r_a$, $m-r_{ac}$; the block rows of $U_a^T A W$ have heights $r_{ac}-r_c$, $r_a+r_c-r_{ac}$, $n-r_a$; the block rows of $V_c^T C W$ have heights $p-r_c$, $r_a+r_c-r_{ac}$, $r_{ac}-r_a$; and $A_{11}$, $A_{22}$, $C_{22}$ and $C_{33}$ are nonsingular.

Proof. See Appendix A.

Let the OSVD of $A_{22} C_{22}^{-1}$ be

\[
U_{22}^T A_{22} C_{22}^{-1} V_{22} = \operatorname{diag}\{\sigma_1, \ldots, \sigma_s\} =: S_A, \qquad s = r_a + r_c - r_{ac}, \tag{5}
\]

where $U_{22}$, $V_{22}$ are orthogonal matrices and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_s > 0$. Define

\[
U := U_a \operatorname{diag}\{ I_{r_{ac}-r_c},\ U_{22},\ I_{n-r_a} \}, \tag{6}
\]

\[
V := V_c \operatorname{diag}\{ I_{p-r_c},\ V_{22},\ I_{r_{ac}-r_a} \}, \tag{7}
\]

\[
X := W
\begin{bmatrix}
I & 0 & 0 & 0 \\
0 & I & 0 & 0 \\
-C_{33}^{-1} C_{31} & -C_{33}^{-1} C_{32} & C_{33}^{-1} & 0 \\
0 & 0 & 0 & I
\end{bmatrix}
\begin{bmatrix}
A_{11}^{-1} & -A_{11}^{-1} A_{12} C_{22}^{-1} V_{22} & 0 & 0 \\
0 & C_{22}^{-1} V_{22} & 0 & 0 \\
0 & 0 & I & 0 \\
0 & 0 & 0 & I
\end{bmatrix},
\tag{8}
\]

where $U_a$, $V_c$ and $W$ are the orthogonal matrices from Lemma 2. Then, as a direct consequence of the condensed form (4), we have the following well-known QSVD theorem.

Theorem 3 (QSVD Theorem). Let $A \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times m}$. Then there exist orthogonal matrices $U \in \mathbb{R}^{n \times n}$, $V \in \mathbb{R}^{p \times p}$ and a nonsingular matrix $X$ such that

\[
U^T A X =
\begin{bmatrix}
I & 0 & 0 & 0 \\
0 & S_A & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},
\qquad
V^T C X =
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & I & 0 & 0 \\
0 & 0 & I & 0
\end{bmatrix},
\tag{9}
\]

with the same block partitions as in (4), where $S_A$ is of the form (5), and $U$, $V$ and $X$ can be chosen to be given by (6), (7) and (8), respectively. The $\sigma_i$, $i = 1, \ldots, s$, are defined to be the non-trivial generalized singular values of the two matrices $A$, $C$.
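For intuition, the non-trivial generalized singular values can also be computed through the eigendecomposition of the pencil $(A^T A, C^T C)$, the link used in [1] and recalled in the introduction. A minimal sketch of ours, assuming $C$ has full column rank so that $C^T C$ is positive definite:

```python
# Sketch (assumes C of full column rank, so C^T C is positive definite):
# generalized singular values of (A, C) from the pencil (A^T A, C^T C).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
C = rng.standard_normal((5, 4))      # 5 x 4: full column rank a.s.

w = eigh(A.T @ A, C.T @ C, eigvals_only=True)  # ascending eigenvalues
gsv = np.sqrt(np.maximum(w, 0.0))[::-1]        # sigma_1 >= ... >= sigma_s
print(gsv)
```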

According to the uniqueness theorem in [16], in order to characterize the role of the orthogonal matrices in the QSVD we only need to characterize the matrices $U$, $V$ given by (6) and (7). Let $U$, $V$ be given by (6) and (7) and partition these two orthogonal matrices as

\[
U = \begin{bmatrix} U_1 & U_2 & U_3 \end{bmatrix}, \tag{10}
\]

with block column widths $r_{ac}-r_c$, $r_a+r_c-r_{ac}$, $n-r_a$, and

\[
V = \begin{bmatrix} V_1 & V_2 & V_3 \end{bmatrix}, \tag{11}
\]

with block column widths $p-r_c$, $r_a+r_c-r_{ac}$, $r_{ac}-r_a$. Then, from Lemma 2 we have

\[
U_3 = T_1(A), \qquad U_1 = T_1^{\perp}(A\, S_1(C)), \tag{13}
\]

\[
V_1 = T_1(C), \qquad V_3 = T_1^{\perp}(C\, S_1(A)). \tag{14}
\]

Hence, in order to characterize the role of the orthogonal matrices $U$, $V$ in the QSVD, it only remains to characterize the role of $U_2$, $V_2$ in the QSVD.

The following variational formulation was established in [1] to characterize $U_2$ and $V_2$.

Theorem 4. Given $A \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times m}$. Consider the optimization problem

\[
\max \frac{\|y\|}{\|x\|}
\quad \text{subject to} \quad
\begin{bmatrix}
A^T & C^T \\
T_1^T(A) & 0 \\
0 & T_1^T(C)
\end{bmatrix}
\begin{bmatrix} x \\ -y \end{bmatrix} = 0,
\quad x \neq 0. \tag{15}
\]

Then the non-trivial generalized singular values $\sigma_1, \ldots, \sigma_s$ of the two matrices $A$, $C$ are precisely the stationary values of problem (15). Furthermore, if $\begin{bmatrix} x_1 \\ -y_1 \end{bmatrix}, \ldots, \begin{bmatrix} x_s \\ -y_s \end{bmatrix}$ are stationary points of problem (15) with corresponding stationary values $\sigma_1, \ldots, \sigma_s$, then

\[
U_2 = \begin{bmatrix} \frac{x_1}{\|x_1\|} & \cdots & \frac{x_s}{\|x_s\|} \end{bmatrix},
\qquad
V_2 = \begin{bmatrix} \frac{y_1}{\|y_1\|} & \cdots & \frac{y_s}{\|y_s\|} \end{bmatrix}.
\]

Proof. We prove Theorem 4 by the following three arguments.

Argument 1. Firstly, we characterize the orthogonal matrices $U_{22}$, $V_{22}$ in (5). Consider the optimization problem

\[
\max_{x_2^T A_{22} = y_2^T C_{22},\; x_2 \neq 0} \frac{\|y_2\|}{\|x_2\|}. \tag{16}
\]

Since $A_{22}$, $C_{22}$ are both nonsingular, by Theorem 1 the $\sigma_1, \ldots, \sigma_s$, i.e., the singular values of the matrix $A_{22} C_{22}^{-1}$, are precisely the stationary values of problem (16); and if $\begin{bmatrix} x_2^1 \\ -y_2^1 \end{bmatrix}, \ldots, \begin{bmatrix} x_2^s \\ -y_2^s \end{bmatrix}$ are the stationary points of problem (16) with corresponding stationary values $\sigma_1, \ldots, \sigma_s$, then

\[
U_{22} = \begin{bmatrix} \frac{x_2^1}{\|x_2^1\|} & \cdots & \frac{x_2^s}{\|x_2^s\|} \end{bmatrix},
\qquad
V_{22} = \begin{bmatrix} \frac{y_2^1}{\|y_2^1\|} & \cdots & \frac{y_2^s}{\|y_2^s\|} \end{bmatrix}.
\]

Argument 2. Secondly, let

\[
\mathcal{F} = \left\{ \begin{bmatrix} x \\ -y \end{bmatrix} \,\middle|\,
x \in \mathbb{R}^n,\; y \in \mathbb{R}^p,\;
U_a^T x = \begin{bmatrix} 0 \\ x_2 \\ 0 \end{bmatrix},\;
V_c^T y = \begin{bmatrix} 0 \\ y_2 \\ 0 \end{bmatrix},\;
x^T A = y^T C,\; x \neq 0 \right\},
\]

where the blocks of $U_a^T x$ have heights $r_{ac}-r_c$, $r_a+r_c-r_{ac}$, $n-r_a$ and the blocks of $V_c^T y$ have heights $p-r_c$, $r_a+r_c-r_{ac}$, $r_{ac}-r_a$. Consider the optimization problem

\[
\max_{\begin{bmatrix} x \\ -y \end{bmatrix} \in \mathcal{F}} \frac{\|y\|}{\|x\|}. \tag{17}
\]

Obviously, $\begin{bmatrix} x \\ -y \end{bmatrix}$ is a stationary point of problem (17) with stationary value $\sigma$ if and only if $\begin{bmatrix} x_2 \\ -y_2 \end{bmatrix}$ is a stationary point of problem (16) with the same stationary value $\sigma$, where

\[
U_a^T x = \begin{bmatrix} 0 \\ x_2 \\ 0 \end{bmatrix},
\qquad
V_c^T y = \begin{bmatrix} 0 \\ y_2 \\ 0 \end{bmatrix},
\]

with the same block heights as above.

Argument 3. Finally, for any $x \in \mathbb{R}^n$, $y \in \mathbb{R}^p$, partition

\[
U_a^T x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix},
\qquad
V_c^T y = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix},
\]

with block heights $r_{ac}-r_c$, $r_a+r_c-r_{ac}$, $n-r_a$ and $p-r_c$, $r_a+r_c-r_{ac}$, $r_{ac}-r_a$, respectively.

Since

\[
U_a^T\, T_1(A) = \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix},
\qquad
V_c^T\, T_1(C) = \begin{bmatrix} I \\ 0 \\ 0 \end{bmatrix},
\]

it is easy to see that $\begin{bmatrix} x \\ -y \end{bmatrix} \in \mathcal{F}$ if and only if

\[
\begin{bmatrix}
A^T & C^T \\
T_1^T(A) & 0 \\
0 & T_1^T(C)
\end{bmatrix}
\begin{bmatrix} x \\ -y \end{bmatrix} = 0,
\quad x \neq 0.
\]

Note that

\[
U_a^T U_2 = \begin{bmatrix} 0 \\ U_{22} \\ 0 \end{bmatrix},
\qquad
V_c^T V_2 = \begin{bmatrix} 0 \\ V_{22} \\ 0 \end{bmatrix};
\]

thus Theorem 4 follows directly from the above Arguments 1, 2 and 3.
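To make (15) concrete, here is a sketch of ours, again assuming $C^T C$ positive definite: every pencil eigenvector $z$ with $A^T A z = \sigma^2 C^T C z$ yields a stationary point $x = Az$, $y = \sigma^2 Cz$ of (15). The first block row holds since $A^T x = C^T y$, the other two hold because $x$ and $y$ lie in the ranges of $A$ and $C$, and the functional value is $\sigma$.

```python
# Sketch (assumes C^T C positive definite): stationary points of (15)
# built from eigenpairs of the pencil (A^T A, C^T C).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
C = rng.standard_normal((5, 4))

w, Z = eigh(A.T @ A, C.T @ C)        # A^T A z = s^2 C^T C z, w ascending
for s2, z in zip(w, Z.T):
    s = np.sqrt(max(s2, 0.0))
    x, y = A @ z, s2 * (C @ z)
    assert np.allclose(A.T @ x, C.T @ y)               # constraint of (15)
    assert np.isclose(np.linalg.norm(y), s * np.linalg.norm(x))
```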

3 A Variational Formulation for RSVD

In Section 2 we derived the QSVD of the two matrices $A$, $C$ from the condensed form (4). Now we establish the RSVD of a matrix triplet $(A, B, C)$ via an analogous condensed form.

Lemma 5. Given $A \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{n \times l}$, $C \in \mathbb{R}^{p \times m}$. Then there exist orthogonal matrices $P \in \mathbb{R}^{n \times n}$, $Q \in \mathbb{R}^{m \times m}$, $U_b \in \mathbb{R}^{l \times l}$, $V_c \in \mathbb{R}^{p \times p}$ such that

\[
P A Q =
\begin{bmatrix}
A_{11} & A_{12} & 0 & 0 & 0 & 0 \\
0 & A_{22} & 0 & 0 & 0 & 0 \\
A_{31} & A_{32} & A_{33} & A_{34} & 0 & 0 \\
A_{41} & A_{42} & 0 & A_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},
\qquad
P B U_b =
\begin{bmatrix}
0 & 0 & 0 & B_{14} \\
0 & 0 & 0 & B_{24} \\
0 & B_{32} & B_{33} & B_{34} \\
0 & 0 & B_{43} & B_{44} \\
0 & 0 & 0 & B_{54} \\
0 & 0 & 0 & 0
\end{bmatrix},
\tag{18}
\]

\[
V_c^T C Q =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & C_{22} & 0 & 0 & 0 & 0 \\
C_{31} & C_{32} & 0 & C_{34} & 0 & 0 \\
C_{41} & C_{42} & C_{43} & C_{44} & C_{45} & 0
\end{bmatrix},
\]

where the block rows of $P A Q$ and $P B U_b$ have heights $k_1$, $k_2$, $k_3$, $k_4$, $r_{ab}-r_a$, $n-r_{ab}$; the block columns of $P A Q$ and of $V_c^T C Q$ have widths $k_1$, $k_2$, $k_3$, $k_4$, $r_{ac}-r_a$, $m-r_{ac}$; the block columns of $P B U_b$ have widths $l-r_b$, $k_3$, $k_4$, $r_{ab}-r_a$; the block rows of $V_c^T C Q$ have heights $p-r_c$, $k_2$, $k_4$, $r_{ac}-r_a$; and $A_{11}$, $A_{22}$, $A_{33}$, $A_{44}$, $B_{32}$, $B_{43}$, $B_{54}$, $C_{22}$, $C_{34}$ and $C_{45}$ are nonsingular.

Proof. See Appendix B.

Let the OSVD of $B_{43}^{-1} A_{44} C_{34}^{-1}$ be

\[
U_{44}^T B_{43}^{-1} A_{44} C_{34}^{-1} V_{44} = \operatorname{diag}\{\sigma_1, \ldots, \sigma_{k_4}\} =: S_A, \tag{19}
\]

where $U_{44}$, $V_{44}$ are orthogonal matrices and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{k_4} > 0$. Define

\[
U := U_b \operatorname{diag}\{ I_{l-r_b},\ I_{k_3},\ U_{44},\ I_{r_{ab}-r_a} \}, \tag{20}
\]

\[
V := V_c \operatorname{diag}\{ I_{p-r_c},\ I_{k_2},\ V_{44},\ I_{r_{ac}-r_a} \}. \tag{21}
\]

Similarly to Theorem 3, directly from Lemma 5 we have:

Theorem 6 (RSVD Theorem). Given $A \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{n \times l}$, $C \in \mathbb{R}^{p \times m}$. Then there exist nonsingular matrices $X \in \mathbb{R}^{n \times n}$, $Y \in \mathbb{R}^{m \times m}$ and orthogonal matrices $U \in \mathbb{R}^{l \times l}$, $V \in \mathbb{R}^{p \times p}$ such that

\[
X^T A Y =
\begin{bmatrix}
I & 0 & 0 & 0 & 0 & 0 \\
0 & I & 0 & 0 & 0 & 0 \\
0 & 0 & I & 0 & 0 & 0 \\
0 & 0 & 0 & S_A & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},
\qquad
X^T B U =
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & I & 0 & 0 \\
0 & 0 & I & 0 \\
0 & 0 & 0 & I \\
0 & 0 & 0 & 0
\end{bmatrix},
\tag{22}
\]

\[
V^T C Y =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & I & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & 0 & I & 0
\end{bmatrix},
\]

with the same block partitions as in (18), where $S_A$ is of the form (19) and $U$, $V$ can be chosen to be given by (20) and (21), respectively. The $\sigma_1, \ldots, \sigma_{k_4}$ are defined to be the non-trivial restricted singular values of the matrix triplet $(A, B, C)$.
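In the generic square case the decomposition collapses to an ordinary SVD: if $n = m = l = p$ and $A$, $B$, $C$ are nonsingular, then $k_1 = k_2 = k_3 = 0$, $k_4 = n$, and the non-trivial restricted singular values are exactly the singular values of $B^{-1} A C^{-1}$ (compare $S_A$ in (19)). A sketch of ours:

```python
# Sketch (assumes the generic square case: A, B, C nonsingular, so k_4 = n):
# restricted singular values as the singular values of B^{-1} A C^{-1}.
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))      # nonsingular with probability 1
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

rsv = np.linalg.svd(np.linalg.solve(B, A) @ np.linalg.inv(C),
                    compute_uv=False)
print(rsv)                            # sigma_1 >= ... >= sigma_{k_4}
```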

From the uniqueness theorem in [16], in order to characterize the role of the orthogonal matrices in the RSVD we only need to consider the matrices $U$, $V$ given by (20) and (21). Let $U$, $V$ be defined by (20) and (21), respectively, and partition

\[
U = \begin{bmatrix} U_1 & U_2 & U_3 & U_4 \end{bmatrix},
\qquad
V = \begin{bmatrix} V_1 & V_2 & V_3 & V_4 \end{bmatrix},
\]

where the block columns of $U$ have widths $l-r_b$, $k_3$, $k_4$, $r_{ab}-r_a$ and those of $V$ have widths $p-r_c$, $k_2$, $k_4$, $r_{ac}-r_a$. We have

\[
U_1 = S_1(B), \qquad U_4 = S_1^{\perp}(T_1^T(A)\, B), \qquad
V_1 = T_1(C), \qquad V_4 = T_1^{\perp}(C\, S_1(A)).
\]

Furthermore, if we define

\[
\Psi_1 := C\, S_1(T_1^T(B)\, A), \qquad
\Psi_2 := A\, S_1(T_1^T(B)\, A), \qquad
\Psi_3 := \bigl(T_1(\Psi_2\, S_1(\Psi_1))\bigr)^T B,
\]

then

\[
\Psi_1 = V_c
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & C_{34} & 0 & 0 \\
C_{43} & C_{44} & C_{45} & 0
\end{bmatrix},
\qquad
\Psi_2 = P^T
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
A_{33} & A_{34} & 0 & 0 \\
0 & A_{44} & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},
\]

\[
\Psi_3 =
\begin{bmatrix}
0 & 0 & 0 & B_{14} \\
0 & 0 & 0 & B_{24} \\
0 & 0 & B_{43} & B_{44} \\
0 & 0 & 0 & B_{54} \\
0 & 0 & 0 & 0
\end{bmatrix} U_b^T,
\tag{23}
\]

where the block columns of the first two factors have widths $k_3$, $k_4$, $r_{ac}-r_a$, $m-r_{ac}$; the block rows of the $\Psi_1$ factor have heights $p-r_c$, $k_2$, $k_4$, $r_{ac}-r_a$; the block rows of the $\Psi_2$ factor have heights $k_1$, $k_2$, $k_3$, $k_4$, $r_{ab}-r_a$, $n-r_{ab}$; and the $\Psi_3$ factor has block rows of heights $k_1$, $k_2$, $k_4$, $r_{ab}-r_a$, $n-r_{ab}$ and block columns of widths $l-r_b$, $k_3$, $k_4$, $r_{ab}-r_a$. Hence, from (23) we have

\[
\begin{bmatrix} U_1 & U_2 \end{bmatrix} = S_1(\Psi_3).
\]

Thus, in order to characterize the role of the orthogonal matrices $U$, $V$ in the RSVD we only need to characterize the role of $U_3$, $V_3$. This can be done by the following variational formulation.

Theorem 7. Given matrices $A \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{n \times l}$, $C \in \mathbb{R}^{p \times m}$. Consider the optimization problem

\[
\max \frac{\|y\|}{\|Cx\|}
\quad \text{subject to} \quad
\begin{bmatrix}
A & B \\[2pt]
S_1^T\!\left(\begin{bmatrix} A \\ C \end{bmatrix}\right) & 0 \\[6pt]
S_1^T(A)\, C^T C & 0 \\[2pt]
0 & S_1^T(B) \\[2pt]
0 & T_1^T(B\, S_1^{\perp}(\Psi_3))\, B
\end{bmatrix}
\begin{bmatrix} x \\ -y \end{bmatrix} = 0,
\quad x \neq 0. \tag{24}
\]

Then the stationary values of problem (24) are precisely the non-trivial restricted singular values $\sigma_1, \ldots, \sigma_{k_4}$ of the matrix triplet $(A, B, C)$. Moreover, if $\begin{bmatrix} x_1 \\ -y_1 \end{bmatrix}, \ldots, \begin{bmatrix} x_{k_4} \\ -y_{k_4} \end{bmatrix}$ are the stationary points of problem (24) corresponding to the stationary values $\sigma_1, \ldots, \sigma_{k_4}$, respectively, then

\[
U_3 = \begin{bmatrix} \frac{y_1}{\|y_1\|} & \cdots & \frac{y_{k_4}}{\|y_{k_4}\|} \end{bmatrix}.
\]
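In the same generic square nonsingular case (our illustration, not the paper's argument), the constraint rows of (24) reduce to $Ax = By$, so $y = B^{-1} A x$; substituting $u = Cx$ turns the functional into $\|Mu\|/\|u\|$ with $M = B^{-1} A C^{-1}$, whose stationary values are the singular values of $M$:

```python
# Sketch (assumes B, C square and nonsingular): evaluating ||y|| / ||Cx||
# at candidate stationary points built from the SVD of M = B^{-1} A C^{-1}.
import numpy as np

rng = np.random.default_rng(4)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

M = np.linalg.solve(B, A) @ np.linalg.inv(C)
_, sigma, Vt = np.linalg.svd(M)
for s, v in zip(sigma, Vt):          # v = u / ||u|| with u = C x
    x = np.linalg.solve(C, v)
    y = np.linalg.solve(B, A @ x)    # enforces A x = B y
    assert np.isclose(np.linalg.norm(y) / np.linalg.norm(C @ x), s)
```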

Proof. As in the proof of Theorem 4, we proceed by the following three arguments.

Argument 1. Firstly, we characterize $U_{44}$ in (19). Consider the optimization problem

\[
\max_{A_{44} x_4 = B_{43} y_3,\; y_3 \neq 0} \frac{\|y_3\|}{\|C_{34} x_4\|}. \tag{25}
\]

Since $A_{44}$, $B_{43}$, $C_{34}$ are nonsingular, by Theorem 1 the stationary values of problem (25) are precisely $\sigma_1, \ldots, \sigma_{k_4}$, i.e., the singular values of the matrix $B_{43}^{-1} A_{44} C_{34}^{-1}$; and if the corresponding stationary points are $\begin{bmatrix} x_4^1 \\ -y_3^1 \end{bmatrix}, \ldots, \begin{bmatrix} x_4^{k_4} \\ -y_3^{k_4} \end{bmatrix}$, then

\[
U_{44} = \begin{bmatrix} \frac{y_3^1}{\|y_3^1\|} & \cdots & \frac{y_3^{k_4}}{\|y_3^{k_4}\|} \end{bmatrix}.
\]

Argument 2. Secondly, define

\[
\mathcal{F} := \left\{ \begin{bmatrix} x \\ -y \end{bmatrix} \,\middle|\,
x \in \mathbb{R}^m,\; y \in \mathbb{R}^l,\;
U_b^T y = \begin{bmatrix} 0 \\ 0 \\ y_3 \\ 0 \end{bmatrix},\;
Q^T x = \begin{bmatrix} 0 \\ 0 \\ x_3 \\ x_4 \\ x_5 \\ 0 \end{bmatrix},\;
C_{43} x_3 + C_{44} x_4 + C_{45} x_5 = 0,\;
Ax = By,\; y \neq 0 \right\},
\]

where the blocks of $U_b^T y$ have heights $l-r_b$, $k_3$, $k_4$, $r_{ab}-r_a$ and the blocks of $Q^T x$ have heights $k_1$, $k_2$, $k_3$, $k_4$, $r_{ac}-r_a$, $m-r_{ac}$. Consider the optimization problem

\[
\max_{\begin{bmatrix} x \\ -y \end{bmatrix} \in \mathcal{F}} \frac{\|y\|}{\|Cx\|}. \tag{26}
\]

Since $A_{33}$, $A_{44}$, $B_{43}$ and $C_{45}$ are nonsingular, a simple calculation shows that problem (26) is equivalent to problem (25), in the sense that the stationary values of problem (26) are precisely the stationary values of problem (25), i.e., $\sigma_1, \ldots, \sigma_{k_4}$; and $\begin{bmatrix} x \\ -y \end{bmatrix}$ is a stationary point of problem (26) if and only if $\begin{bmatrix} x_4 \\ -y_3 \end{bmatrix}$ is a stationary point of problem (25) with the same stationary value.

Argument 3. Thirdly, for any $x \in \mathbb{R}^m$, $y \in \mathbb{R}^l$, denote

\[
U_b^T y = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix},
\qquad
Q^T x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix},
\]

with block heights $l-r_b$, $k_3$, $k_4$, $r_{ab}-r_a$ and $k_1$, $k_2$, $k_3$, $k_4$, $r_{ac}-r_a$, $m-r_{ac}$, respectively.

For these partitions,

\[
S_1^T\!\left(\begin{bmatrix} A \\ C \end{bmatrix}\right) x = 0 \iff x_6 = 0,
\qquad
Ax = By \implies x_1 = 0,\; x_2 = 0,\; y_4 = 0,
\]

\[
S_1^T(A)\, C^T C x = 0 \iff C_{41} x_1 + C_{42} x_2 + C_{43} x_3 + C_{44} x_4 + C_{45} x_5 = 0,
\qquad
S_1^T(B)\, y = 0 \iff y_1 = 0,
\]

and, from (23), we also know

\[
T_1^T(B\, S_1^{\perp}(\Psi_3))\, B y = 0 \iff y_2 = 0.
\]

Therefore, $\begin{bmatrix} x \\ -y \end{bmatrix} \in \mathcal{F}$ if and only if

\[
\begin{bmatrix}
A & B \\[2pt]
S_1^T\!\left(\begin{bmatrix} A \\ C \end{bmatrix}\right) & 0 \\[6pt]
S_1^T(A)\, C^T C & 0 \\[2pt]
0 & S_1^T(B) \\[2pt]
0 & T_1^T(B\, S_1^{\perp}(\Psi_3))\, B
\end{bmatrix}
\begin{bmatrix} x \\ -y \end{bmatrix} = 0,
\quad y \neq 0.
\]

Note that

\[
U_b^T U_3 = \begin{bmatrix} 0 \\ 0 \\ U_{44} \\ 0 \end{bmatrix},
\]

with blocks of heights $l-r_b$, $k_3$, $k_4$, $r_{ab}-r_a$; so Theorem 7 follows directly from the above Arguments 1, 2 and 3.

Similarly, we also have the dual result of Theorem 7, which characterizes the non-trivial restricted singular values $\sigma_1, \ldots, \sigma_{k_4}$ and the matrix $V_3$ in (21). For the sake of simplicity, we omit it here.

4 Conclusion

In this paper, we have studied generalized singular value decompositions. We have given an alternative proof of the variational formulation for the QSVD in [1] and established an analogous variational formulation for the RSVD, which provides new understanding of the orthogonal matrices appearing in this decomposition.

5 Acknowledgement

Some of Delin Chu's work was done during his visit to the Department of Mathematics at the University of Bielefeld in Germany in April 1998. He is grateful to Professor L. Elsner for his kind hospitality and financial support. He also thanks Professor L. Elsner for reading and correcting the first version of the present paper.

Appendix A

In this appendix we prove Lemma 2 constructively.

Proof. We prove Lemma 2 in the following four steps.

Step 1: Perform a simultaneous row and column compression:

\[
U_1^T A W_1 =:
\begin{bmatrix}
A_{11} & 0 & 0 \\
0 & 0 & 0
\end{bmatrix},
\qquad
V_1^T C W_1 =:
\begin{bmatrix}
0 & 0 & 0 \\
C_{21} & 0 & 0 \\
C_{31} & C_{33} & 0
\end{bmatrix},
\]

where the block columns have widths $r_a$, $r_{ac}-r_a$, $m-r_{ac}$; the block rows of $U_1^T A W_1$ have heights $r_a$, $n-r_a$; the block rows of $V_1^T C W_1$ have heights $p-r_c$, $r_a+r_c-r_{ac}$, $r_{ac}-r_a$; $A_{11}$, $C_{33}$ are nonsingular; and $C_{21}$ has full row rank.

Step 2: Perform a column compression:

\[
C_{21} W_2 =: \begin{bmatrix} 0 & C_{22} \end{bmatrix},
\]

with block column widths $r_{ac}-r_c$, $r_a+r_c-r_{ac}$ and $C_{22}$ nonsingular. Set

\[
A_{11} W_2 =: \begin{bmatrix} A_{11} & A_{12} \end{bmatrix},
\qquad
C_{31} W_2 =: \begin{bmatrix} C_{31} & C_{32} \end{bmatrix},
\]

with the same block column widths.

Step 3: Perform a row compression:

\[
U_3^T A_{11} =: \begin{bmatrix} A_{11} \\ 0 \end{bmatrix},
\]

with block row heights $r_{ac}-r_c$, $r_a+r_c-r_{ac}$ and $A_{11}$ nonsingular. Set

\[
U_3^T A_{12} =: \begin{bmatrix} A_{12} \\ A_{22} \end{bmatrix},
\]

with the same block row heights.

Step 4: Set

\[
U_a := U_1 \begin{bmatrix} U_3 & 0 \\ 0 & I \end{bmatrix},
\qquad
W := W_1 \begin{bmatrix} W_2 & 0 \\ 0 & I \end{bmatrix},
\qquad
V_c := V_1.
\]

Then the orthogonal matrices $U_a$, $V_c$ and $W$ satisfy (4).
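The row and column compressions used in Steps 1-3 can be realized with SVDs; a sketch of ours of the two primitives (one of several possibilities; QR with column pivoting would also work). To place the zero block on the left, as in Step 2, simply reverse the column order of $W$.

```python
# Sketch: SVD-based compression primitives for Steps 1-3.
import numpy as np

def col_compress(M):
    """Orthogonal W such that M @ W = [M1, 0] with M1 of full column rank."""
    _, _, Vt = np.linalg.svd(M)      # full SVD: M = U S V^T
    return Vt.T                      # M V = U S has its zero columns last

def row_compress(M):
    """Orthogonal U such that U.T @ M = [[M1], [0]], M1 of full row rank."""
    U, _, _ = np.linalg.svd(M)
    return U                         # U^T M = S V^T has its zero rows last
```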
