DECOMPOSITION OF A TENSOR INTO MULTILINEAR RANK-(M_r, N_r, ·) TERMS

IGNAT DOMANOV†, NICO VERVLIET†, AND LIEVEN DE LATHAUWER†

Key words. multilinear algebra, third-order tensor, block term decomposition, multilinear rank

AMS subject classifications. 15A23, 15A69

1. Introduction.

1.1. Terminology and problem statement. Throughout the paper F denotes the field of real or complex numbers; bold-face lower case, bold-face capital, and calligraphic capital letters refer to vectors, matrices, and tensors, respectively (e.g., a_1, B_2, and T); O and 0 denote a zero matrix and a zero vector, respectively; I_n and O_{m×n} denote the n × n identity matrix and the m × n zero matrix, respectively; O_n is shorthand for O_{n×n}; Null(·) denotes the null space of a matrix; "∗", "T", and "H" denote conjugation, transpose, and Hermitian transpose, respectively.

Let tensor T ∈ F^{I×J×K} and let T_1, . . . , T_K ∈ F^{I×J} denote the frontal slices of T. The matrices

    T_(1) = [T_1 . . . T_K] ∈ F^{I×JK},
    T_(2) = [T_1^T . . . T_K^T] ∈ F^{J×IK},
    T_(3) = [vec(T_1) . . . vec(T_K)]^T ∈ F^{K×IJ}

are called the matrix unfoldings of T, and the triplet of their ranks (r_{T_(1)}, r_{T_(2)}, r_{T_(3)}) is called the multilinear (ML) rank of T.

Tensor T is ML rank-(M, N, ·) if r_{T_(1)} = M, r_{T_(2)} = N, and r_{T_(3)} is not specified. It can easily be shown that T is ML rank-(M, N, ·) if and only if its frontal slices can be simultaneously factorized as

(1)  T_k = A D_k B^T,  D_k ∈ F^{M×N},  k = 1, . . . , K,

where the factors A ∈ F^{I×M} and B ∈ F^{J×N} have full column rank and the matrices

    D_(1) := [D_1 . . . D_K] ∈ F^{M×NK} and D_(2) := [D_1^T . . . D_K^T] ∈ F^{N×MK}

have full row rank. (One can, for instance, take A and B equal to the "U" factors in the compact SVDs of T_(1) and T_(2), respectively, and set D_k = A^H T_k B^∗.) It is worth noting that the factors A, D_k, and B in (1) are not unique. Moreover, a generic

Submitted to the editors DATE.

Funding: This work was funded by (1) Research Council KU Leuven: C1 project c16/15/059-nD; (2) F.W.O.: projects G.0830.14N and G.0881.14N; (3) the Belgian Federal Science Policy Office: IUAP P7 (DYSCO II, Dynamical systems, control and optimization, 2012–2017); (4) EU: The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Advanced Grant: BIOTENSORS (no. 339804). This paper reflects only the authors' views and the Union is not liable for any use that may be made of the contained information.

†Group Science, Engineering and Technology, KU Leuven - Kulak, E. Sabbelaan 53, 8500 Kortrijk, Belgium and Dept. of Electrical Engineering ESAT/STADIUS, KU Leuven, Kasteelpark Arenberg 10, bus 2446, B-3001 Leuven-Heverlee, Belgium (ignat.domanov@kuleuven.be,


ML rank-(M, N, ·) tensor can be parameterized with (I − M)M + (J − N)N + MNK parameters. Indeed, if X ∈ F^{I×I} and Y ∈ F^{J×J} are nonsingular matrices, then T_1, . . . , T_K can also be factorized as

(2)  T_k = (AX)(X^{−1} D_k Y^{−T})(BY)^T = Â D̂_k B̂^T,  k = 1, . . . , K.

Assuming that A, B, and D_k in (2) are generic, we can set X (resp. Y) equal to the inverse of the matrix formed by the first M (resp. N) rows of A (resp. B), implying that the right-hand sides of the K equations

           [ I_M ]        [ I_N ]^T
    T_k =  [  ∗  ] D̂_k   [  ∗  ]   ,  k = 1, . . . , K,

can be parameterized with (I − M)M + (J − N)N + MN·K parameters.
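To make the objects above concrete, here is a small NumPy sketch (not from the paper; the sizes, seed, and variable names are arbitrary choices) that builds an ML rank-(M, N, ·) tensor via the factorization (1) and checks its ML rank through the three unfoldings:

```python
import numpy as np

# Hypothetical sizes for illustration.
I, J, K, M, N = 6, 5, 4, 3, 2
rng = np.random.default_rng(0)

# Build an ML rank-(M, N, .) tensor slice by slice as in (1): T_k = A D_k B^T.
A = rng.standard_normal((I, M))
B = rng.standard_normal((J, N))
D = rng.standard_normal((M, N, K))          # frontal slices D_1, ..., D_K
T = np.stack([A @ D[:, :, k] @ B.T for k in range(K)], axis=2)   # I x J x K

# The three matrix unfoldings defined above.
T1 = np.concatenate([T[:, :, k] for k in range(K)], axis=1)       # I x JK
T2 = np.concatenate([T[:, :, k].T for k in range(K)], axis=1)     # J x IK
T3 = np.stack([T[:, :, k].ravel(order='F') for k in range(K)])    # K x IJ

# The ML rank is the triplet of unfolding ranks; here (M, N, min(K, MN)).
ml_rank = (np.linalg.matrix_rank(T1), np.linalg.matrix_rank(T2),
           np.linalg.matrix_rank(T3))
print(ml_rank)   # (3, 2, 4)
```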

We will write the K factorizations in (1) as T = D •_1 A •_2 B, where D denotes the M × N × K tensor with frontal slices D_1, . . . , D_K.

In this paper we study the decomposition of T into a sum of ML rank-(M_r, N_r, ·) tensors (see Figure 1):

(3)  T = ∑_{r=1}^R D_r •_1 A_r •_2 B_r,  D_r ∈ F^{M_r×N_r×K},  A_r ∈ F^{I×M_r},  B_r ∈ F^{J×N_r}.

Fig. 1. ML rank-(M_r, N_r, ·) decomposition of a third-order tensor: T = D_1 •_1 A_1 •_2 B_1 + · · · + D_R •_1 A_R •_2 B_R.

Definition 1.1. The ML rank-(M_r, N_r, ·) decomposition of a tensor T is called unique if for any two decompositions of the form (3) one can be obtained from the other by a permutation of the summands.

Definition 1.2. The ML rank-(M_r, N_r, ·) decomposition is called generically unique if

    μ{(A_1, . . . , A_R, B_1, . . . , B_R, D_1, . . . , D_R) : the decomposition T = ∑_{r=1}^R D_r •_1 A_r •_2 B_r is not unique} = 0,

where μ denotes a measure on F^{I∑M_r + J∑N_r + K∑M_rN_r} absolutely continuous with respect to the Lebesgue measure.

We say that T is a sum of R generic ML rank-(M_r, N_r, ·) terms if the entries of A_r, B_r, and D_r in (3) are randomly sampled from an absolutely continuous distribution. Thus, generic uniqueness means uniqueness that holds with probability one.


Let

(4)  A = [A_1 . . . A_R] ∈ F^{I×∑M_r} and B = [B_1 . . . B_R] ∈ F^{J×∑N_r}.

Throughout the paper we assume that

(5)  the matrices A and B have full column rank.

1.2. Previous results. Let D_{r,k} ∈ F^{M_r×N_r} denote the k-th frontal slice of the tensor D_r and let

(6)  D_k = blockdiag(D_{1,k}, . . . , D_{R,k}) ∈ F^{∑M_r×∑N_r},  k = 1, . . . , K,

denote the block-diagonal matrix with the matrices D_{1,k}, . . . , D_{R,k} on the diagonal. By (1), tensor decomposition (3) can be interpreted as the problem of (possibly nonsymmetric and nonsquare) Joint Block Diagonalization (JBD) of the frontal slices of T:

(7)  T_k = A D_k B^T = A blockdiag(D_{1,k}, . . . , D_{R,k}) B^T,  k = 1, . . . , K,

where A and B are defined in (4).

Problem (7) (both uniqueness and algorithms) has been relatively well studied in the following cases.
1. JBD by congruence and ∗-congruence, i.e., the cases B = A and B = A^∗, implying the constraint M_r = N_r for all r (see, for instance, [1, 2, 8] and the references therein).
2. All blocks are row vectors, i.e., M_1 = · · · = M_R = 1, implying that the ML rank-(M_r, N_r, ·) terms in (3) are actually ML rank-(1, N_r, N_r) terms (see, for instance, [3, 4, 9, 10]).
3. All blocks are column vectors, i.e., N_1 = · · · = N_R = 1 (follows from case 2 by taking the transpose of the identities in (7)).

Apart from cases 1–3 above, the generic uniqueness of (7) was proved in [3] under the assumptions M_r = N_r for all r and K ≥ 3.

1.3. Our contribution. We obtain results on uniqueness and present algorithms for the computation of the ML rank-(M_r, N_r, ·) terms in (3) in the case where M_r ≠ N_r for some r.

1.3.1. Exact decomposition.

Theorem 1.3. Let T admit the ML rank-(M_r, N_r, ·) decomposition (3), let the matrices A, B and D_1, . . . , D_K be defined in (4) and (6), respectively, and let

(8)  M_D = [ D_1^T ⊗ I_I   −I_J ⊗ D_1 ]
           [      ⋮             ⋮     ]
           [ D_K^T ⊗ I_I   −I_J ⊗ D_K ].

Assume that
1) A and B have full column rank and
2) dim Null(M_D) = R.
Then the ML rank-(M_r, N_r, ·) decomposition of T is unique and can be computed by means of Coupled Simultaneous Eigenvalue Decomposition (CS-EVD).

Proof. See section 2.
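Assumption 2) can be explored numerically. The following NumPy sketch (illustrative only; the block sizes, K, and the seed are assumptions, not values from the paper) builds the matrix M_D of (8) from generic block-diagonal slices and checks that its null space has the minimal dimension R:

```python
import numpy as np

rng = np.random.default_rng(1)
Ms, Ns, K, R = [2, 1], [1, 2], 4, 2     # assumed block sizes (M_r, N_r)
I, J = sum(Ms), sum(Ns)

def blockdiag(blocks):
    """Place the given blocks on the diagonal of a zero matrix, as in (6)."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out, r, c = np.zeros((rows, cols)), 0, 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r, c = r + b.shape[0], c + b.shape[1]
    return out

# Generic block-diagonal slices D_k = blockdiag(D_{1,k}, ..., D_{R,k}).
Dk = [blockdiag([rng.standard_normal((m, n)) for m, n in zip(Ms, Ns)])
      for _ in range(K)]

# The matrix M_D of (8): row blocks [D_k^T (x) I_I, -I_J (x) D_k].
MD = np.vstack([np.hstack([np.kron(D.T, np.eye(I)), -np.kron(np.eye(J), D)])
                for D in Dk])

# Assumption 2) of Theorem 1.3: dim Null(M_D) = R (nullity via the rank).
nullity = MD.shape[1] - np.linalg.matrix_rank(MD)
print(nullity)   # 2 (= R) for generic blocks
```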


Assumption 2) in Theorem 1.3 expresses the fact that the system of matrix equations

(9)  X D_k = D_k Y,  k = 1, . . . , K,

has exactly R linearly independent solutions (X, Y) ∈ F^{∑M_r×∑M_r} × F^{∑N_r×∑N_r}. If

(10)  P_{I,r} = blockdiag(O_{M_1}, . . . , O_{M_{r−1}}, I_{M_r}, O_{M_{r+1}}, . . . , O_{M_R}) and
      P_{J,r} = blockdiag(O_{N_1}, . . . , O_{N_{r−1}}, I_{N_r}, O_{N_{r+1}}, . . . , O_{N_R}),  r = 1, . . . , R,

then (X, Y) = (P_{I,r}, P_{J,r}) is, obviously, a solution of (9):

    P_{I,r} D_k = D_k P_{J,r} = blockdiag(O_{M_1×N_1}, . . . , O_{M_{r−1}×N_{r−1}}, D_{r,k}, O_{M_{r+1}×N_{r+1}}, . . . , O_{M_R×N_R}).

Thus, assumption 2) in Theorem 1.3 means that the dimension of the subspace of solutions of (9) is minimal.

The algebraic procedure related to Theorem 1.3 is summarized in Algorithm 1. By assumption 1) in Theorem 1.3, the number of columns of A and B cannot exceed the number of rows, i.e., ∑M_r ≤ I and ∑N_r ≤ J. To simplify the presentation we assume in Algorithm 1 that ∑M_r = I and ∑N_r = J, implying that A and B are square nonsingular matrices and T ∈ F^{∑M_r×∑N_r×K}. Since, by Lemma 2.1 below, T is an ML rank-(∑M_r, ∑N_r, ·) tensor, such a compression can be done, for instance, using the ML Singular Value Decomposition [5]. Note that Algorithm 1 also recovers the dimensions of the blocks, i.e., the values M_r and N_r. Indeed, in steps 1–4 of Algorithm 1 we construct matrices U_{I,1}, . . . , U_{I,R} ∈ F^{I×I} and U_{J,1}, . . . , U_{J,R} ∈ F^{J×J} that can be simultaneously diagonalized by A and B^{−T}, respectively. In step 5 of Algorithm 1 we use the fact that for each r the spectra of U_{I,r} and U_{J,r} relate to each other as in (12)–(13). Thus, if the values α_1, . . . , α_R are generic, then the distinct eigenvalues of the matrices F = α_1 U_{I,1} + · · · + α_R U_{I,R} and G = α_1 U_{J,1} + · · · + α_R U_{J,R} are the same and their multiplicities are M_1, . . . , M_R and N_1, . . . , N_R, respectively. Hence, the values M_r, N_r and the corresponding blocks A_r, B_r can be recovered from the EVDs of F and G.

The following corollary completes the results on the uniqueness and computation of the ML rank-(1, N_r, N_r) decomposition in [3, 4, 10].

Corollary 1.4. Let T admit the ML rank-(1, N_r, N_r) decomposition (3), let the matrices A, B and D_1, . . . , D_K be defined in (4) and (6), respectively, and let

(14)  C_r := [D_{r,1}^T . . . D_{r,K}^T]^T ∈ F^{K×N_r},  r = 1, . . . , R.

Assume that
1) A and B have full column rank and
2) r_{[C_i C_j]} ≥ max(N_i, N_j) + 1 for all 1 ≤ i < j ≤ R.
Then the ML rank-(1, N_r, N_r) decomposition of T is unique and can be computed by means of CS-EVD.

Proof. See section 2.

The following theorem contains bounds on the tensor dimensions that guarantee that assumptions 1) and 2) in Theorem 1.3 hold for generic matrices A, B and D_1, . . . , D_K.


Algorithm 1 Computation of the BTD (3) by CS-EVD (see Theorem 1.3)
Require: tensor T ∈ F^{I×J×K} such that I = ∑M_r and J = ∑N_r
1: Construct the IJK-by-(I² + J²) matrix M_T as

(11)  M_T = [ T_1^T ⊗ I_I   −I_J ⊗ T_1 ]
            [      ⋮             ⋮     ]
            [ T_K^T ⊗ I_I   −I_J ⊗ T_K ]

2: Compute u_r = [u_{1,r} . . . u_{I²+J²,r}]^T, r = 1, . . . , R, that form a basis of Null(M_T)
3: For r = 1, . . . , R reshape [u_{1,r} . . . u_{I²,r}]^T into the I × I matrix U_{I,r}
4: For r = 1, . . . , R reshape [u_{I²+1,r} . . . u_{I²+J²,r}]^T into the J × J matrix U_{J,r}
5: Solve the CS-EVD problem

(12)  U_{I,r} = A blockdiag(g_{1r} I_{M_1}, . . . , g_{Rr} I_{M_R}) A^{−1},
(13)  U_{J,r} = B^{−T} blockdiag(g_{1r} I_{N_1}, . . . , g_{Rr} I_{N_R}) B^T,  r = 1, . . . , R,

with respect to the matrices A, B and the values M_1, . . . , M_R, N_1, . . . , N_R
6: Solve (7) with respect to D_1, . . . , D_K
7: For r = 1, . . . , R stack D_{r,1}, . . . , D_{r,K} into the M_r × N_r × K tensor D_r
8: return A_r ∈ F^{I×M_r}, B_r ∈ F^{J×N_r} and D_r ∈ F^{M_r×N_r×K} such that (3) holds
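A minimal NumPy sketch of steps 1–4 of Algorithm 1 (not the authors' implementation; the sizes and seed are arbitrary assumptions). Instead of carrying out the full CS-EVD of step 5, it checks the property that makes step 5 possible: by (12)–(13) all recovered U_{I,r} share the eigenvector matrix A (and all U_{J,r} share B^{−T}), so they commute:

```python
import numpy as np

# Small instance: R = 2 terms with (M_1, N_1) = (2, 1), (M_2, N_2) = (1, 2),
# hence I = J = 3; K = 4 slices.
rng = np.random.default_rng(2)
Ms, Ns, K, R = [2, 1], [1, 2], 4, 2
I, J = sum(Ms), sum(Ns)
A, B = rng.standard_normal((I, I)), rng.standard_normal((J, J))

def blockdiag(b1, b2):
    out = np.zeros((b1.shape[0] + b2.shape[0], b1.shape[1] + b2.shape[1]))
    out[:b1.shape[0], :b1.shape[1]] = b1
    out[b1.shape[0]:, b1.shape[1]:] = b2
    return out

# Frontal slices T_k = A blockdiag(D_{1,k}, D_{2,k}) B^T as in (7).
Dk = [blockdiag(rng.standard_normal((Ms[0], Ns[0])),
                rng.standard_normal((Ms[1], Ns[1]))) for _ in range(K)]
Tk = [A @ D @ B.T for D in Dk]

# Step 1: the IJK-by-(I^2 + J^2) matrix M_T of (11).
MT = np.vstack([np.hstack([np.kron(T.T, np.eye(I)), -np.kron(np.eye(J), T)])
                for T in Tk])

# Step 2: a basis of Null(M_T) from the SVD (last R right singular vectors).
_, s, Vh = np.linalg.svd(MT)
U = Vh[-R:].T                                 # (I^2 + J^2) x R

# Steps 3-4: reshape each basis vector into U_{I,r} and U_{J,r} (column-major).
UI = [U[:I * I, r].reshape(I, I, order='F') for r in range(R)]
UJ = [U[I * I:, r].reshape(J, J, order='F') for r in range(R)]

# Step 5 would extract A, B and the block sizes from the joint EVD of the
# U_{I,r} and U_{J,r}; here we only verify that they commute as claimed.
comm = max(np.abs(UI[0] @ UI[1] - UI[1] @ UI[0]).max(),
           np.abs(UJ[0] @ UJ[1] - UJ[1] @ UJ[0]).max())
print(comm < 1e-8)
```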

Theorem 1.5. Let T be a sum of R generic ML rank-(M_r, N_r, ·) terms. Assume that

(15)  I ≥ ∑M_r,  J ≥ ∑N_r,  K ≥ max( ⌈max_r M_r/N_r⌉ + ⌈max_r N_r/M_r⌉, 3 ),

where ⌈x⌉ denotes the smallest integer not less than x. Then the decomposition of T into a sum of ML rank-(M_r, N_r, ·) terms is unique and can be computed by means of CS-EVD.

Proof. See section 3.
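As a convenience, and under the caveat that the exact form of (15) was reconstructed from a garbled source, a small helper (not from the paper) evaluating the bound on K for given block sizes:

```python
from math import ceil

def min_K(Ms, Ns):
    """Bound on K from (15), as reconstructed: Ms, Ns list the M_r, N_r."""
    return max(ceil(max(m / n for m, n in zip(Ms, Ns)))
               + ceil(max(n / m for m, n in zip(Ms, Ns))), 3)

print(min_K([2, 2], [2, 2]))   # 3, matching the K >= 3 square-block case of [3]
print(min_K([1, 3], [2, 1]))   # 5, for terms of ML rank (1, 2, .) and (3, 1, .)
```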

The following corollaries easily follow from Theorem 1.5.

Corollary 1.6. Let T be a sum of R_1 ≥ 1 generic ML rank-(1, N_r, N_r) terms and R_2 ≥ 1 generic ML rank-(M_r, 1, M_r) terms. Assume that

    I ≥ R_1 + ∑M_r,  J ≥ R_2 + ∑N_r,  and  K ≥ max M_r + max N_r.

Then the decomposition of T into a sum of R_1 ML rank-(1, N_r, N_r) terms and R_2 ML rank-(M_r, 1, M_r) terms is unique and can be computed by means of CS-EVD.

Corollary 1.7. Let T be a sum of R generic ML rank-(1, N_r, N_r) terms. Assume that

(16)  I ≥ R,  J ≥ ∑N_r,  and  K ≥ max N_r + 1.

Then the decomposition of T into a sum of ML rank-(1, N_r, N_r) terms is unique and can be computed by means of CS-EVD.


Corollary 1.8. Let T be a sum of R generic ML rank-(M, N, ·) terms. Assume that

    I ≥ RM,  J ≥ RN,  K ≥ max( ⌈M/N⌉ + 1, ⌈N/M⌉ + 1, 3 ).

Then the decomposition of T into a sum of ML rank-(M, N, ·) terms is unique and can be computed by means of CS-EVD.

The bounds in Corollary 1.6, Corollary 1.7, and Corollary 1.8 are new and guarantee that a generic ML rank-(M_r, N_r, ·) decomposition can be computed by Algorithm 1. The bound on K in (16) complements the bounds for generic uniqueness of the ML rank-(1, N, N) decomposition in [3, Section 4] and is more relaxed than the bound for generic uniqueness of the ML rank-(1, N_r, N_r) decomposition in [9, Theorem 1.7 (E)].

In the remaining part of this section we apply Theorem 1.3 to the (generic) constrained JBD problems in Table 1. Uniqueness of a constrained JBD problem means uniqueness of the corresponding constrained ML rank-(N_r, N_r, 1) decomposition. Generic uniqueness of a constrained JBD problem can be defined in a similar way as in Definition 1.2. By way of example, we resort to the following definition of generic uniqueness of the symmetric JBD problem.

Table 1
Some examples of constrained JBD problems

  Type of JBD problem   | Constraints in (7)                        | F
  ----------------------|-------------------------------------------|-------
  JBD by congruence     | B = A                                     | R or C
  symmetric JBD         | B = A and D_k = D_k^T, k = 1, . . . , K   | R or C
  JBD by ∗-congruence   | B = A^∗                                   | C
  Hermitian JBD         | B = A^∗ and D_k = D_k^H, k = 1, . . . , K | C

Definition 1.9. Let L_r denote a linear bijection between F^{K N_r(N_r+1)/2} and the subspace of N_r × N_r × K tensors whose frontal slices are symmetric matrices. Let also μ be a measure on F^{I∑N_r + K∑N_r(N_r+1)/2} absolutely continuous with respect to the Lebesgue measure. A solution of the generic symmetric JBD problem is called unique if

    μ{(A_1, . . . , A_R, d_1, . . . , d_R) : the constrained decomposition T = ∑_{r=1}^R L_r(d_r) •_1 A_r •_2 A_r is not unique} = 0.

Theorem 1.5 guarantees that if K ≥ 3 and the diagonal blocks D_{r,k} in the JBD problem (7) are square and generic, then the decomposition is unique and can be computed by means of CS-EVD. The following theorem states that the bound K ≥ 3 remains valid for all constrained JBD problems in Table 1 as well and that the computation relies on S-EVD.

Theorem 1.10. Assume that I ≥ ∑N_r and K ≥ 3. Then the solution of a generic constrained JBD problem in Table 1 is unique and can be computed by means of S-EVD.

Steps 1–4 and a modified step 5 of Algorithm 1 constitute the algebraic procedure related to Theorem 1.10. In step 5, because of the constraints B = A and B = A^∗, the CS-EVD problem (12)–(13) turns into an S-EVD problem. It can be shown that in the case of the JBD by congruence problem the proposed reduction to S-EVD essentially coincides with the one in [1, Subsection 2.3].

Proof. See section 3.

2. Proof of Theorem 1.3 and Corollary 1.4. We need the following lemmas.

Lemma 2.1. Let the conditions of Theorem 1.3 hold and let U_1 and U_2 denote the "U" factors in the compact SVDs of T_(1) and T_(2), respectively. Then
1. tensor T ∈ F^{I×J×K} is ML rank-(∑M_r, ∑N_r, ·);
2. the "compressed" matrices A^c := U_1^H A and B^c := U_2^H B are square and nonsingular;
3. A = U_1 A^c and B = U_2 B^c;
4. the "compressed" tensor T^c := T •_1 U_1^H •_2 U_2^H ∈ F^{∑M_r×∑N_r×K} is ML rank-(∑M_r, ∑N_r, ·);
5. tensor T^c admits the ML rank-(M_r, N_r, ·) decomposition

    T^c = ∑_{r=1}^R D_r •_1 (U_1^H A_r) •_2 (U_2^H B_r),

where the tensors D_r ∈ F^{M_r×N_r×K} are the same as in decomposition (3).

Proof. 1. First we show that the matrices

(17)  D_(1) = [D_1 . . . D_K] and D_(2) = [D_1^T . . . D_K^T] have full row rank.

Assume that f^T D_(1) = 0. Then 1f^T D_k = D_k O_{∑N_r}, where 1 denotes the ∑M_r × 1 vector of ones. Hence [f ⊗ 1 ; 0] ∈ Null(M_D). Since dim Null(M_D) = R, it follows that the rank-1 matrix 1f^T is a linear combination of the matrices P_{I,1}, . . . , P_{I,R}. Hence f = 0. Thus D_(1) has full row rank. In a similar way one can prove that D_(2) also has full row rank. Now, from assumption 1) in Theorem 1.3, (17), and the identities

(18)  T_(1) = A D_(1) blockdiag(B^T, . . . , B^T),  T_(2) = B D_(2) blockdiag(A^T, . . . , A^T),

it follows that

(19)  col(T_(1)) = col(A) and col(T_(2)) = col(B),

where col(·) denotes the column space of a matrix. Hence r_{T_(1)} = r_A = ∑M_r and r_{T_(2)} = r_B = ∑N_r, implying that the tensor T ∈ F^{I×J×K} is ML rank-(∑M_r, ∑N_r, ·).

2. and 3. follow from (19) and the fact that U_1 and U_2 are the "U" factors in the compact SVDs of T_(1) and T_(2), respectively.

4. From the construction of T^c_(1) and (18) it follows that

(20)  T^c_(1) = [U_1^H T_1 U_2^∗ . . . U_1^H T_K U_2^∗] = U_1^H T_(1) blockdiag(U_2^∗, . . . , U_2^∗)
             = U_1^H A D_(1) blockdiag((U_2^H B)^T, . . . , (U_2^H B)^T)
             = A^c D_(1) blockdiag((B^c)^T, . . . , (B^c)^T).

Hence, by statement 2. and (17), r_{T^c_(1)} = r_{A^c} = ∑M_r. In a similar way one can prove that r_{T^c_(2)} = r_{B^c} = ∑N_r. Thus, T^c is ML rank-(∑M_r, ∑N_r, ·).

5. Follows from (20) and statement 2.


Lemma 2.2. Let I = ∑M_r, J = ∑N_r, let A and B be nonsingular, and let the matrices M_D and M_T be defined in (8) and (11), respectively. Then
1) the matrices M_D and M_T are related by

    M_T = blockdiag(B ⊗ A, . . . , B ⊗ A) · M_D · blockdiag(A^T ⊗ A^{−1}, B^{−1} ⊗ B^T);

2) the matrices X ∈ F^{I×I} and Y ∈ F^{J×J} satisfy the equations X D_k = D_k Y, k = 1, . . . , K, if and only if M_D [vec(X) ; vec(Y)] = 0;
3) the matrices X ∈ F^{I×I} and Y ∈ F^{J×J} satisfy the equations X T_k = T_k Y, k = 1, . . . , K, if and only if M_T [vec(X) ; vec(Y)] = 0;
4) Null(M_D) ⊇ span{ [vec(P_{I,r}) ; vec(P_{J,r})] : r = 1, . . . , R }, where the projectors P_{I,r} and P_{J,r} are defined in (10);
5) Null(M_T) = blockdiag(A^{−T} ⊗ A, B ⊗ B^{−T}) · Null(M_D);
6) dim Null(M_T) = dim Null(M_D) ≥ R;
7) if dim Null(M_D) = R, then the matrices

    [A D_1 B^T . . . A D_K B^T] ∈ F^{I×JK} and [B D_1^T A^T . . . B D_K^T A^T] ∈ F^{J×IK}

have full row rank.

Proof. 1) follows from (7), (8), and (11); 2)–4) are trivial; 5) follows from 1); 6) follows from 5) and 4); 7) follows from Lemma 2.1 1. and (18).
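Relation 1) of Lemma 2.2 is a purely algebraic identity in A, B, and the slices D_k, so it can be verified numerically. A sketch (illustrative; the sizes and seed are arbitrary, and the D_k need not be block-diagonal for this identity):

```python
import numpy as np

rng = np.random.default_rng(3)
I = J = 3
K = 2
A, B = rng.standard_normal((I, I)), rng.standard_normal((J, J))
Dk = [rng.standard_normal((I, J)) for _ in range(K)]
Tk = [A @ D @ B.T for D in Dk]    # slices of (7)

def stacked(Ms_, n_left, n_right):
    """Build a matrix of the form (8)/(11) from the given slices."""
    return np.vstack([np.hstack([np.kron(M.T, np.eye(n_left)),
                                 -np.kron(np.eye(n_right), M)]) for M in Ms_])

MD, MT = stacked(Dk, I, J), stacked(Tk, I, J)

# Left factor blockdiag(B (x) A, ..., B (x) A) and the right factor of 1).
left = np.kron(np.eye(K), np.kron(B, A))
right = np.block([[np.kron(A.T, np.linalg.inv(A)), np.zeros((I * I, J * J))],
                  [np.zeros((J * J, I * I)), np.kron(np.linalg.inv(B), B.T)]])
print(np.allclose(MT, left @ MD @ right))   # True
```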

Now we are ready to prove Theorem 1.3.

Proof of Theorem 1.3. By Lemma 2.1, w.l.o.g. we can assume that I = ∑M_r and J = ∑N_r.

1) First we show that decomposition (3) can be computed algebraically. From assumption 2) and Lemma 2.2 4), 6) we have that

    Null(M_D) = span{ [vec(P_{I,r}) ; vec(P_{J,r})] : r = 1, . . . , R } and dim Null(M_T) = R.

Hence, by Lemma 2.2 5),

(21)  Null(M_T) = blockdiag(A^{−T} ⊗ A, B ⊗ B^{−T}) · span{ [vec(P_{I,r}) ; vec(P_{J,r})] : r = 1, . . . , R }.

Let the columns of U ∈ F^{(I²+J²)×R} form a basis of Null(M_T). We represent U as

(22)  U = [ vec(U_{I,1}) . . . vec(U_{I,R}) ]
          [ vec(U_{J,1}) . . . vec(U_{J,R}) ],

where U_{I,r} ∈ F^{I×I} and U_{J,r} ∈ F^{J×J}, r = 1, . . . , R. From (21) and (22) it follows that there exists a unique nonsingular R × R matrix G such that

    [vec(U_{I,1}) . . . vec(U_{I,R})] = (A^{−T} ⊗ A)[vec(P_{I,1}) . . . vec(P_{I,R})]G,
    [vec(U_{J,1}) . . . vec(U_{J,R})] = (B ⊗ B^{−T})[vec(P_{J,1}) . . . vec(P_{J,R})]G.

Hence, (12)–(13) hold. Since the matrices A, B and G are nonsingular, it follows that the matrices U_{I,1}, . . . , U_{I,R} are linearly independent. Thus, we can recover A from (12). Similarly, we can recover B from (13).

2) Now we show that decomposition (3) is unique. Assume that (3) holds with A_r, B_r, etc. replaced by Ã_r, B̃_r, etc. To prove the uniqueness it is sufficient to show that Ã, B̃ and M_D̃ satisfy assumptions 1) and 2), i.e., that Ã and B̃ are nonsingular and dim Null(M_D̃) = R. By (7),

(23)  [A D_1 B^T . . . A D_K B^T] = [T_1 . . . T_K] = [Ã D̃_1 B̃^T . . . Ã D̃_K B̃^T] = Ã [D̃_1 B̃^T . . . D̃_K B̃^T],
(24)  [B D_1^T A^T . . . B D_K^T A^T] = [T_1^T . . . T_K^T] = [B̃ D̃_1^T Ã^T . . . B̃ D̃_K^T Ã^T] = B̃ [D̃_1^T Ã^T . . . D̃_K^T Ã^T].

Since, by Lemma 2.2 7), the matrices on the left-hand sides of (23) and (24) have full row rank, it follows that the matrices Ã and B̃ are nonsingular. Since, by assumption 2), dim Null(M_D) = R, and, by Lemma 2.2 5),

    Null(M_T) = blockdiag(A^{−T} ⊗ A, B ⊗ B^{−T}) · Null(M_D) = blockdiag(Ã^{−T} ⊗ Ã, B̃ ⊗ B̃^{−T}) · Null(M_D̃),

it follows that dim Null(M_D̃) = R.

Proof of Corollary 1.4. It is sufficient to prove that assumption 2) in Corollary 1.4 implies assumption 2) in Theorem 1.3. Thus, we need to show that the subspace of solutions of (9) is spanned by the trivial solutions (X, Y) = (P_{I,r}, P_{J,r}), r = 1, . . . , R, where P_{I,r} and P_{J,r} are defined in (10) and M_1 = · · · = M_R = 1.

Let X = (x_{ij})_{i,j=1}^R and let Y = (Y_{ij})_{i,j=1}^R consist of the blocks Y_{ij} ∈ F^{N_i×N_j}. Since the matrices D_k are block-diagonal, we can rewrite (9) as

    x_{ij} D_{j,k} = D_{i,k} Y_{ij},  1 ≤ i, j ≤ R,  k = 1, . . . , K,

which, by (14), is equivalent to

(25)  x_{ij} C_j = C_i Y_{ij},  1 ≤ i, j ≤ R.

We need to show that

(26)  Y_{ii} = x_{ii} I_{N_i},  1 ≤ i ≤ R,
(27)  Y_{ij} = O, x_{ij} = 0,  1 ≤ i ≠ j ≤ R.

(i) Let us prove (26). By (25),

(28)  C_i(x_{ii} I_{N_i} − Y_{ii}) = O,  1 ≤ i ≤ R.

Since the 1 × N_i × K tensor D_i in (3) is ML rank-(1, N_i, N_i) and the second matrix unfolding of D_i coincides with C_i^T, it follows that C_i^T has full row rank, i.e., that C_i has full column rank. Hence (26) follows from (28).


(ii) Let us prove (27). It is clear that (25) can be rewritten as

(29)  O = [C_i C_j] [ Y_{ij} ; −x_{ij} I_{N_j} ].

Assume to the contrary that x_{ij} ≠ 0. Then from (29), Sylvester's rank inequality, and assumption 2) it follows that

    0 ≥ r_{[C_i C_j]} + r_{[Y_{ij}^T x_{ij}I_{N_j}]^T} − (N_i + N_j) ≥ max(N_i, N_j) + 1 + N_j − (N_i + N_j) ≥ 1,

which is a contradiction. Thus, x_{ij} = 0. Hence, by (25), C_i Y_{ij} = O. Since C_i has full column rank, it follows that Y_{ij} = O.

3. Proofs of Theorem 1.5 and Theorem 1.10. We need the following two lemmas.

Lemma 3.1. Let

(30)  K = max( ⌈M/N⌉ + 1, ⌈N/M⌉ + 1, 3 ).

Then there exist matrices E_1, . . . , E_K ∈ F^{M×N} such that the system of matrix equations

(31)  X E_k = E_k Y,  X ∈ F^{M×M},  Y ∈ F^{N×N},  k = 1, . . . , K,

has only the trivial solutions X = cI_M and Y = cI_N, where c ∈ F.

Proof. It is clear that by taking the transpose of (31) we obtain an equivalent problem of the form (31) in which M and N are switched. Thus, w.l.o.g., we can always assume that M ≥ N. We consider three cases: M = N, N < M ≤ 2N, and 2N < M.

Case M = N. By assumption (30), K = 3. Let a ∈ F^N be a vector with distinct entries and let D ∈ F^{N×N} be a matrix with nonzero entries. We show that if E_1 = I_N, E_2 = diag(a), and E_3 = D, then (31) has only trivial solutions.

The equation X E_1 = E_1 Y implies that X = Y. Hence,

    Y diag(a) = Y E_2 = X E_2 = E_2 Y = diag(a) Y.

Since the entries of a are distinct, it follows that Y is a diagonal matrix, Y = diag(y_1, . . . , y_N). Finally, since the entries of E_3 = D are nonzero, the equation

    diag(y_1, . . . , y_N) D = Y E_3 = X E_3 = E_3 Y = D diag(y_1, . . . , y_N)

implies that y_1 = · · · = y_N =: c. Thus, X = Y = cI_N.
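The construction for the case M = N can be checked numerically: vectorizing (31) over the three matrices E_1 = I_N, E_2 = diag(a), E_3 = D and computing the nullity of the stacked system should give 1. A sketch with assumed N, a, and D (any a with distinct entries and D with nonzero entries would do):

```python
import numpy as np

N = 4
a = np.arange(1.0, N + 1)                     # distinct entries
D = np.ones((N, N)) + 0.1 * np.arange(N)      # all entries nonzero
Es = [np.eye(N), np.diag(a), D]

# Vectorize X E_k = E_k Y as [E_k^T (x) I_N, -I_N (x) E_k][vec X; vec Y] = 0.
Msys = np.vstack([np.hstack([np.kron(E.T, np.eye(N)),
                             -np.kron(np.eye(N), E)]) for E in Es])
nullity = 2 * N * N - np.linalg.matrix_rank(Msys)
print(nullity)   # 1: only the trivial solutions X = Y = c I_N
```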

Case N < M ≤ 2N . By assumption(30), K = 3. Let a ∈ FN be a vector with

distinct entries, D ∈ F(M −N )×(2N −M )be a full row rank matrix with nonzero entries,

and let bIM −N denote the matrix formed by the first M − N rows of IN. We show

that if E1= IN O  , E2= O D  , E3= diag (a) bIM −N  ,

then(31)has only trivial solutions. Let X =X1 X3 X2 X4



with X1∈ FN ×N, X2, XT3 ∈ 305

F(M −N )×N and X4 ∈ F(M −N )×(M −N ). Then the equations XE1 = E1Y, XE2 = 306

(11)

E2Y, and XE3= E3Y imply 307 X1= Y and X2= 0, (32) 308 X3D = O and X4D = DY, 309

X1diag (a) + X3bIM −N = diag (a)Y and X2diag (a) + X4bIM −N = bIM −NY,

(33)

310 311

respectively. Since X3D = O and D has full row rank, it follows that X3= O. Hence,

by(32)and(33),

Y diag (a) = X1diag (a) + X3bIM −N = diag (a)Y.

Since all entries of a are distinct, it follows that Y is a diagonal matrix, Y =

312

diag(y1, . . . , yN). From the second identities in (32) and (33) it follows that X4 = 313

diag(y1, . . . , yM −N). Finally, since the entries of D are nonzero, it follows from X4D = 314

DY that y1= · · · = yN =: c. Hence, Y = cIN and X = blockdiag(X1, X4) = cIM. 315

Case 2N < M. By assumption (30), K = ⌈M/N⌉ + 1. Let r = M − (K − 2)N. Then 0 < r ≤ N. We define E_1, . . . , E_{K−2} ∈ F^{M×N} and Ê_{K−1} ∈ F^{M×r} as blocks in the partition I_M = [E_1 . . . E_{K−2} Ê_{K−1}] and set E_{K−1} = [Ê_{K−1} O], where the zero matrix has N − r columns. We also set

    E_K = [ diag(a) ; D ; O ],

where a ∈ F^N is a vector with distinct entries and D ∈ F^{N×N} is a matrix with nonzero entries. We show that for such a choice of the matrices E_k system (31) has only the trivial solution. From the first K − 1 equations in (31) it follows that X = blockdiag(Y, . . . , Y, Ŷ), where Ŷ denotes the matrix located at the intersection of the first r rows and r columns of Y. The remaining equation X E_K = E_K Y in (31) implies that Y diag(a) = diag(a) Y and Y D = D Y. Now, in exactly the same way as in the case M = N, one can obtain that Y = cI_N for some c ∈ F. Hence, X = cI_M.

Lemma 3.2. Let

(34)  K = 2 if M_1 > N_1 and M_2 < N_2;
      K = ⌈M_2/N_2⌉ + 1 if M_1 > N_1 and M_2 ≥ N_2;
      K = ⌈N_1/M_1⌉ + 1 if M_1 ≤ N_1 and M_2 < N_2;
      K = ⌈M_2/N_2⌉ + ⌈N_1/M_1⌉ if M_1 ≤ N_1 and M_2 ≥ N_2.

Then there exist matrices E_1, . . . , E_K ∈ F^{M_2×N_2} and F_1, . . . , F_K ∈ F^{M_1×N_1} such that the system of matrix equations

(35)  X E_k = F_k Y,  X ∈ F^{M_1×M_2},  Y ∈ F^{N_1×N_2},  k = 1, . . . , K,

has only the zero solution.

Proof. We consider each of the four cases in (34) separately.

Case M_1 > N_1 and M_2 < N_2. By (34), K = 2. By definition put

    E_1 = O,  E_2 = [I_{M_2}  O],  F_1 = [ I_{N_1} ; O ],  F_2 = O.

Then the equations X E_1 = F_1 Y and X E_2 = F_2 Y imply that Y = O and X = O, respectively.


Case M_1 > N_1 and M_2 ≥ N_2. By (34), K = ⌈M_2/N_2⌉ + 1. Let E_1, . . . , E_{K−1} be matrices such that [E_1 . . . E_{K−1}] has full row rank, let E_K be an arbitrary matrix, let F_1, . . . , F_{K−1} be zero matrices, and let F_K = [ I_{N_1} ; O ]. Then from the first K − 1 equations in (35) it follows that X[E_1 . . . E_{K−1}] = O. Hence X = O. Now the remaining equation X E_K = F_K Y implies that Y = O.

Case M_1 ≤ N_1 and M_2 < N_2. By (34), K = ⌈N_1/M_1⌉ + 1. The problem can be reduced to the setting of the previous case by taking the transpose of the equations in (35).

Case M_1 ≤ N_1 and M_2 ≥ N_2. By (34), K = ⌈M_2/N_2⌉ + ⌈N_1/M_1⌉. Let E_1, . . . , E_{⌈M_2/N_2⌉} be matrices such that [E_1 . . . E_{⌈M_2/N_2⌉}] has full row rank and let E_{⌈M_2/N_2⌉+1} = · · · = E_K = O. Let also F_1 = · · · = F_{⌈M_2/N_2⌉} = O and let F_{⌈M_2/N_2⌉+1}, . . . , F_K be matrices such that [F_{⌈M_2/N_2⌉+1}^T . . . F_K^T]^T has full column rank. Then from the first ⌈M_2/N_2⌉ and the last ⌈N_1/M_1⌉ equations in (35) it follows that

    X[E_1 . . . E_{⌈M_2/N_2⌉}] = O and O = [F_{⌈M_2/N_2⌉+1}^T . . . F_K^T]^T Y,

respectively. Hence, X = O and Y = O.
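A numerical check (illustrative, with assumed sizes) of the construction in the first case of the proof: vectorizing (35) shows that the stacked system has full column rank, i.e., only the zero solution:

```python
import numpy as np

# Case M_1 > N_1, M_2 < N_2 with K = 2 and the matrices from the proof:
# E_1 = O, E_2 = [I_{M_2} O], F_1 = [I_{N_1}; O], F_2 = O.
M1, N1, M2, N2 = 3, 2, 2, 3
E1, F2 = np.zeros((M2, N2)), np.zeros((M1, N1))
E2 = np.hstack([np.eye(M2), np.zeros((M2, N2 - M2))])
F1 = np.vstack([np.eye(N1), np.zeros((M1 - N1, N1))])

# Vectorize X E_k - F_k Y = 0 as [E_k^T (x) I_{M1}, -(I_{N2} (x) F_k)]
# acting on [vec X; vec Y].
rows = [np.hstack([np.kron(E.T, np.eye(M1)), -np.kron(np.eye(N2), F)])
        for E, F in [(E1, F1), (E2, F2)]]
Msys = np.vstack(rows)

# Full column rank means the only solution is X = O, Y = O.
print(np.linalg.matrix_rank(Msys) == M1 * M2 + N1 * N2)   # True
```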

Now we are ready to prove Theorem 1.5.

Proof of Theorem 1.5. We show that assumptions 1) and 2) in Theorem 1.3 hold for generic matrices A, B, and D_1, . . . , D_K. The first two inequalities in (15) express the fact that the number of columns of A and B cannot exceed the number of rows. Since A and B are generic, this means that assumption 1) in Theorem 1.3 holds. To prove that assumption 2) in Theorem 1.3 also holds it is sufficient to show that the subspace of solutions of (9) has dimension R. In the remaining part of the proof we show that for generic matrices D_1, . . . , D_K the subspace of solutions of (9) is spanned by (P_{I,1}, P_{J,1}), . . . , (P_{I,R}, P_{J,R}), where the projectors P_{I,r}, P_{J,r} are defined in (10).

Let X = (X_{ij})_{i,j=1}^R and Y = (Y_{ij})_{i,j=1}^R consist of the blocks X_{ij} ∈ F^{M_i×M_j} and Y_{ij} ∈ F^{N_i×N_j}, respectively. Since the matrices D_k are block-diagonal, we can rewrite (9) in terms of the blocks X_{ij} and Y_{ij} as

(36)  X_{ij} D_{j,k} = D_{i,k} Y_{ij},  k = 1, . . . , K,

where i, j ∈ {1, . . . , R}. Let the subsets

    Ω_{ij}, Ω_i ⊆ (F^{M_1×N_1} × · · · × F^{M_R×N_R}) × · · · × (F^{M_1×N_1} × · · · × F^{M_R×N_R})

consist of the elements (D_{1,1}, . . . , D_{R,1}, . . . , D_{1,K}, . . . , D_{R,K}) and be defined as follows: the elements of Ω_{ij} (i ≠ j) are such that (36) has only the zero solution, and the elements of Ω_i are such that (36) has only the trivial solution, i.e., X_{ii} = c_i I_{M_i} and Y_{ii} = c_i I_{N_i} for some c_i ∈ F.

To prove that (X, Y) belongs to the linear subspace spanned by (P_{I,1}, P_{J,1}), . . . , (P_{I,R}, P_{J,R}) we have to show that (∩Ω_{ij}) ∩ (∩Ω_i) is a set of full measure. Since the intersection of a finite number of sets of full measure is also a set of full measure, it is sufficient to prove that all sets in the intersection have full measure. This can be done as follows.

1) Let i ≠ j. W.l.o.g. we can assume that i = 1 and j = 2. Then, by (36),

(37)  X_{12} D_{2,k} = D_{1,k} Y_{12},  k = 1, . . . , K.


It is clear that system (37) has only the zero solution if and only if the matrix

    M = [ D_{2,1}^T ⊗ I_{M_1}   −I_{N_2} ⊗ D_{1,1} ]
        [          ⋮                     ⋮         ]
        [ D_{2,K}^T ⊗ I_{M_1}   −I_{N_2} ⊗ D_{1,K} ]

has full column rank. Hence, from Lemma 3.2 and assumption (15) it follows that there exist matrices D_{1,1}, . . . , D_{1,K} and D_{2,1}, . . . , D_{2,K} such that M has full column rank. Now we assume that D_{1,1}, . . . , D_{1,K} and D_{2,1}, . . . , D_{2,K} in (37) are generic. Then, by [6, Lemma 6.3], M has full column rank, implying that system (37) has only the zero solution. Hence, by Fubini's theorem [7, Theorem C, p. 148], Ω_{12} is a set of full measure.

2) Let i = j. W.l.o.g. we can assume that i = 1. We have to prove that the system

(38)  X_{11} D_{1,k} = D_{1,k} Y_{11},  k = 1, . . . , K,

has only the trivial solution X_{11} = cI_{M_1} and Y_{11} = cI_{N_1}, where c ∈ F. It is clear that system (38) has only the trivial solution if and only if the dimension of the null space of the matrix

    M = [ D_{1,1}^T ⊗ I_{M_1}   −I_{N_1} ⊗ D_{1,1} ]
        [          ⋮                     ⋮         ]
        [ D_{1,K}^T ⊗ I_{M_1}   −I_{N_1} ⊗ D_{1,K} ]

is one. Hence, from Lemma 3.1 and assumption (15) it follows that there exist matrices D_{1,1}, . . . , D_{1,K} such that M_1² + N_1² − 1 columns of M are linearly independent. Now we assume that D_{1,1}, . . . , D_{1,K} are generic. Then, by [6, Lemma 6.3], the same M_1² + N_1² − 1 columns of M are linearly independent, implying that system (38) has only the trivial solution. Hence, by Fubini's theorem [7, Theorem C, p. 148], Ω_{11} is a set of full measure.

Proof of Theorem 1.10. We show that assumptions 1) and 2) in Theorem 1.3 hold for generic matrices A, B, and D_1, . . . , D_K, where B = A or B = A^∗ and D_1, . . . , D_K may be symmetric, Hermitian, or unconstrained.

1) Since A is generic and I ≥ ∑N_r, it follows that A and B have full column rank.

2) W.l.o.g. we assume that K = 3. We proceed as in the proof of Theorem 1.5. First we show that the subspace of the solutions of (9) has dimension R for a specific choice of D_1, D_2, and D_3. Let D_1 = I_{∑N_r}, let D_2 be a diagonal matrix with distinct real values on the main diagonal, and let D_3 = blockdiag(D_{1,3}, . . . , D_{R,3}), where the entries of D_{1,3}, . . . , D_{R,3} are nonzero. Then X D_1 = D_1 Y implies that X = Y. From X D_2 = D_2 Y = D_2 X it follows that X = diag(x_1, . . . , x_{∑N_r}). Finally, from diag(x_1, . . . , x_{∑N_r}) D_3 = X D_3 = D_3 Y = D_3 diag(x_1, . . . , x_{∑N_r}) it follows that X = Y = blockdiag(c_1 I_{N_1}, . . . , c_R I_{N_R}) for some c_1, . . . , c_R ∈ F. Thus, assumption 2) in Theorem 1.3 holds for this specific choice of D_1, D_2, and D_3. Hence some 2(∑N_r)² − R columns of M_D are linearly independent. From [6, Lemma 6.3] it follows that if D_1, D_2, and D_3 are generic symmetric, Hermitian, or unconstrained matrices, then the same 2(∑N_r)² − R columns of M_D are linearly independent, implying that dim Null(M_D) = R.


REFERENCES

[1] Y. Cai and C. Liu, An algebraic approach to nonorthogonal general joint block diagonalization, SIAM J. Matrix Anal. Appl., 38 (2017), pp. 50–71.
[2] G. Chabriel, M. Kleinsteuber, E. Moreau, H. Shen, P. Tichavsky, and A. Yeredor, Joint matrices decompositions and blind source separation: A survey of methods, identification, and applications, IEEE Signal Processing Magazine, 31 (2014), pp. 34–43.
[3] L. De Lathauwer, Decompositions of a higher-order tensor in block terms — Part II: Definitions and uniqueness, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 1033–1066.
[4] L. De Lathauwer, Blind separation of exponential polynomials and the decomposition of a tensor in rank-(L_r, L_r, 1) terms, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 1451–1474.
[5] L. De Lathauwer, B. De Moor, and J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1253–1278.
[6] I. Domanov and L. De Lathauwer, On the uniqueness of the canonical polyadic decomposition of third-order tensors — Part II: Overall uniqueness, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 876–903.
[7] P. R. Halmos, Measure Theory, Springer-Verlag, New York, 1974.
[8] T. Maehara and K. Murota, Algorithm for error-controlled simultaneous block-diagonalization of matrices, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 605–620.
[9] Y. Ming, On partial and generic uniqueness of block term tensor decompositions, Annali dell'Università di Ferrara, 60 (2014), pp. 465–493.
[10] M. Sørensen and L. De Lathauwer, Coupled canonical polyadic decompositions and (coupled) decompositions in multilinear rank-(L_{r,n}, L_{r,n}, 1) terms — Part I: Uniqueness, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 496–522.
