
Characterizing Higher-Order Tensors by Means of Subspaces

Tech. Report No. 11-32, SCD-SISTA, Dept. Electrical Engineering (ESAT), K.U.Leuven, 2011

Lieven De Lathauwer

February 21, 2011

Lemma 1. Consider $A \in \mathbb{K}^{N \times N}$ with EVD $A = E \cdot D_A \cdot E^{-1}$. Assume that $B \in \mathbb{K}^{N \times N}$ commutes with $A$. Then $B$ can be decomposed as $B = E \cdot D_B \cdot E^{-1}$, in which $D_B \in \mathbb{K}^{N \times N}$ is block-diagonal, where the blocks correspond to the different eigenvalues of $A$. The size of a block is equal to the algebraic multiplicity of the corresponding eigenvalue of $A$. Assume that the diagonal block of $D_A$ corresponding to eigenvalue $\lambda$ is given by $\mathrm{blockdiag}(\lambda I_P, J(\lambda, L_1), \ldots, J(\lambda, L_Q))$, in which $J(\lambda, L_q)$ denotes a Jordan cell of size $(L_q \times L_q)$ associated with $\lambda$. Then the corresponding diagonal block of $D_B$ has the structure $\mathrm{blockdiag}(M_P, \tilde{J}(\mu_1, L_1), \ldots, \tilde{J}(\mu_Q, L_Q))$, in which $M_P \in \mathbb{K}^{P \times P}$ and in which $\tilde{J}(\mu_q, L_q)$ is a Jordan cell of size $(L_q \times L_q)$ associated with $\mu_q$, or a block-diagonal matrix of size $(L_q \times L_q)$ consisting of several Jordan cells, each associated with $\mu_q$.

Consider two matrices $A, B \in \mathbb{K}^{N \times N}$. Assume that $A$ has full column and row rank. Knowledge that $B$ has the same column space and row space as $A$ obviously does not give us much information about the structure of $B$, besides the fact that it has full rank. However, the situation turns out to be very different for higher-order tensors. The “row space”, “column space” and “mode-3 space” of a third-order tensor tell a lot about its structure.
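To make these subspaces concrete before the formal conditions below, here is a small NumPy sketch (an illustration added here, not from the report). It fixes one ordering convention for the matricizations $A_{[2,3;1]}$ and $A_{[1,3;2]}$ (the report does not spell one out; only the spans matter) and checks that $B = A \cdot_1 M_1$ preserves the first span but, generically, not the second.

```python
import numpy as np

def unfold(T, row_modes, col_mode):
    # T_[row_modes; col_mode]: the slices along `col_mode` become columns (vectorized).
    # The ordering of the row index is a convention chosen here; only spans are compared.
    return np.transpose(T, list(row_modes) + [col_mode]).reshape(-1, T.shape[col_mode])

def same_span(X, Y, tol=1e-10):
    rank = lambda M: np.linalg.matrix_rank(M, tol)
    return rank(X) == rank(Y) == rank(np.hstack([X, Y]))

rng = np.random.default_rng(1)
N = 4
A = rng.standard_normal((N, N, N))        # generic, hence full multilinear rank
M1 = rng.standard_normal((N, N))          # nonsingular with probability one
B = np.einsum('ij,jkl->ikl', M1, A)       # B = A ·_1 M1

print(same_span(unfold(A, (1, 2), 0), unfold(B, (1, 2), 0)))   # True:  span(A_[2,3;1]) is kept
print(same_span(unfold(A, (0, 2), 1), unfold(B, (0, 2), 1)))   # False: span(A_[1,3;2]) changes
```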

1. We consider $A, B \in \mathbb{K}^{N \times N \times N}$. Tensor $A$ has full multilinear rank. We know that
$$\mathrm{span}(A_{[2,3;1]}) = \mathrm{span}(B_{[2,3;1]}) \quad \text{and} \quad \mathrm{span}(A_{[1,3;2]}) = \mathrm{span}(B_{[1,3;2]}). \qquad (1)$$
We examine how $A$ and $B$ are related.

From $\mathrm{span}(A_{[2,3;1]}) = \mathrm{span}(B_{[2,3;1]})$ we know that $B = A \cdot_1 M_1$ for some nonsingular matrix $M_1 \in \mathbb{K}^{N \times N}$. From $\mathrm{span}(A_{[1,3;2]}) = \mathrm{span}(B_{[1,3;2]})$ we know that $B = A \cdot_2 M_2$ for some nonsingular matrix $M_2 \in \mathbb{K}^{N \times N}$. Hence, we have that $A \cdot_1 M_1 = A \cdot_2 M_2$ for some nonsingular matrices $M_1$ and $M_2$. This is equivalent with
$$M_1 \cdot A_{:,:,n} = A_{:,:,n} \cdot M_2^T \quad \forall n. \qquad (2)$$

Since this condition holds for all slices, it also holds for a generic linear combination $A_{c1}$ of them, which by construction has full rank:
$$M_1 = A_{c1} \cdot M_2^T \cdot A_{c1}^{-1}. \qquad (3)$$
Substitution of (3) in (2) yields:
$$M_2^T \cdot A_{c1}^{-1} \cdot A_{:,:,n} = A_{c1}^{-1} \cdot A_{:,:,n} \cdot M_2^T \quad \forall n. \qquad (4)$$
In other words, $M_2^T$ commutes with $A_{c1}^{-1} \cdot A_{:,:,n}$, $1 \leq n \leq N$, and with all linear combinations of these matrices. Hence, $M_2^T$ commutes with $A_{c1}^{-1} \cdot A_{c2}$, in which $A_{c2}$ is a second generic linear combination of the slices $A_{:,:,n}$. If the EVD of $A_{c1}^{-1} \cdot A_{c2}$ is given by $E \cdot S \cdot E^{-1}$, then we have that $M_2^T = E \cdot \tilde{S} \cdot E^{-1}$, in which $\tilde{S}$ is structured as explained in Lemma 1. On the other hand, since $A_{c1}^{-1} \cdot A_{:,:,n}$ commutes with $M_2^T$, it has the form
$$A_{c1}^{-1} \cdot A_{:,:,n} = E \cdot D_n \cdot E^{-1}, \qquad (5)$$
in which $D_n$ is structured as explained in Lemma 1, $1 \leq n \leq N$. From (5) we have that
$$A_{:,:,n} = (A_{c1} \cdot E) \cdot D_n \cdot E^{-1} \quad \forall n. \qquad (6)$$
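The chain (2)–(6) can be checked numerically on a concrete example. The sketch below (my own illustration, not from the report) builds $A$ and $B$ from the same $N$ rank-1 terms with different scalings, so that $B = A \cdot_1 M_1 = A \cdot_2 M_2$ holds by construction, and then verifies (2), (3) and the commutation relation (4) for generic slice combinations $A_{c1}$, $A_{c2}$.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
U, V, W = (rng.standard_normal((N, N)) for _ in range(3))
lam = rng.standard_normal(N)                      # per-term scalings

A = np.einsum('ir,jr,kr->ijk', U, V, W)           # A = sum_r u_r ⊗ v_r ⊗ w_r
B = np.einsum('r,ir,jr,kr->ijk', lam, U, V, W)    # same terms, scaled by lam

M1 = U @ np.diag(lam) @ np.linalg.inv(U)          # B = A ·_1 M1
M2 = V @ np.diag(lam) @ np.linalg.inv(V)          # B = A ·_2 M2

# (2): M1 A(:,:,n) = A(:,:,n) M2^T for every mode-3 slice.
for n in range(N):
    assert np.allclose(M1 @ A[:, :, n], A[:, :, n] @ M2.T)

# Generic slice combinations, then (3) and (4).
c1, c2 = rng.standard_normal(N), rng.standard_normal(N)
Ac1, Ac2 = A @ c1, A @ c2                         # A @ c sums the slices A[:,:,n] with weights c[n]
assert np.allclose(M1, Ac1 @ M2.T @ np.linalg.inv(Ac1))                     # (3)
assert np.allclose(M2.T @ np.linalg.inv(Ac1) @ Ac2,
                   np.linalg.inv(Ac1) @ Ac2 @ M2.T)                         # (4) for the combination Ac2
print("relations (2)-(4) verified")
```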

We distinguish different cases, corresponding to the different possibilities for the structure of $\tilde{S}$:

(i) All eigenvalues of $M_2$ are identical and $M_2$ can be diagonalized, i.e., $\tilde{S} = \alpha I_N$. In this case, $M_2 = \alpha I_N$ and, consequently, $A$ and $B$ are scaled versions of each other. Note that also $M_1 = \alpha I_N$.

(ii) All eigenvalues of $M_2$ are distinct. In this case, all matrices $D_n$ are diagonal and (6) is the conventional slice-wise representation of a CP decomposition of $A$. Let us write:
$$A = \sum_{n=1}^{N} u_n \otimes v_n \otimes w_n, \qquad (7)$$
in which $\otimes$ denotes the tensor outer product and $U = [u_1 \cdots u_N] = A_{c1} \cdot E$, $V = [v_1 \cdots v_N] = E^{-T}$ and $W = [w_1 \cdots w_N] = [\mathrm{diag}(D_1) \cdots \mathrm{diag}(D_N)]^T$. The latter three matrices have full rank, otherwise $A$ would not have full multilinear rank. We have:
$$B = \sum_{n=1}^{N} (M_1 u_n) \otimes v_n \otimes w_n = \sum_{n=1}^{N} u_n \otimes (M_2 v_n) \otimes w_n. \qquad (8)$$
Since sufficient conditions for CP essential uniqueness are satisfied [5], (8) implies that $M_1 = U \cdot \Lambda_1 \cdot U^{-1}$ and $M_2 = V \cdot \Lambda_2 \cdot V^{-1}$, in which $\Lambda_1, \Lambda_2 \in \mathbb{K}^{N \times N}$ are nonsingular diagonal. In other words, $A$ and $B$ consist of the same rank-1 terms, possibly differently scaled.
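A short sketch of case (ii) (again an added illustration, under the same random construction as above): starting from $A$ built from $N$ random rank-1 terms and $B$ built from the same terms with different scalings, $M_1$ is recovered from the mode-1 slices and is indeed diagonalized by $U$, as predicted by (8).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
U, V, W = (rng.standard_normal((N, N)) for _ in range(3))
lam = rng.standard_normal(N)

A = np.einsum('ir,jr,kr->ijk', U, V, W)            # same rank-1 terms ...
B = np.einsum('r,ir,jr,kr->ijk', lam, U, V, W)     # ... differently scaled

# Solve B = A ·_1 M1 from the mode-1 unfoldings (slice A[i,:,:] vectorized as row i).
A1 = A.reshape(N, -1)
B1 = B.reshape(N, -1)
M1 = np.linalg.lstsq(A1.T, B1.T, rcond=None)[0].T  # unique solution of M1 A1 = B1

Lam1 = np.linalg.inv(U) @ M1 @ U                   # should equal diag(lam)
print(np.allclose(Lam1, np.diag(lam)))             # True
```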

(iii) $M_2$ can be diagonalized and has eigenvalues $\alpha_r$ with multiplicity $L_r$, $1 \leq r \leq R$:
$$M_2 = V \cdot \mathrm{blockdiag}(\alpha_1 I_{L_1}, \ldots, \alpha_R I_{L_R}) \cdot V^{-1}, \qquad (9)$$
in which $V = E^{-T}$. In this case, the matrices $D_n$ are block-diagonal with blocks of size $(L_r \times L_r)$, $1 \leq r \leq R$. Eq. (6) is the slice-wise representation of a decomposition of $A$ in rank-$(L_r, L_r, \cdot)$ terms [2, 3, 4]. We have:
$$B_{:,:,n} = (A_{c1} \cdot E) \cdot D_n \cdot (E^{-1} \cdot M_2^T) \quad \forall n. \qquad (10)$$
Substitution of the EVD of $M_2^T$ yields:
$$B_{:,:,n} = (A_{c1} \cdot E) \cdot (D_n \cdot \mathrm{blockdiag}(\alpha_1 I_{L_1}, \ldots, \alpha_R I_{L_R})) \cdot E^{-1} \quad \forall n. \qquad (11)$$
In other words, $A$ and $B$ consist of the same rank-$(L_r, L_r, \cdot)$ terms, possibly differently scaled. Note that
$$M_1 = U \cdot \mathrm{blockdiag}(\alpha_1 I_{L_1}, \ldots, \alpha_R I_{L_R}) \cdot U^{-1}, \qquad (12)$$
in which $U = A_{c1} \cdot E$.
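Case (iii) can be illustrated in the same spirit (added sketch, not from the report): $A$ is built as a sum of $R = 2$ rank-$(2, 2, \cdot)$ terms, $B$ scales each block term by $\alpha_r$, and the resulting $M_1$ relating the mode-1 slices of $A$ and $B$ is block-diagonalized by the mode-1 factor, in line with (12).

```python
import numpy as np

rng = np.random.default_rng(4)
N, L, R = 4, 2, 2                                   # two rank-(2,2,·) terms in K^{4x4x4}
alpha = np.array([2.0, -3.0])                       # block-wise scalings

Us = [rng.standard_normal((N, L)) for _ in range(R)]
Vs = [rng.standard_normal((N, L)) for _ in range(R)]
Cs = [rng.standard_normal((L, L, N)) for _ in range(R)]   # cores of the rank-(L,L,·) terms

term = lambda C, Uf, Vf: np.einsum('abn,ia,jb->ijn', C, Uf, Vf)
A = sum(term(Cs[r], Us[r], Vs[r]) for r in range(R))
B = sum(alpha[r] * term(Cs[r], Us[r], Vs[r]) for r in range(R))

# Recover M1 from B = A ·_1 M1 via the mode-1 unfoldings.
A1, B1 = A.reshape(N, -1), B.reshape(N, -1)
M1 = np.linalg.lstsq(A1.T, B1.T, rcond=None)[0].T

# (12)-like structure: with U = [U_1 U_2], U^{-1} M1 U = blockdiag(alpha_1 I_2, alpha_2 I_2).
Ufull = np.hstack(Us)
print(np.allclose(np.linalg.inv(Ufull) @ M1 @ Ufull,
                  np.kron(np.diag(alpha), np.eye(L))))      # True
```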


(iv) $M_2$ has eigenvalues $\alpha_r$ with multiplicity $L_r$, $1 \leq r \leq R$. For $\alpha_2, \ldots, \alpha_R$ the geometric multiplicity equals the algebraic multiplicity. The geometric multiplicity of $\alpha_1$ is equal to 1. In this case, $\tilde{S}$ is diagonal, up to the upper left diagonal block, which is a Jordan cell of size $(L_1 \times L_1)$. We have:
$$M_2^T = E \cdot \mathrm{blockdiag}(J(\alpha_1, L_1), \alpha_2 I_{L_2}, \ldots, \alpha_R I_{L_R}) \cdot E^{-1}. \qquad (13)$$
The matrices $D_n$ are block-diagonal with blocks of size $(L_r \times L_r)$, $1 \leq r \leq R$. The first $(L_1 \times L_1)$ block is a Jordan cell or a block-diagonal matrix consisting of several smaller Jordan cells, all associated with the same eigenvalue. The possible splitting in smaller Jordan cells and the eigenvalue can be different for different $n$. We can again interpret (6) as the slice-wise representation of a decomposition of $A$ in rank-$(L_r, L_r, \cdot)$ terms, where the components related to $\alpha_1$ are grouped in the first term. We have:
$$B_{:,:,n} = (A_{c1} \cdot E) \cdot D_n \cdot (E^{-1} \cdot M_2^T) \quad \forall n. \qquad (14)$$
Substitution of the EVD of $M_2^T$ yields:
$$B_{:,:,n} = (A_{c1} \cdot E) \cdot (D_n \cdot \mathrm{blockdiag}(J(\alpha_1, L_1), \alpha_2 I_{L_2}, \ldots, \alpha_R I_{L_R})) \cdot E^{-1} \quad \forall n. \qquad (15)$$
The first block term in the expansion of $A$ and $B$ is the same up to multiplication of the core by a Jordan cell. The other block terms are the same up to scaling. Note that:
$$M_1 = U \cdot \mathrm{blockdiag}(J(\alpha_1, L_1), \alpha_2 I_{L_2}, \ldots, \alpha_R I_{L_R}) \cdot U^{-1}, \qquad (16)$$
in which $U = A_{c1} \cdot E$.

(v) . . .

The conclusion is the following. Apart from the specific situation with Jordan cells, $A$ and $B$ are possibly different linear combinations of the same rank-$(L_r, L_r, \cdot)$ terms, where the size of the blocks depends on the multiplicities of the eigenvalues of $M_2$. (It is also possible to start from $M_1$.) In the case $L_1 = L_2 = \ldots = L_N = 1$, $A$ and $B$ are linear combinations of the same rank-1 terms.


For $N \geq 3$, all decompositions in two or more rank-$(L_r, L_r, \cdot)$ terms are sub-generic, as can easily be verified by counting degrees of freedom. This means that, for a generic tensor $A \in \mathbb{K}^{N \times N \times N}$ with $N \geq 3$, the conditions (1) imply that $A$ and $B$ are the same up to scaling. Over $\mathbb{C}$, the generic rank of $(2 \times 2 \times 2)$-tensors is 2. Over $\mathbb{R}$, the Lebesgue measure of both the set of rank-2 $(2 \times 2 \times 2)$-tensors and the set of rank-3 $(2 \times 2 \times 2)$-tensors is different from zero. See [1] and references therein. Hence, over $\mathbb{C}$, conditions (1) for $N = 2$ generically imply that $A$ and $B$ are possibly different linear combinations of the same two rank-1 terms. Over $\mathbb{R}$, the same holds true for $(2 \times 2 \times 2)$-tensors on the rank-2 manifold. For tensors on the rank-3 manifold, conditions (1) for $N = 2$ imply equality up to scaling.
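As a rough back-of-envelope check of the degrees-of-freedom argument, restricted to the simplest case $L_r = 1$ (an added illustration, not from the report): $R$ rank-1 terms in $\mathbb{K}^{N \times N \times N}$ carry $R(3N - 2)$ essential parameters, which for $N \geq 3$ stays below $N^3$ already for $R = 2$, while for $N = 2$ and $R = 2$ the counts match, consistent with the generic rank 2 over $\mathbb{C}$.

```python
# Parameter count of R rank-1 terms in K^{N x N x N}: R * (3N - 2)
# (3N entries per term minus 2 for the scaling indeterminacies within a term).
for N in (2, 3, 4):
    R = 2
    params, total = R * (3 * N - 2), N ** 3
    print(f"N={N}: 2 rank-1 terms have {params} parameters vs {total} tensor entries "
          f"-> {'sub-generic' if params < total else 'not sub-generic'}")
```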

2. We consider $A, B \in \mathbb{K}^{N \times N \times N}$. Tensor $A$ has full multilinear rank. We know that
$$\mathrm{span}(A_{[2,3;1]}) = \mathrm{span}(B_{[2,3;1]}), \quad \mathrm{span}(A_{[1,3;2]}) = \mathrm{span}(B_{[1,3;2]})$$
$$\text{and} \quad \mathrm{span}(A_{[1,2;3]}) = \mathrm{span}(B_{[1,2;3]}). \qquad (17)$$
We examine how $A$ and $B$ are related.

The first two conditions yield the relations derived under Problem 1. The last two conditions yield similar relations, this time involving modes 2 and 3. Combining the relations, we obtain the following variants of the cases discussed under Problem 1:

(i) A and B are scaled versions of each other.

(ii) $A$ and $B$ consist of the same rank-1 terms, possibly differently scaled.

(iii) $A$ and $B$ consist of the same rank-$(L_r, L_r, L_r)$ terms, possibly differently scaled.

(iv) $A$ and $B$ admit decompositions in rank-$(L_r, L_r, L_r)$ terms that are related. The first block term, related to $\alpha_1$, is the same up to multiplication of the core by a Jordan cell. It can be verified that the only possibility to satisfy (17) is a core $C$ of the form
$$C_{[1,2;3]} = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \\ 0 & \lambda \\ 0 & 0 \end{bmatrix}$$
with $\lambda \neq 0$ and $L_1 = 2$. The other block terms are the same up to scaling.

(v) . . .

Acknowledgements

Research supported by: (1) Research Council K.U.Leuven: GOA-MaNet, CoE EF/05/006 Optimization in Engineering (OPTEC), CIF1 and STRT1/08/023; (2) F.W.O.: (a) project G.0427.10N, (b) Research Communities ICCoS, ANMMM and MLDM; (3) the Belgian Federal Science Policy Office: IUAP P6/04 (DYSCO, “Dynamical systems, control and optimization”, 2007–2011); (4) EU: ERNSI.

References

[1] P. Comon, J.M.F. ten Berge, L. De Lathauwer, J. Castaing, “Generic and Typical Ranks of Multi-Way Arrays”, Lin. Alg. Appl., Vol. 430, No. 11–12, June 2009, pp. 2997–3007.

[2] L. De Lathauwer, “Decompositions of a Higher-Order Tensor in Block Terms — Part I: Lemmas for Partitioned Matrices”, SIAM J. Matrix Anal. Appl., Special Issue Tensor Decompositions and Applications, Vol. 30, No. 3, 2008, pp. 1022–1032.

[3] L. De Lathauwer, “Decompositions of a Higher-Order Tensor in Block Terms — Part II: Definitions and Uniqueness”, SIAM J. Matrix Anal. Appl., Special Issue Tensor Decompositions and Applications, Vol. 30, No. 3, 2008, pp. 1033–1066.

[4] L. De Lathauwer, D. Nion, “Decompositions of a Higher-Order Tensor in Block Terms — Part III: Alternating Least Squares Algorithms”, SIAM J. Matrix Anal. Appl., Special Issue Tensor Decompositions and Applications, Vol. 30, No. 3, 2008, pp. 1067–1083.


[5] R.A. Harshman, “Foundations of the PARAFAC procedure: Model and conditions for an ‘explanatory’ multi-mode factor analysis”, UCLA Working Papers in Phonetics, Vol. 16, 1970, pp. 1–84.
