by

Mahlare Gerald Sehoana

Thesis presented in partial fulfilment of the requirements for the degree of Master of Science (Mathematics) in the Faculty of Science at Stellenbosch University

Supervisor: Prof. L. van Wyk
Declaration

By submitting this thesis electronically, I declare that the entirety of the work contained therein is my own, original work, that I am the sole author thereof (save to the extent explicitly otherwise stated), that reproduction and publication thereof by Stellenbosch University will not infringe any third party rights and that I have not previously in its entirety or in part submitted it for obtaining any qualification.

Date: 2015/09/30

Copyright © 2015 Stellenbosch University
Abstract

On Commutativity and Lie nilpotency in Matrix Algebras

M.G. Sehoana
Department of Mathematical Sciences,
Stellenbosch University,
Private Bag X1, Matieland 7602, South Africa.
Thesis: MSc (Mathematics)
September 2015

In this thesis we first discuss the proof by Mirzakhani [9] of Schur's Theorem, which gives the maximum number of linearly independent matrices in a commutative algebra of n × n matrices over a field F. An example illustrating the application of Schur's Theorem is given. Secondly, we discuss the Cayley-Hamilton Theorem, which asserts that any n × n matrix A satisfies its characteristic polynomial. A deduction of a Cayley-Hamilton trace identity for a 2 × 2 matrix A over a commutative ring from the Cayley-Hamilton Theorem is shown. We then discuss the Cayley-Hamilton trace identity for any matrix A ∈ M_2(R) when (i) R is commutative, (ii) R is not necessarily commutative, (iii) R is not necessarily commutative and tr(A) = 0, and (iv) R is not necessarily commutative and satisfies the identity [[x, y], [x, z]] = 0. Lastly, we discuss the matrix algebras U*_n(R), in particular the matrix algebras U*_3(R) and U*_4(R), in relation to the polynomial identities [[. . . [[x_1, x_2], x_3], . . .], x_n] = 0, [x, y][w, z] = 0 and [[x, y], [w, z]] = 0.
.

Uittreksel

Oor Kommutatiwiteit en Lie nilpotensie in Matriksalgebras
(On Commutativity and Lie nilpotency in Matrix Algebras)

M.G. Sehoana
Departement Wiskundige Wetenskappe,
Universiteit Stellenbosch,
Privaatsak X1, Matieland 7602, Suid-Afrika.
Tesis: MSc (Wiskunde)
September 2015

(Translated from the Afrikaans.) In this thesis we first describe the proof by Mirzakhani [9] of Schur's Theorem, which gives the maximum number of linearly independent matrices in a commutative algebra of n × n matrices over a field F. An example is given which illustrates the application of Schur's Theorem. Secondly, we discuss the Cayley-Hamilton Theorem, which asserts that every n × n matrix A satisfies its characteristic polynomial. A deduction of a Cayley-Hamilton trace identity for a 2 × 2 matrix A over a commutative ring from the Cayley-Hamilton Theorem is given. We then discuss the Cayley-Hamilton trace identity for any matrix A ∈ M_2(R) when (i) R is commutative, (ii) R is not necessarily commutative, (iii) R is not necessarily commutative and tr(A) = 0, and (iv) R is not necessarily commutative and satisfies the identity [[x, y], [x, z]] = 0. Lastly, we discuss the matrix algebras U*_n(R), in particular the matrix algebras U*_3(R) and U*_4(R), in relation to the polynomial identities [[. . . [[x_1, x_2], x_3], . . .], x_n] = 0
.

Acknowledgements

I would like to express my sincere gratitude to the following people and organisations:

⋄ Prof Leon van Wyk for the guidance, direction and enlightenment throughout my studies. His patience and constructive comments kept my hopes alive. He brought back my love for Mathematics.

⋄ Prof David Kubayi for getting me started on this project and for the support that he, together with the Dean of FSA at UL, provided during his time as HOD.

⋄ My family for their moral support and motivation.

⋄ The Department of Mathematical Sciences at Stellenbosch University for allowing me the opportunity to study for an MSc degree.

⋄ Fellow students Sandile Mkhaliphi and Andry Rabenantoandro for assisting me with LaTeX.

⋄ Colleagues in the School of Mathematical and Computer Sciences at the University of Limpopo for their encouragement.

⋄ The DST-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS) for their financial support.

Dedications
Contents

Declaration i
Abstract ii
Uittreksel iii
Acknowledgements iv
Dedications v
Contents vi
1 Introduction 1
2 Mirzakhani's Simple Proof of Schur's Theorem 5
3 Cayley-Hamilton Theorem 14
4 Commutativity and Lie nilpotency in the Matrix Algebra U*_n(R) 21
  4.1 The ring U*_3(R) 21
  4.2 The ring U*_4(R) 34
5 Cayley-Hamilton Trace Identity for 2 × 2 Matrices 45
  5.1 Trace identities for 2 × 2 matrices 45
  5.2 Relationship between identities 55

Chapter 1

Introduction
This chapter is mainly a brief background and overview of the subsequent Chapters 2 to 5. Also in this chapter we discuss some of the concepts which we have used a number of times in this thesis. Where possible, we have supplied examples to substantiate claims made. Chapter 5 can be viewed as a continuation of Chapter 3, although in Chapter 3 we deal with matrices over the field R of real numbers and in Chapter 5 we deal with matrices over an arbitrary ring R. Coincidentally, Chapters 2 and 4 deal with upper triangular matrices. In Chapter 2 we encounter upper triangular matrices which mutually commute, whereas in Chapter 4 commutativity of the said matrices is not necessary.

A binary operation ∗ on a set A is said to be commutative if and only if x ∗ y = y ∗ x for all x, y ∈ A.
We recall that a ring R is an algebraic structure with two binary operations, called addition and multiplication. One of the axioms of a ring R states that addition is commutative. However, in a ring R multiplication is not necessarily commutative. If multiplication is commutative, such a ring is called a commutative ring. A nonempty subset B of a ring R is said to be a subring of R if B is itself a ring with respect to the operations of addition and multiplication in R. For a nonempty subset B of a ring R to be a subring it is sufficient that ab ∈ B and a − b ∈ B for all a, b ∈ B.
The collection M_n(R), where R is a ring, of all n × n matrices having elements of R as entries is a ring. For n ≥ 2, M_n(R) is not a commutative ring. But, for a commutative ring R, the ring

B = { ( a  b ; 0  a ) | a, b ∈ R }

is commutative, and B is a subring of M_2(R). For every ring R the trivial subring B = {0} is commutative. Thus every ring has at least one subring which is commutative.

In Chapter 2 we shall encounter a maximal commutative subalgebra of M_n(F) which we use to illustrate an application of Schur's Theorem. (A subring B of a ring R is said to be a maximal subring with respect to property Y if B ≠ R and there exists no subring C in R with the property Y such that B ⊂ C ⊂ R.) In fact, according to Schur's Theorem the number of linearly independent matrices in the subalgebra B above is ⌊2^2/4⌋ + 1 = 2.
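The commutativity of B and the Schur bound for n = 2 can be checked by machine. The following Python/NumPy snippet is an illustration added here (not part of the thesis), with real numbers standing in for the commutative ring R.

```python
import numpy as np

def b_matrix(a, b):
    # Element of B = { (a b; 0 a) : a, b in R }
    return np.array([[a, b], [0, a]], dtype=float)

X = b_matrix(2.0, 3.0)
Y = b_matrix(-1.0, 5.0)

# Elements of B commute: XY = YX
assert np.allclose(X @ Y, Y @ X)

# Schur's bound for n = 2: floor(2^2/4) + 1 = 2,
# matching the two-element basis {I, E_{1,2}} of B
assert (2**2) // 4 + 1 == 2
```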
Moreover, we see that the number of elements in a basis for B is 2. A basis of a vector space V is a subset W ⊆ V which is linearly independent and spans V. A basis of a vector space is a maximal linearly independent subset of that vector space.

The expression ⌊x⌋ represents the greatest integer less than or equal to x. If x is an integer, we have ⌊x⌋ = x and ⌊x + b⌋ = x for 0 < b < 1.
The characteristic polynomial of an n × n matrix A over a commutative ring R is defined to be

p(λ) = det(A − λI) = k_n λ^n + k_{n−1} λ^{n−1} + . . . + k_1 λ + k_0,   k_i ∈ R.
By the trace of a square matrix A, denoted tr(A), is meant the sum of the entries on the main diagonal of A, from the upper left to the lower right. The following properties of traces of any matrices A, B ∈ M_n(R) are used in this thesis:

(i) tr(A ± B) = tr(A) ± tr(B),
(ii) tr(cA) = c tr(A), where c is a constant.

The nontrivial coefficients of the characteristic polynomial of a matrix A can be expressed explicitly in terms of traces of powers of A (see [15], [13]). In [13], it is shown that if n = 4, then

p(λ) = λ^4 − T_1 λ^3 + (1/2)(T_1^2 − T_2) λ^2 − (1/6)(T_1^3 − 3 T_1 T_2 + 2 T_3) λ
       + (1/24)(T_1^4 − 6 T_1^2 T_2 + 8 T_1 T_3 + 3 T_2^2 − 6 T_4)

     = λ^4 − tr(A) λ^3 + (1/2)(tr^2(A) − tr(A^2)) λ^2
       − (1/6)(tr^3(A) − 3 tr(A) tr(A^2) + 2 tr(A^3)) λ
       + (1/24)(tr^4(A) − 6 tr^2(A) tr(A^2) + 8 tr(A) tr(A^3) + 3 tr^2(A^2) − 6 tr(A^4)),

where T_m = tr(A^m).
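These trace formulas can be sanity-checked numerically. The following Python/NumPy sketch (an illustration added here, not part of the thesis) compares the trace expression for p(λ) with det(A − λI) at sample points, for a random integer 4 × 4 matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)

# T_m = tr(A^m)
T = {m: np.trace(np.linalg.matrix_power(A, m)) for m in range(1, 5)}

# Coefficients of p(lambda) = det(A - lambda*I), highest power first,
# written via the trace formulas for n = 4
coeff = [1.0,
         -T[1],
         0.5 * (T[1]**2 - T[2]),
         -(T[1]**3 - 3*T[1]*T[2] + 2*T[3]) / 6.0,
         (T[1]**4 - 6*T[1]**2*T[2] + 8*T[1]*T[3] + 3*T[2]**2 - 6*T[4]) / 24.0]

for lam in (0.0, 1.0, -2.0, 3.5):
    p_traces = sum(c * lam**(4 - i) for i, c in enumerate(coeff))
    p_direct = np.linalg.det(A - lam * np.eye(4))
    assert np.isclose(p_traces, p_direct)
```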
In this thesis we deal with the case when n = 2, that is,

p(λ) = λ^2 − tr(A) λ + (1/2)(tr^2(A) − tr(A^2)),

and obviously replacing λ by A and introducing the identity matrix I ∈ M_2(R) yields

p(A) = A^2 − tr(A) A + (1/2)(tr^2(A) − tr(A^2)) I.

The equation just given leads to the Cayley-Hamilton trace identity for a 2 × 2 matrix, which we discuss in Chapter 5 under various hypotheses with 1/2 ∈ R.
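The n = 2 identity can be verified numerically; the short Python/NumPy check below is an illustration added here (not part of the thesis), using real matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(2, 2)).astype(float)

tr = np.trace
I = np.eye(2)

# Cayley-Hamilton trace identity for 2x2 matrices:
# A^2 - tr(A) A + (1/2)(tr(A)^2 - tr(A^2)) I = 0
lhs = A @ A - tr(A) * A + 0.5 * (tr(A)**2 - tr(A @ A)) * I
assert np.allclose(lhs, np.zeros((2, 2)))
```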
Definition 1. Let R be a ring (not necessarily commutative) and a, b ∈ R. An element of the form [a, b] = ab − ba is called a commutator, more precisely, the commutator of a and b.
Definition 2. A ring R is called Lie nilpotent of index n (n ≥ 2) if R satisfies the identity

[[[. . . [[x_1, x_2], x_3], . . .], x_n], x_{n+1}] = 0

but not the identity

[[. . . [[x_1, x_2], x_3], . . .], x_n] = 0.

Definition 3. A ring R is called Lie nilpotent if it is Lie nilpotent of index n for some n ≥ 2.
(We note that a commutative ring may be called a "Lie nilpotent ring of index 1".) The concepts of commutativity and Lie nilpotency are "somehow" related, in the sense that one implies the other. Commutativity always implies Lie nilpotency, and Lie nilpotency implies commutativity only if the concerned ring is of Lie nilpotent index 1. Thus, we can say commutativity is a stronger condition than Lie nilpotency. For example, in a commutative ring the identity [x_1, x_2] = 0 holds, which in turn implies [[x_1, x_2], x_3] = 0, which in turn implies [[[x_1, x_2], x_3], x_4] = 0, and so on. We shall see in Proposition 33 that the ring U*_3(R) satisfies the hypothesis for Lie nilpotency, but by Example 30, U*_3(R) is not commutative.
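This behaviour — noncommutativity together with the vanishing of the longer commutator — can be illustrated numerically for U*_3(R) over a commutative R (here the reals). The snippet below is an illustration added here, not part of the thesis.

```python
import numpy as np

def u3(a, x12, x13, x23):
    # Element of U*_3(R): a scalar a on the diagonal and arbitrary
    # entries above it (entries taken from the commutative ring R = float)
    return np.array([[a, x12, x13],
                     [0, a,   x23],
                     [0, 0,   a]], dtype=float)

def comm(x, y):
    # Commutator [x, y] = xy - yx of Definition 1
    return x @ y - y @ x

X, Y, Z = u3(1, 2, 3, 4), u3(5, 6, 7, 8), u3(0, 1, 0, 2)

assert not np.allclose(comm(X, Y), 0)        # U*_3(R) is not commutative
assert np.allclose(comm(comm(X, Y), Z), 0)   # but [[x, y], z] = 0 holds
```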
We further notice that in the algebras U*_n(R) we discuss in this thesis, R is required to be commutative in order to attain Lie nilpotency. This fact is observed in Theorem 44. It is shown in Example 45 that Lie nilpotency may not necessarily be attained in U*_n(R) if R is noncommutative. (In contrast, a subring V**_n(R) ⊂ U*_n(R), with a noncommutative ring R, defined in Chapter 4, is Lie nilpotent of index 2, for all n. The subring V**_n(R) is actually a version of the subring F_n given in Chapter 2, but with a noncommutative ring R.)

We see again the fundamental role of commutativity in Corollary 37, where an algebra U*_3(U*_3(R)), with noncommutative ring U*_3(R) and commutative R, satisfies the identity [[x, y], [w, z]] = 0 and does not satisfy either of the stronger identities [[x, y], z] = 0 and [x, y][w, z] = 0. On the other hand, the algebra U*_4(U*_3(R)) with commutative R does not satisfy [[x, y], [w, z]] = 0 but satisfies [[[x, y], [w, z]], [[u, v], [r, s]]] = 0.
For a commutative ring R, any product [x_1, y_1][x_2, y_2] in U*_n(R), n ≤ 4, is equal to zero, and the discussion just after Remark 48 shows that for n ≥ 7 a product [x_1, y_1][x_2, y_2][x_3, y_3] may not necessarily be equal to zero. This leads to Theorems 52 (ii) and 53, which give a smallest value of k for which any product of the form

[x_1, y_1][x_2, y_2] · · · [x_k, y_k]   (1.1)

in U*_n(R), R commutative, is equal to zero. This value of k depends on n. When R is not commutative, (1.1) may not necessarily be equal to zero for such values of k, as shown in Remark 56 (i.e., there are matrices in U*_n(R) for which [x_1, y_1][x_2, y_2] · · · [x_k, y_k] ≠ 0
).

Definition 4. Let R be an arbitrary ring. The matrix unit E_{i,j} in M_n(R), with 1 ≤ i, j ≤ n, is defined to be the matrix with 1 in position (i, j) and zeros elsewhere.

Note that

E_{i,j} E_{k,l} = E_{i,l} if j = k,  and  E_{i,j} E_{k,l} = 0 if j ≠ k.

Alternatively, E_{i,j} E_{k,l} = δ_{j,k} E_{i,l}, where

δ_{j,k} = 1 if j = k,  and  δ_{j,k} = 0 if j ≠ k.

The function δ_{j,k} is called the Kronecker delta. We observe that for a, b ∈ R,

(a E_{i,j})(b E_{k,l}) = (ab) E_{i,j} E_{k,l} = (ab) E_{i,l} if j = k,  and  0 if j ≠ k.   (1.2)
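The multiplication rule for matrix units is easy to check by machine; the small Python/NumPy illustration below is added here and is not part of the thesis.

```python
import numpy as np

def E(i, j, n=4):
    # Matrix unit E_{i,j} in M_n(R): 1 in position (i, j), zeros elsewhere
    # (indices are 1-based, as in the text)
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

# E_{i,j} E_{k,l} = delta_{j,k} E_{i,l}
assert np.array_equal(E(1, 2) @ E(2, 3), E(1, 3))           # j = k
assert np.array_equal(E(1, 2) @ E(3, 4), np.zeros((4, 4)))  # j != k
```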
It should be noted that in all the examples that show that certain statements do not necessarily hold when R is noncommutative, we used specific matrices in the ring U*_n(R), because there may be matrices in U*_n(R) for which the statements hold. For example, if R is noncommutative, a product of the form (1.1) can be zero, as in

[E_{1,2}, E_{1,3}][A_2, B_2] · · · [A_k, B_k] = 0,

for all k and E_{1,2}, E_{1,3}, A_k, B_k in U*_n(R). We have [E_{1,2}, E_{1,3}] = 0, according to Definition 1 and the observation that follows after Definition 4.

Lastly, at the end of Chapter 5 we discuss the proof of the fact that in a ring containing 1 . . .
Chapter 2

Mirzakhani's Simple Proof of Schur's Theorem
For a field F, the ring M_n(F) of n × n matrices with n > 1 and elements in F is noncommutative. But there are subalgebras S ⊂ M_n(F) which are commutative. Theorem 14 in this chapter, known as the Theorem of Schur, deals with subalgebras of M_n(F) of mutually commutative matrices. It gives the maximum number of linearly independent matrices in a maximal commutative subalgebra. Mirzakhani [9] provided a simpler proof, using induction, of this theorem, which we dissect in this chapter. Mirzakhani [9] also provided an example of a commutative subalgebra satisfying the hypothesis of Theorem 14.
Definition 5. A set X = {x_1, x_2, . . . , x_n} is said to be a spanning set of a vector space W over a field F if every w ∈ W is a linear combination of the x_i ∈ X, i.e., w = a_1 x_1 + a_2 x_2 + · · · + a_n x_n, where a_i ∈ F.
Definition 6. A set V = {v_1, v_2, . . . , v_r} of non-zero vectors in a vector space over a field F is said to be linearly independent whenever a_1 v_1 + a_2 v_2 + · · · + a_r v_r = 0 implies that a_i = 0 for all i, where a_i ∈ F.
Definition 7. Let A = [A_1 | A_2 | . . . | A_n] be an m × n matrix over a field F, with the A_i's as columns of A. The rank of A is defined to be the maximum number of linearly independent vectors in the set {A_1, A_2, . . . , A_n}.
Definition 8. Let A be an m × n matrix over a field F. The null space of A is the set of all x in F^n such that Ax = 0. The dimension of the null space of A is called the nullity of A.

Remark 9. The nullity of a matrix A is the dimension of the solution space of Ax = 0, which is the same as the number of parameters in the general solution of Ax = 0, and which is the same as the number of free variables.
Theorem 10. Let A be an m × n matrix. Then rank A + nullity A = n.
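Theorem 10 can be illustrated numerically; the following Python/NumPy sketch (added here, not part of the thesis) extracts a null-space basis from the singular value decomposition and checks the rank-nullity count.

```python
import numpy as np

A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],   # row 2 = 2 * row 1, so the rank drops
              [0., 1., 1., 0.]])

n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# Null-space basis: the right singular vectors beyond the rank
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]              # (n - rank) orthonormal vectors
for v in null_basis:
    assert np.allclose(A @ v, 0)    # each one solves Ax = 0

assert rank + len(null_basis) == n  # Theorem 10: rank A + nullity A = n
```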
A submatrix of any given matrix A is a matrix which is obtained from A by removing any number of rows and/or columns. A matrix is said to be partitioned whenever it is divided into submatrices by drawing vertical and horizontal lines between its rows and columns.

Example 11. If

A =
( a_{1,1}  a_{1,2}  · · ·  a_{1,n}
  a_{2,1}  a_{2,2}  · · ·  a_{2,n}
  . . .    . . .    . . .  . . .
  a_{m,1}  a_{m,2}  · · ·  a_{m,n} ),

then a partitioning of A into submatrices B, C, D and E of order 3 × 2, 3 × (n − 2), (m − 3) × 2 and (m − 3) × (n − 2), respectively, is

( B  C
  D  E ),

where B consists of the first three rows and first two columns of A, C of the first three rows and last n − 2 columns, D of the last m − 3 rows and first two columns, and E of the last m − 3 rows and last n − 2 columns. Here B, C, D and E can be thought of as elements of A.
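Block partitioning as in Example 11 can be mirrored with array slicing; the short NumPy illustration below is added here and is not part of the thesis.

```python
import numpy as np

m, n = 5, 6
A = np.arange(m * n, dtype=float).reshape(m, n)

# Partition A into B (3x2), C (3x(n-2)), D ((m-3)x2), E ((m-3)x(n-2))
B, C = A[:3, :2], A[:3, 2:]
D, E = A[3:, :2], A[3:, 2:]

# Reassembling the blocks recovers A
assert np.array_equal(np.block([[B, C], [D, E]]), A)
```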
We note that the partitioning of a matrix into submatrices is not unique.

Definition 12. A field F is said to be algebraically closed if every polynomial equation a_0 + a_1 x + a_2 x^2 + · · · + a_n x^n = 0 with coefficients in F has a solution in F.

The field R of real numbers is not algebraically closed, as the equation x^2 + 1 = 0 does not have a solution in R. But the field C of complex numbers is algebraically closed (see [6]).

We state, without proof, the following theorem (which can be found in [4]) because of its significance in the proof of Theorem 14.
Theorem 13. Let F_n be a family of commuting matrices of order n over an algebraically closed field F. Then there exists a nonsingular matrix P of order n with entries in F such that P^{−1} F_n P is a family of upper triangular matrices.
Theorem 14. The maximum number of mutually commuting linearly independent matrices of order n over a field F is ⌊n^2/4⌋ + 1.
Proof. The proof is by mathematical induction.

(i) For n = 1 the statement is obviously true.

(ii) Assume the theorem is true for n − 1, i.e., a set consisting of mutually commuting matrices of order (n − 1) × (n − 1) has at most ⌊(n − 1)^2/4⌋ + 1 linearly independent matrices.

(iii) Let F_n be a family of commuting matrices of order n over a field F. Suppose F_n has more than ⌊n^2/4⌋ + 1 linearly independent matrices. Assume, without loss of generality, that F is algebraically closed; then by Theorem 13 there exists a nonsingular matrix P with entries in F such that P^{−1} F_n P is a family of upper triangular matrices. Also, for any T, U ∈ F_n,
(P^{−1} U P)(P^{−1} T P) = P^{−1} U (P P^{−1}) T P = P^{−1} (U T) P

and

(P^{−1} T P)(P^{−1} U P) = P^{−1} T (P P^{−1}) U P = P^{−1} (T U) P.

But U T = T U, and thus (P^{−1} U P)(P^{−1} T P) = (P^{−1} T P)(P^{−1} U P), showing that the matrices in P^{−1} F_n P are mutually commuting. Let V be the vector space spanned by the set P^{−1} F_n P. Then V consists of upper triangular matrices which commute. Thus, P^{−1} F_n P ⊆ span{P^{−1} F_n P} = V and dim V ≥ ⌊n^2/4⌋ + 2.
Since a subset of a linearly independent set is also linearly independent, there exists a linearly independent subset {A_1, A_2, . . . , A_q} of a basis of V, q = ⌊n^2/4⌋ + 2. Since each of the A_i's is an upper triangular matrix, the partitioning of each A_i into submatrices M_i, H_i and a zero matrix of order (n − 1) × (n − 1), 1 × n and (n − 1) × 1, respectively, yields matrices of the form

A_i = ( H_i
        0  M_i ),

where H_i is the first row of A_i and the zero block of order (n − 1) × 1 is the rest of the first column.
Now, partitioning the H_i further (where L_i, N_i and M_i are submatrices of order 1 × 1, 1 × (n − 1) and (n − 1) × (n − 1), respectively), we have

A_i A_j = ( L_i  N_i ; 0  M_i )( L_j  N_j ; 0  M_j ) = ( L_i L_j   L_i N_j + N_i M_j ; 0   M_i M_j )

and

A_j A_i = ( L_j  N_j ; 0  M_j )( L_i  N_i ; 0  M_i ) = ( L_j L_i   L_j N_i + N_j M_i ; 0   M_j M_i ).

Since A_i A_j = A_j A_i, it follows that M_i M_j = M_j M_i, i.e., the M_i's commute.
Let W be the vector space spanned by the set {M_1, M_2, . . . , M_q}, so the elements of W are (n − 1) × (n − 1) matrices. Suppose k = dim W. Then it follows from the induction hypothesis that k ≤ ⌊(n − 1)^2/4⌋ + 1. Assume, without loss of generality, that W is spanned by the linearly independent set of matrices {M_1, M_2, . . . , M_k}. Then for i > k each M_i is expressible as a linear combination M_i = Σ_{j=1}^{k} c_{i,j} M_j with scalars c_{i,1}, . . . , c_{i,k} in F. Now, for i > k, let B_i = A_i − Σ_{j=1}^{k} c_{i,j} A_j.
Then for i = k + 1, . . . , ⌊n^2/4⌋ + 2,

B_i = A_i − Σ_{j=1}^{k} c_{i,j} A_j
    = ( H_i ; 0  M_i ) − ( Σ_{j=1}^{k} c_{i,j} H_j ; 0  Σ_{j=1}^{k} c_{i,j} M_j )
    = ( t_i ; 0 ),

where t_i = H_i − Σ_{j=1}^{k} c_{i,j} H_j is a 1 × n matrix. Now consider a linear combination

Σ_{i=k+1}^{q} d_i B_i = Σ_{i=k+1}^{q} d_i ( A_i − Σ_{j=1}^{k} c_{i,j} A_j ) = 0.

Since the A_i's are linearly independent, it follows that the scalars d_i = d_i c_{i,j} = 0, and so the B_i's are linearly independent. Furthermore, if
Σ_{i=k+1}^{q} d_i t_i = 0, then

Σ_{i=k+1}^{q} d_i ( t_i ; 0 ) = Σ_{i=k+1}^{q} d_i B_i = 0,

and thus the scalars d_i = 0; hence the set {t_{k+1}, . . . , t_q} is linearly independent over F.
Now consider an alternative partitioning of each of A_1, A_2, . . . , A_q, q = ⌊n^2/4⌋ + 2, into submatrices M′_i, H′_i and a zero matrix of order (n − 1) × (n − 1), n × 1 and 1 × (n − 1), respectively, as follows: M′_i is the upper left (n − 1) × (n − 1) block of A_i, H′_i is the last column of A_i, and the zero row of order 1 × (n − 1) lies beneath M′_i, so that

A_i = ( M′_i           | H′_i ),
      ( 0  0  · · ·  0 |      )

for all i. Again, since A_i A_j = A_j A_i, it follows as before that M′_i M′_j = M′_j M′_i, i.e., the M′_i's commute. Let W′ be the vector space spanned by the set {M′_1, M′_2, . . . , M′_q}, so the elements of W′ are (n − 1) × (n − 1) matrices. Suppose s = dim W′. Then from the induction hypothesis it follows that s ≤ ⌊(n − 1)^2/4⌋ + 1. Assume, again without loss of generality, that W′ is spanned by the linearly independent set of matrices {M′_1, M′_2, . . . , M′_s}.
Then for i > s each M′_i is expressible as a linear combination M′_i = Σ_{j=1}^{s} c′_{i,j} M′_j with scalars c′_{i,1}, . . . , c′_{i,s} in F. Now, for i > s, let B′_i = A_i − Σ_{j=1}^{s} c′_{i,j} A_j. A similar argument as before shows that

B′_i = ( 0  t′_i ),

where t′_i = H′_i − Σ_{j=1}^{s} c′_{i,j} H′_j is an n × 1 matrix, i = s + 1, . . . , ⌊n^2/4⌋ + 2.
The B_i's are linear combinations of the A_i's, so the B_i's are in V. Similarly the B′_j's are in V, and hence B_i B′_j = B′_j B_i. From

B_i B′_j = ( t_i ; 0 )( 0  t′_j ) = ( 0  t_i t′_j ; 0  0 )

and

B′_j B_i = ( 0  t′_j )( t_i ; 0 ) = 0

it follows that t_i t′_j = 0 for every i = k + 1, . . . , q and j = s + 1, . . . , q, where q = ⌊n^2/4⌋ + 2.
Let

A = ( t_{k+1}
      t_{k+2}
      . . .
      t_q )
  = ( a_{k+1,1}  a_{k+1,2}  · · ·  a_{k+1,n−1}  a_{k+1,n}
      a_{k+2,1}  a_{k+2,2}  · · ·  a_{k+2,n−1}  a_{k+2,n}
      . . .      . . .      · · ·  . . .        . . .
      a_{q,1}    a_{q,2}    · · ·  a_{q,n−1}    a_{q,n} ),

where q = ⌊n^2/4⌋ + 2 and t_i = (a_{i,1}, a_{i,2}, . . . , a_{i,n}) for i = k + 1, . . . , q. Since the t_i's are linearly independent, it follows that

rank A ≥ ⌊n^2/4⌋ + 2 − (k + 1) + 1 = ⌊n^2/4⌋ − k + 2.
And t_i t′_j = 0 implies A t′_j = 0, which implies that t′_j is in the null space of A for all j = s + 1, . . . , ⌊n^2/4⌋ + 2. Furthermore, the t′_j's are linearly independent; thus,

nullity A ≥ ⌊n^2/4⌋ + 2 − (s + 1) + 1 = ⌊n^2/4⌋ − s + 2.

Now

n = rank A + nullity A ≥ ( ⌊n^2/4⌋ − k + 2 ) + ( ⌊n^2/4⌋ − s + 2 ) = 2⌊n^2/4⌋ + 4 − (k + s).

But
k + s ≤ ( ⌊(n−1)^2/4⌋ + 1 ) + ( ⌊(n−1)^2/4⌋ + 1 ), so

n ≥ 2⌊n^2/4⌋ + 4 − ( ⌊(n − 1)^2/4⌋ + 1 + ⌊(n − 1)^2/4⌋ + 1 ) = 2( ⌊n^2/4⌋ − ⌊(n − 1)^2/4⌋ + 1 ).
If n is even, say n = 2m, m ∈ Z^+, then, since ⌊(2m − 1)^2/4⌋ = ⌊m^2 − m + 1/4⌋ = m^2 − m,

n ≥ 2( ⌊(2m)^2/4⌋ − ⌊(2m − 1)^2/4⌋ + 1 ) = 2( m^2 − (m^2 − m) + 1 ) = 2m + 2 = n + 2.
If n is odd, say n = 2m + 1, m ∈ Z^+, then, since ⌊(2m + 1)^2/4⌋ = ⌊m^2 + m + 1/4⌋ = m^2 + m,

n ≥ 2( ⌊(2m + 1)^2/4⌋ − ⌊(2m)^2/4⌋ + 1 ) = 2( (m^2 + m) − m^2 + 1 ) = 2m + 2 = n + 1.

We have thus arrived at a contradiction in both cases. Thus, the assumption that F_n has more than ⌊n^2/4⌋ + 1 linearly independent matrices leads to a contradiction. Therefore F_n has at most ⌊n^2/4⌋ + 1 linearly independent matrices.
The following commutative subalgebra of M_n(F) given in [9] is an example of a subalgebra containing ⌊n^2/4⌋ + 1 linearly independent matrices. The example is true for every positive integer n. This shows that the upper bound ⌊n^2/4⌋ + 1 cannot be lowered. We look at the case when n = 5 for illustrative purposes.

Example 15. The subalgebra

F_n = { aI + Σ a_{i,j} E_{i,j} | 1 ≤ i ≤ ⌊n/2⌋, ⌊n/2⌋ + 1 ≤ j ≤ n, a, a_{i,j} ∈ F }

of M_n(F), F a field, is a commutative subalgebra which has ⌊n^2/4⌋ + 1 linearly independent
We show that
F
n
is indeed ommutative. Sin e, by denition ofF
n
,i
is never equal tobI + b
i,j
E
i,j
inF
n
we have(aI + a
i,j
E
i,j
)(bI + b
i,j
E
i,j
) = abI + ab
i,j
E
i,j
+ a
i,j
bE
i,j
+ a
i,j
b
i,j
E
i,j
E
i,j
= abI + ab
i,j
E
i,j
+ a
i,j
bE
i,j
and
(bI + b
i,j
E
i,j
)(aI + a
i,j
E
i,j
) = baI + ba
i,j
E
i,j
+ b
i,j
aE
i,j
+ b
i,j
a
i,j
E
i,j
E
i,j
= baI + ba
i,j
E
i,j
+ b
i,j
aE
i,j
.
Sin e
F
isa eld, itthen follows that elements ofF
n
ommute.Wenoti ethat every element
We notice that every element aI + a_{i,j} E_{i,j} of F_n can be expressed as a linear combination

aI + a_{i,j} E_{i,j} = aI + a_{1,⌊n/2⌋+1} E_{1,⌊n/2⌋+1} + . . . + a_{1,n} E_{1,n}
                         + a_{2,⌊n/2⌋+1} E_{2,⌊n/2⌋+1} + . . . + a_{2,n} E_{2,n}
                         + . . .
                         + a_{⌊n/2⌋,⌊n/2⌋+1} E_{⌊n/2⌋,⌊n/2⌋+1} + . . . + a_{⌊n/2⌋,n} E_{⌊n/2⌋,n},

and also the set

W = { I, E_{i,j} | 1 ≤ i ≤ ⌊n/2⌋, ⌊n/2⌋ + 1 ≤ j ≤ n }

is linearly independent. Thus, W is a basis for F_n. The number of elements in W is ⌊n/2⌋ (n − ⌊n/2⌋) + 1.
Now, if n is even, say n = 2m, m ∈ Z^+, then

⌊n/2⌋ (n − ⌊n/2⌋) + 1 = ⌊2m/2⌋ (2m − ⌊2m/2⌋) + 1 = m(2m − m) + 1 = m^2 + 1 = (2m)^2/4 + 1 = ⌊n^2/4⌋ + 1.

If n is odd, say n = 2m + 1, m ∈ Z^+, then

⌊n/2⌋ (n − ⌊n/2⌋) + 1 = ⌊(2m + 1)/2⌋ (2m + 1 − ⌊(2m + 1)/2⌋) + 1 = m(2m + 1 − m) + 1 = m^2 + m + 1 = ⌊m^2 + m + 1/4⌋ + 1 = ⌊(2m + 1)^2/4⌋ + 1 = ⌊n^2/4⌋ + 1.
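Mirzakhani's subalgebra F_n can be generated by machine for small n; the following Python/NumPy sketch (added here, not part of the thesis) builds the basis W and checks both the count ⌊n^2/4⌋ + 1 and mutual commutativity.

```python
import numpy as np
from itertools import product

def schur_basis(n):
    # Basis {I, E_{i,j} : 1 <= i <= floor(n/2) < j <= n} of the
    # commutative subalgebra F_n from Example 15 (0-based indices)
    basis = [np.eye(n)]
    for i, j in product(range(n // 2), range(n // 2, n)):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        basis.append(E)
    return basis

for n in range(1, 8):
    basis = schur_basis(n)
    assert len(basis) == n**2 // 4 + 1    # matches Schur's bound
    for X in basis:                       # mutually commuting
        for Y in basis:
            assert np.allclose(X @ Y, Y @ X)
```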
Example 16. In particular, if n = 5 and F is a field, then

F_5 = { aI + a_{i,j} E_{i,j} | 1 ≤ i ≤ ⌊5/2⌋, ⌊5/2⌋ + 1 ≤ j ≤ 5, a, a_{i,j} ∈ F }
    = { aI + a_{i,j} E_{i,j} | 1 ≤ i ≤ 2, 3 ≤ j ≤ 5, a, a_{i,j} ∈ F }.

Let A, B ∈ F_5 be such that

A =
( a  0  a_{1,3}  a_{1,4}  a_{1,5}
  0  a  a_{2,3}  a_{2,4}  a_{2,5}
  0  0  a        0        0
  0  0  0        a        0
  0  0  0        0        a ) = aI + A_{1,2}

and

B =
( b  0  b_{1,3}  b_{1,4}  b_{1,5}
  0  b  b_{2,3}  b_{2,4}  b_{2,5}
  0  0  b        0        0
  0  0  0        b        0
  0  0  0        0        b ) = bI + B_{1,2},

where

A_{1,2} =
( 0  0  a_{1,3}  a_{1,4}  a_{1,5}
  0  0  a_{2,3}  a_{2,4}  a_{2,5}
  0  0  0        0        0
  0  0  0        0        0
  0  0  0        0        0 )

and B_{1,2} is defined analogously with the b_{i,j}'s.

We see that A_{1,2} B_{1,2} = B_{1,2} A_{1,2} = 0, and thus

AB = (aI + A_{1,2})(bI + B_{1,2}) = abI + a B_{1,2} + b A_{1,2}

and

BA = (bI + B_{1,2})(aI + A_{1,2}) = baI + b A_{1,2} + a B_{1,2}.

Therefore AB = BA, indicating that the elements of F_5 are indeed mutually commuting. It is clear that {I_5, E_{1,3}, E_{1,4}, E_{1,5}, E_{2,3}, E_{2,4}, E_{2,5}} is a linearly independent set of matrices in F_5. Moreover, note that this set has 7 elements and ⌊5^2/4⌋ + 1 = 7.
We observe that in the subalgebra F_n in Example 15 the number ⌊n^2/4⌋ + 1 is also equal to the number of elements in a basis for F_n.

Chapter 3

Cayley-Hamilton Theorem
In this chapter we give a detailed proof of the Cayley-Hamilton Theorem for an arbitrary matrix in M_n(R), where R denotes the field of real numbers. We further discuss in brief the trace identity for a 2 × 2 matrix over R which arises from the Cayley-Hamilton Theorem.

Consider an n × n matrix A ∈ M_n(R) given by

A = (a_{i,j}) =
( a_{1,1}  a_{1,2}  · · ·  a_{1,n}
  a_{2,1}  a_{2,2}  · · ·  a_{2,n}
  ·        ·        · · ·  ·
  a_{n,1}  a_{n,2}  · · ·  a_{n,n} ).   (3.1)

The following definitions are given in relation to the general n × n matrix A ∈ M_n(R) in Equation 3.1.

Definition 17. (Nering E.D. [10])
The determinant of the matrix A = (a_{i,j}) is defined to be the scalar det(A) = |a_{i,j}| computed according to the rule

det(A) = |a_{i,j}| = Σ_π (sgn π) a_{1,π(1)} a_{2,π(2)} · · · a_{n,π(n)},

where the sum is taken over all permutations of the elements of the set S = {1, . . . , n}.

We note the following with regard to the above definition:
(i) "sgn π" means "the sign of π".
(ii) A permutation π of a set S is defined to be a one-to-one mapping of S onto itself.
(iii) The element which the permutation π associates with i is denoted by π(i).
(iv) sgn π = +1 if π is an even permutation.
(v) sgn π = −1 if π is an odd permutation.

Example 18. Consider a
3 × 3 matrix B ∈ M_3(R) given by

B =
( b_{1,1}  b_{1,2}  b_{1,3}
  b_{2,1}  b_{2,2}  b_{2,3}
  b_{3,1}  b_{3,2}  b_{3,3} ).   (3.2)

The determinant of B is

det(B) = Σ_π (sgn π) b_{1,π(1)} b_{2,π(2)} b_{3,π(3)}
       = b_{1,1} b_{2,2} b_{3,3} + b_{1,2} b_{2,3} b_{3,1} + b_{1,3} b_{2,1} b_{3,2} − b_{1,2} b_{2,1} b_{3,3} − b_{1,3} b_{2,2} b_{3,1} − b_{1,1} b_{2,3} b_{3,2}
       = b_{1,1}(b_{2,2} b_{3,3} − b_{2,3} b_{3,2}) + b_{1,2}(b_{2,3} b_{3,1} − b_{2,1} b_{3,3}) + b_{1,3}(b_{2,1} b_{3,2} − b_{2,2} b_{3,1}).

Thus,

det(B) = b_{1,1} B_{1,1} + b_{1,2} B_{1,2} + b_{1,3} B_{1,3},   (3.3)

where each B_{1,j} is, up to sign, the determinant of the matrix obtained from B by deleting the 1st row and jth column of B. That is,

B_{1,1} = | b_{2,2}  b_{2,3} ; b_{3,2}  b_{3,3} |,   B_{1,2} = − | b_{2,1}  b_{2,3} ; b_{3,1}  b_{3,3} |,   B_{1,3} = | b_{2,1}  b_{2,2} ; b_{3,1}  b_{3,2} |.
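The permutation-sum rule of Definition 17 can be implemented directly; the following Python sketch (added here, not part of the thesis) compares it with NumPy's determinant for a sample 3 × 3 matrix.

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    # det(A) = sum over permutations pi of sgn(pi) * a_{1,pi(1)} ... a_{n,pi(n)}
    n = A.shape[0]
    total = 0.0
    for pi in permutations(range(n)):
        # sgn(pi) = (-1)^(number of inversions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if pi[i] > pi[j])
        sgn = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for i in range(n):
            prod *= A[i, pi[i]]
        total += sgn * prod
    return total

B = np.array([[2., 1., 3.],
              [0., 4., 1.],
              [5., 2., 1.]])
assert np.isclose(det_by_permutations(B), np.linalg.det(B))
```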
Remark 19. The determinant of the n × n matrix A in (3.1) can be written in the form

det(A) = a_{i,j} A_{i,j} + (terms which do not contain a_{i,j} as a factor).

Example 18 and Remark 19 lead us to the following definition (see Jain and Gunawardena [5]).

Definition 20. The cofactor of any entry a_{p,q} of an n × n matrix A = (a_{i,j}) is defined to be

A_{p,q} = (−1)^{p+q} · det(the (n − 1) × (n − 1) matrix obtained by deleting the pth row and qth column).

In Remark 19 the scalar A_{i,j} is the cofactor of the entry a_{i,j}, and in Example 18, B_{1,1}, B_{1,2}, B_{1,3} are cofactors of b_{1,1}, b_{1,2}, b_{1,3}, respectively.

Example 21. Using the matrix
B ∈ M_3(R) in Example 18, we have

b_{2,1} B_{1,1} + b_{2,2} B_{1,2} + b_{2,3} B_{1,3}   (3.4)
= b_{2,1}(b_{2,2} b_{3,3} − b_{2,3} b_{3,2}) + b_{2,2}(b_{2,3} b_{3,1} − b_{2,1} b_{3,3}) + b_{2,3}(b_{2,1} b_{3,2} − b_{2,2} b_{3,1}).   (3.5)

Multiplying out and grouping on the right hand side of Equation 3.5 gives

b_{2,1} B_{1,1} + b_{2,2} B_{1,2} + b_{2,3} B_{1,3}
= b_{2,1} b_{2,2} b_{3,3} − b_{2,2} b_{2,1} b_{3,3} + b_{2,3} b_{2,1} b_{3,2} − b_{2,1} b_{2,3} b_{3,2} + b_{2,2} b_{2,3} b_{3,1} − b_{2,3} b_{2,2} b_{3,1}
= 0.
Remark 22. In general
n
X
j=1
a
i,j
A
k,j
= a
i,1
A
k,1
+ a
i,2
A
k,2
+ . . . + a
i,n
A
k,n
= 0
whenever
i 6= k.
Definition 23. The transpose of the matrix A = (a_{i,j}) is the matrix A^T whose element a_{i,j} appearing in row i and column j is the element a_{j,i} appearing in row j and column i of A.

Again in relation to (3.1) and Definition 20, the matrix

(A_{i,j}) =
( A_{1,1}  A_{1,2}  · · ·  A_{1,n}
  A_{2,1}  A_{2,2}  · · ·  A_{2,n}
  ·        ·        · · ·  ·
  A_{n,1}  A_{n,2}  · · ·  A_{n,n} ),

whose entries are cofactors of all entries of an n × n matrix A = (a_{i,j}), is called a cofactor matrix.

Definition 24. The transpose
(A_{j,i}) =
( A_{1,1}  A_{2,1}  · · ·  A_{n,1}
  A_{1,2}  A_{2,2}  · · ·  A_{n,2}
  ·        ·        · · ·  ·
  A_{1,n}  A_{2,n}  · · ·  A_{n,n} )

of the cofactor matrix (A_{i,j}) is called the adjoint of matrix A, and it is denoted by adj A.
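Definition 24, together with the product property stated in Lemma 25 below, can be checked numerically; the Python/NumPy sketch that follows is an illustration added here and is not part of the thesis.

```python
import numpy as np

def adjoint(A):
    # adj A: transpose of the cofactor matrix of A (Definitions 20 and 24)
    n = A.shape[0]
    cof = np.zeros_like(A)
    for p in range(n):
        for q in range(n):
            minor = np.delete(np.delete(A, p, axis=0), q, axis=1)
            cof[p, q] = (-1) ** (p + q) * np.linalg.det(minor)
    return cof.T

A = np.array([[2., 0., 1.],
              [1., 3., -2.],
              [0., 1., 4.]])

# A (adj A) = det(A) I
assert np.allclose(A @ adjoint(A), np.linalg.det(A) * np.eye(3))
```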
Lemma 25. The product of any matrix A ∈ M_n(R) and its adjoint adj A is equal to (det(A)) I, i.e., A (adj A) = (det(A)) I, where I is the identity matrix in M_n(R).

Lemma 26. Let A_0 + A_1 λ + . . . + A_m λ^m = 0 for all |λ| sufficiently large, where A_i ∈ M_n(R) for all i and where R denotes the field of real numbers. Then each A_i = 0.
Proof. Multiplying

A_0 + A_1 λ + . . . + A_m λ^m = 0   (3.6)

by 1/λ^m yields

A_0 (1/λ^m) + A_1 (1/λ^{m−1}) + . . . + A_{m−1} (1/λ) + A_m = 0.   (3.7)

Letting |λ| → ∞ in (3.7) yields A_m = 0, and thus (3.7) becomes

A_0 (1/λ^m) + A_1 (1/λ^{m−1}) + . . . + A_{m−2} (1/λ^2) + A_{m−1} (1/λ) = 0.   (3.8)

Multiplying (3.8) by λ yields

A_0 (1/λ^{m−1}) + A_1 (1/λ^{m−2}) + . . . + A_{m−2} (1/λ) + A_{m−1} = 0.   (3.9)

Letting |λ| → ∞ in (3.9) yields A_{m−1} = 0, and thus (3.9) becomes

A_0 (1/λ^{m−1}) + A_1 (1/λ^{m−2}) + . . . + A_{m−2} (1/λ) = 0.   (3.10)

Continuing to multiply by λ and letting |λ| → ∞ results eventually in

A_0 (1/λ) + A_1 = 0.   (3.11)

Finally, again letting |λ| → ∞ gives A_1 = 0, and so (3.6) implies that A_0 = 0. Hence all the A_i's are zero.

Corollary 27. Let
A_i and B_i be n × n matrices over R and suppose that

A_0 + A_1 λ + . . . + A_m λ^m = B_0 + B_1 λ + . . . + B_m λ^m

for all |λ| large enough. Then A_i = B_i for all i.

Proof. Since

A_0 − B_0 + (A_1 − B_1) λ + . . . + (A_m − B_m) λ^m = 0,

Lemma 26 implies that A_i − B_i = 0 for all i. Hence A_i = B_i for all i.
n × n
matrixA
isa solution of its hara teristi polynomialp(λ).
Theorem 28. Let
A
be ann × n
matrix overR
and letp(λ) = det(A − λI)
be the hara teristi polynomial ofA
. Thenp(A) = 0.
Proof. Consider a matrix
A ∈ M
n
(R)
su hthatA = (a
i,j
).
ThenA − λI =
a
1,1
− λ
a
1,2
· · ·
a
1,n
a
2,1
a
2,2
− λ · · ·
a
2,n
· · ·
· · ·
· · ·
· · ·
a
n,1
a
n,2
· · · a
n,n
− λ
and the determinantof
A − λI
isa polynomial inλ
of degreen
, say,CHAPTER 3. Cayley-Hamilton Theorem 18
det(A − λI) = k_n λ^n + k_{n−1} λ^{n−1} + . . . + k_1 λ + k_0   (3.12)

for some k_0, k_1, . . . , k_n ∈ R. Now the cofactor

A_{1,1} =
| a_{2,2} − λ  · · ·  a_{2,n}
  · · ·        · · ·  · · ·
  a_{n,2}      · · ·  a_{n,n} − λ |

is a polynomial in λ of degree n − 1. In fact each cofactor A_{i,j} is a polynomial in λ of at most (n − 1)th degree. Hence the entries of the adjoint matrix adj(A − λI) are polynomials in λ of at most (n − 1)th degree. Thus,

adj(A − λI) = B_{n−1} λ^{n−1} + B_{n−2} λ^{n−2} + . . . + B_1 λ + B_0,

where each B_i is an n × n matrix with entries from R.
Now

(A − λI) adj(A − λI) = (A − λI)(B_{n−1} λ^{n−1} + B_{n−2} λ^{n−2} + . . . + B_1 λ + B_0)
= −I B_{n−1} λ^n + A B_{n−1} λ^{n−1} − I B_{n−2} λ^{n−1} + A B_{n−2} λ^{n−2} + . . . + A B_1 λ − I B_0 λ + A B_0,

i.e.,

(A − λI) adj(A − λI) = C_n λ^n + C_{n−1} λ^{n−1} + C_{n−2} λ^{n−2} + . . . + C_1 λ + C_0,   (3.13)

where

C_n = −I B_{n−1},
C_{n−1} = A B_{n−1} − I B_{n−2},
C_{n−2} = A B_{n−2} − I B_{n−3},
. . .
C_2 = A B_2 − I B_1,
C_1 = A B_1 − I B_0,
C_0 = A B_0.
Multiplying the above equations from the left by A^n, A^{n−1}, A^{n−2}, . . . , A^2, A, I, respectively, yields

A^n C_n = A^n (−I B_{n−1}),
A^{n−1} C_{n−1} = A^{n−1} (A B_{n−1} − I B_{n−2}),
A^{n−2} C_{n−2} = A^{n−2} (A B_{n−2} − I B_{n−3}),
. . .
A^2 C_2 = A^2 (A B_2 − I B_1),
A C_1 = A (A B_1 − I B_0),
I C_0 = I A B_0.

Adding the above equations yields

A^n C_n + A^{n−1} C_{n−1} + A^{n−2} C_{n−2} + . . . + A^2 C_2 + A C_1 + C_0
= −A^n B_{n−1} + A^n B_{n−1} − A^{n−1} B_{n−2} + A^{n−1} B_{n−2} − A^{n−2} B_{n−3} + . . . + A^3 B_2 − A^2 B_1 + A^2 B_1 − A B_0 + A B_0
= 0.

That is,

A^n C_n + A^{n−1} C_{n−1} + A^{n−2} C_{n−2} + . . . + A^2 C_2 + A C_1 + C_0 = 0.   (3.14)
Now, it follows from Lemma 25 and (3.12) that

(A − λI) adj(A − λI) = I (det(A − λI)) = I (k_n λ^n + k_{n−1} λ^{n−1} + . . . + k_1 λ + k_0).   (3.15)

Substituting (3.13) into (3.15) gives

C_n λ^n + C_{n−1} λ^{n−1} + C_{n−2} λ^{n−2} + . . . + C_1 λ + C_0 = I (k_n λ^n + k_{n−1} λ^{n−1} + . . . + k_1 λ + k_0).

By Corollary 27 we have

C_n = I k_n, C_{n−1} = I k_{n−1}, C_{n−2} = I k_{n−2}, · · · , C_2 = I k_2, C_1 = I k_1, C_0 = I k_0.

Substituting the C_i by I k_i in (3.14), we get

k_n A^n + k_{n−1} A^{n−1} + k_{n−2} A^{n−2} + . . . + k_2 A^2 + k_1 A + k_0 I = 0,

i.e., p(A) = 0.
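Theorem 28 can be illustrated numerically for n = 3; the following Python/NumPy sketch is added here and is not part of the thesis. (NumPy's np.poly returns the coefficients of det(λI − A) rather than det(A − λI); the two differ only by the factor (−1)^n, and both polynomials annihilate A.)

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [4., 0., 2.]])
n = A.shape[0]

# Coefficients of det(lambda*I - A), leading coefficient first
coeffs = np.poly(A)

# Evaluate the characteristic polynomial at A itself
pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
assert np.allclose(pA, np.zeros((n, n)))   # Theorem 28: p(A) = 0
```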
Consider a 2 × 2 matrix

A = ( a  b ; c  d )

over R with characteristic polynomial

p(λ) = λ^2 − (a + d) λ + (ad − bc) = λ^2 − tr(A) λ + det(A).

The Cayley-Hamilton Theorem asserts that

p(A) = A^2 − tr(A) A + I det(A) = 0,   (3.16)

where I is the identity matrix in M_2(R). Since tr(I) = 2, taking the trace of (3.16), it follows that

tr(A^2) − tr^2(A) + 2 det(A) = 0.   (3.17)

Thus,

det(A) = (1/2)(tr^2(A) − tr(A^2)).   (3.18)

Substituting (3.18) into (3.16) gives

A^2 − tr(A) A + (1/2)(tr^2(A) − tr(A^2)) I = 0.   (3.19)
The identity in (3.19) is known as the Cayley-Hamilton trace identity for 2 × 2 matrices. We shall see in Chapter 5 that the left-hand side of (3.19) is not necessarily zero when we deal with matrices over a noncommutative ring.

We note that, although we have dealt with the Cayley-Hamilton Theorem for square matrices over the field of real numbers, there exist several proofs of this theorem for matrices over the field C of complex numbers and over arbitrary commutative rings. For instance, Zhang [14] provided a proof for square matrices over the field C of complex numbers, and Straubing [12] provided a proof for a square matrix over an arbitrary commutative ring.

Furthermore, to every linear transformation of a vector space of dimension n over a field F there corresponds an n × n matrix over F, and conversely, to every such matrix there corresponds a linear transformation of the vector space. Accordingly, the Cayley-Hamilton Theorem holds for linear transformations, and Hungerford [3] provides a proof for linear transformations.
Chapter 4

Commutativity and Lie nilpotency in the Matrix Algebra U*_n(R)

In this chapter we dissect the paper [8] by Meyer et al. We explore the ring U*_n(R), in particular the rings U*_3(R) and U*_4(R), in relation to the polynomial identities [[x, y], z] = 0, [x, y][u, v] = 0 and [[x, y], [u, v]] = 0. We discuss both cases, when a ring R is commutative and when R is not commutative. It is shown in Theorem 44 that U*_n(R) is Lie nilpotent of index n − 1. A ring satisfying an identity [[x, y], [u, v]] = 0 is called Lie solvable of index 2. In [8], Meyer et al give an example of a Lie solvable ring of index 2, which we discuss in Corollary 37.

4.1 The ring U*_3(R)

Definition 29. Let R be an arbitrary ring (not necessarily commutative) with unity 1, and let

U*_n(R) = { ( a  a_{1,2}  · · ·  a_{1,n}
              0  a        . . .  . . .
              . . .       . . .  a_{n−1,n}
              0  · · ·    0      a ) : a, a_{i,j} ∈ R }

be the subring of