
Subdirectly irreducible algebras in varieties

of universal algebras

Dorien Zwaneveld

August 15, 2014

Bachelorthesis

Supervisors: dr. N. Bezhanishvili, J. Ilin Msc

Korteweg-de Vries Institute for Mathematics
Institute for Logic, Language and Computation


Abstract

In this thesis we give a characterization of subdirectly irreducible algebras within the varieties of Boolean algebras and of Heyting algebras. In order to do this we will start with an introduction into universal algebra and in particular in lattice theory. We will define congruences and we will define varieties via the operations of homomorphic images, subalgebras and products. Then we will prove Birkhoff’s Theorem which states that a class of algebras is an equational class if and only if it is a variety. In the last chapter we will define subdirectly irreducible algebras. We will give characterizations of subdirectly irreducible universal algebras via congruences. For subdirectly irreducible Boolean and Heyting algebras we will work out a more convenient characterization via filters. As a result we obtain a direct characterization of subdirectly irreducible Boolean and Heyting algebras.

Title: Subdirectly irreducible algebras in varieties of universal algebras
Author: Dorien Zwaneveld, tcam.zwaneveld@gmail.com, 6114571
Supervisors: dr. N. Bezhanishvili, J. Ilin MSc
Second Grader: Prof. dr. Y. Venema
Deadline: August 15, 2014

Korteweg-de Vries Institute for Mathematics
University of Amsterdam
Science Park 904, 1098 XH Amsterdam
http://www.science.uva.nl/math

Institute for Logic, Language and Computation
University of Amsterdam


Acknowledgements

During my research for this thesis I was an intern at a high school, on my way to becoming a teacher, like most of the people who contributed to the theory discussed in this thesis. This gave me a lot of stress and therefore I would like to thank my boyfriend, Jelle Spitz, for supporting me, loving me and keeping faith even when I went crazy and did not have much faith myself. I would like to thank my teachers and friends at the ILO for supporting me and Chris Zaal for cutting me some slack during these stressful times. I would also like to thank my friends, Hanneke van der Beek and Timo de Vries for getting up early so many times during the summer and supporting me in writing this thesis. Furthermore I would like to thank my parents and sisters for their love and support.

Most of all I give many thanks to my supervisors Nick Bezhanishvili and Julia Ilin, whose doors were always open and who pointed me in the right direction when I got stuck. Without these people I would have never made it this far.


Contents

1 Introduction
2 An introduction to universal algebra
  2.1 Universal algebras
  2.2 Lattice theory
  2.3 Boolean and Heyting algebras
3 Varieties of universal algebras
  3.1 Congruences
  3.2 Varieties
    3.2.1 Homomorphisms
    3.2.2 Subalgebras
    3.2.3 Products
    3.2.4 Tarski's Theorem
  3.3 Term algebras
  3.4 Identities
  3.5 The K-free algebra
4 Subdirectly irreducible algebras
  4.1 Building blocks of varieties
  4.2 Determining subdirectly irreducible algebras
5 Conclusions
6 Popular summary


1 Introduction

During a typical bachelor's programme in mathematics one learns about groups and rings, and it gets called 'algebra'. But what is an algebra? This question was raised by many mathematicians during the 19th century, but it was not until 1933 that the first definition of a universal algebra was given. During the study of algebraic structures (or algebras) one finds that certain theorems do not only hold for groups, but also for rings and for other structures as well. (Think of the homomorphism theorem, for example.) In universal algebra one does not only look at groups, or rings, or . . . ; one looks at the whole picture. In this approach one takes a step back and looks at things from a distance. It underlines general patterns that various algebraic structures have in common. This is something logicians do more often than mathematicians. Since this thesis is written at the ILLC, taking a step back and looking at the whole picture is what I will be doing in the first part of my thesis. I will be looking at the concept of universal algebra and in particular lattice theory. Universal algebra is a relatively new part of mathematics, founded by Garrett Birkhoff (1911-1996) during the 1930s when he was teaching at Harvard University. Garrett Birkhoff did not have a doctoral degree or even a master's degree, but still he is one of the main influences in the mathematical branch called universal algebra. He was the one who gave the definition of a universal algebra, and he also contributed greatly to lattice theory, which is why we will come by his name fairly often in this thesis.

We will also take a look at some algebraic structures that are less familiar to mathematicians, but more familiar to logicians: Boolean algebras and Heyting algebras. Boolean algebras are named after George Boole (1815-1864), who was an English mathematician as well as a logician and philosopher. Boolean algebras are models for classical propositional logic. Because of this, Boolean algebras have applications in computer science that turn out to be very useful. At the end of the thesis we briefly discuss the connection between Boolean algebras and classical propositional logic, and we will see that the 2-element Boolean algebra plays a special role for classical logic.

We will also discuss Heyting algebras and show that every Boolean algebra is a Heyting algebra. These algebras were named after a Dutch mathematician and logician: Arend Heyting (1898-1980). Heyting's calculus formalizes the ideas of the Dutch mathematician and philosopher Luitzen Egbertus Jan Brouwer (1881-1966) on constructive mathematics. They both studied and taught mathematics at the University of Amsterdam. In short, Heyting algebras are to intuitionistic propositional logic what Boolean algebras are to classical propositional logic.

Boolean and Heyting algebras form varieties (as do groups and rings). In this thesis we will take a closer look at these algebraic structures with the main goal as to give a characterization of the building blocks for the varieties of Heyting and Boolean algebras.


2 An introduction to universal algebra

The goal of this first chapter is to define and develop an understanding of varieties. To do this, we will start with a universal algebra. What is a universal algebra? Do we already know some examples? We will answer these questions and then move on to lattice theory. This is an area of logic and universal algebra one is unlikely to encounter in a bachelor's programme in mathematics. It is also very important for the contents of this thesis, because it is one of the major building blocks for the rest of the theory. After discussing the basics of lattice theory, we will turn to some examples of algebras arising from logic: Boolean algebras and Heyting algebras.

2.1 Universal algebras

The reader may have encountered examples of algebras in various contexts, but in most of these contexts it is not made precise what an algebra actually is. So let us give a definition of an algebra; first, however, we define the type of an algebra.

Definition 2.1. A type of algebras is a set F of function symbols. Each of these function symbols has a nonnegative integer n assigned to it. This integer is called the arity of the symbol and we say that the function symbol is n-ary.

Definition 2.2. A (universal) algebra A of type F is an ordered pair hA, F i where A is a nonempty set and F is a family of finitary operations on A indexed by the type F such that every n-ary function symbol corresponds to an n-ary operation on A.

We call the set A the underlying set of A.

Note that the set A is closed under the finitary operations in F . So the operations in an algebra are the interpretation of the function symbols of the type.

Now let us look at some examples of universal algebras. The reader may have seen some of these before.

Example 2.3. • Groups

A group is a set G together with a binary operation ·, a unary operation ⁻¹ and a nullary operation 1. Notation: hG, ·, ⁻¹, 1i.

• Rings

A ring is a set R together with two binary operations + and ·, a unary operation − and a nullary operation 0. Notation: hR, +, ·, −, 0i.

• Sigma-algebras

Also in the mathematical field of measure theory one encounters algebras. Given a set Σ one can define different sigma-algebras: let S be a suitable collection of subsets of Σ; then hS, ∩, ∪, ′i is a sigma-algebra on Σ.

• Lattices

This is an algebra we have not seen before. A lattice has two binary operations: hL, ∧, ∨i. Lattices are important for the contents of this thesis and therefore we will look at them more closely.

• Boolean algebras

A Boolean algebra is a lattice together with some extra properties and operations hB, ∧, ∨,0, 0, 1i. We will find out more about these algebras further in this thesis.

• Heyting algebras

We will also discuss some properties and examples of Heyting algebras. A Heyting algebra is also a bounded lattice, hH, ∧, ∨, →, 0, 1i, but with the binary operation → in place of the unary complement, and it satisfies different axioms.

These algebras usually have finitely many operations; by convention we write the operation with the largest arity first and the one with the smallest arity last.

Definition 2.4. Let G and G′ be two groups. A map f : G → G′ is called a homomorphism if for all x, y ∈ G

(i) f(x · y) = f(x) · f(y);
(ii) f(x⁻¹) = f(x)⁻¹;
(iii) f(1) = 1′.

Here 1′ is the nullary operation in G′.

For two rings R and R′, a mapping f : R → R′ is called a homomorphism if for all x, y ∈ R we have

(i) f(x + y) = f(x) + f(y);
(ii) f(x · y) = f(x) · f(y);
(iii) f(−x) = −f(x);
(iv) f(0) = 0′.

As with groups, 0′ is the nullary operation in R′. Now we will generalize these notions to arbitrary universal algebras.


Definition 2.5. When we have two algebras A and B of the same type, a mapping α : A → B is called a homomorphism if

α(fA(a1, . . . , an)) = fB(α(a1), . . . , α(an)).

Here a1, . . . , an are elements of A and f is an n-ary function symbol; fA is the interpretation of f on A.

Now that we have the concept of a homomorphism it is convenient to extend this idea and specify a couple of different kinds of homomorphisms.

Definition 2.6. An injective (one-to-one) homomorphism is called an embedding or a monomorphism. When a homomorphism is surjective (onto) it is an epimorphism. A homomorphism that is both injective and surjective is called an isomorphism.

With these homomorphisms we prove some results later on.

2.2 Lattice theory

In the previous section we have seen that a lattice is an algebra. In this section we will see some different definitions of a lattice and we will prove that these definitions are indeed equivalent. We will also look at some special lattices and some of their properties. We will start by giving the definition of a lattice as an algebra.

Definition 2.7. A lattice is an algebra hL, ∧, ∨i with two binary operations which satisfy the following identities:

(commutative laws) x ∧ y ≈ y ∧ x; x ∨ y ≈ y ∨ x
(associative laws) x ∧ (y ∧ z) ≈ (x ∧ y) ∧ z; x ∨ (y ∨ z) ≈ (x ∨ y) ∨ z
(idempotent laws) x ∧ x ≈ x; x ∨ x ≈ x
(absorption laws) x ≈ x ∧ (x ∨ y); x ≈ x ∨ (x ∧ y)

The binary operation ∧ is also called the ‘meet’-operation and ∨ is called the ‘join’. Since a lattice is also an algebra it is implicit that our set L is non-empty.

The reader may have seen these operation symbols before, namely in propositional logic, where ∧ is called and or conjunction and ∨ is called or or disjunction. When we look at it this way, the corresponding properties from propositional logic form the laws above: taking propositions as the elements of a set, we note that this actually forms a lattice.

Recall that a partially ordered set is a set with a partial order on it. A partial order is an ordering which is reflexive, antisymmetric and transitive. We can use partial orders to give a second definition of a lattice.


Definition 2.8. A lattice is a partially ordered set L for which the following holds: ∀a, b ∈ L : ∃ sup{a, b}, inf{a, b} ∈ L.

Here sup{a, b} is the least upper bound of a and b. Thus sup{a, b} = p ⇔ a ≤ p, b ≤ p and for all c such that a ≤ c and b ≤ c we have p ≤ c. Here ≤ is the partial order on the set.

The infimum is the greatest lower bound i.e. inf{a, b} = p ⇔ a ≥ p, b ≥ p and for all c such that a ≥ c and b ≥ c we have p ≥ c.

So a partially ordered set L is a lattice if the supremum and infimum of every 2-element set exist in L. We have now seen two different definitions of a lattice. But are they in fact equivalent? We will show this in the following theorem.

Theorem 2.9. Definition 2.7 and Definition 2.8 are equivalent.

Proof. Suppose that L is a lattice by Definition 2.7. We have to create a partial order on L. Define:

a ≤ b ⇐⇒ a = a ∧ b.

Now all we have to do is show that the ≤-relation defined above is a partial order and that by this definition the supremum and infimum of every 2-element set exist.

By the idempotent law we find that a = a∧a and therefore we have established reflexivity. Now suppose we have a ≤ b and b ≤ a. We find a = a ∧ b = b ∧ a = b. Thus by the commutative law and the above we also have antisymmetry.

For the transitivity, suppose a ≤ b and b ≤ c. The above tells us that a = a ∧ b and b = b∧c thus a = a∧(b∧c). Now we can use the associative law to find that a = (a∧b)∧c. But a = a ∧ b so this tells us that a = a ∧ c. Therefore we have a ≤ c and we have proved the transitivity.

Now suppose that a and b are two elements from L. We want to find an element p such that p = sup{a, b}. Since by absorption a ∧ (a ∨ b) = a and b ∧ (a ∨ b) = b holds, we find a ≤ a ∨ b and b ≤ a ∨ b. Thus a ∨ b is an upper bound for both a and b. Now suppose there exists some element c such that a ≤ c and b ≤ c. Then a ∧ c = a and b ∧ c = b so a ∨ c = (a ∧ c) ∨ c = c and b ∨ c = (b ∧ c) ∨ c = c by absorption again. Now

(a ∨ b) ∨ c = (a ∨ b) ∨ (c ∨ c)   (idempotent)
            = (a ∨ c) ∨ (b ∨ c)   (associative and commutative)
            = c ∨ c
            = c

Thus we find (a ∨ b) ∧ c = (a ∨ b) ∧ ((a ∨ b) ∨ c) = a ∨ b by using absorption. Therefore a ∨ b ≤ c and sup{a, b} = a ∨ b. Now we use the fact that a lattice is an algebra and therefore L is closed under ∨ thus sup{a, b} exists in L.

We also want to find an element q such that q = inf{a, b}. Since sup{a, b} = a ∨ b it seems a rather safe guess that a ∧ b = inf{a, b}. So let us see why this is true. (a ∧ b) ∧ a = a ∧ b by the commutative, associative and idempotent laws. In the same way we have (a ∧ b) ∧ b = a ∧ b.


So we find a ∧ b ≤ a and a ∧ b ≤ b. Now assume there exists some element d such that d ≤ a and d ≤ b. Then a ∧ d = d = b ∧ d. Now

(a ∧ b) ∧ d = (a ∧ b) ∧ (d ∧ d)   (idempotent)
            = (a ∧ d) ∧ (b ∧ d)   (associative and commutative)
            = d ∧ d
            = d

Thus we find d ≤ a ∧ b and therefore inf{a, b} = a ∧ b as we claimed.

So our first definition (Definition 2.7) implies our second definition (Definition 2.8). Now let us assume that we have a partially ordered set L such that for every 2-element set {a, b} ⊆ L their infimum and supremum exist within L.

For a, b ∈ L define sup{a, b} = a ∨ b and inf{a, b} = a ∧ b.

a ∧ b = inf{a, b} = inf{b, a} = b ∧ a   (commutative laws)
a ∨ b = sup{a, b} = sup{b, a} = b ∨ a

a ∧ (b ∧ c) = inf{a, inf{b, c}} = inf{a, b, c} = inf{inf{a, b}, c} = (a ∧ b) ∧ c   (associative laws)
a ∨ (b ∨ c) = sup{a, sup{b, c}} = sup{a, b, c} = sup{sup{a, b}, c} = (a ∨ b) ∨ c

a ∧ a = inf{a, a} = a   (idempotent laws)
a ∨ a = sup{a, a} = a

a ∧ (a ∨ b) = inf{a, sup{a, b}} = a   (absorption laws)
a ∨ (a ∧ b) = sup{a, inf{a, b}} = a

So the partially ordered set satisfies the identities from Definition 2.7. It becomes an algebra by noticing that L is also closed under ∧ and ∨ since the supremum and infimum are in L for every 2-element set. Thus the definitions are indeed equivalent.
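To make the equivalence concrete, here is a small computational sanity check (our own illustration, not part of the original argument): we encode the four-element Boolean lattice as subsets of {1, 2}, recover the order from the meet exactly as in the proof, and verify that the join really is the least upper bound and that the absorption laws hold. The names elems, meet, join and leq are ours.

```python
from itertools import product

# Our own sanity check of Theorem 2.9: the four-element Boolean lattice,
# with elements encoded as subsets of {1, 2}.
elems = [frozenset(s) for s in ([], [1], [2], [1, 2])]
meet = lambda a, b: a & b   # the meet is intersection here
join = lambda a, b: a | b   # the join is union here

# The order recovered from the meet, exactly as in the proof: a <= b iff a = a ∧ b.
leq = lambda a, b: a == meet(a, b)

for a, b in product(elems, repeat=2):
    s = join(a, b)
    assert leq(a, s) and leq(b, s)      # a ∨ b is an upper bound of a and b ...
    for c in elems:
        if leq(a, c) and leq(b, c):
            assert leq(s, c)            # ... and the least one

# The absorption laws, which drive the proof of Theorem 2.9.
for a, b in product(elems, repeat=2):
    assert meet(a, join(a, b)) == a and join(a, meet(a, b)) == a

print("lattice checks passed")
```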

In Figure 2.1 one will find some examples of lattices while in Figure 2.2 one will find a picture of a Hasse diagram that is not a lattice. This is because sup{c, d} and inf{a, b} do not exist. By Definition 2.8 we deduce that Figure 2.2 is not a lattice.


Figure 2.1: some lattices

Figure 2.2: not a lattice

Definition 2.10. A lattice is called distributive if it satisfies one of the distributive laws: (distributive laws) x ∧ (y ∨ z) ≈ (x ∧ y) ∨ (x ∧ z)

x ∨ (y ∧ z) ≈ (x ∨ y) ∧ (x ∨ z)

Definition 2.11. A lattice is called modular if it satisfies the modular law: (modular law) x ≤ y → x ∨ (y ∧ z) ≈ y ∧ (x ∨ z)

In Figure 2.3 one of the lattices is modular but not distributive and the other lattice is neither modular nor distributive. These two lattices are called M5 and N5, respectively.


Figure 2.3: (a) M5 and (b) N5

To see that M5 is not distributive, let us take x, y and z as in Figure 2.3a. Now

x ∧ (y ∨ z) = x ∧ 1 = x and (x ∧ y) ∨ (x ∧ z) = 0 ∨ 0 = 0,

therefore x ∧ (y ∨ z) ≠ (x ∧ y) ∨ (x ∧ z) and we deduce that M5 is not distributive. But M5 is modular: there are ten cases to distinguish, and checking them takes a little time but is routine.

As stated before, N5 is not distributive and not modular. To see this, let us take x, y and z as in Figure 2.3b. We find x ∨ (y ∧ z) = x ∨ 0 = x ≠ y = y ∧ 1 = (x ∨ y) ∧ (x ∨ z), therefore N5 cannot be distributive. Also, x ≤ y, so for N5 to be modular we should have x ∨ (y ∧ z) = y ∧ (x ∨ z). But we have

x ∨ (y ∧ z) = x ∨ 0 = x ≠ y = y ∧ 1 = y ∧ (x ∨ z).

Thus N5 is neither a modular nor a distributive lattice.
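The case analysis above is easy to automate. The following sketch (our own illustration; the names tables, distributive and modular are not from the text) computes meets and joins of M5 and N5 from their orders and checks the distributive and modular laws by brute force.

```python
from itertools import product

ELEMS = "0xyz1"

def tables(leq):
    """Join and meet tables of a finite lattice, computed from its order."""
    join, meet = {}, {}
    for a, b in product(ELEMS, repeat=2):
        ubs = [c for c in ELEMS if leq(a, c) and leq(b, c)]
        lbs = [c for c in ELEMS if leq(c, a) and leq(c, b)]
        join[a, b] = next(c for c in ubs if all(leq(c, d) for d in ubs))
        meet[a, b] = next(c for c in lbs if all(leq(d, c) for d in lbs))
    return join, meet

def distributive(leq):
    j, m = tables(leq)
    return all(m[a, j[b, c]] == j[m[a, b], m[a, c]]
               for a, b, c in product(ELEMS, repeat=3))

def modular(leq):
    j, m = tables(leq)
    return all(not leq(a, b) or j[a, m[b, c]] == m[b, j[a, c]]
               for a, b, c in product(ELEMS, repeat=3))

# M5: three pairwise incomparable elements x, y, z between 0 and 1.
m5 = lambda a, b: a == b or a == "0" or b == "1"

# N5: 0 < x < y < 1 and 0 < z < 1, with z incomparable to x and y.
n5_pairs = ({(c, c) for c in ELEMS} | {("0", c) for c in ELEMS}
            | {(c, "1") for c in ELEMS} | {("x", "y")})
n5 = lambda a, b: (a, b) in n5_pairs

print(distributive(m5), modular(m5))   # M5 is modular but not distributive
print(distributive(n5), modular(n5))   # N5 is neither
```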

This gives rise to a question: does the implication ‘distributive ⇒ modular’ hold for every lattice?

Theorem 2.12. Any distributive lattice is modular.

Proof. Suppose L is a distributive lattice and x, y, z ∈ L with x ≤ y. Then x ≤ y means x ∧ y = x, and we have also seen that it means x ∨ y = y. Now the distributive law gives x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) = y ∧ (x ∨ z), and thus the modular law is satisfied.

The next theorem, proved by R. Dedekind, gives an easy way to identify modular lattices using N5 as in Figure 2.3. It uses the fact that an embedding of a lattice L′ into another lattice L can be seen as L containing a copy of L′ on which the lattice operations of L agree.

Theorem 2.13 (Dedekind). L is a nonmodular lattice if and only if N5 can be embedded into L.

Proof. [1, Theorem 3.5]

G. Birkhoff used this theorem to prove another theorem which gives a characterization of distributive lattices. Since we will be looking at these lattices, we will state the theorem here.

Theorem 2.14 (Birkhoff). L is a nondistributive lattice if and only if M5 or N5 can be embedded into L.

Proof. [1, Theorem 3.6]

2.3 Boolean and Heyting algebras

Now that we have introduced distributive lattices, let us look at some special distributive lattices: Boolean and Heyting algebras. We will start this section with the definitions and then we will discuss some properties of these algebras. At the end of the thesis we will look at classes of Boolean and Heyting algebras.

Definition 2.15. A Boolean algebra is an algebra hB, ∧, ∨, ′, 0, 1i with two binary, one unary, and two nullary operations which satisfy:

• hB, ∧, ∨i is a distributive lattice;
• x ∧ 0 ≈ 0; x ∨ 1 ≈ 1;
• x ∧ x′ ≈ 0; x ∨ x′ ≈ 1.

In short, a Boolean algebra is a distributive lattice with a top element 1 and a bottom element 0 and where every element has a unique complement. In Figure 2.4 some examples are shown.

Figure 2.4: Some Boolean algebras

Now that we have seen some Boolean algebras, let us define Heyting algebras.

Definition 2.16. An algebra hH, ∧, ∨, →, 0, 1i with three binary and two nullary operations is a Heyting algebra if it satisfies:

(i) hH, ∧, ∨i is a distributive lattice
(ii) x ∧ 0 ≈ 0; x ∨ 1 ≈ 1
(iii) x → x ≈ 1

(iv) (x → y) ∧ y ≈ y; x ∧ (x → y) ≈ x ∧ y

(v) x → (y ∧ z) ≈ (x → y) ∧ (x → z); (x ∨ y) → z ≈ (x → z) ∧ (y → z).

The binary operation → sends two elements x and y to the element x → y. In Figure 2.5 we determine this element for some cases.

Figure 2.5: Some Heyting algebras

When determining a → b for given elements a and b, the following theorem can be very useful. To prove it we first need an auxiliary lemma.

Lemma 2.17. In any Heyting algebra the following holds:

(i) x → (y ∧ x) ≈ x → y;
(ii) a ≤ b implies x → a ≤ x → b.

Proof. (i) We have

x → (y ∧ x) ≈ (x → y) ∧ (x → x)   (by (v) of 2.16)
            ≈ (x → y) ∧ 1         (by (iii) of 2.16)
            ≈ x → y               (since a ≤ 1 for all a, thus a ∧ 1 = a)

(ii) Suppose a ≤ b. Then a ≈ a ∧ b, so x → a ≈ x → (a ∧ b) ≈ (x → a) ∧ (x → b) by (v) of Definition 2.16, and therefore x → a ≤ x → b.


With these properties we can prove the following theorem.

Theorem 2.18. If hH, ∧, ∨, →, 0, 1i is a Heyting algebra and a, b ∈ H, then a → b is the largest element c of H such that a ∧ c ≤ b.

Proof. Let us start with the easy part. One of the properties of a Heyting algebra is x ∧ (x → y) ≈ x ∧ y. So the first part follows directly from the properties of the Heyting algebra.

Now we use the previous lemma to prove the other direction. So let us assume that (a ∧ c) ≤ b. Then by Lemma 2.17 (ii) a → (a ∧ c) ≤ (a → b). But by Lemma 2.17 (i) this means that (a → c) ≤ (a → b). Now, by the same property of the Heyting algebra we used before, we find c ≤ (a → b).

Thus a → b is indeed the largest element c in H such that a ∧ c ≤ b.

Lemma 2.19. If hH, ∧, ∨, →, 0, 1i is a Heyting algebra and a, b ∈ H, then

a ≤ b if and only if a → b = 1.

Proof. If a ≤ b then a = a ∧ b, hence a ∧ 1 = a = a ∧ b ≤ b. Since 1 is the largest element of H, we find by Theorem 2.18 that a → b = 1.

Another way to prove this is the following:

a → b = (a → b) ∧ 1
      = (a → b) ∧ (a → a)
      = a → (b ∧ a)
      = a → a
      = 1

Now suppose that a → b = 1. Then, again by Theorem 2.18, we find that a = a ∧ 1 ≤ b.

In a Heyting algebra the infinite distributive law also holds, as we see in the following lemma. We will need this result later in the thesis when we determine subdirectly irreducible Heyting algebras.

Lemma 2.20. For any Heyting algebra A, a set S ⊆ A and x ∈ A, the following holds:

x ∧ ⋁S = ⋁{x ∧ s | s ∈ S}.

Proof. The proof of the lemma can be found in [2, Proposition 2.2.7].
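Theorem 2.18 makes a → b computable by brute force in any finite Heyting algebra: simply take the largest c with a ∧ c ≤ b. As an illustration (our own sketch, with names of our choosing), here is the three-element chain 0 < x < 1, a Heyting algebra that is not a Boolean algebra, on which we also re-check Lemma 2.19 and identity (iv).

```python
from itertools import product

# Illustrative sketch (our own encoding, not from the thesis): the
# three-element chain 0 < x < 1 is a Heyting algebra that is not Boolean.
ELEMS = [0, 1, 2]            # encode 0 < x < 1 as 0 < 1 < 2
meet = min                   # in a chain, the meet is min

def imp(a, b):
    """a → b, computed as the largest c with a ∧ c ≤ b (Theorem 2.18)."""
    return max(c for c in ELEMS if meet(a, c) <= b)

for a, b in product(ELEMS, repeat=2):
    # Lemma 2.19: a ≤ b if and only if a → b = 1 (the top element, here 2).
    assert (a <= b) == (imp(a, b) == 2)
    # Identity (iv) of Definition 2.16: x ∧ (x → y) ≈ x ∧ y.
    assert meet(a, imp(a, b)) == meet(a, b)

print("Heyting checks passed on the three-element chain")
```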

As the reader may have noticed, examples of Boolean algebras are also examples of Heyting algebras. This is a fact that holds for every Boolean algebra.

Lemma 2.21. Let hB, ∧, ∨, ′, 0, 1i be a Boolean algebra. Then hB, ∧, ∨, →, 0, 1i is a Heyting algebra with a → b = a′ ∨ b for a, b ∈ B.


Proof. We have to check the five axioms (or identities as we will call them later) that hold for any Heyting algebra.

Suppose x, y, z ∈ B then we can say the following:

(i) It follows from the definition of a Boolean algebra that hB, ∧, ∨i is a distributive lattice.

(ii) x ∧ 0 ≈ 0 and x ∨ 1 ≈ 1 are also axioms of Boolean algebras, so they hold.

(iii) x → x ≈ x′ ∨ x ≈ 1. This follows from the definition of the complement ′ in a Boolean algebra.

(iv) Since hB, ∧, ∨i is a lattice, we can use the absorption law to find (x → y) ∧ y = (x′ ∨ y) ∧ y = y.

For the other part, we use the distributivity of the lattice and the definitions of the complement ′ and the nullary operation 0 to see that

x ∧ (x → y) = x ∧ (x′ ∨ y) = (x ∧ x′) ∨ (x ∧ y) = 0 ∨ (x ∧ y) = x ∧ y.

(v) Again from the distributivity we have

x → (y ∧ z) = x′ ∨ (y ∧ z) = (x′ ∨ y) ∧ (x′ ∨ z) = (x → y) ∧ (x → z).

The second half of (v) uses the following claim.

Claim: In a Boolean algebra B we have (x ∨ y)′ ≈ x′ ∧ y′.

To see this, it is enough to check that x′ ∧ y′ has the defining properties of the complement of x ∨ y. So suppose x, y ∈ B. Then we have

(x ∨ y) ∨ (x′ ∧ y′) = ((x ∨ y) ∨ x′) ∧ ((x ∨ y) ∨ y′) = ((x ∨ x′) ∨ y) ∧ (x ∨ (y ∨ y′)) = 1,

and we obtain

(x ∨ y) ∧ (x′ ∧ y′) = (x ∧ (x′ ∧ y′)) ∨ (y ∧ (x′ ∧ y′)) = ((x ∧ x′) ∧ y′) ∨ (x′ ∧ (y ∧ y′)) = 0.

So now we can prove the second half of property (v). This proof uses the claim just stated and the distributivity of the lattice again:

(x ∨ y) → z = (x ∨ y)′ ∨ z = (x′ ∧ y′) ∨ z = (x′ ∨ z) ∧ (y′ ∨ z) = (x → z) ∧ (y → z).
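As a finite sanity check of Lemma 2.21 (our own illustration, not part of the proof), one can verify on the Boolean algebra of subsets of {1, 2} that a′ ∨ b is indeed the largest c with a ∧ c ≤ b, and that the De Morgan claim holds.

```python
from itertools import product

# Our own finite sanity check: the Boolean algebra of subsets of U = {1, 2},
# with complement a′ = U \ a, and a → b defined as a′ ∨ b as in Lemma 2.21.
U = frozenset({1, 2})
ELEMS = [frozenset(s) for s in ([], [1], [2], [1, 2])]

imp = lambda a, b: (U - a) | b          # a → b := a′ ∨ b

for a, b in product(ELEMS, repeat=2):
    # Theorem 2.18: a′ ∨ b is the largest c with a ∧ c ≤ b.
    candidates = [c for c in ELEMS if (a & c) <= b]
    assert imp(a, b) in candidates and all(c <= imp(a, b) for c in candidates)
    # The De Morgan claim used in the proof: (x ∨ y)′ = x′ ∧ y′.
    assert U - (a | b) == (U - a) & (U - b)

print("Boolean-to-Heyting checks passed")
```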


3 Varieties of universal algebras

We have introduced Boolean and Heyting algebras and we have discussed some of their properties. So now let us take a step back again and return to universal algebra. In this chapter we will define the notion of varieties. The goal of this chapter is to obtain two characterizations of varieties, proved by Alfred Tarski and by Garrett Birkhoff, respectively. In order to do this we will define congruences in the first section, and we will discuss the K-free algebra with some of its properties in the last.

3.1 Congruences

In this section we will discuss the notion of congruences. We will define them and we will discuss some of their properties. We will see that all congruences on a universal algebra form a lattice, ordered by ⊆. We will also give some examples of congruence lattices of Heyting algebras. We will end this section by defining the quotient algebra.

Definition 3.1. An equivalence relation on a set V is a subset R of V × V such that for all x, y, z ∈ V :

(reflexivity) (x, x) ∈ R;

(symmetry) If (x, y) ∈ R then (y, x) ∈ R;

(transitivity) If (x, y) ∈ R and (y, z) ∈ R then (x, z) ∈ R.

Definition 3.2. Let A be an algebra. An equivalence relation θ on A is a congruence if for every n-ary fundamental operation f of A and all a1, . . . , an, b1, . . . , bn ∈ A the following holds:

hai, bii ∈ θ for 1 ≤ i ≤ n =⇒ hfA(a1, . . . , an), fA(b1, . . . , bn)i ∈ θ.

Let us denote the set of all congruences on A by Con(A).

So a congruence on an algebra A is an equivalence relation on A with the extra property that a congruence is compatible with the operations of A. Now let us define the meet and join of two congruences in order to deduce that the set of all congruences Con(A) forms a lattice.

Definition 3.3. For two congruences θ and φ on an algebra A let us define

• θ ∧ φ := θ ∩ φ
• θ ∨ φ := θ ∪ (θ ◦ φ) ∪ (θ ◦ φ ◦ θ) ∪ (θ ◦ φ ◦ θ ◦ φ) ∪ · · ·

Here ◦ is the relational product defined by

ha, bi ∈ θ ◦ φ ⇐⇒ ∃c ∈ A : ha, ci ∈ θ, hc, bi ∈ φ.

Then Con(A) = hCon(A), ∧, ∨i is the congruence lattice of A.

Every congruence lattice has a largest and a smallest element; these are the trivial congruences on the algebra. On an algebra A, the smallest congruence is ∆ = {ha, ai | a ∈ A} and the largest one is ∇ = A × A.
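Because all the algebras considered here are finite, the congruence conditions of Definition 3.2 can be checked mechanically. The following sketch (our own; is_congruence is a name we introduce) tests candidate equivalence relations on the three-element chain with ∧ = min and ∨ = max, including ∆ and ∇; the relation collapsing only the bottom and top elements fails compatibility.

```python
from itertools import product

# Our own mechanical check of Definition 3.2 on the three-element chain
# 0 < x < 1 (encoded 0 < 1 < 2) with the operations ∧ = min and ∨ = max.
A = [0, 1, 2]
OPS = [min, max]

def is_congruence(theta):
    """Check that the set of pairs theta is an equivalence relation on A
    that is compatible with every operation."""
    equivalence = (all((a, a) in theta for a in A)
                   and all((b, a) in theta for a, b in theta)
                   and all((a, c) in theta
                           for a, b in theta for b2, c in theta if b == b2))
    compatible = all((f(a1, a2), f(b1, b2)) in theta
                     for f in OPS
                     for (a1, b1), (a2, b2) in product(theta, repeat=2))
    return equivalence and compatible

delta = {(a, a) for a in A}                   # ∆, the smallest congruence
nabla = set(product(A, repeat=2))             # ∇, the largest congruence
theta1 = delta | {(1, 2), (2, 1)}             # collapse x and 1: a congruence
theta2 = delta | {(0, 2), (2, 0)}             # collapse 0 and 1 only: fails

print([is_congruence(t) for t in (delta, theta1, theta2, nabla)])
```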

In Figure 3.1 we show some of the Heyting algebras we have seen in the previous section together with their congruence lattices.

Figure 3.1: Some Heyting algebras on the left with their congruence lattices on the right.

In Figure 3.1 θ1 to θ13 are defined as in Table 3.1.

θ1 ∆ ∪ {h1, xi, hx, 1i}

θ2 ∆ ∪ {h1, xi, hx, 1i, hy, 0i, h0, yi}

θ3 ∆ ∪ {h1, yi, hy, 1i, hx, 0i, h0, xi}

θ4 ∆ ∪ {ha, bi | a, b ≥ z}

θ5 ∆ ∪ {h1, xi, hx, 1i, hy, zi, hz, yi}

θ6 ∆ ∪ {h1, yi, hy, 1i, hx, zi, hz, xi}

θ7 ∆ ∪ {ha, bi | a, b ≥ y} ∪ {hz, 0i, h0, zi}

θ8 ∆ ∪ {ha, bi | a, b ≥ z} ∪ {hy, 0i, h0, yi}

θ9 ∆ ∪ {h1, xi, hx, 1i}

θ10 ∆ ∪ {ha, bi | a, b ≥ y} ∪ {hc, di | c, d ≤ x}

θ11 ∆ ∪ {ha, bi | a, b ≥ z} ∪ {hy, 0i, h0, yi}

θ12 ∆ ∪ {h1, wi, hw, 1i, hx, zi, hz, xi}

θ13 ∆ ∪ {h1, xi, hx, 1i, hw, zi, hz, wi}

Table 3.1

We know that an equivalence relation on a group or ring gives rise to a quotient. Since a congruence is an equivalence relation this also gives rise to a quotient. We will generalize this concept and define the quotient algebra via a congruence:


Definition 3.4. The quotient algebra of A by θ (notation: A/θ) is the algebra with A/θ, the set of all congruence classes of A by θ, as its underlying set. For any element a ∈ A, we denote the congruence class of a by θ by a/θ. The operations of the quotient algebra satisfy:

fA/θ(a1/θ, . . . , an/θ) = fA(a1, . . . , an)/θ.

Here a1, . . . , an ∈ A and f is an n-ary operation of A.

Let us end this section with a useful property of congruences. The proof of this lemma is straightforward; see, e.g., the book A Course in Universal Algebra by Stanley Burris and H.P. Sankappanavar [1, Theorem 5.9].

Lemma 3.5. Let A be an algebra and suppose θ, φ ∈ Con(A). Then the following are equivalent:

(i) θ ◦ φ = φ ◦ θ;
(ii) θ ∨ φ = θ ◦ φ;
(iii) θ ◦ φ ⊆ φ ◦ θ.

3.2 Varieties

In this section we will take another look at homomorphisms, discuss some of their properties and define homomorphic images. Then we will define the notion of a subalgebra, and after that we will define products of algebras. We will need these three notions in order to give a definition of a variety. We will end this section with Tarski's Theorem, which states that whenever we have a class of algebras of the same type, we can first take all products, then all subalgebras and then all homomorphic images to get the variety generated by this class.

3.2.1 Homomorphisms

When we think of homomorphisms, the first theorem that comes to mind is of course the homomorphism theorem. So in this subsection we will generalize this theorem for all universal algebras. Let us start with a recap of this theorem in group theory.

Theorem 3.6 (Homomorphism theorem (for groups)). Let G and G′ be groups, and let f : G → G′ be a surjective homomorphism. Then there exists an isomorphism h : G/ker(f) → G′ defined by f = h ◦ g. Here g is the natural map from G to G/ker(f).

In order to generalize this to all universal algebras we have to generalize the notions of the kernel and of the natural map. Let us start with the latter, together with one of its properties.


Definition 3.7. Let A be an algebra and let θ be a congruence on A. The natural map νθ : A → A/θ is defined by νθ(a) = a/θ; it sends an element a to its congruence class by θ.

Lemma 3.8. The natural map from an algebra to the quotient of the algebra by a congruence is a surjective homomorphism.

Proof. Recall that a mapping α : A → B is a homomorphism if α(fA(a1, . . . , an)) = fB(α(a1), . . . , α(an)). Given a congruence θ on A it is easy to see that the natural map νθ as defined above is onto. So all we have to show is that νθ is a homomorphism. To see this, suppose f is an n-ary operation in the type of A and a1, . . . , an ∈ A. Then we find

νθ(fA(a1, . . . , an)) = fA(a1, . . . , an)/θ = fA/θ(a1/θ, . . . , an/θ) = fA/θ(νθ(a1), . . . , νθ(an)).

Thus νθ is a surjective homomorphism.

Now that we have defined the natural map, let us define the kernel of a homomorphism.

Definition 3.9. Let α : A → B be a homomorphism. Then the kernel of α is defined by

ker(α) = {ha, bi ∈ A2 | α(a) = α(b)}

Lemma 3.10. Let α : A → B be a homomorphism. Then ker(α) is a congruence on A.

Proof. Suppose A and B are algebras of the same type and α : A → B is a homomorphism. To see that ker(α) is a congruence we have to check the following properties: reflexivity, symmetry, transitivity and compatibility with the operations of A. In order to check this, suppose a, b, c ∈ A.

(reflexivity) Trivially we have α(a) = α(a), thus ha, ai ∈ ker(α).

(symmetry) If ha, bi ∈ ker(α) then α(a) = α(b), and of course α(b) = α(a), so we find hb, ai ∈ ker(α).

(transitivity) Suppose ha, bi ∈ ker(α) and hb, ci ∈ ker(α). Then α(a) = α(b) and α(b) = α(c), thus α(a) = α(c) and therefore ha, ci ∈ ker(α).

(compatibility) Now suppose f is an n-ary operation in the type of A and B, and suppose hai, bii ∈ ker(α) for 1 ≤ i ≤ n. Then α(ai) = α(bi) for all i, and therefore

α(fA(a1, . . . , an)) = fB(α(a1), . . . , α(an)) = fB(α(b1), . . . , α(bn)) = α(fA(b1, . . . , bn)).

Therefore hfA(a1, . . . , an), fA(b1, . . . , bn)i ∈ ker(α).


Now we have obtained that ker(α) is indeed a congruence on A.
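Lemma 3.10 can also be tested mechanically on a finite example (the homomorphism below is my own choice): α : (Z6, + mod 6) → (Z3, + mod 3), α(a) = a mod 3.

```python
# ker(alpha) for alpha(a) = a mod 3 on Z6, checked to be a congruence.
A = list(range(6))

def f_A(a, b):
    return (a + b) % 6          # the operation of A

def alpha(a):
    return a % 3                # a homomorphism into (Z3, + mod 3)

ker = {(a, b) for a in A for b in A if alpha(a) == alpha(b)}

assert all((a, a) in ker for a in A)                          # reflexive
assert all((b, a) in ker for (a, b) in ker)                   # symmetric
assert all((a, c) in ker                                      # transitive
           for (a, b1) in ker for (b2, c) in ker if b1 == b2)
assert all((f_A(a1, a2), f_A(b1, b2)) in ker                  # compatible with f_A
           for (a1, b1) in ker for (a2, b2) in ker)
```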

Now that we have generalized the notions of natural map and kernel we can state the homomorphism theorem for universal algebras.

Theorem 3.11 (Homomorphism Theorem). Suppose α : A → B is an onto homomor-phism. Then there is an isomorphism β : A/ker(α) → B defined by α = β ◦ ν. Here ν is the natural map from A to A/ker(α).

Proof. A proof of this theorem can be found in [1, Theorem 6.12].

So when we take the natural homomorphism from an algebra A to A/ker(α) for any surjective homomorphism α from A to another algebra B, then we obtain an isomorphism between A/ker(α) and B. This is exactly what the homomorphism theorem tells us. Now let us define a homomorphic image of an algebra.
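For a finite case the content of the Homomorphism Theorem can be made tangible; the sketch below (example and names are mine) checks that for the onto homomorphism α : (Z6, +) → (Z3, +), α(a) = a mod 3, the induced map β on A/ker(α) is a well-defined bijection with α = β ◦ ν:

```python
# Congruence classes of ker(alpha) on Z6 are {0,3}, {1,4}, {2,5};
# beta sends each class to the common alpha-value of its members.
A, B = list(range(6)), list(range(3))

def alpha(a):
    return a % 3                                  # onto homomorphism Z6 -> Z3

classes = {a: frozenset(b for b in A if alpha(b) == alpha(a)) for a in A}

def nu(a):
    return classes[a]                             # natural map A -> A/ker(alpha)

beta = {nu(a): alpha(a) for a in A}               # well defined: alpha is constant on classes

assert len(beta) == 3                             # three classes in the quotient
assert set(beta.values()) == set(B)               # beta is onto
assert all(beta[nu(a)] == alpha(a) for a in A)    # alpha = beta o nu
assert len(set(beta.keys())) == len(set(beta.values()))   # beta is injective
```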

Definition 3.12. Let α : A → B be a homomorphism. Then α(A) is the homomorphic image of A by α.

Note that when α is in addition onto, then α(A) = B and we have that B is a homomorphic image of A. We denote the set of all homomorphic images of an algebra A by H(A). Analogously, we denote the set of all isomorphic images of an algebra A by I(A).

3.2.2 Subalgebras

The second ingredient for creating a variety is the notion of a subalgebra. In this section we give a definition of a subalgebra and we give an example of how this notion can be used.

Definition 3.13. Let A and B be two algebras of type F. Then B is a subalgebra of A if B ⊆ A and for every n-ary f ∈ F and b1, . . . , bn ∈ B we have fB(b1, . . . , bn) = fA(b1, . . . , bn), i.e. every operation of B is the restriction of the corresponding operation of A.

We can relate the definition of homomorphic image and subalgebra as follows.

Lemma 3.14. Suppose α : A → B is a one-to-one homomorphism. Then the homomor-phic image of A is a subalgebra of B.

Proof. By definition of α it follows that α(A) ⊆ B. For the second property, suppose f is an n-ary operation of the type of A and B and suppose a1, . . . , an ∈ A. Then α(a1), . . . , α(an) ∈ α(A) and thus α(a1), . . . , α(an) ∈ B. Now we find

fα(A)(α(a1), . . . , α(an)) = fB(α(a1), . . . , α(an)),

so every operation of α(A) is the restriction of the corresponding operation of B. Hence α(A) is a subalgebra of B.


3.2.3 Products

The last notion we need in order to start defining varieties is products. We will start this subsection by defining a (direct) product of two algebras together with the projection map. Then we will generalize this to the product of more (possibly infinitely many) algebras. At the end of this subsection we will show one of the properties of a product we are going to need later in this thesis.

Definition 3.15. Let A1 and A2 be two algebras of the same type. Define the (direct) product A1 × A2 to be the algebra whose underlying set is A1 × A2, and such that for every n-ary operation f and ai ∈ A1, a′i ∈ A2 we have

fA1×A2(⟨a1, a′1⟩, . . . , ⟨an, a′n⟩) = ⟨fA1(a1, . . . , an), fA2(a′1, . . . , a′n)⟩.

The product defined above gives rise to a map that projects an element to its ith coordinate. We call this map the projection of the algebra.

Definition 3.16. For i ∈ {1, 2} the mapping

πi : A1 × A2 → Ai, ⟨a1, a2⟩ ↦ ai

is called the projection map on the ith coordinate of the product.

Lemma 3.17. The projection map is an onto homomorphism.

Proof. The fact that πi is onto follows directly from the definition. To see that πi is a homomorphism, let a1, . . . , an ∈ A1, a′1, . . . , a′n ∈ A2 and suppose f is an n-ary operation. Then for i = 1 we find

π1(fA1×A2(⟨a1, a′1⟩, . . . , ⟨an, a′n⟩)) = π1(⟨fA1(a1, . . . , an), fA2(a′1, . . . , a′n)⟩)
= fA1(a1, . . . , an)
= fA1(π1(⟨a1, a′1⟩), . . . , π1(⟨an, a′n⟩)).

The case i = 2 is analogous, and we obtain that the projection map is indeed a homomorphism.
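The computation in this proof can be replayed on a small finite product (the two factor algebras below are my own example):

```python
# Product of (Z2, + mod 2) and (Z3, + mod 3) with the coordinate-wise operation;
# the first projection is checked to be an onto homomorphism.
A1, A2 = list(range(2)), list(range(3))
prod = [(a, b) for a in A1 for b in A2]

def f_prod(p, q):
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 3)

def pi1(p):
    return p[0]

assert {pi1(p) for p in prod} == set(A1)                 # pi1 is onto
assert all(pi1(f_prod(p, q)) == (pi1(p) + pi1(q)) % 2    # pi1 is a homomorphism
           for p in prod for q in prod)
```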

Now let us generalize the notion of product to a product of any family of algebras.

Definition 3.18. Let {Ai}i∈I be an indexed family of algebras of the same type. The (direct) product A = ∏i∈I Ai is the algebra with underlying set ∏i∈I Ai and with the operations defined coordinate-wise:

fA(a1, . . . , an)(i) = fAi(a1(i), . . . , an(i)),

where i ∈ I, f is n-ary and a1, . . . , an ∈ ∏i∈I Ai. We also have projection maps as defined before: πj : ∏i∈I Ai → Aj.


Given an indexed family of algebras of the same type K = {Ai}i∈I, let us define the set of all products of algebras in K by P(K). We will need the next property when we give a characterization of subdirectly irreducible algebras in the last chapter.

Lemma 3.19. For an indexed family of maps αi : A → Ai, i ∈ I, the following are equivalent:

(i) The map α : A → ∏i∈I Ai defined coordinate-wise by α(a)(i) = αi(a) is injective.

(ii) ∩i∈I ker(αi) = ∆.

Proof. Suppose a1, a2 ∈ A with a1 ≠ a2. Then

α(a1) ≠ α(a2) ⇔ ∃i ∈ I : α(a1)(i) ≠ α(a2)(i)
⇔ ∃i ∈ I : αi(a1) ≠ αi(a2)
⇔ ∃i ∈ I : ⟨a1, a2⟩ ∉ ker(αi)
⇔ ⟨a1, a2⟩ ∉ ∩i∈I ker(αi).

Thus α is injective if and only if no pair of distinct elements lies in ∩i∈I ker(αi), that is, if and only if ∩i∈I ker(αi) = ∆, which is what we had to show.
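Lemma 3.19 can be illustrated on Z6 with the two reduction maps mod 2 and mod 3 (my own example; the injectivity of the joint map is the Chinese remainder theorem in disguise):

```python
# alpha1(a) = a mod 2 and alpha2(a) = a mod 3 on A = Z6.
A = list(range(6))
ker1 = {(a, b) for a in A for b in A if a % 2 == b % 2}
ker2 = {(a, b) for a in A for b in A if a % 3 == b % 3}
delta = {(a, a) for a in A}

# The intersection of the kernels is the diagonal ...
assert ker1 & ker2 == delta
# ... so the joint map a -> (a mod 2, a mod 3) is injective.
joint = {a: (a % 2, a % 3) for a in A}
assert len(set(joint.values())) == len(A)
```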

3.2.4 Tarski’s Theorem

Now that we have established the definitions of homomorphic images, subalgebras and products, we can define the notion of varieties. In this subsection we will discuss different classes of algebras and some of their properties that will lead up to Tarski’s Theorem. Definition 3.20. A nonempty class K of algebras of the same type is called a variety if it is closed under homomorphic images, subalgebras and direct products.

In the previous subsections we have already encountered the following notions in the special case where the class consists of only one algebra. We will now generalize them to arbitrary classes of algebras.

Definition 3.21. Let K be a class of algebras of the same type. Then
A ∈ H(K) if and only if A is a homomorphic image of some member of K;
A ∈ I(K) if and only if A is isomorphic to some member of K;
A ∈ S(K) if and only if A is a subalgebra of some member of K;
A ∈ P(K) if and only if A is a direct product of a nonempty family of algebras in K.

From one class we can obtain bigger classes by applying H, I, S or P, and we can expand a class further by applying one operation after another. In the following lemma we will find that some of these compositions create bigger classes than others.


Lemma 3.22. Let K be a class of algebras of the same type. Then:

• SH(K) ⊆ HS(K)
• PH(K) ⊆ HP(K)
• PS(K) ⊆ SP(K)
• For the operations H, S and IP the idempotent law holds.

Proof. Suppose K is a class of algebras of the same type.

• SH(K) ⊆ HS(K)

Let A be an algebra in SH(K). Then there exists an algebra B ∈ H(K) such that A is a subalgebra of B. Since B ∈ H(K) there also exists an algebra C ∈ K such that B is a homomorphic image of C, i.e. there exists an onto homomorphism α : C → B. We have to prove that A ∈ HS(K), i.e. that there exist an algebra D ∈ S(K), an onto homomorphism β : D → A, and an algebra E ∈ K such that D is a subalgebra of E.

Claim: α−1(A) is a subalgebra of C.

It is easy to see that α−1(A) = {c ∈ C | α(c) ∈ A} ⊆ C. For the second condition, let f be an n-ary operation and c1, . . . , cn ∈ α−1(A). Then

α(fC(c1, . . . , cn)) = fB(α(c1), . . . , α(cn)) (α is a homomorphism)
= fA(α(c1), . . . , α(cn)) ∈ A, (A is a subalgebra of B and α(ci) ∈ A)

so fC(c1, . . . , cn) ∈ α−1(A). This proves the claim. Now since α(α−1(A)) = A and the restriction of α to α−1(A) is an onto homomorphism, we have established that A ∈ HS(K).

• PH(K) ⊆ HP(K)

Suppose A is an algebra in PH(K). Then there exist Ai ∈ H(K) such that A = ∏ Ai. Thus there are Bi ∈ K together with onto homomorphisms αi : Bi → Ai. Now let B = ∏ Bi and define α : B → A coordinate-wise by

α(b)(i) = αi(b(i)).

If α is an onto homomorphism, then A ∈ HP(K). Since every αi is a surjective homomorphism, α is also surjective, and for b1, . . . , bn ∈ B we have

α(fB(b1, . . . , bn))(i) = αi(fBi(b1(i), . . . , bn(i)))
= fAi(αi(b1(i)), . . . , αi(bn(i)))
= fAi(α(b1)(i), . . . , α(bn)(i))
= fA(α(b1), . . . , α(bn))(i).

Thus α is a surjective homomorphism and A ∈ HP(K).


• PS(K) ⊆ SP(K)

Given an algebra A ∈ PS(K), there exist Ai ∈ S(K) such that A = ∏ Ai. This yields that there exist Bi ∈ K such that Ai is a subalgebra of Bi for every i. Now let us define B to be ∏ Bi; then A ∈ SP(K) if A is a subalgebra of B. Since Ai is a subalgebra of Bi for every i, we have Ai ⊆ Bi for every i, which implies A = ∏ Ai ⊆ ∏ Bi = B, and for f an n-ary operation and a1, . . . , an ∈ A we have

fA(a1, . . . , an)(i) = fAi(a1(i), . . . , an(i))
= fBi(a1(i), . . . , an(i))
= fB(a1, . . . , an)(i).

Therefore A ∈ SP(K).

• H, S and IP are idempotent

In order to see this, let A be an algebra.

First suppose A ∈ HH(K) then there exists an algebra B ∈ H(K) and an onto homomorphism α : B → A i.e. there exists an algebra C ∈ K and a homomorphism β : C → B that is onto. Now let us show that α ◦ β is also an onto homomorphism, because then A ∈ H(K) and we find that for the operation H the idempotent law holds.

Since α and β are both onto we have that α ◦ β is onto as well. Now suppose c1, . . . , cn ∈ C. Then from the fact that both α and β are homomorphisms it follows that

(α ◦ β)(fC(c1, . . . , cn)) = α(β(fC(c1, . . . , cn)))
= α(fB(β(c1), . . . , β(cn)))
= fA(α(β(c1)), . . . , α(β(cn)))
= fA((α ◦ β)(c1), . . . , (α ◦ β)(cn)).

Thus α ◦ β is a surjective homomorphism and A ∈ H(K).

Now suppose A ∈ SS(K). Then there exists an algebra B ∈ S(K) such that A is a subalgebra of B, and thus an algebra C ∈ K such that B is a subalgebra of C. If we can show that A is also a subalgebra of C, then we have established that S is idempotent as well. Since A is a subalgebra of B and B is a subalgebra of C, we have A ⊆ B and B ⊆ C, and thus by transitivity of ⊆ also A ⊆ C. Now suppose a1, . . . , an ∈ A. Then for f an n-ary operation we have fA(a1, . . . , an) = fB(a1, . . . , an) = fC(a1, . . . , an), hence A is a subalgebra of C.

To see that IP is also idempotent, suppose that A ∈ IPIP(K). Then there exists an algebra B ∈ PIP(K) that is isomorphic to A, so there exist Bi ∈ IP(K) such that B = ∏ Bi. Therefore there exist Ci ∈ P(K) such that Ci ≅ Bi for every i, and in turn Cij ∈ K such that Ci = ∏j Cij for every i. So we have A ≅ ∏i ∏j Cij. Since a product of products is again a product, we now have A ∈ IP(K).

All algebras of the same type form a variety, and since the intersection of any class of varieties is again a variety, for every class K there exists a smallest variety containing K. Let us write V(K) for this smallest variety. We will end this section with a theorem first proved by Alfred Tarski in 1946 [3, pp. 163-165].

Theorem 3.23 (Tarski). Let K be a class of algebras of the same type. Then V(K) = HSP(K).

Proof. In order to prove this theorem we have to show two inclusions: V(K) ⊆ HSP(K) and HSP(K) ⊆ V(K), where K is a class of algebras of the same type. Since V(K) is a variety, it is closed under H, S and P, and thus HSP(K) ⊆ V(K) follows directly from the definition.

To see that HSP (K) is a variety (and thus V (K) ⊆ HSP (K)) we have to show that HSP (K) is closed under homomorphic images, subalgebras and direct products. By Lemma 3.22 we know that the operation H is idempotent thus HHSP (K) = HSP (K) and we find that HSP (K) is closed under homomorphic images.

From Lemma 3.22 we also know that the operation S is idempotent and SH(K) ⊆ HS(K). Therefore we have SHSP (K) ⊆ HSSP (K) = HSP (K). Thus HSP (K) is also closed under subalgebras.

In order to see that HSP (K) is closed under products as well we use Lemma 3.22 again and find

PHSP(K) ⊆ HPSP(K) (PH(K) ⊆ HP(K))
⊆ HSPP(K) (PS(K) ⊆ SP(K))
⊆ HSIPIP(K) (the identity map is an isomorphism)
= HSIP(K) (IP is idempotent)
⊆ HSHP(K) (any isomorphism is a homomorphism)
⊆ HHSP(K) (SH(K) ⊆ HS(K))
= HSP(K).

Hence HSP (K) is closed under homomorphic images, subalgebras and products and thus HSP (K) is a variety. Now since HSP (K) ⊆ V (K) and V (K) is the smallest variety that contains K we have established the equality V (K) = HSP (K).

3.3 Term algebras

Now that we have defined varieties, we will use the rest of this chapter to characterize these classes as Garrett Birkhoff did in his paper in 1935 [4, pp 433-454]. He stated that


a class of algebras K is a variety if and only if K is an equational class. This theorem is known as one of Birkhoff's most famous theorems. In this thesis we call it Birkhoff's Theorem.

In order to fully understand what the theorem states we have to give a definition of an equational class. For this we first have to define terms in order to be able to define term algebras. We will use this to construct identities and then we can give a definition of an equational class.

For the full proof of Birkhoff’s Theorem we also need to understand the notion of the free algebra and some of its properties. For now, let us start by defining terms.

Definition 3.24. Given a type of algebras F and a set X of variables, the set T(X) of terms of type F over X is the smallest set such that

(i) X ∪ {f | f is a nullary operation in F} ⊆ T(X);
(ii) T(X) is closed under all the operations of F.

For p in T(X) we say that p is n-ary if at most n variables occur in p, and we write p(x1, . . . , xn).

The notion of terms is syntactic, just like the operations in a type: only the interpretation within an algebra has a meaning. So let us look at the meaning of a term in an algebra.

Definition 3.25. Given an algebra A and a term p(x1, . . . , xn) ∈ T(X) of the same type, let us define the map pA : An → A inductively:

(i) If p(x1, . . . , xn) = xi for 1 ≤ i ≤ n, then pA(a1, . . . , an) = ai, so pA is the ith projection map.

(ii) If p = f(p1(x1, . . . , xn), . . . , pk(x1, . . . , xn)) for f a k-ary operation, then

pA(a1, . . . , an) = fA(pA1(a1, . . . , an), . . . , pAk(a1, . . . , an)).

pA is the interpretation of the term p in the algebra A.
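The inductive definition of pA is exactly a recursive evaluator. The sketch below is my own encoding (not from the thesis): a term is either a variable index i, meaning xi, or a tuple (operation name, subterms), and interpretation evaluates it in a given algebra:

```python
# Terms: an int i stands for the variable x_i; a tuple ("op", t1, ..., tk)
# stands for f(t1, ..., tk). Evaluation follows Definition 3.25.
def interpret(term, ops, args):
    if isinstance(term, int):              # case (i): p = x_i, the i-th projection
        return args[term]
    op, *subterms = term                   # case (ii): p = f(p_1, ..., p_k)
    return ops[op](*(interpret(t, ops, args) for t in subterms))

# Interpret p(x0, x1) = x0 * (x0 + x1) in the algebra (Z5, + mod 5, * mod 5).
ops = {"add": lambda a, b: (a + b) % 5, "mul": lambda a, b: (a * b) % 5}
p = ("mul", 0, ("add", 0, 1))
assert interpret(p, ops, (2, 4)) == 2      # 2 * ((2 + 4) mod 5) = 2 * 1 = 2
```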

Now let us state some properties of terms. The proof can be found in [1, Theorem 10.3].

Lemma 3.26. For two algebras A and B of the same type and p ∈ T(X) n-ary, the following holds:

(i) Let θ be a congruence on A and suppose ⟨ai, bi⟩ ∈ θ for 1 ≤ i ≤ n. Then ⟨pA(a1, . . . , an), pA(b1, . . . , bn)⟩ ∈ θ.

(ii) If α : A → B is a homomorphism, then α(pA(a1, . . . , an)) = pB(α(a1), . . . , α(an)).

The set of terms also gives rise to an algebra: the term algebra.

Definition 3.27. Given a set X of variables and a type F of algebras, if T(X) ≠ ∅ then T(X) is the term algebra of type F over X. The underlying set of this algebra is T(X) and the operations satisfy

fT(X) : T(X)n → T(X), (p1, . . . , pn) ↦ f(p1, . . . , pn),

where f ∈ F is n-ary and p1, . . . , pn ∈ T(X).

Note that for T(X) to exist, either X or the set of nullary operations within the type has to be non-empty.

Note that T(X) is generated by the set X of variables, i.e. T(X) is the smallest algebra whose underlying set contains X. The term algebra is one of the algebras for which the universal mapping property holds. Let us first define this property and finish this section with a proof of this statement.

Definition 3.28. Let K be a class of algebras of type F and let U(X) be an algebra of type F generated by a set X. If for every A ∈ K and for every map α : X → A there is a homomorphism

β : U(X) → A with β(x) = α(x) for all x ∈ X,

then we say that β extends α and that U(X) has the universal mapping property for K over X. Moreover, X is called the set of free generators of U(X), and U(X) is said to be freely generated by X.

Note that for the universal mapping property to hold the map α need not be a homomorphism; α can be any map.

Theorem 3.29. For any type of algebras F and any set of variables X, the term algebra T(X) has the universal mapping property for the class of all algebras of type F over X.

Proof. Suppose A is an algebra of type F and α : X → A is a map. Now let us define another map β : T(X) → A inductively:

β(x) = α(x) for all x ∈ X;
β(fT(X)(p1, . . . , pk)) = fA(β(p1), . . . , β(pk)) for f ∈ F k-ary and p1, . . . , pk ∈ T(X).

By construction β is a homomorphism that extends α, which is what we had to show.


3.4 Identities

In some literature, identities are called ‘equations’ or ‘axioms’, in this thesis however, we will stick to the term ‘identities’. In this section we will define the notion of identities and discuss some properties. We will end this section with a proof of one side of Birkhoff’s Theorem.

Definition 3.30. Given a type of algebras F and a set X of variables, let p, q ∈ T(X). An identity of type F over X is an expression of the form

p ≈ q.

Let us define Id(X) to be the set of all identities of the type over X. Now we will define when an identity holds in an algebra.

Definition 3.31. An identity p ≈ q holds in an algebra A if the interpretations of p and q are the same in A. Thus

A |= p(x1, . . . , xn) ≈ q(x1, . . . , xn) ⇔ pA(a1, . . . , an) = qA(a1, . . . , an)

for all a1, . . . , an ∈ A.

When an identity holds in an algebra we write A |= p ≈ q

and we say that p ≈ q is true in A. A class K of algebras satisfies p ≈ q if each member of K satisfies p ≈ q, notation: K |= p ≈ q.

For a set of identities Σ we say that K satisfies Σ if K |= p ≈ q for every identity p ≈ q ∈ Σ and we write K |= Σ.

Given a class of algebras K and a set X of variables, define IdK(X) = {p ≈ q ∈ Id(X) | K |= p ≈ q}.

We can characterize this as follows:

Lemma 3.32. Given a class of algebras K of type F , a set of variables X and an identity of F over X, p ≈ q, then

K |= p ≈ q

if and only if for every A ∈ K and for every homomorphism α : T(X) → A we have α(p) = α(q).

Proof. Suppose A ∈ K, p and q are n-ary terms and α : T(X) → A is a homomorphism. By definition of |=, K |= p ≈ q implies A |= p ≈ q, i.e. for all a1, . . . , an ∈ A we have pA(a1, . . . , an) = qA(a1, . . . , an). Since α is a homomorphism, by (ii) of Lemma 3.26 we have

α(pT(X)(x1, . . . , xn)) = pA(α(x1), . . . , α(xn)) = qA(α(x1), . . . , α(xn)) = α(qT(X)(x1, . . . , xn)).

Hence α(p) = α(q).

Now for the other direction, again suppose A ∈ K and p and q are n-ary terms, and assume a1, . . . , an ∈ A. By Theorem 3.29 there exists a homomorphism α : T(X) → A such that α(xi) = ai for 1 ≤ i ≤ n. Then we have

pA(a1, . . . , an) = pA(α(x1), . . . , α(xn)) = α(pT(X)(x1, . . . , xn)) = α(p) = α(q) = α(qT(X)(x1, . . . , xn)) = qA(α(x1), . . . , α(xn)) = qA(a1, . . . , an).

Hence K |= p ≈ q.
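For finite algebras the relation A |= p ≈ q can be checked by brute force over all tuples; the sketch below (my own encoding and example) tests the distributivity identity x ∧ (y ∨ z) ≈ (x ∧ y) ∨ (x ∧ z) on a three-element chain:

```python
from itertools import product

# A |= p ~ q for a finite algebra: the two term functions agree everywhere.
def models(carrier, pA, qA, arity):
    return all(pA(*args) == qA(*args) for args in product(carrier, repeat=arity))

# The chain 0 < 1 < 2 with meet = min and join = max is a distributive lattice.
chain = [0, 1, 2]
p = lambda x, y, z: min(x, max(y, z))
q = lambda x, y, z: max(min(x, y), min(x, z))
assert models(chain, p, q, 3)
```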

Now that we have defined identities (or equations as some might say), we can define equational classes:

Definition 3.33. Given a type of algebras F and a set Σ of identities of type F, define M(Σ) = {A | A |= Σ}.

A class K of algebras is an equational class if there is a set of identities Σ such that K = M (Σ). In this case we say that K is defined or axiomatized by Σ.

This is all we need to understand Birkhoff’s Theorem which states that for every class of algebras K we have that K is an equational class if and only if K is a variety. We will prove one direction right now. In order to be able to prove the other direction we will need some more theory. We will prove it at the end of this chapter.

Theorem 3.34 (Birkhoff part 1). Every equational class is a variety.

Proof. Let K be an equational class, so there exists a set of identities Σ such that K = M(Σ). A variety is a class of algebras closed under the operations H, S and P, so we have to show that M(Σ) is closed under these operations.


H Suppose A ∈ HM(Σ), i.e. there exist a B ∈ M(Σ) and a homomorphism α : B → A which is onto. We want to show that A satisfies Σ. So let p ≈ q ∈ Σ and a1, . . . , an ∈ A. Since α is onto, there exist b1, . . . , bn ∈ B such that α(bi) = ai for 1 ≤ i ≤ n. Because B ∈ M(Σ), pB(b1, . . . , bn) = qB(b1, . . . , bn) holds. So by (ii) of Lemma 3.26 we have

pA(a1, . . . , an) = pA(α(b1), . . . , α(bn)) = α(pB(b1, . . . , bn)) = α(qB(b1, . . . , bn)) = qA(α(b1), . . . , α(bn)) = qA(a1, . . . , an).

Thus A |= Σ and therefore A ∈ M(Σ).

S Suppose A ∈ SM(Σ), i.e. there exists a B ∈ M(Σ) such that A is a subalgebra of B. Then the restriction of the identity map to A is an embedding ι : A → B. The identity map is a homomorphism, so again by (ii) of Lemma 3.26 we have

pA(a1, . . . , an) = ι(pA(a1, . . . , an)) = pB(a1, . . . , an) = qB(a1, . . . , an) = ι(qA(a1, . . . , an)) = qA(a1, . . . , an).

Now A |= Σ and thus A ∈ M(Σ).

P Suppose A ∈ PM(Σ), i.e. there exist Ai ∈ M(Σ) such that A = ∏i∈I Ai for some set I. Since for every i ∈ I the projection map πi : A → Ai is a surjective homomorphism, we have

pA(a1, . . . , an)(i) = πi(pA(a1, . . . , an)) = pAi(πi(a1), . . . , πi(an)) = pAi(a1(i), . . . , an(i))
= qAi(a1(i), . . . , an(i)) = qAi(πi(a1), . . . , πi(an)) = πi(qA(a1, . . . , an)) = qA(a1, . . . , an)(i).

This holds for every i ∈ I, so A |= p ≈ q and A ∈ M(Σ).
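The preservation of identities under products from case P can be replayed by brute force on a small case (the factors and the identity below are my own example):

```python
from itertools import product

# Two factor algebras with a commutative operation each.
A1, A2 = range(2), range(3)

def f1(a, b): return (a + b) % 2
def f2(a, b): return (a + b) % 3

def f(p, q):                       # the coordinate-wise operation on A1 x A2
    return (f1(p[0], q[0]), f2(p[1], q[1]))

carrier = list(product(A1, A2))
# x * y ~ y * x holds in both factors, hence in the product.
assert all(f(p, q) == f(q, p) for p in carrier for q in carrier)
```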


3.5 The K-free algebra

In order to prove the second part of Birkhoff’s Theorem, we need K-free algebras. Just like the term algebras, these algebras have the universal mapping property. We will also discuss some other properties we will need in order to prove the theorem. We will end this section by proving the second part of Birkhoff’s Theorem. For now let us start with the K-free algebras.

Definition 3.35. Let K be a class of algebras of the same type. Given a set X of variables, define the congruence θK(X) on T(X) by

θK(X) = ∩{φ ∈ Con(T(X)) | T(X)/φ ∈ IS(K)}.

Now write ¯X for X/θK(X) and define the K-free algebra over ¯X by

FK(¯X) = T(X)/θK(X).

The K-free algebra is the algebra satisfying precisely the same identities as the class K. It is of the same type as the algebras in K and has T(X)/θK(X) as its underlying set. It satisfies the universal mapping property for K over ¯X, which was first proved by Garrett Birkhoff. One proof of this theorem can be found in [1, Theorem 10.10].

Theorem 3.36 (Birkhoff). Suppose T(X) exists. Then FK(¯X) has the universal mapping property for K over ¯X.

The next corollary follows from this fact. It states that for any algebra A in a class K of algebras of the same type, there is a surjective homomorphism α : FK(¯X) → A. Thus if K is a variety, then K = V(FK(¯X)) = H(FK(¯X)).

Corollary 3.37. For a class K of algebras of the same type, A ∈ K, and a large enough set of variables X, we have A ∈ H(FK(¯X)).

Proof. Suppose |¯X| ≥ |A| and let α : ¯X → A be a surjective map. Since FK(¯X) has the universal mapping property for K over ¯X, there exists a homomorphism β : FK(¯X) → A that extends α. Because of this, β is also surjective and we deduce that A ∈ H(FK(¯X)).

The following property of the K-free algebra, also first proved by Garrett Birkhoff, states in particular that for every variety V the V-free algebra FV(¯X) is an element of V. One can find the proof in [1, Theorem 10.12].

Theorem 3.38 (Birkhoff). If T(X) exists, then for any nonempty class K of algebras of the same type, FK(¯X) ∈ ISP(K).

We stated before that the K-free algebra satisfies exactly the same identities as K does. We will prove this among other facts in the following theorem.


Theorem 3.39. Given a class K of algebras of type F and terms p, q ∈ T(X) also of type F, we have

K |= p ≈ q ⇔ FK(¯X) |= p ≈ q ⇔ p/θK(X) = q/θK(X) in FK(¯X) ⇔ ⟨p, q⟩ ∈ θK(X).

Proof. Suppose p = p(x1, . . . , xn), q = q(x1, . . . , xn) ∈ T(X). Then K |= p ≈ q if and only if for every A ∈ K we have A |= p ≈ q. By Theorem 3.38 the K-free algebra is an element of the variety generated by K, and from this it follows that FK(¯X) |= p ≈ q. (One can find this proof in [1, Lemma 11.3].) For r1, . . . , rn ∈ T(X) we find that FK(¯X) |= p ≈ q if and only if

pFK(¯X)(r1/θK, . . . , rn/θK) = qFK(¯X)(r1/θK, . . . , rn/θK).

From (i) of Lemma 3.26 we have that terms are compatible with congruences, and therefore pFK(¯X)(r1, . . . , rn)/θK = qFK(¯X)(r1, . . . , rn)/θK, and hence p/θK = q/θK in FK(¯X). Thus in FK(¯X) the congruence classes of p and of q by θK coincide, and therefore ⟨p, q⟩ ∈ θK. The last part of the proof (⟨p, q⟩ ∈ θK ⇒ K |= p ≈ q) follows from Lemma 3.32 and the second isomorphism theorem, which lies outside the scope of this thesis. This theorem with its proof can be found in [1, Theorem 6.15].

Now that we have established this we are able to prove the second part of Birkhoff’s Theorem.

Theorem 3.40 (Birkhoff part 2). Every variety is an equational class.

Proof. Let K be a variety. We want to show that there exists a set of identities Σ such that K = M (Σ).

To see this we have to show two things: K ⊆ M (Σ) and K ⊇ M (Σ) for some set Σ of identities. Let us start with the first of these two. For any set of variables X we know that K |= IdK(X), so let us take an infinite set of variables for X. Since K |= IdK(X)

by definition, we know that K ⊆ M (IdK(X)).

Now if K ⊇ M (IdK(X)) also holds, then we have found our Σ and we are done.

So suppose A ∈ M(IdK(X)). By Theorem 3.34 we know that M (IdK(X)) is a variety.

By Corollary 3.37 we find that A ∈ H(FK( ¯X)) since we have chosen our X to be infinitely

large.

If we can prove that IdK(X) = IdM(IdK(X))(X) then by Theorem 3.39 we find that

⟨p, q⟩ ∈ θK ⇔ FK(¯X) |= p ≈ q
⇔ K |= p ≈ q
⇔ M(IdK(X)) |= p ≈ q
⇔ FM(IdK(X))(¯X) |= p ≈ q
⇔ ⟨p, q⟩ ∈ θM(IdK(X)),


so we can use Theorem 3.38 to find that A ∈ K, which means that we are done. So all that is left to do is to prove that IdK(X) = IdM (IdK(X))(X).

First let us show that IdK(X) ⊆ IdM(IdK(X))(X). For this, let p ≈ q ∈ IdK(X). By definition every algebra A ∈ M(IdK(X)) satisfies all identities in IdK(X), so in particular A |= p ≈ q. We also have

{p ≈ q | for all A, A |= IdK(X) ⇒ A |= p ≈ q} = {p ≈ q | {A | A |= IdK(X)} |= p ≈ q}
= {p ≈ q | M(IdK(X)) |= p ≈ q}
= IdM(IdK(X))(X).

So whenever p ≈ q ∈ IdK(X), we find that p ≈ q ∈ IdM(IdK(X))(X), and therefore the first part is proved.

Now for the second part, IdK(X) ⊇ IdM(IdK(X))(X), we know that M(IdK(X)) |= IdM(IdK(X))(X), which means that for all A ∈ M(IdK(X)) we have A |= IdM(IdK(X))(X). In the first part of the proof we have shown that K ⊆ M(IdK(X)), so it follows that for all A ∈ K we also have A |= IdM(IdK(X))(X). We have now established that K |= IdM(IdK(X))(X), and hence IdM(IdK(X))(X) ⊆ IdK(X).

Therefore IdM(IdK(X))(X) = IdK(X) and K = M(IdK(X)), thus K is an equational class.

Together with the first part of Birkhoff's Theorem, this leads to the full theorem.

Corollary 3.41. For a class K of algebras of the same type we have:

K is a variety ⇐⇒ K is an equational class.

Proof. This result follows directly from Theorems 3.34 and 3.40.


4 Subdirectly irreducible algebras

In this chapter we will look at the building blocks of varieties: the subdirectly irreducible algebras. We will give a characterization of subdirectly irreducible universal algebras in terms of congruences. After that we will give a concrete characterization of subdirectly irreducible Boolean and Heyting algebras.

In the first section we will define them and find out why they are useful. In the next section we will give a couple of characterizations of subdirectly irreducible algebras in varieties of Boolean and Heyting algebras. We will look at characterizations via the congruence lattice (as defined in Section 3.1), and then we will define the notion of filters and use these to obtain a characterization of the subdirectly irreducible Boolean and Heyting algebras. At the end of this section the reader should be able to determine subdirectly irreducible Boolean and Heyting algebras by just looking at their Hasse diagrams.

4.1 Building blocks of varieties

In the first chapter we have introduced direct products. Before we can start defining subdirectly irreducible algebras we need to define the subdirect product. From this we can define a subdirect embedding, which is what we need for the definition of a subdirectly irreducible algebra. The definition is as follows:

Definition 4.1. An algebra A is a subdirect product of an indexed family (Ai)i∈I of algebras if the following holds:

(i) A is a subalgebra of ∏i∈I Ai, and

(ii) for every i ∈ I : πi(A) = Ai.

We call an embedding α : A → ∏i∈I Ai subdirect if α(A) is a subdirect product of (Ai)i∈I.

[Figure 4.1: three schematic drawings of an algebra A (grey) inside the product A1 × A2.]


To get a better understanding of the second condition of the subdirect product, let us turn to Figure 4.1. There one can find a schematic drawing of a product of two algebras A1 and A2 with an algebra A in it in grey. On the left the projection of A is not equal

to any of the Ai for i = 1, 2. In the middle drawing the property also does not hold

because the projection has to be equal to Ai for both i = 1 and i = 2. On the right the

projection is equal to both algebras of the product and therefore the second property holds.

Now that we know how the second property works, let us look at an example of a subdirect product.

Example 4.2. Consider the Heyting algebra A:

[Figure 4.2: the Hasse diagram of A, with 0 at the bottom, z above 0, the incomparable elements x and y above z, and 1 at the top.]

and let A1 and A2 be the following Heyting (Boolean) algebras:

[Figure 4.3: (a) the two-element algebra A1 with 0 < 1; (b) the four-element algebra A2 with elements 0, x, y, 1.]

Then the product A1 × A2 is the following Heyting (Boolean) algebra:

[Hasse diagram of A1 × A2: an eight-element algebra with elements 0, a, b, c, x, y, z, 1.]


Now we find that A ⊆ A1 × A2, and for the elements 1, x, y, z, 0 we have:

1 ∧ 1 = 1   1 ∨ 1 = 1   1 → 1 = 1   z → 1 = 1
1 ∧ x = x   1 ∨ x = 1   1 → x = x   z → x = 1
1 ∧ y = y   1 ∨ y = 1   1 → y = y   z → y = 1
1 ∧ z = z   1 ∨ z = 1   1 → z = z   z → z = 1
1 ∧ 0 = 0   1 ∨ 0 = 1   1 → 0 = 0   z → 0 = 0
x ∧ x = x   x ∨ x = x   x → 1 = 1   0 → 1 = 1
x ∧ y = z   x ∨ y = 1   x → x = 1   0 → x = 1
x ∧ z = z   x ∨ z = x   x → y = y   0 → y = 1
x ∧ 0 = 0   x ∨ 0 = x   x → z = y   0 → z = 1
y ∧ y = y   y ∨ y = y   x → 0 = 0   0 → 0 = 1
y ∧ z = z   y ∨ z = y   y → 1 = 1
y ∧ 0 = 0   y ∨ 0 = y   y → x = x
z ∧ z = z   z ∨ z = z   y → y = 1
z ∧ 0 = 0   z ∨ 0 = z   y → z = x
0 ∧ 0 = 0   0 ∨ 0 = 0   y → 0 = 0

Thus A is closed under ∧, ∨ and → as a subset of A1 × A2, and therefore A is a subalgebra of A1 × A2. So now we only have to check the second condition.

It is easy to see that π1(A) = A1 and π2(A) = A2, and therefore A is a subdirect product of A1 and A2.
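Some of the implication values above can be recomputed mechanically from the order of A alone, using that a → b is the largest c with a ∧ c ≤ b. The sketch below uses my own encoding of the order of A (0 < z < x, y < 1 with x and y incomparable):

```python
# The order of A as down-sets: a <= b iff a is in below[b].
A = ["0", "z", "x", "y", "1"]
below = {"0": {"0"}, "z": {"0", "z"}, "x": {"0", "z", "x"},
         "y": {"0", "z", "y"}, "1": set(A)}

def leq(a, b):
    return a in below[b]

def meet(a, b):                     # greatest lower bound
    lower = [c for c in A if leq(c, a) and leq(c, b)]
    return max(lower, key=lambda c: len(below[c]))

def imp(a, b):                      # Heyting implication: max{c : a meet c <= b}
    cand = [c for c in A if leq(meet(a, c), b)]
    return max(cand, key=lambda c: len(below[c]))

assert meet("x", "y") == "z"
assert imp("x", "y") == "y" and imp("y", "x") == "x"
assert imp("z", "0") == "0" and imp("1", "z") == "z"
```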

We have now reached the point that we can define the main subject of this thesis: subdirectly irreducible algebras.

Definition 4.3. An algebra A is subdirectly irreducible if for every subdirect embedding α : A → ∏i∈I Ai there exists an i ∈ I such that

πi ◦ α : A → Ai

is an isomorphism.

In his book [5, Proposition 4.4, page 576] Pierre Antoine Grillet shows that the only subdirectly irreducible Abelian groups are the cyclic groups of order p^n, where p is a prime number and n a positive integer, and the Prüfer groups. The theory needed to see this lies outside the scope of this thesis. So instead we will look at the subdirectly irreducible Boolean and Heyting algebras; in the next section we will prove some characterizations. The proof of the following result can be found in an article written by Garrett Birkhoff in 1944 [6, pp. 764-768].

Theorem 4.4 (Birkhoff). Let A be an algebra. Then A is isomorphic to a subdirect product of subdirectly irreducible algebras which are homomorphic images of A.

The theorem states the following: let KAsi denote the set of subdirectly irreducible algebras that are homomorphic images of A. Then there exist Ai ∈ KAsi such that A ≅ B, where B is a subalgebra of ∏ Ai with πi(B) = Ai for all i.

We will now concentrate at a very useful corollary of Birkhoff’s Theorem.

Corollary 4.5. Every variety is determined by its subdirectly irreducible members. Proof. Given a variety V let us define

Vsi = {A ∈ V | A is subdirectly irreducible}

the set of the subdirectly irreducible members of V . We need to prove that V = HSP (Vsi).

In order to prove this, we only have to show that V ⊆ HSP (Vsi) since V ⊇ HSP (Vsi)

follows from the fact that Vsi ⊆ V and that V is a variety and thus closed under

homo-morphic images, subalgebras and direct products.

Now suppose V is a variety and that A ∈ V. Then by the previous theorem there exist Ai ∈ K^A_si such that A ≅ B, where B is a subalgebra of ∏ Ai and for all i, πi(B) = Ai.

Claim: K^A_si ⊆ Vsi.

To see this, let us suppose that C ∈ K^A_si. Then the subdirectly irreducible C is a homomorphic image of A, i.e. C ∈ H(A). Since V is closed under homomorphic images and A ∈ V, we deduce that C ∈ V, and since C is subdirectly irreducible, C ∈ Vsi.

So for all Ai we have that Ai ∈ Vsi. By definition, B is a subalgebra of ∏ Ai and thus B ∈ SP(Vsi). Now since A is isomorphic to B, we have obtained that A ∈ HSP(Vsi).

Birkhoff’s Theorem (Theorem 3.41) states that every variety is an equational class. So any class of algebras of the same type that is defined by a set of identities is a variety, and every variety has a set of identities that defines it.

Corollary 4.6. Let V be a variety and suppose p ≈ q is an identity of the same type as the algebras in the variety. Then

V |= p ≈ q if and only if Vsi |= p ≈ q.

Proof. The direction ⇒ follows from the fact that Vsi ⊆ V, so we only need to show ⇐. Suppose Vsi |= p ≈ q. By Corollary 4.5 we have V = HSP(Vsi), and identities are preserved under homomorphic images, subalgebras and products, so V |= p ≈ q. This finishes the proof.

So when we want to know whether an identity holds in a variety, it is enough to know whether the identity holds in all subdirectly irreducible members of this variety. This is why we would like to know which algebras are subdirectly irreducible and which ones are not. A characterization is given in the next section.
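As a preview of the characterization worked out later in this thesis, up to isomorphism the two-element Boolean algebra 2 is the only subdirectly irreducible Boolean algebra. Corollary 4.6 then reduces the verification of an identity over the whole variety BA of Boolean algebras to a finite truth-table check (a standard illustration, not an argument from this section):

```latex
% For the variety BA of Boolean algebras, Corollary 4.6 gives
\mathbf{BA} \models p \approx q
\quad\Longleftrightarrow\quad
\mathbf{2} \models p \approx q .
% For example, the De Morgan identity
\neg(x \wedge y) \approx \neg x \vee \neg y
% holds in every Boolean algebra because it holds in the two-element
% algebra 2 = \{0,1\}, which is checked on the four valuations of (x,y).
```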


4.2 Determining subdirectly irreducible algebras

The goal of this section is to find subdirectly irreducible algebras in different varieties. So we would like to find out when an algebra is subdirectly irreducible. In order to do this we can use the following characterization.

Theorem 4.7. An algebra A is subdirectly irreducible if and only if A is trivial or there is a minimum congruence in Con(A) − ∆.
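Before proving the theorem, it may help to see the criterion at work on a small standard example (not from this thesis): the cyclic group Z4. Its congruences correspond to its subgroups, so Con(Z4) is a three-element chain and Con(Z4) − ∆ has a minimum element.

```latex
% Congruences of Z_4 correspond to the subgroups \{0\} \subset \{0,2\} \subset \mathbb{Z}_4:
\mathrm{Con}(\mathbb{Z}_4) = \{\Delta,\ \theta,\ \nabla\},
\qquad \theta = \Delta \cup \{(0,2),(2,0),(1,3),(3,1)\}.
% Every congruence other than \Delta contains \theta, so \theta is the
% minimum of \mathrm{Con}(\mathbb{Z}_4) - \Delta and, by Theorem 4.7,
% Z_4 is subdirectly irreducible, consistent with Grillet's result,
% since 4 = 2^2 is a prime power.
```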

Proof. (⇒) We will prove this by contraposition. So suppose that A is a non-trivial algebra such that there is no minimum congruence in Con(A) − ∆. Then, taking the intersection of all congruences except ∆, we get ⋂(Con(A) − ∆) = ∆. Now for θi ∈ Con(A) − ∆, let us define:

βi : A → A/θi,  a ↦ a/θi.

Then for all i, βi is a natural map, and by Lemma 3.8 we find that every βi is a surjective homomorphism with ker(βi) = θi. Thus

⋂(Con(A) − ∆) = ⋂(θi) = ⋂(ker(βi)) = ∆.

Now we can use Lemma 3.19 to see that the map

β : A → ∏_{θi ∈ Con(A) − ∆} A/θi,  β(a)(i) = βi(a)

is injective. Therefore β as defined above is an embedding and β(A) is a subalgebra of ∏_{θi ∈ Con(A) − ∆} A/θi, i.e. β(A) ≤ ∏_{θi ∈ Con(A) − ∆} A/θi. Since for each i we have πi(β(A)) = A/θi by definition of β and the surjectivity of the βi, we also have that β is a subdirect embedding. But every βi has kernel θi ≠ ∆ and is therefore not one-to-one, so no πi ∘ β is an isomorphism, and A is not subdirectly irreducible.

(⇐) Suppose A is an algebra. Then there are two cases to consider for the proof:

(i) A is trivial, and
(ii) A is non-trivial but there exists a minimum congruence in Con(A) − ∆.

So let us start with the case that A is trivial.

(i) A is trivial.
