
Monotone and maximal monotone relations

Abstract

In this thesis we consider (maximal) monotone relations as an extension of monotone functions in real analysis. First we give an introduction to (maximal) monotonicity; then we consider the composition of two (maximal) monotone relations and show that this composition is again (maximal) monotone. Besides the composition we also consider a special class of monotone relations: the linear case, in particular Dirac structures.

Bachelor thesis in Mathematics August 2011

Student: V.H. Schoonveld

First supervisor: A.J. van der Schaft

Second supervisor: M.K. Camlibel


Contents

1 Introduction
  1.1 Monotone relations
  1.2 Notation

2 Monotonicity
  2.1 Introduction
  2.2 The composition of two relations

3 Monotonicity and compositions
  3.1 Introduction
  3.2 Results in analysis
  3.3 Results in monotonicity
  3.4 Proof of the theorem

4 The linear case
  4.1 Introduction
  4.2 Dirac structures
  4.3 The composition of Dirac structures

5 Discussion

Bibliography



1 Introduction

1.1 Monotone relations

In this thesis we consider monotone relations. In the first part of the thesis we see how the general definition for monotonicity follows naturally from the definition in real analysis. We derive some results concerning these relations.

In the second part we continue on this path with a further exploration of the theory of compositions and come to the most important theorems of this thesis.

In the third and final part we consider linear monotone relations, in particular the Dirac structure.

Monotone relations extend the monotonicity of functions from R to R in two ways. First, we do not just work in R, but in a Hilbert space H.

Secondly, we are not just considering functions but relations, so a point need not be 'mapped' to a single point: it may also be 'mapped' to a collection of points. The advantage of working with relations is that we can speak of a maximal monotone relation: a monotone relation that is not contained in a strictly larger monotone relation.

When we consider functions in R, it is obvious that the composition of two monotone (decreasing or increasing) functions is monotone too. This is less trivial in the general case. Another question we consider is the converse: if the composition of a relation with every (maximal) monotone relation is always (maximal) monotone, is the relation itself (maximal) monotone?

1.2 Notation

In this thesis we work with relations R ⊂ H × H or, equivalently, with multivalued functions, where a point may be mapped to a set (denoted by '⇒', see for example [5], or simply '→', see for example [4] or [1]). We denote this by '⇒'. These two notions are equivalent. In the case of a multivalued function the definition of monotonicity is similar to that in Definition 2.1, with the remark that we take a single point from the image. The notation H is not just suggestive:

we work in Hilbert spaces only. The inner product of the Hilbert space is denoted by ⟨·, ·⟩ : H × H → R, where R denotes the real numbers (since we work with monotonicity, we only use real inner products).

In some cases the elements of a set are not denoted explicitly. For example we will suggestively write xb instead of xb ∈ Xb. D is used for the domain:

D(T) is the domain of T, or we simply take a set D ⊂ X, where D is actually the domain. In sets we sometimes need to add a premise and therefore use the abbreviation s.t. (such that). The end of a proof is indicated by ■ and the end of a definition, an example or a remark is indicated by □.


2 Monotonicity

2.1 Introduction

Monotone functions in the real case are well-known as non-decreasing functions.

This is generalized in the following definition:

Definition 2.1. An operator T from a Hilbert space H into the subsets of its dual H∗ is said to be monotone provided

⟨T(x) − T(y), x − y⟩ ≥ 0

for all x, y ∈ H. □

That this is just a generalization of non-decreasing functions in real analysis is shown in the following example:

Example 2.2. A function φ : R → R is monotone non-decreasing in the usual sense if and only if it is monotone as defined in Definition 2.1. That is, φ(t1) ≤ φ(t2) whenever t1 ≤ t2 if and only if:

[φ(t2) − φ(t1)](t2 − t1) ≥ 0,   ∀t1, t2 ∈ R,

as required. □
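The real-line criterion of Example 2.2 is easy to test numerically. The following is a small illustrative sketch, not part of the original text; it assumes Python with numpy, and the helper name and sample grid are ours.

    # Check the product criterion of Example 2.2 on a finite sample grid:
    # phi is non-decreasing iff [phi(t2) - phi(t1)] * (t2 - t1) >= 0 for all t1, t2.
    import itertools
    import numpy as np

    def satisfies_product_criterion(phi, samples):
        return all((phi(t2) - phi(t1)) * (t2 - t1) >= 0
                   for t1, t2 in itertools.product(samples, repeat=2))

    ts = np.linspace(-3.0, 3.0, 61)
    print(satisfies_product_criterion(lambda t: t**3, ts))       # True: non-decreasing
    print(satisfies_product_criterion(lambda t: np.sin(t), ts))  # False: not monotone on [-3, 3]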

In our case we are not just considering functions, but relations. These are an extension of the theory of functions. If a function maps a point from the domain (D) to the co-domain (C), then we can also see this as a set of pairs of points. In other words: x1 ↦ x2 is equivalent to the pair (x1, x2). This is extended to any set of pairs as a subset of D × C. We call such a set of pairs a relation R ⊂ D × C. A relation has the advantage that it is possible that (x1, x2), (x1, x3) ∈ R (and thus x1 ↦ {x2, x3}), which is not possible for functions. In Section 3 we also use multi-valued functions, where one point is mapped to a set of points. This is equivalent to the definition of relations. We denote this as H ⇒ H.

The monotonicity of relations is summarized in the following definition:

Definition 2.3. A relation Rab ⊆ Xa × Xa′ × Xb × Xb′ is monotone provided that for all (xai, xai′, xbi, xbi′) ∈ Rab, i = 1, 2:

⟨xa1 − xa2, xa1′ − xa2′⟩ + ⟨xb1 − xb2, xb1′ − xb2′⟩ ≥ 0.

We call Rab maximal monotone provided it is monotone and there exists no larger monotone relation in which Rab is included. □

Again, we want this to be consistent with our earlier definition of monotone functions, as in Definition 2.1. With the concept of relations we can also speak of maximal monotone operators. This is treated in the following proposition:

Proposition 2.4. If an operator T is a monotone operator, then its graph

G(T) = {(x, x′) ∈ X × X′ | x′ = T(x)}

is a monotone set. We call T a maximal monotone operator provided its graph is a maximal monotone set. □


2.2 The composition of two relations

A familiar principle in the theory of functions is the composition of two functions. Such a composition can be extended to relations, as shown in the following definition:

Definition 2.5. Let the relations Rab and Rbc be defined such that Rab ⊆ Xa × Xa′ × Xb × Xb′ and Rbc ⊆ Xb × Xb′ × Xc × Xc′. The composition Rac of the relations Rab and Rbc is then defined to be

Rac = {(xa, xa′, xc, xc′) | ∃(xb, xb′) : (xa, xa′, xb, xb′) ∈ Rab, (−xb, xb′, xc, xc′) ∈ Rbc}.

We denote this as Rac = Rab ∘ Rbc. □

This way of defining is not fully consistent with the composition of two functions as we know it: a minus sign is added in the second part. This definition is different since we talk about paired points, as defined in Definition 2.3. Definition 2.5 therefore has more structure than a general definition of the composition of two relations.

Example 2.6. When we have two functions f1, f2 ∈ C(R) that are monotone, their composition is monotone: for x1, x2 ∈ R we see that if x1 ≤ x2, then for y1 = f1(x1) and y2 = f1(x2) we have y1 ≤ y2, and therefore f2(y1) ≤ f2(y2). This is in particular true for a non-decreasing function, as was shown to be equivalent to the general case in Example 2.2. □

The result of this example can be extended to the general case, as shown in the following lemma.

Lemma 2.7. Let Rab and Rbc be the relations as defined in Definition 2.5. If Rab and Rbc are monotone, then so is the composition Rac.

Proof. We know by the definition of monotonicity that

⟨xa1 − xa2, xa1′ − xa2′⟩ + ⟨xb1 − xb2, xb1′ − xb2′⟩ ≥ 0,

⟨−xb1 + xb2, xb1′ − xb2′⟩ + ⟨xc1 − xc2, xc1′ − xc2′⟩ ≥ 0.

By adding them, we find that

⟨xa1 − xa2, xa1′ − xa2′⟩ + ⟨xc1 − xc2, xc1′ − xc2′⟩ ≥ 0,

and thus we know that Rac is monotone too. ■
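To make Definition 2.5 and Lemma 2.7 concrete, here is a small illustrative sketch, not from the thesis: finite relations are stored as sets of tuples (xa, xa′, xb, xb′), composed with the sign convention of Definition 2.5, and monotonicity is checked by brute force. The example relations and helper names are our own; plain Python suffices.

    def compose(Rab, Rbc):
        # Rac = {(xa, xa', xc, xc') | exists (xb, xb') with
        #        (xa, xa', xb, xb') in Rab and (-xb, xb', xc, xc') in Rbc}
        return {(xa, xap, xc, xcp)
                for (xa, xap, xb, xbp) in Rab
                for (yb, ybp, xc, xcp) in Rbc
                if yb == -xb and ybp == xbp}

    def is_monotone(R):
        # Monotonicity in the sense of Definition 2.3 (scalar ports).
        return all((xa1 - xa2) * (xap1 - xap2) + (xb1 - xb2) * (xbp1 - xbp2) >= 0
                   for (xa1, xap1, xb1, xbp1) in R
                   for (xa2, xap2, xb2, xbp2) in R)

    # Two small monotone relations built from non-decreasing port maps.
    Rab = {(x, x, y, 2 * y) for x in range(-2, 3) for y in range(-2, 3)}
    Rbc = {(u, 3 * u, v, v) for u in range(-2, 3) for v in range(-2, 3)}
    print(is_monotone(Rab), is_monotone(Rbc), is_monotone(compose(Rab, Rbc)))  # True True True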

Another problem is shown in the following example.

Example 2.8. Let f1 ∈ C(R) be a function such that for any monotone function f2 ∈ C(R), f1 ∘ f2 is monotone. Then f1 is monotone too. This is shown by taking f2 = id, the identity map, which is a monotone function. Then f1 ∘ f2 = f1 and thus f1 is a monotone function. □

Again, this can be extended to the general case:


Theorem 2.9. Let R1 be a relation such that for any (maximal) monotone relation R2 either of the following properties holds:

R3 = R1 ∘ R2    (2.1)

implies R3 is (maximal) monotone, or:

R3 = R2 ∘ R1    (2.2)

implies R3 is (maximal) monotone. Then R1 is (maximal) monotone itself.

Proof. In both cases the trick is to choose the right R2 such that R1 ∘ R2 = R1 and R2 ∘ R1 = R1 respectively. Therefore we choose for Equation 2.1:

R2 = {(−xb, xb′, xb, xb′) | ∃(xa, xa′) s.t. (xa, xa′, xb, xb′) ∈ R1},

and for Equation 2.2:

R2 = {(xa, xa′, −xa, xa′) | ∃(xb, xb′) s.t. (xa, xa′, xb, xb′) ∈ R1}.

It follows directly from the definition of the composition of relations that R2 is chosen in the right way. Furthermore, we see that R2 is in both cases a Dirac structure (see Section 4.2) and therefore it is maximal monotone. ■
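The 'copy' relation R2 constructed in this proof can be checked on a finite example. The sketch below is our own illustration (it repeats the compose helper from the previous sketch so that it is self-contained) and verifies that R1 ∘ R2 = R1 for assumed example data.

    def compose(Rab, Rbc):
        return {(xa, xap, xc, xcp)
                for (xa, xap, xb, xbp) in Rab
                for (yb, ybp, xc, xcp) in Rbc
                if yb == -xb and ybp == xbp}

    R1 = {(x, 2 * x, y, y**3) for x in range(-2, 3) for y in range(-2, 3)}
    R2 = {(-xb, xbp, xb, xbp) for (_, _, xb, xbp) in R1}  # the choice made for Equation 2.1
    print(compose(R1, R2) == R1)  # True: composing with R2 gives back R1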

3 The composition of two finite-dimensional maximal monotone relations

3.1 Introduction to the problem

In this section, we consider the following problem: given two maximal monotone relations, is their composition maximal monotone too? We saw in Lemma 2.7 that the composition of two monotone relations is a monotone relation again.

We show that this statement can be extended to the maximal monotone case.

To show this, we need Corollary 3.29, a result of Theorem 3.28. To prove this theorem, we have to use some other theorems and lemmas. This is done in Section 3.3. First, we give some theorems and lemmas that are not a part of the field of monotonicity; this is done in Section 3.2. Some of the more extensive proofs are skipped. These proofs are given in [5].

In this section, we want to consider the composition of two maximal monotone relations. We only consider finite-dimensional Hilbert spaces. If we let {f1, f2, . . . , fn} be an orthonormal basis for an n-dimensional Hilbert space, then for the natural basis {e1, e2, . . . , en} of Rn we can define a bijective map in the following way:

g : c1f1 + · · · + cnfn ↦ c1e1 + · · · + cnen.

This map g preserves the inner product. Therefore all n-dimensional Hilbert spaces are equivalent and we can consider only the Hilbert space Rn.


3.2 Results in analysis

In this section we give some theorems, lemmas and propositions with only a sketch of the proof or no proof at all. These theorems, lemmas and propositions are necessary for the main theorems of the next section. Again, they are not a part of the field of monotonicity. The proofs can be found in [5]. Let us start with the definition of level-boundedness: a general definition in analysis.

Definition 3.1. Let the (lower) level set be defined as follows:

lev≤α f := {x ∈ Rn | f(x) ≤ α}.

A function f : Rn → R̄ is (lower) level-bounded if, for every α ∈ R, the set lev≤α f is bounded (although it is possibly empty). □

In some particular cases, we can say more about the level sets, as shown in the following lemma:

Lemma 3.2. Let f : Rn → R̄ be lower semi-continuous on Rn. Then the level sets lev≤α f are all closed in Rn.

No proof. See for example [5][p. 9] for a proof.

Theorem 3.3. If a mapping F : X → Rm, with X ⊂ Rn, is Lipschitz continuous (relative to X) with constant κ, then we can extend F uniquely to F̄, a mapping on the closure X̄ of X. Since F̄ is continuous, this extension is both unique and Lipschitz continuous with constant κ. F can even be extended beyond X̄ (where the Lipschitz continuity with constant κ is preserved), but this extension is not unique.

In other words: there is a mapping F̄ : Rn → Rm which agrees with F on X and is Lipschitz continuous with constant κ.

No proof. A proof of this theorem can be found in [5][p. 400].

Now we consider the following theorem, which is a result on sequences of mappings. First we define graphical limits:

Definition 3.4. Let Sν : Rn ⇒ Rn be a sequence of mappings. Then:

1. the graphical outer limit, denoted by g-lim supν Sν, is the mapping having as its graph the set lim supν(G(Sν));

2. the graphical inner limit, denoted by g-lim infν Sν, is the mapping having as its graph the set lim infν(G(Sν));

3. if the graphical outer and inner limit agree, then we say that the graphical limit g-limν Sν exists.

We denote the graphical limit as Sν →g S. There are more (equivalent) definitions of this graphical limit (see for example [5]), but these are not relevant to this thesis. □

The following theorem tells us that we can divide sequences of mappings into two parts: those that escape to the horizon (that is, the graphs G(Sν) eventually leave every bounded set) and those that have a graphically converging subsequence.


Theorem 3.5. A sequence of mappings Sν : Rn ⇒ Rm either escapes to the horizon or has a subsequence converging graphically to a mapping S : Rn ⇒ Rm with D(S) ≠ ∅.

Outline of the proof. This follows from the extraction of convergent subsequences theorem: every sequence of nonempty sets Cν in Rn either escapes to the horizon or has a subsequence converging to a nonempty set C in Rn. We need to apply this to the sequence of sets G(Sν) ⊂ Rn × Rm.

The following theorem is given without a proof. It is a result of the correspondence between graphical and point-wise convergence.

Theorem 3.6. If a sequence of closed-valued mappings Sν : Rn ⇒ Rm is asymptotically equi-osc (equi-outer semi-continuous) at x, then:

(g-lim infν Sν)(x) = (p-lim infν Sν)(x),
(g-lim supν Sν)(x) = (p-lim supν Sν)(x).

Thus we see that if a sequence is asymptotically equi-osc everywhere, then Sν converges graphically to S if and only if it converges point-wise to S.

More generally: if we have a set X ⊂ Rn and a point x ∈ X, then any two of the following conditions imply the third:

1. the sequence is asymptotically equi-osc at x relative to X;

2. Sν converges graphically to S at x relative to X;

3. Sν converges point-wise to S at x relative to X.

No proof. The proof of this theorem is skipped. See for example [5][p. 174].

Theorem 3.8 makes use of both maximal monotonicity and other specific parts of analysis. Therefore it would take a lot of effort to prove it, and thus its proof is skipped. To understand the results of this theorem, we give a definition containing some of the concepts that are used in the theorem.

Definition 3.7. Let C ⊂ Rn and define PC as the projection, which maps a point x ∈ Rn to the nearest point(s) of C.

Let x̄ ∈ C and let o(|x − x̄|) for x ∈ C denote a term with the property that o(|x − x̄|)/|x − x̄| converges to 0 as x → x̄ in C. We say that v ∈ N̂C(x̄) if, for x ∈ C:

⟨v, x − x̄⟩ ≤ o(|x − x̄|).

A vector v is a general normal to C at x̄ if there are sequences xν that converge to x̄ in C and vν that converge to v, where vν ∈ N̂C(xν). □

Now, we can state the theorem:

Theorem 3.8. Let T : Rn ⇒ Rn be maximal monotone and let D̄ := cl(D(T)). Then D̄ is convex and for all x ∈ D(T) one has T(x)∞ = ND̄(x) (the horizon cone of T(x) is the normal cone to D̄ at x). Furthermore: λT →g ND̄ as λ ↓ 0. Since this happens, the mappings (I + λT)⁻¹ converge uniformly (on all bounded sets) to (I + ND̄)⁻¹ = PD̄, where PD̄ denotes the projection mapping onto D̄.


No proof. A proof of this theorem is given in [5][p. 553].

The last theorem we need before we continue is about the separation of two sets:

Theorem 3.9. If C1, C2 ⊂ Rn are convex, then they can be separated by a hyperplane if and only if 0 ∉ int(C1 − C2). If int(C1 − C2) ≠ ∅, then the separation is proper. A sufficient condition is that int(Ci) ≠ ∅ but Cj ∩ int(Ci) = ∅, with i ≠ j.

No proof. A proof of this theorem is given in [5][p. 62].

3.3 Results concerning (maximal) monotone relations

In the previous section we collected some tools from general analysis. In this part we continue to collect some tools, but this time in the theory of monotone relations. For these we will supply proofs. In the first part of the section we derive some calculation rules for (maximal) monotone relations; in the second part we derive some important lemmas and theorems for the theorem we want to prove: Theorem 3.28. First we give an example.

Example 3.10. If a continuous mapping F : Rn → Rn is monotone, then it is maximal monotone. Suppose not; then there exists (x̄, ȳ) ∉ G(F), with G(F) the graph of F, such that ⟨x − x̄, F(x) − ȳ⟩ ≥ 0, ∀(x, F(x)) ∈ G(F). Let x = x̄ + εu for any u ∈ Rn and ε > 0. Then:

⟨(x̄ + εu) − x̄, F(x̄ + εu) − ȳ⟩ ≥ 0
=⇒ ⟨u, F(x̄ + εu) − ȳ⟩ ≥ 0.

We know that F is continuous, so F(x̄ + εu) → F(x̄) if ε → 0, for all u ∈ Rn, so ⟨u, F(x̄) − ȳ⟩ ≥ 0 for all u ∈ Rn. This is only possible if F(x̄) − ȳ = 0 =⇒ F(x̄) = ȳ, but then (x̄, ȳ) = (x̄, F(x̄)) ∈ G(F), which contradicts our assumption. □

Now we give some calculation rules for (maximal) monotone relations in the following lemmas. These calculation rules will be useful further on in this section.

Lemma 3.11. If the relation R is a (maximal) monotone set, then so is the following set:

Ra = {r + a | r ∈ R}.    (3.1)

Proof. If (x1 + a1, y1 + a2), (x2 + a1, y2 + a2) ∈ Ra, then:

⟨(x1 + a1) − (x2 + a1), (y1 + a2) − (y2 + a2)⟩ = ⟨x1 − x2, y1 − y2⟩ ≥ 0,   (∗)

where we use that R is a (maximal) monotone set in (∗). Therefore Ra is a (maximal) monotone set itself. ■

Lemma 3.12. Let T1, T2 : Rn ⇒ Rn be monotone. Then so is T1 + T2.


Proof. For x1, x2 ∈ Rn, we know that for any yi,j ∈ Ti(xj):

⟨x1 − x2, y1,1 − y1,2⟩ ≥ 0,
⟨x1 − x2, y2,1 − y2,2⟩ ≥ 0.

By adding them we find:

⟨x1 − x2, (y1,1 + y2,1) − (y1,2 + y2,2)⟩ ≥ 0,

and thus T1 + T2 is monotone too. ■

Lemma 3.13. Let T be monotone, then so is the inverse T⁻¹.

Proof. Let T : D → C be monotone. Then for any x, y ∈ C there exist x′, y′ ∈ D s.t. T(x′) = x, T(y′) = y. This results in:

⟨x − y, T⁻¹(x) − T⁻¹(y)⟩ = ⟨T(x′) − T(y′), T⁻¹(T(x′)) − T⁻¹(T(y′))⟩
= ⟨T(x′) − T(y′), x′ − y′⟩ ≥ 0,

and therefore T⁻¹ is monotone too. ■

Lemma 3.14. If T : Rn ⇒ Rn is monotone, then so is λT for any λ > 0.

Proof. For all x, y ∈ Rn, x′ ∈ T(x), y′ ∈ T(y) and λ > 0:

⟨x − y, x′ − y′⟩ ≥ 0
⇐⇒ λ⟨x − y, x′ − y′⟩ ≥ 0
⇐⇒ ⟨x − y, λx′ − λy′⟩ ≥ 0,

and thus λT is monotone. ■

Lemma 3.15. T : Rn ⇒ Rn is maximal monotone if and only if T⁻¹ is maximal monotone.

Proof. (=⇒) If T is maximal monotone, then T⁻¹ is monotone by Lemma 3.13. To show that it is maximal, we assume it is not. Then there is a pair (x0, x0′) with x0′ ∉ T⁻¹(x0) such that ⟨x0 − x, x0′ − x′⟩ ≥ 0 for all pairs (x, x′) with x′ ∈ T⁻¹(x). If x0′ ∉ T⁻¹(x0), then x0 ∉ T(x0′), and if x′ ∈ T⁻¹(x), then x ∈ T(x′). We know then, by the maximality of T, that there is a pair (x, x′) with x′ ∈ T⁻¹(x) such that

⟨x0 − x, x0′ − x′⟩ = ⟨x0′ − x′, x0 − x⟩ < 0,

which contradicts our assumption. Therefore T⁻¹ is maximal monotone.

(⇐=) (T⁻¹)⁻¹ = T and therefore we can use our proof of (=⇒); thus T is maximal monotone. ■

Lemma 3.16. If T is maximal monotone, both T and T⁻¹ are closed-convex-valued.


Proof. Let us define vτ = (1 − τ)v0 + τv1 for v0, v1 ∈ T(x̄) and τ ∈ (0, 1). Then, for any pair (x, v) with v ∈ T(x):

⟨v − vτ, x − x̄⟩ = ⟨v − [(1 − τ)v0 + τv1], x − x̄⟩
= ⟨(1 − τ + τ)v − [(1 − τ)v0 + τv1], x − x̄⟩
= (1 − τ)⟨v − v0, x − x̄⟩ + τ⟨v − v1, x − x̄⟩ ≥ 0.

Therefore vτ ∈ T(x̄) (since T is maximal monotone) and therefore T is closed-convex-valued. By Lemma 3.15 we see that T⁻¹ is maximal monotone too and so T⁻¹ is closed-convex-valued as well. ■

Lemma 3.17. Let S : Rm ⇒ Rm be monotone. Then for any A ∈ Rm×n the mapping T(x) = AᵀS(Ax) is monotone.

Proof. We observe that for any x, y ∈ Rn (so that Ax, Ay ∈ Rm):

⟨Ax − Ay, S(Ax) − S(Ay)⟩ ≥ 0
=⇒ ⟨x − y, AᵀS(Ax) − AᵀS(Ay)⟩ ≥ 0
=⇒ ⟨x − y, T(x) − T(y)⟩ ≥ 0,

and thus T is monotone. ■
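Lemma 3.17 lends itself to a quick numerical illustration (ours, not from the thesis; Python with numpy assumed): take a random matrix A and the elementwise cubic S(u) = u³, which is monotone, and test the defining inequality of T(x) = AᵀS(Ax) on random pairs.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 4, 3
    A = rng.standard_normal((m, n))
    S = lambda u: u**3                # monotone on R^m (elementwise non-decreasing)
    T = lambda x: A.T @ S(A @ x)      # the mapping of Lemma 3.17

    for _ in range(1000):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        assert (x - y) @ (T(x) - T(y)) >= 0
    print("T(x) = A^T S(Ax) passed the monotonicity test on all sampled pairs")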

After these calculation rules, we need some other lemmas that make use of newly defined concepts: we define, for example, the notion of non-expansiveness. Note that a mapping is non-expansive if and only if it is Lipschitz continuous with constant κ ≤ 1.

Definition 3.18. A mapping S : Rn ⇒ Rn is called non-expansive provided:

|w1 − w0| ≤ |z1 − z0|,    (3.2)

whenever w1 ∈ S(z1), w0 ∈ S(z0). If the inequality in Equation 3.2 is strict (when z1 ≠ z0), we call S contractive. □

This definition leads to the following lemma:

Lemma 3.19. If S0, S1 : Rn ⇒ Rn are non-expansive, then so is λ0S0 + λ1S1, provided |λ0| + |λ1| ≤ 1.

Proof. If wi ∈ (λ0S0 + λ1S1)(zi), then there are w1i ∈ λ1S1(zi) and w0i ∈ λ0S0(zi) such that wi = w1i + w0i. We see then that:

|w1 − w0| = |w11 − w10 + w01 − w00|
≤ |w11 − w10| + |w01 − w00|
≤ |λ1||z1 − z0| + |λ0||z1 − z0|
≤ |z1 − z0|.

Therefore, λ0S0 + λ1S1 is non-expansive too. ■

Another concept in the field of monotonicity is that of strict monotonicity, which is defined in the following way:

Definition 3.20. We call T a strictly monotone operator provided that if x′ ∈ T(x) and y′ ∈ T(y), with x ≠ y, then ⟨x − y, x′ − y′⟩ > 0. □


Now we introduce a linear transformation. It is convenient to define it as follows, as we see from the results of the following proposition.

Proposition 3.21. Define the one-to-one linear transformation J : Rn × Rn → Rn × Rn as follows:

J(x, v) = (x + v, v − x).

J induces a one-to-one correspondence between the mappings S, T : Rn ⇒ Rn through:

G(S) = J(G(T)),    G(T) = J⁻¹(G(S)).

In this one-to-one correspondence, S is non-expansive if and only if T is monotone, and on the other hand, S is contractive if and only if T is strictly monotone and single-valued on D(T). Under this correspondence, one has:

S = I − 2I ∘ (I + T)⁻¹,    (3.3)
T = (I − S)⁻¹ ∘ 2I − I.    (3.4)

Proof. Let (z0, w0) = J(x0, v0) and (z1, w1) = J(x1, v1). Then:

|z1 − z0|² − |w1 − w0|² = ⟨(z1 − z0) + (w1 − w0), (z1 − z0) − (w1 − w0)⟩
= ⟨(z1 + w1) − (z0 + w0), (z1 − w1) − (z0 − w0)⟩
= ⟨2v1 − 2v0, 2x1 − 2x0⟩
= 4⟨v1 − v0, x1 − x0⟩.

Thus we find:

|w1 − w0| ≤ |z1 − z0| ⇐⇒ ⟨v1 − v0, x1 − x0⟩ ≥ 0.    (3.5)

Since this holds for all corresponding pairs of points (zi, wi) ∈ G(S) and (xi, vi) ∈ G(T), we see that the non-expansivity of S is equivalent to the monotonicity of T.

Furthermore, we observe that S is contractive if and only if the inequality on the left-hand side of Equation 3.5 is strict when (z0, w0) ≠ (z1, w1). At the same time, T is strictly monotone and single-valued on D(T) if and only if the inequality on the right-hand side of Equation 3.5 is strict when (x0, v0) ≠ (x1, v1).

Now we consider the formula of Equation 3.3. We observe that the condition (z, w) ∈ J(G(T)) means that there are x and v ∈ T(x) such that z = v + x and w = v − x, so that z ∈ (I + T)(x). This is equivalent to requiring that there is an x such that z ∈ (I + T)(x) and w = (z − x) − x = z − 2x. Thus w ∈ S(z) if and only if w = z − 2x for some x ∈ (I + T)⁻¹(z), which says that S = I − 2I ∘ (I + T)⁻¹, as required.

The formula of Equation 3.4 can easily be derived from that of Equation 3.3:

S = I − 2I ∘ (I + T)⁻¹
⇐⇒ (I − S) = 2I ∘ (I + T)⁻¹
⇐⇒ (I + T) = (I − S)⁻¹ ∘ 2I
⇐⇒ T = (I − S)⁻¹ ∘ 2I − I,

which is the formula of Equation 3.4. ■
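The correspondence of Proposition 3.21 can be sampled numerically for a concrete single-valued monotone map; the sketch below (our own, with numpy assumed) uses T(x) = x³ on R. Instead of inverting I + T, it generates x first and sets z = x + T(x), so that w = z − 2x is a value of S = I − 2I ∘ (I + T)⁻¹ at z, and then checks non-expansiveness on sampled pairs.

    import numpy as np

    rng = np.random.default_rng(1)
    T = lambda x: x**3                # a monotone map on R

    xs = rng.standard_normal(500)
    zs = xs + T(xs)                   # z in (I + T)(x)
    ws = zs - 2 * xs                  # w = S(z) for S = I - 2I o (I + T)^{-1}

    for i in range(0, 500, 2):
        z0, z1, w0, w1 = zs[i], zs[i + 1], ws[i], ws[i + 1]
        assert abs(w1 - w0) <= abs(z1 - z0) + 1e-12
    print("S is non-expansive on all sampled pairs")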


The result of Proposition 3.21 can be used in the following theorem. We want to examine the so-called Yosida λ-regularization of S, as we do in Example 3.23, but to do so we need the following theorem. It also comes in handy for some theorems we want to prove further on.

Theorem 3.22. Let T : Rn ⇒ Rn be monotone and let λ > 0. Then (I + λT)⁻¹ is monotone and non-expansive.

Moreover, T is maximal monotone if and only if R(I + λT) = Rn, or equivalently D((I + λT)⁻¹) = Rn. In that case, (I + λT)⁻¹ is maximal monotone too, and it is a single-valued mapping from all of Rn into itself.

Proof. Since λT is monotone if and only if T is monotone, and since I is trivially monotone, we know by Lemma 3.12 that I + λT is monotone. Hence we know by Lemma 3.13 that (I + λT)⁻¹ is monotone. If we apply the results of Proposition 3.21, we find that there is a non-expansive S such that (I + T)⁻¹ = (1/2)(I − S). We also know that I is non-expansive and therefore (1/2)I − (1/2)S is non-expansive by Lemma 3.19. Thus we find that (I + T)⁻¹ is non-expansive. The result is that, on D = D((I + T)⁻¹), it is single-valued and Lipschitz continuous with constant 1.

In Proposition 3.21 we saw that T is a maximal monotone mapping if and only if S is a maximal non-expansive mapping, that is, S cannot be enlarged without losing the non-expansiveness property. In Theorem 3.3 we saw that any non-expansive mapping can be extended to all of Rn. Thus we see that S is a maximal non-expansive mapping if and only if D(S) = Rn. From the formulas we find that this is equivalent to D((I + T)⁻¹) = Rn. Since (I + T)⁻¹ is single-valued, continuous on D((I + T)⁻¹) and monotone, we see by Example 3.10 that it is then maximal monotone as well.

Theorem 3.22 makes it possible to examine the Yosida λ-regularization, as is done in the following example:

Example 3.23. Let S : Rn ⇒ Rn be any mapping and, for any λ > 0, let the Yosida λ-regularization of S be defined as:

Sλ = (λI + S⁻¹)⁻¹ = (I + λ⁻¹S⁻¹)⁻¹ ∘ λ⁻¹I.

If S is maximal monotone, then so is Sλ, and it is even single-valued and Lipschitz continuous with constant λ⁻¹. It is single-valued according to Theorem 3.22 with T = S⁻¹. For the Lipschitz continuity of Sλ we use the results in the proof of Theorem 3.22 and see that it indeed has Lipschitz constant λ⁻¹. □

The following theorem states that if a sequence of (maximal) monotone mappings converges graphically to a mapping, then this limit mapping is (maximal) monotone too. This is an important implication of the notion of graphical convergence as defined in Definition 3.4.

Theorem 3.24. If a sequence of monotone mappings Tν : Rn ⇒ Rn converges graphically, the limit mapping T is monotone. Moreover, if the mappings Tν are maximal monotone, then T is maximal monotone.

Proof. Suppose T = g-limν Tν, with Tν monotone, and consider vi ∈ T(xi), i = 0, 1. Because in particular T = g-lim infν Tν, there exist sequences xiν → xi and viν → vi such that viν ∈ Tν(xiν). For each ν we have by assumption that


⟨v1ν − v0ν, x1ν − x0ν⟩ ≥ 0. If we take the limit for ν → ∞ we get ⟨v1 − v0, x1 − x0⟩ ≥ 0, which means that T is monotone.

For the second part of the theorem, assume Tν to be maximal monotone and define, for a fixed λ > 0, Pλν := (I + λTν)⁻¹. We know by Theorem 3.22 that Pλν is single-valued and non-expansive. In Example 3.23 we have seen that Pλν is Lipschitz continuous with constant 1 (see the proof of Theorem 3.22); hence the Pλν are equicontinuous on Rn. Now we know that the graphical convergence Pλν →g Pλ is equivalent to the point-wise convergence Pλν →p Pλ. Since the Pλν are Lipschitz, we know that the point-wise convergence is actually uniform. As a result, we know that D(Pλ) = Rn and therefore T is maximal monotone by Theorem 3.22. ■

From Theorem 3.24 and Theorem 3.5, we can derive the following corollary:

Corollary 3.25. If a sequence of maximal monotone mappings Tν : Rn ⇒ Rn does not escape to the horizon, it has a subsequence that converges graphically to some maximal monotone mapping T : Rn ⇒ Rn.

Proof. From Theorem 3.5 we know that if Tν does not escape to the horizon, it has a subsequence converging graphically; Theorem 3.24 then tells us that, since the Tν are maximal monotone, the limit T is maximal monotone too. ■

The last definition that we need in this section is that of the relative interior, which is given as follows:

Definition 3.26. The relative interior of a set with a lower dimension than the space in which it is embedded is, intuitively, the set of all points that are not on the edge of the set relative to the smallest affine set in which the set lies. Formally the definition of the relative interior of a set S is:

rint(S) := {x ∈ S | ∃ε > 0 s.t. Bε(x) ∩ aff(S) ⊂ S},

where Bε(x) is a ball with radius ε centered at x and aff(S) is the affine hull of S. □

3.4 The composition of two maximal monotone relations is maximal monotone

In this section, we prove the statement that the composition of two finite-dimensional maximal monotone relations is again maximal monotone. The main goal of the previous section was to give us the tools to prove Theorem 3.28. We have almost all the tools to prove it, but we first need a relation that holds for the Yosida regularizations of T:

Lemma 3.27. The Yosida regularizations of T obey the relation:

(λI + T⁻¹)⁻¹ = λ⁻¹[I − (I + λT)⁻¹].


Proof. The trick is to show that the two sets are the same. We see:

z ∈ λ⁻¹[I − (I + λT)⁻¹](w)
⇐⇒ λz ∈ w − (I + λT)⁻¹(w)
⇐⇒ w − λz ∈ (I + λT)⁻¹(w)
⇐⇒ w ∈ (w − λz) + λT(w − λz)
⇐⇒ z ∈ T(w − λz) ⇐⇒ w − λz ∈ T⁻¹(z)
⇐⇒ w ∈ (λI + T⁻¹)(z) ⇐⇒ z ∈ (λI + T⁻¹)⁻¹(w),

as required. ■
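Lemma 3.27 can be checked pointwise for a standard example (ours, not taken from the thesis; Python with numpy assumed): for T = ∂|·|, the subdifferential of the absolute value, the resolvent (I + λT)⁻¹ is the soft-thresholding map and the Yosida regularization (λI + T⁻¹)⁻¹ is a clipping map, and the two sides of the lemma coincide.

    import numpy as np

    lam = 0.7
    resolvent = lambda w: np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # (I + lam*T)^{-1}
    yosida_lhs = lambda w: np.clip(w / lam, -1.0, 1.0)                   # (lam*I + T^{-1})^{-1}
    yosida_rhs = lambda w: (w - resolvent(w)) / lam                      # lam^{-1}[I - (I + lam*T)^{-1}]

    w = np.linspace(-3.0, 3.0, 1001)
    print(np.allclose(yosida_lhs(w), yosida_rhs(w)))  # True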

Now we finally have the tools to prove Theorem 3.28. This theorem states that the maximality of a relation is preserved under a certain transformation.

Theorem 3.28. Suppose T(x) = AᵀS(Ax + a) for a maximal monotone mapping S : Rm ⇒ Rm, a matrix A ∈ Rm×n, and a vector a ∈ Rm.

If (R(A) + a) ∩ rint(D(S)) ≠ ∅, then T is maximal monotone.

Proof. First we observe that we can use Lemma 3.11 and therefore we may assume, without loss of generality, that a = 0, 0 ∈ rint(D(S)) and 0 ∈ S(0).

Another thing we can arrange is that the smallest subspace of Rm that includes both the set D := D(S) and the range R(A) is Rm itself. Otherwise, we could replace Rm by a space of lower dimension.

Let the Yosida regularization be defined for any λ > 0 as Sλ = (λI + S⁻¹)⁻¹. If 0 ∈ S(0), then 0 ∈ S⁻¹(0), then 0 ∈ (λI + S⁻¹)(0) and then 0 ∈ (λI + S⁻¹)⁻¹(0), which is to say 0 ∈ Sλ(0). We know by Example 3.23 that Sλ is monotone, single-valued and continuous (since Lipschitz continuity implies continuity). Define Tλ(x) := AᵀSλ(Ax). We know by Lemma 3.17 that Tλ is monotone. Therefore we know by Example 3.10 that Tλ is maximal. Furthermore, 0 ∈ Tλ(0).

We know by Corollary 3.25 that there is a sequence λν ↓ 0 such that Tλν converges graphically to a maximal monotone mapping T0, with the remark that the sequence does not escape to the horizon because 0 ∈ Tλν(0). We want to show that G(T0) ⊂ G(T), so that T is maximal monotone itself (and hence Tλ →g T as λ ↓ 0).

Suppose v ∈ T0(x). We know then that there exist xν → x and vν → v with vν ∈ Tλν(xν). The last part means that vν = Aᵀyν for yν = Sλν(Axν). If yν → y, we have v = Aᵀy. We also know, by the graphical convergence of Sλν to S, that y ∈ S(Ax) and thus v ∈ T(x) (and thus G(T0) ⊂ G(T)). If yν does not converge, then (along a subsequence) |yν| → ∞. We show that this runs into a conflict with our interior assumption.

Although |yν| → ∞, λνyν converges, because of Lemma 3.27:

λνyν = [I − (I + λνS)⁻¹](Axν),

where Axν → Ax and (I + λνS)⁻¹ converges uniformly to the projection PD̄ onto D̄ := cl(D(S)) by Theorem 3.8. If we use that Sλν = (λνI + S⁻¹)⁻¹, we can rewrite yν = Sλν(Axν) as meaning yν ∈ S(Axν − λνyν). We see then that the points uν := Axν − λνyν must converge to a certain point ū.

Take µν = 1/|yν|, so µν ↓ 0. We know µνvν → 0, while the sequence µνyν has a cluster point ỹ ≠ 0, and thus from vν = Aᵀyν we get 0 = Aᵀỹ. So ỹ ⊥ R(A) (so R(A) ≠ Rm). On the other hand, we have µνyν ∈ (µνS)(uν). From Theorem 3.8, we have that µνS converges graphically to ND̄ for D̄ = cl(D(S)).

Therefore ỹ ∈ ND̄(ū). From this it follows that the hyperplane


H = {u | ⟨ỹ, u − ū⟩ = 0} separates D from R(A). Under our assumption that the two sets R(A) and rint(D(S)) = rint(D) 'meet', this is impossible by Theorem 3.9, unless D ⊂ R(A). This is not possible since we have arranged that no proper subspace of Rm includes both D and R(A). ■

Now that we have finally proved Theorem 3.28, we can derive the following corollary:

Corollary 3.29. Let T : Rn1 × Rn2 ⇒ Rn1 × Rn2 be maximal monotone. Fix any x2 ∈ Rn2 and define T1 : Rn1 ⇒ Rn1 by

T1(x1) = {v1 | ∃v2 s.t. (v1, v2) ∈ T(x1, x2)}.

If x2 is such that there exists x1 ∈ Rn1 with (x1, x2) ∈ rint(D(T)), then T1 is maximal monotone.

Proof. Let A = (I, 0)ᵀ ∈ R(n1+n2)×n1 and a = (0, x2) ∈ Rn1 × Rn2. We see then that T1(x1) = AᵀT(Ax1 + a) and thus we can apply Theorem 3.28. If there is an x1 ∈ Rn1 such that (x1, x2) ∈ rint(D(T)), then, looking at R(A) + a, we see that Ax1 + a = (x1, x2) ∈ R(A) + a and therefore (R(A) + a) ∩ rint(D(T)) ≠ ∅; thus it follows from Theorem 3.28 that T1 is maximal monotone. ■

From Corollary 3.29, we want to prove that the composition of two maximal monotone relations is again maximal monotone. To prove this, we need some tools. One of the tools we need is the Cartesian product of two sets. That is, if X and Y are sets, then their Cartesian product is:

X × Y = {(x, y) | x ∈ X, y ∈ Y}.

In the following lemma, we see that (maximal) monotonicity is preserved by taking the Cartesian product of two monotone sets:

Lemma 3.30. Let R1 ⊂ X × X′ and R2 ⊂ Y × Y′ be (maximal) monotone relations. Then so is their Cartesian product:

R1 × R2 = {(x, x′, y, y′) | (x, x′) ∈ R1, (y, y′) ∈ R2}.

Proof. We know that:

⟨x1 − x2, x1′ − x2′⟩ ≥ 0,
⟨y1 − y2, y1′ − y2′⟩ ≥ 0.

By simply adding them we see that R1 × R2 is monotone too. For maximality, assume there is an e = (x̃, x̃′, ỹ, ỹ′) ∉ R1 × R2 such that (R1 × R2) ∪ {e} is monotone. Then we may assume, without loss of generality, that (x̃, x̃′) ∉ R1; since R1 is maximal monotone, there then exists a pair (x, x′) ∈ R1 with ⟨x̃ − x, x̃′ − x′⟩ < 0. Monotonicity of (R1 × R2) ∪ {e} would then mean that ⟨ỹ − y, ỹ′ − y′⟩ > 0 for all (y, y′) ∈ R2. By the maximality of R2 this would mean (ỹ, ỹ′) ∈ R2. If this is true, then we can choose (y, y′) = (ỹ, ỹ′) and find:

⟨ỹ − ỹ, ỹ′ − ỹ′⟩ = ⟨0, 0⟩ = 0,

which is in contradiction with our assumption that ⟨ỹ − y, ỹ′ − y′⟩ > 0 for all (y, y′) ∈ R2. ■


Another tool we use is the following map, which preserves (maximal) monotonicity:

Lemma 3.31. Let the map α be defined such that α : (xb, xb′, x̃b, x̃b′) ↦ (xb + x̃b, xb′ + x̃b′, xb − x̃b, xb′ − x̃b′). Then the relation R is (maximal) monotone if and only if Rα = {α(x) | x ∈ R} is (maximal) monotone.

Proof. If y = (yb, yb′, ỹb, ỹb′) ∈ Rα, then we know that there exists x = (xb, xb′, x̃b, x̃b′) ∈ R such that α(x) = y. If we want to consider the monotonicity of Rα, we need to consider the following expression:

⟨yb1 − yb2, yb1′ − yb2′⟩ + ⟨ỹb1 − ỹb2, ỹb1′ − ỹb2′⟩    (3.6)
= ⟨(xb1 − xb2) + (x̃b1 − x̃b2), (xb1′ − xb2′) + (x̃b1′ − x̃b2′)⟩ + ⟨(xb1 − xb2) − (x̃b1 − x̃b2), (xb1′ − xb2′) − (x̃b1′ − x̃b2′)⟩
= 2[⟨xb1 − xb2, xb1′ − xb2′⟩ + ⟨x̃b1 − x̃b2, x̃b1′ − x̃b2′⟩],

and thus we see that Equation 3.6 is greater than or equal to zero if and only if

⟨xb1 − xb2, xb1′ − xb2′⟩ + ⟨x̃b1 − x̃b2, x̃b1′ − x̃b2′⟩ ≥ 0,

or, in other words, Rα is monotone if and only if R is monotone. This is valid for any element of R, or even of H, and thus we see that Rα is maximal monotone if and only if R is maximal monotone. ■

The following remark is an implication of Lemma 3.31. This is the tool that we are going to use to prove the most important theorem of this section:

Remark 3.32. Consider the definition of composition as in Definition 2.5, so that x̃b = −xb and x̃b′ = xb′. If we then apply the mapping α of Lemma 3.31, writing (yb, yb′, ỹb, ỹb′) = α(xb, xb′, x̃b, x̃b′), we get:

yb = xb + x̃b = xb − xb = 0;
yb′ = xb′ + x̃b′;
ỹb = xb − x̃b;
ỹb′ = xb′ − x̃b′ = xb′ − xb′ = 0,

and thus we find that (yb, yb′, ỹb, ỹb′) = (0, xb′ + x̃b′, xb − x̃b, 0). □

Theorem 3.33. Let R1, R2 be two maximal monotone relations. Then R1 ∘ R2 is maximal monotone.

Proof. In this proof we use a number of steps to prove the theorem. First, we let the pairs (xa, xa′) and (xc, xc′) be the incoming pairs and (xb, xb′) and (x̃b, x̃b′) the outgoing pairs of R1 and R2 respectively. We want to combine the pairs (xb, xb′) and (x̃b, x̃b′) using Definition 2.5, that is:

xb = −x̃b,    xb′ = x̃b′.

First we use transformations on the pairs (xb, xb′) and (x̃b, x̃b′), as shown in Figure 3.1.


Figure 3.1. A schematic figure of the proof, where the steps are denoted as (I), (II), (III) and (IV).

(I) This is the part we just defined and which is by assumption maximal monotone.

(II) Now we take the Cartesian product of Xb × Xb′ and X̃b × X̃b′. We know by Lemma 3.30 that this relation is maximal monotone again.

(III) After this we use the transformation of Lemma 3.31. Again, maximal monotonicity is preserved.

(IV) In this part we no longer consider an element with four entries, but one with two entries, each of which itself contains two entries.

We saw in Remark 3.32 that if we take the composition of R1 and R2, we have xb + x̃b = 0 and xb′ − x̃b′ = 0. If we define x̄b = xb′ + x̃b′ and x̄̃b = xb − x̃b, then we have:

(xb, xb′, x̃b, x̃b′) ↦ (0, x̄b, x̄̃b, 0).

This is equivalent to (0, x̄b, 0, x̄̃b), which is equivalent to the pair ((0, 0), (x̄b, x̄̃b)).

Now we can apply Corollary 3.29. We saw there that if we fix x2 = (0, 0), then the relation R = {(x1, x1′)} that it induces is maximal monotone. This is exactly our case: we have x1 = (xa, xc), x1′ = (xa′, xc′), x2 = (0, 0) and x2′ = (x̄b, x̄̃b). Then R = {(x1, x1′)} = {((xa, xc), (xa′, xc′))} is maximal monotone. Therefore R, which carries the composition structure, is maximal monotone; in other words, the composition of two maximal monotone relations is maximal monotone. ■

From Theorem 3.33 we can, together with our earlier results, derive the following theorem:

Theorem 3.34. Let R1 ∘ R2 = R3. If R1 and R2 are (maximal) monotone, then so is R3. If R3 is (maximal) monotone for every R2 and a certain R1, then R1 is (maximal) monotone, and if R3 is (maximal) monotone for every R1 and a certain R2, then R2 is (maximal) monotone.

Proof. This follows from Lemma 2.7, Theorem 3.33 and Theorem 2.9.


4 The linear case

4.1 Introduction

We discuss in this section a special case of relations: linear relations. A linear relation is linear in the sense that if (x, x′), (y, y′) ∈ R and λ ∈ R, then (x + y, x′ + y′) ∈ R and (λx, λx′) ∈ R. Therefore a linear relation is a subspace. In the case of a linear relation, monotonicity is equivalent to the following characterization.

Theorem 4.1. Let D ⊂ H × H be a linear relation. Then ⟨x, y⟩ ≥ 0 for all (x, y) ∈ D if and only if D is monotone.

Proof. First assume ⟨x, y⟩ ≥ 0 for all (x, y) ∈ D. We want to show that for all (xi, yi) ∈ D (for i = 1, 2):

⟨x1 − x2, y1 − y2⟩ ≥ 0.

By the definition of linearity it follows that (x1 − x2, y1 − y2) ∈ D, and thus ⟨x1 − x2, y1 − y2⟩ ≥ 0, so D is monotone.

Conversely, assume D is monotone. We know (by the linearity of D) that (0, 0) ∈ D and thus, for all (x, y) ∈ D:

⟨x − 0, y − 0⟩ = ⟨x, y⟩ ≥ 0. ■

We defined the notion of a maximal monotone map before (see Proposition 2.4). In the linear case monotonicity and maximal monotonicity are equivalent, as is shown in the following corollary:

Corollary 4.2. Let T be a linear map. Then T is monotone if and only if it is maximal monotone.

Proof. This is a direct result of Example 3.10 with the remark that a linear map is in particular continuous.
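For a linear map T(x) = Ax on Rn, Theorem 4.1 reduces monotonicity to ⟨x, Ax⟩ ≥ 0 for all x, which holds exactly when the symmetric part (A + Aᵀ)/2 is positive semidefinite. The following sketch is our own illustration (numpy assumed) and checks this via the eigenvalues of the symmetric part.

    import numpy as np

    def is_monotone_linear(A, tol=1e-10):
        eigs = np.linalg.eigvalsh((A + A.T) / 2.0)
        return bool(eigs.min() >= -tol)

    A1 = np.array([[1.0, -2.0], [2.0, 0.5]])   # symmetric part diag(1, 0.5): monotone
    A2 = np.array([[1.0,  0.0], [0.0, -0.5]])  # symmetric part indefinite: not monotone
    print(is_monotone_linear(A1), is_monotone_linear(A2))  # True False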

We can again take the composition of two linear relations (as defined in Definition 2.5). It turns out that this composition is linear again:

Proposition 4.3. Let R1 and R2 be linear relations. Then so is their composition R1 ∘ R2.

Proof. We want to prove that if (x1, y1), (x2, y2) are elements of R1 ∘ R2, then so is (x1 + x2, y1 + y2). By Definition 2.5 we know there exist zi such that (xi, zi) ∈ R1 and (−zi, yi) ∈ R2. That means that (x1 + x2, z1 + z2) ∈ R1 and (−[z1 + z2], y1 + y2) ∈ R2, and therefore, by the definition of composition, (x1 + x2, y1 + y2) ∈ R1 ∘ R2. ■

More information about general monotone linear relations is available in, for example, [2]. We continue with a particular monotone linear relation: the Dirac structure.

4.2 Dirac structures

An example of a linear relation is the Dirac structure. Dirac structures consist of pairs of points (x, x′) ∈ X × X′, where x denotes the flow (sometimes written as f, see for example [7]) and x′ the effort (sometimes denoted as e). X and X′ are called the spaces of flows and efforts respectively. Most of the information in this and the following section is derived from [6] and [7].

Dirac structures are energy preserving, that is, the total of outgoing and incoming energy is zero. Mathematically this means that the power ⟨x, x′⟩ is zero for every pair (x, x′) in the Dirac structure.
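A standard way to obtain a Dirac structure, presented here only as an illustrative sketch (numpy assumed, example matrix ours), is as the graph of a skew-symmetric map: D = {(x, x′) | x = Jx′, J = −Jᵀ}. The power ⟨x, x′⟩ then vanishes identically, so the structure is energy preserving and in particular a monotone linear relation.

    import numpy as np

    J = np.array([[ 0.0,  1.0,  0.0],
                  [-1.0,  0.0,  2.0],
                  [ 0.0, -2.0,  0.0]])       # skew-symmetric: J = -J^T

    rng = np.random.default_rng(2)
    for _ in range(5):
        effort = rng.standard_normal(3)      # x'
        flow = J @ effort                    # x = J x'
        print(abs(float(flow @ effort)) < 1e-12)  # power <x, x'> is zero (up to rounding)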
