
Metrics on Spaces of Measures

N/A
N/A
Protected

Academic year: 2021

Share "Metrics on Spaces of Measures"

Copied!
38
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst


Wouter Slegers

Bachelor thesis

Supervisors: dr. S.C. Hille, M. Ziemlanska

Date: July 9, 2016

Mathematisch Instituut, Universiteit Leiden


Contents

Introduction

1 Lipschitz and measure spaces

1.1 A few functions, spaces and norms

1.2 Embedding of M(S) into BL(S)^∗

1.3 An extension of the Kantorovich Norm

2 A rigorous proof of the Kantorovich-Rubinstein Theorem

2.1 The Monge-Kantorovich mass transportation problem

2.2 Kantorovich Duality Theorem

2.3 The f^c and f^cc function

2.4 The Kantorovich distance

2.5 The Kantorovich-Rubinstein Theorem

3 The (im)possibility of extension of the Kantorovich-Rubinstein Theorem

3.1 The structure of the Kantorovich-Rubinstein Theorem

3.2 Analysing the separate parts

A Lower semi-continuity


Introduction

We will investigate metrics that are defined on spaces of measures. Some of these metrics can be defined only on probability measures; the Wasserstein metrics are an example. Other metrics can be defined on the larger space of finite signed measures and are given by a norm on this vector space. Some of the metrics defined on probability measures are in essence restrictions of such norm-derived metrics on the space of finite signed measures; some are not.

The main objects of study in this thesis are the Wasserstein metrics. These metrics are constructed using only measure-theoretic definitions. One of them we will study in greater detail, namely the Wasserstein metric of order 1, also referred to as the Kantorovich distance. We will take a look at the Kantorovich-Rubinstein Theorem, which tells us that the Kantorovich distance is equal to a metric that has the structure of a metric derived from a dual norm on the space of Lipschitz functions. Both of these metrics are defined only on probability measures, but we find an extension to the space of finite signed measures.

We wish to know whether, more generally, the Wasserstein metrics of order p, where p > 1, are in essence such restrictions too. We investigate whether a proof similar to that of the Kantorovich-Rubinstein Theorem is possible for these more general metrics. Such an extension of the theorem appears to be absent from the literature, which suggests it may not be possible. We carry the adapted proof as far as we can and pinpoint where it seems to stop working.

An overview of this thesis

The reader is assumed to have a general knowledge of measure theory and functional analysis.

In the first chapter we give all definitions and lemmas that form a sufficient basis for the rest of the thesis. These definitions concern Lipschitz continuous functions, which play a crucial role in the Kantorovich-Rubinstein Theorem. Furthermore we will discuss finite signed measures and their connection with the Lipschitz continuous functions. In the last section of this chapter we give an extension of the Kantorovich norm. This chapter is largely based on [7], with some aspects worked out in further detail than found there, complemented with some additional results.

In the second chapter we introduce the transportation problem, we state the Kantorovich Du- ality Theorem and our main focus in this chapter is giving a rigorous proof of the Kantorovich- Rubinstein Theorem. This chapter is mainly based on the first chapter of [9], and it is the proof of the Kantorovich-Rubinstein Theorem given there that we work out in detail.

The focus in the third chapter lies in applying the proof of the Kantorovich-Rubinstein Theorem to Wasserstein metrics of order p where p > 1, giving some solutions for problems we encounter


and identifying in which step the proof stops working.

In the Appendix we provide a note on lower semi-continuity. The first part of this chapter is based on [9, Remark 1.1.7.4] and the second part is based on [8, Theorem 3].

About this thesis

In this thesis we give an extension of the Kantorovich norm, defined on the space of probability measures, to the space of finite signed measures. Various extensions exist, such as the one given by Bogachev in [3, p. 234] and the one given by Hanin in [6]. Both extensions seem slightly artificial; the extension given in this thesis somewhat less so. This extension then also gives an extension of the metric derived from the Kantorovich norm, which we define on probability measures, to the space of finite signed measures.

We base the proof of the Kantorovich-Rubinstein Theorem on the one given by Villani in [9]; we fill various gaps in the reasoning and repair some mistakes. For one, [9, Remark 1.12] brushes over a few statements that take some work to formulate rigorously. The part given as an exercise in this remark actually turns out to be incorrect, see Example 0.1. We split this remark into parts, which we correct and work out in Chapter 2 of this thesis.

For each statement made in Chapter 2 we put extra care into determining the requirements needed for the statement to hold. This streamlines the process of applying the statements to the more general case, that of the Wasserstein metrics of order p, since we can then conveniently distinguish which statements still hold, which do not, and why. One useful addition in that respect are the functions c_X and c_Y defined in Remark 2.6, which depend on the cost function c we use. In the original statement, given in [9], it was required that c is bounded.

We need only that these functions c_X and c_Y are bounded, which is a more general situation. We use this increased generality to simplify the proof of the Kantorovich-Rubinstein Theorem, and it helps when investigating the possibility of an extension of the Kantorovich-Rubinstein Theorem in Chapter 3.

Example 0.1

Let X and Y be Polish spaces. Let c : X × Y → R_+ be a measurable, bounded and lower semi-continuous function. Then there exists a nondecreasing sequence of nonnegative, uniformly continuous functions c_l converging pointwise to c. Let ϕ : X → R be bounded and define

ϕ^c(y) := inf_{x∈X} [c(x, y) − ϕ(x)], and for any l ∈ N define ψ_l(y) := inf_{x∈X} [c_l(x, y) − ϕ(x)].

In [9, Remark 1.12] it was given as an exercise to prove that ψ_l converges pointwise to ϕ^c. The following counterexample was taken from http://math.stackexchange.com/questions/1270630, with slight modification to increase generality.

Suppose there exists a point e ∈ X that is not isolated. Take the bounded function given by

ϕ(x) := 0 if x = e, and ϕ(x) := 1 otherwise,

and for all (x, y) ∈ X × Y take c(x, y) := ϕ(x). Clearly ϕ^c(y) = inf_{x∈X} [c(x, y) − ϕ(x)] = 0 for all y ∈ Y. Suppose that c_l is a continuous function such that c_l ≤ c. Take a sequence (x_n)_{n∈N} ⊂ X such that x_n ≠ e for all n ∈ N and x_n → e. Then for any y ∈ Y we have

ψ_l(y) = inf_{x∈X} [c_l(x, y) − ϕ(x)] ≤ lim_{n→∞} [c_l(x_n, y) − ϕ(x_n)] = c_l(e, y) − 1 ≤ c(e, y) − 1 = −1.

Therefore ψ_l(y) ≤ −1 for all y ∈ Y, so the sequence ψ_l does not converge pointwise to ϕ^c.
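The counterexample can also be checked numerically. Below is a small sketch of our own (not part of the thesis) with X = Y = R and e = 0, using the continuous minorant c_l(x, y) := min(1, l|x|) of c and approximating the infima over a finite grid; the grid and all names are assumptions of the sketch.

```python
import numpy as np

def phi(x):
    # phi(x) = 0 at the non-isolated point e = 0, and 1 elsewhere
    return np.where(x == 0.0, 0.0, 1.0)

def c(x, y):
    # cost function c(x, y) := phi(x); lower semi-continuous and bounded
    return phi(x)

def c_l(x, y, l):
    # a continuous function with c_l <= c, increasing pointwise to c
    return np.minimum(1.0, l * np.abs(x))

# grid of x-values including 0 and points arbitrarily close to 0
xs = np.concatenate(([0.0], np.geomspace(1e-9, 1.0, 200)))

phi_c = np.min(c(xs, 0.0) - phi(xs))          # phi^c(y) = inf_x [c - phi] = 0
psi_l = np.min(c_l(xs, 0.0, l=10) - phi(xs))  # psi_l(y) = inf_x [c_l - phi]

print(phi_c)   # 0.0
print(psi_l)   # close to -1, as the counterexample predicts
```

The grid point closest to 0 witnesses the gap: there c_l is nearly 0 while ϕ equals 1.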


1 Lipschitz and measure spaces

The first section of this chapter is mainly focused on introducing the required definitions and notations and it contains a few basic lemmas. Section two connects some of the definitions introduced here. The last section gives an extension of the Kantorovich norm, also introduced in the first section.

1.1 A few functions, spaces and norms

We start by defining the space of Lipschitz functions. This will play a crucial role in this thesis.

Definition 1.1

Let (S, d) be a metric space. Then define

Lip(S) := { f : S → R | ∃ L ∈ R such that ∀ x, y ∈ S : |f(x) − f(y)| ≤ L d(x, y) }.

The defining property of the space Lip(S) is called the Lipschitz property. Intuitively, a Lipschitz function is a function whose slope never exceeds a certain value. In Definition 1.1 this value is L, and we have

|f(x) − f(y)| / d(x, y) ≤ L for all x, y ∈ S such that x ≠ y.

Remark 1.2: A Lipschitz function is often called Lipschitz continuous. This makes sense, since Lipschitz continuity is actually a stronger requirement than uniform continuity, which in turn is stronger than continuity. Namely, let f ∈ Lip(S) and take L > 0 as in Definition 1.1. For any ε > 0 choose δ := ε/L; this gives for any x, y ∈ S satisfying d(x, y) < δ that |f(x) − f(y)| ≤ L d(x, y) < ε, proving uniform continuity. (If L = 0, then f is constant and uniform continuity is trivial.)

For each Lipschitz function f we call its maximal slope the Lipschitz constant of f. This is the minimal constant L with which f ∈ Lip(S) satisfies the Lipschitz property. It is defined as follows.

Definition 1.3

For f ∈ Lip(S) we define

|f|_L := sup_{x,y∈S, x≠y} |f(x) − f(y)| / d(x, y).

The Lipschitz constant gives a seminorm on Lip(S). It is not a norm because |f|_L = 0 if and only if f is constant.


Lemma 1.4

A finite sum of Lipschitz functions is again Lipschitz. More precisely, let f_1, ..., f_N ∈ Lip(S); then f := Σ_{n=1}^N f_n is Lipschitz with |f|_L ≤ Σ_{n=1}^N |f_n|_L.

Proof. We find for every x, y ∈ S, by the Lipschitz property, that

|f(x) − f(y)| = |Σ_{n=1}^N f_n(x) − Σ_{n=1}^N f_n(y)| ≤ Σ_{n=1}^N |f_n(x) − f_n(y)| ≤ Σ_{n=1}^N |f_n|_L d(x, y).

Since L := Σ_{n=1}^N |f_n|_L < ∞ we have L ∈ R, which means that f is Lipschitz with |f|_L ≤ L.

Lemma 1.5 ([5, Lemma 4])

For f_1, ..., f_n ∈ Lip(S) we take

g(x) := min_{1≤i≤n} f_i(x) and h(x) := max_{1≤i≤n} f_i(x).

Then g, h ∈ Lip(S) and

max(|g|_L, |h|_L) ≤ max_{1≤i≤n} |f_i|_L.

Proof. We prove the case n = 2; the general case then follows by induction.

Let f_1, f_2 ∈ Lip(S), take g := min(f_1, f_2) and take M := max(|f_1|_L, |f_2|_L). Let x, y ∈ S. We get

max(|f_1(x) − f_1(y)|, |f_2(x) − f_2(y)|) ≤ M d(x, y). (1.1)

This proves |g(x) − g(y)| ≤ M d(x, y) in the case where g(x) = f_1(x) and g(y) = f_1(y), and in the case where g(x) = f_2(x) and g(y) = f_2(y).

If g(x) = f_1(x) and g(y) = f_2(y), then f_1(x) ≤ f_2(x) and f_2(y) ≤ f_1(y). Hence we get

f_1(x) − f_1(y) ≤ f_1(x) − f_2(y) ≤ f_2(x) − f_2(y),

so by (1.1) we get |f_1(x) − f_2(y)| ≤ M d(x, y). By exchanging x and y in the preceding argument we also cover the remaining case.

By definition, for a function f ∈ Lip(S) it follows that −f ∈ Lip(S) with |−f|_L = |f|_L. Hence for h = max(f_1, f_2) = −min(−f_1, −f_2) we also get h ∈ Lip(S) with |h|_L ≤ M.
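Lemma 1.5 can be illustrated numerically: estimating Lipschitz constants from difference quotients on sample points gives lower bounds that respect the stated inequality. A small sketch of our own, with hypothetical functions f1 and f2 on [0, 1] with the Euclidean metric:

```python
import numpy as np

def lip_estimate(f, xs):
    """Lower bound on |f|_L from pairwise difference quotients on sample points."""
    fx = f(xs)
    num = np.abs(fx[:, None] - fx[None, :])
    den = np.abs(xs[:, None] - xs[None, :])
    mask = den > 0
    return np.max(num[mask] / den[mask])

xs = np.linspace(0.0, 1.0, 400)
f1 = lambda x: 2.0 * x           # |f1|_L = 2
f2 = lambda x: 1.0 - 3.0 * x     # |f2|_L = 3

g = lambda x: np.minimum(f1(x), f2(x))   # pointwise min
h = lambda x: np.maximum(f1(x), f2(x))   # pointwise max

bound = max(lip_estimate(f1, xs), lip_estimate(f2, xs))  # = 3
assert max(lip_estimate(g, xs), lip_estimate(h, xs)) <= bound + 1e-9
```

Since the estimate is a supremum over finitely many difference quotients, it is only a lower bound on the true Lipschitz constant, which is enough to exercise the inequality of the lemma.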

Lemma 1.6

Let ∅ ≠ A ⊂ S. Then for any x, y ∈ S the following holds:

|d(x, A) − d(y, A)| ≤ d(x, y).

Proof. If x ∈ A or y ∈ A then this is clear. Assume x, y ∉ A and let (x_n)_{n∈N} ⊂ A be such that lim_{n→∞} d(x, x_n) = d(x, A). For any n ∈ N we find d(y, A) ≤ d(y, x_n) ≤ d(x, x_n) + d(x, y), hence d(y, A) − d(x, x_n) ≤ d(x, y). In the limit n → ∞ we get d(y, A) − d(x, A) ≤ d(x, y). By symmetry we also get d(x, A) − d(y, A) ≤ d(x, y), proving the claim.

Remark 1.7: Note that Lemma 1.6 proves that x ↦ d(x, A) is a Lipschitz function with |d(·, A)|_L ≤ 1. Note also that this makes x ↦ d(x, y) Lipschitz continuous for any y ∈ S.
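A quick numerical sanity check of Lemma 1.6, a sketch of our own with S = R and a finite (hence closed) random set A; the helper `dist` is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=50)           # a finite subset of R

def dist(x):
    # d(x, A) = min over a in A of |x - a|
    return np.min(np.abs(A - x))

# Lemma 1.6: |d(x, A) - d(y, A)| <= d(x, y) for all x, y
for _ in range(1000):
    x, y = rng.normal(size=2)
    assert abs(dist(x) - dist(y)) <= abs(x - y) + 1e-12
```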


We call a function f ∈ Lip(S) with |f|_L ≤ 1 a 1-Lipschitz function.

There are a few commonly used norms on Lip(S) related to the seminorm |·|_L. One norm on Lip(S) was introduced in [7]. It is given in the following definition.

Definition 1.8

Let e ∈ S. We define the norm ‖·‖_e on Lip(S) by

‖f‖_e := |f(e)| + |f|_L.

Let e ∈ S. For any f ∈ Lip(S) we have, by the Lipschitz property, for all x ∈ S that

|f(x)| ≤ |f(x) − f(e)| + |f(e)| ≤ |f|_L d(x, e) + |f(e)|. (1.2)

For e, e′ ∈ S we find by (1.2)

‖f‖_e = |f(e)| + |f|_L ≤ |f|_L d(e, e′) + |f(e′)| + |f|_L = |f|_L (1 + d(e, e′)) + |f(e′)| ≤ (|f|_L + |f(e′)|)(1 + d(e, e′)) = ‖f‖_{e′} (1 + d(e, e′)).

Consequently ‖·‖_e and ‖·‖_{e′} are equivalent. From now on we take e ∈ S fixed and denote the normed space Lip(S) with norm ‖·‖_e by Lip_e(S).

Let BL(S) denote the space of bounded Lipschitz maps.

Definition 1.9

We define the norm ‖·‖_BL on BL(S) by

‖f‖_BL := ‖f‖_∞ + |f|_L.

We write ‖·‖_e^∗ for the dual norm of ‖·‖_e on Lip_e(S)^∗, i.e. for φ ∈ Lip_e(S)^∗ we have

‖φ‖_e^∗ := sup { |φ(f)| : f ∈ Lip_e(S), ‖f‖_e ≤ 1 }.

We will write ‖·‖_BL^∗ for the dual norm of ‖·‖_BL on BL(S)^∗. Let P(S) denote the space of Borel probability measures on S.

Let M(S) denote the finite signed Borel measures on S, i.e. all µ : B → R satisfying µ(∅) = 0 and countable additivity, but not necessarily the nonnegativity requirement that would make µ an ordinary measure. Here B denotes the collection of Borel sets, and we use R rather than R ∪ {±∞} because we want µ to be finite: −∞ < µ(S) < ∞.

Let M+(S) denote the subset of M(S) that consists of the finite positive measures.

For every measure µ ∈ M(S) there exists a unique decomposition µ = µ^+ − µ^−, where µ^+, µ^− ∈ M+(S). This is called the Jordan decomposition.

We let M1(S) be the set of finite signed measures with finite first moment, i.e.

M1(S) := { µ ∈ M(S) : ∫_S d(x, e) d|µ|(x) < ∞ }.

Similarly, let P1(S) ⊂ M1(S) be the set of probability measures with finite first moment.

We write M01(S) for the subspace of M1(S) consisting of µ ∈ M1(S) such that µ(S) = 0.

For any (positive or signed) measure µ we write L1(dµ) for the space of all measurable functions f : S → R that are µ-integrable.

We will write ‖·‖_TV for the total variation norm on M(S), i.e. for µ ∈ M(S) we have ‖µ‖_TV := |µ|(S) = µ^+(S) + µ^−(S).


We write ‖·‖_1 for the norm on M1(S) given by

‖µ‖_1 := ∫_S max(1, d(x, e)) d|µ|.

Lemma 1.10

Let f ∈ Lip_e(S) and µ ∈ M1(S). Then f is µ-integrable.

Proof. Since f is Lipschitz continuous it is continuous, hence (Borel-)measurable. By (1.2) it follows that

∫_S |f(x)| d|µ| ≤ ∫_S (|f(e)| + |f|_L d(x, e)) d|µ| = |f(e)| ‖µ‖_TV + |f|_L ∫_S d(x, e) d|µ| < ∞,

since µ has finite first moment.

For a function f we will write [f(x)]_+ for max(f(x), 0). We will now introduce a function that will find various uses throughout this thesis.

Lemma 1.11

For ∅ ≠ A ⊂ S and n ∈ N we define the function f_{n,A} : S → [0, 1] by f_{n,A}(x) := [1 − n d(x, A)]_+. Then f_{n,A} ∈ Lip(S) with |f_{n,A}|_L ≤ n. If the complement of A contains a point x such that 0 < d(x, A) ≤ 1/n, then we have |f_{n,A}|_L = n. Moreover, f_{n,A} converges pointwise to 1_A if and only if A is closed.

Proof. From Remark 1.7 we get that x ↦ d(x, A) is Lipschitz continuous with |d(·, A)|_L ≤ 1, hence from Lemma 1.4 and Lemma 1.5 we get that f_{n,A} is Lipschitz with |f_{n,A}|_L ≤ n.

If there exists an x ∈ S such that 0 < d(x, A) ≤ 1/n, let (x_k)_{k∈N} ⊂ A be such that lim_{k→∞} d(x, x_k) = d(x, A). Then

|f_{n,A}(x) − f_{n,A}(x_k)| / d(x, x_k) = |[1 − n d(x, A)] − 1| / d(x, x_k) = n d(x, A) / d(x, x_k) → n as k → ∞,

so we find that |f_{n,A}|_L = n.

For x in the closure of A we have d(x, A) = 0, hence f_{n,A}(x) = 1 for all n ∈ N. For x outside the closure of A we have d(x, A) > 0, so we get f_{n,A}(x) = [1 − n d(x, A)]_+ = 0 for n sufficiently large. That means that f_{n,A} converges pointwise to 1_A if and only if A equals its closure, i.e. if and only if A is closed.
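The behaviour of f_{n,A} can be illustrated with S = R and the closed set A = [0, 1], for which d(x, A) has an explicit formula. This is a sketch of our own, not from the thesis:

```python
import numpy as np

def dist_A(x):
    # distance to the closed set A = [0, 1] in (R, |.|)
    return np.maximum(0.0, np.maximum(-x, x - 1.0))

def f_nA(x, n):
    # f_{n,A}(x) = [1 - n d(x, A)]_+
    return np.maximum(0.0, 1.0 - n * dist_A(x))

# pointwise convergence to the indicator of A as n grows
for x in [-0.5, 0.0, 0.5, 1.0, 1.2]:
    limit = 1.0 if 0.0 <= x <= 1.0 else 0.0
    print(x, f_nA(x, n=10**6), limit)
```

For x in A the value is exactly 1 for every n; for x outside A it drops to 0 once n exceeds 1/d(x, A).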

Definition 1.12 (Kantorovich norm)

For σ ∈ M01(S) we define

‖σ‖_KR := sup { ∫_S f dσ : f ∈ Lip(S), |f|_L ≤ 1 }.

Lemma 1.13

The function ‖·‖_KR defines a norm on M01(S).

Proof. Let σ ∈ M01(S) be such that ‖σ‖_KR = 0. We will prove that σ = 0. Let C ⊂ S be closed. For any n ∈ N we define f_n(x) := [1 − n d(x, C)]_+. From Lemma 1.11 we obtain that f_n is Lipschitz and that the pointwise limit of f_n is 1_C, since C is closed. There does not exist an n ∈ N such that ∫_S f_n dσ ≠ 0: otherwise we would have

∫_S (f_n/n) dσ ≠ 0, whilst |f_n/n|_L ≤ 1,

as |f_n|_L ≤ n, contradicting that ‖σ‖_KR = 0. Hence we get that

σ(C) = ∫_S 1_C dσ = ∫_S lim_{n→∞} f_n dσ = lim_{n→∞} ∫_S f_n dσ = 0

by Lebesgue's Dominated Convergence Theorem, using that |f_n| ≤ 1 for all n ∈ N and that the constant function 1 is σ-integrable since σ is finite. Since the closed sets generate the Borel sets, this proves that σ = 0. It is clear that if σ = 0 we get ‖σ‖_KR = 0. The remaining properties follow easily.

Remark 1.14: The norm ‖·‖_KR can only be defined on M01(S) and not on M1(S), see Remark 1.15. Extensions of this norm to the space M1(S) exist: one is given by Hanin in [6] and another is given by Bogachev in [3, p. 234]. We will work with yet another extension of this norm, which we provide in Section 1.3.

Remark 1.15: The reason we cannot define ‖·‖_KR on all of M1(S) is the following. Let σ ∈ M1(S) be such that σ(S) ≠ 0. For any n ∈ Z the constant function f_n ≡ n on S is in Lip(S), and furthermore |f_n|_L = 0 ≤ 1 since it is constant. We then find

∫_S f_n dσ = n σ(S) → ∞

by letting n → ∞ if σ(S) > 0 and n → −∞ if σ(S) < 0. This means that the supremum from Definition 1.12 does not exist. Note that this seems to be forgotten in [9, p. 35].

Definition 1.16 (The d_KR metric)

On P1(S) we define the metric d_KR by

d_KR(µ, ν) := ‖µ − ν‖_KR.

Remark 1.17: For Definition 1.16 we can use the norm ‖·‖_KR to define d_KR, since for two probability measures µ, ν ∈ P1(S) we have µ − ν ∈ M01(S).

At this point, the metric d_KR on the convex set P1(S) does not (yet) derive from a norm on a vector space enveloping P1(S), i.e. d_KR(µ, ν) = ‖µ − ν‖ for some norm ‖·‖ on M1(S). We know ‖·‖_KR is not a norm on M1(S) by Remark 1.15. In Section 1.3, Theorem 1.26 in particular, we shall show that d_KR does indeed derive from a norm on M1(S). Even though the norm that defines d_KR is only defined on M01(S), not on P1(S), we do get that d_KR is a metric. The proof is very similar to how one proves that any norm yields a metric when both are defined on the same space.

Do not let the subscript KR (which stands for Kantorovich-Rubinstein), or the fact that d_KR is derived from the Kantorovich norm, lead you to confuse this metric with the Kantorovich distance, which has a quite different definition that we will give later on.

Remark 1.18: Any Borel measure µ on a metric space is inner regular [3, Theorem 7.1.7], which means that for any Borel set A ⊂ S we have

µ(A) = sup { µ(K) : K closed, K ⊂ A }.

Lemma 1.19

Let µ ∈ M(S). The set BL(S) lies dense in L1(dµ) with respect to the seminorm ‖·‖_{L1}, i.e. for every f ∈ L1(dµ) there exists a sequence (f_n)_{n∈N} ⊂ BL(S) such that

lim_{n→∞} ‖f − f_n‖_{L1} = lim_{n→∞} ∫_S |f − f_n| d|µ| = 0. (1.3)


Proof. Lipschitz continuous functions are continuous, hence (Borel-)measurable. Any bounded measurable function is µ-integrable since µ is finite. Hence we have BL(S) ⊂ L1(dµ).

Since for any µ ∈ M(S) a function f is µ-integrable if and only if f is |µ|-integrable, and since |µ| ∈ M+(S), we will, for proving (1.3), without loss of generality take µ to be a finite positive measure. Let g : S → [0, ∞] be µ-integrable. Then there exists a nondecreasing sequence ϕ_n of nonnegative step functions converging pointwise to g.

Let n ∈ N. We can write the step function as ϕ_n = Σ_{i=1}^{N_n} α_i 1_{A_i}, where the A_i are pairwise disjoint Borel sets, α_i ∈ R_+ and N_n ∈ N. Take 1 ≤ i ≤ N_n; we will consider the function α_i 1_{A_i}.

By Remark 1.18 we can take a sequence (C_{k,i})_{k∈N} such that for each k ∈ N the set C_{k,i} ⊂ A_i is closed and lim_{k→∞} µ(C_{k,i}) = µ(A_i). For each k ∈ N we have 1_{C_{k,i}} ≤ 1_{A_i}, so this gives

lim_{k→∞} ∫_S |1_{A_i} − 1_{C_{k,i}}| dµ = lim_{k→∞} ∫_S (1_{A_i} − 1_{C_{k,i}}) dµ = lim_{k→∞} [µ(A_i) − µ(C_{k,i})] = 0. (1.4)

Let k ∈ N. Define the function σ_{k,m,i}(x) := [1 − m d(x, C_{k,i})]_+ for any m ∈ N. Clearly σ_{k,m,i} is bounded, and by Lemma 1.11 we find that σ_{k,m,i} is Lipschitz. The lemma also tells us that for any x ∈ S we get

lim_{m→∞} σ_{k,m,i}(x) = 1_{C_{k,i}}(x),

since C_{k,i} is closed. Furthermore we have by definition that σ_{k,m,i}(x) = 1 for x ∈ C_{k,i}, so we get for any m ∈ N that 1_{C_{k,i}} ≤ σ_{k,m,i}. Hence we get for any k ∈ N that

lim_{m→∞} ∫_S |1_{C_{k,i}} − σ_{k,m,i}| dµ = lim_{m→∞} ∫_S (σ_{k,m,i} − 1_{C_{k,i}}) dµ = ∫_S 1_{C_{k,i}} dµ − ∫_S 1_{C_{k,i}} dµ = 0. (1.5)

Let l ∈ N. By (1.4) we can take k_l ∈ N such that

∫_S |1_{A_i} − 1_{C_{k_l,i}}| dµ < 1/(2l).

For this k_l we can by (1.5) take m_l ∈ N such that

∫_S |1_{C_{k_l,i}} − σ_{k_l,m_l,i}| dµ < 1/(2l).

Thus we construct a sequence ((k_l, m_l))_{l∈N} ⊂ N × N. We find for any l ∈ N that

∫_S |1_{A_i} − σ_{k_l,m_l,i}| dµ = ∫_S |1_{A_i} − 1_{C_{k_l,i}} + 1_{C_{k_l,i}} − σ_{k_l,m_l,i}| dµ ≤ ∫_S |1_{A_i} − 1_{C_{k_l,i}}| dµ + ∫_S |1_{C_{k_l,i}} − σ_{k_l,m_l,i}| dµ < 1/(2l) + 1/(2l) = 1/l,

proving that

lim_{l→∞} ∫_S |1_{A_i} − σ_{k_l,m_l,i}| dµ = 0.

Now, for any l, n ∈ N we define, as described above,

ψ_{l,n} := Σ_{i=1}^{N_n} α_i σ_{k_l,m_l,i}.

By Lemma 1.4 we get that ψ_{l,n} is Lipschitz. Clearly ψ_{l,n} is bounded, and we get for any n ∈ N that

lim_{l→∞} ∫_S |ϕ_n − ψ_{l,n}| dµ = lim_{l→∞} ∫_S |Σ_{i=1}^{N_n} α_i 1_{A_i} − Σ_{i=1}^{N_n} α_i σ_{k_l,m_l,i}| dµ ≤ lim_{l→∞} ∫_S Σ_{i=1}^{N_n} α_i |1_{A_i} − σ_{k_l,m_l,i}| dµ = Σ_{i=1}^{N_n} α_i lim_{l→∞} ∫_S |1_{A_i} − σ_{k_l,m_l,i}| dµ = 0. (1.6)

Let t ∈ N. Since ϕ_n is a nondecreasing sequence converging pointwise to g, we can, in a similar way to (1.5), prove that we may take n_t ∈ N such that

∫_S |g − ϕ_{n_t}| dµ < 1/(2t).

By (1.6) we can, for this n_t, take l_t ∈ N such that

∫_S |ϕ_{n_t} − ψ_{l_t,n_t}| dµ < 1/(2t).

Therefore we get for any t ∈ N

∫_S |g − ψ_{l_t,n_t}| dµ = ∫_S |g − ϕ_{n_t} + ϕ_{n_t} − ψ_{l_t,n_t}| dµ ≤ ∫_S |g − ϕ_{n_t}| dµ + ∫_S |ϕ_{n_t} − ψ_{l_t,n_t}| dµ < 1/(2t) + 1/(2t) = 1/t.

Hence

lim_{t→∞} ∫_S |g − ψ_{l_t,n_t}| dµ = 0,

which means that we have constructed a sequence (ψ_{l_t,n_t})_{t∈N} ⊂ BL(S) such that lim_{t→∞} ‖g − ψ_{l_t,n_t}‖_{L1} = 0.

For any f ∈ L1(dµ) we have that f^+ and f^− are nonnegative integrable functions. Hence, by what was just proven, we can construct sequences (f_n^+)_{n∈N} ⊂ BL(S) and (f_n^−)_{n∈N} ⊂ BL(S) such that

lim_{n→∞} ‖f^+ − f_n^+‖_{L1} = lim_{n→∞} ‖f^− − f_n^−‖_{L1} = 0.

Now taking f_n := f_n^+ − f_n^− gives

lim_{n→∞} ‖f − f_n‖_{L1} ≤ lim_{n→∞} ‖f^+ − f_n^+‖_{L1} + lim_{n→∞} ‖f^− − f_n^−‖_{L1} = 0,

proving the result.


We will end this section by giving a lemma that will be quite useful later on. Note that the following lemma does not require the set on which the functions are defined to be a metric space.

Lemma 1.20

Let X ≠ ∅ be any set. For f, g, h : X → R we have

| inf_{x∈X} [f(x) + h(x)] − inf_{x∈X} [g(x) + h(x)] | ≤ sup_{x∈X} |f(x) − g(x)|,

provided f, g and h are such that said supremum and infima are finite.

Proof. Without loss of generality we assume that inf_{x∈X} [f(x) + h(x)] ≥ inf_{x∈X} [g(x) + h(x)]. Let ε > 0 and choose x_0 ∈ X such that

g(x_0) + h(x_0) < inf_{x∈X} [g(x) + h(x)] + ε, so −[g(x_0) + h(x_0) − ε] > −inf_{x∈X} [g(x) + h(x)].

We get

| inf_{x∈X} [f(x) + h(x)] − inf_{x∈X} [g(x) + h(x)] |
= inf_{x∈X} [f(x) + h(x)] − inf_{x∈X} [g(x) + h(x)]
≤ f(x_0) + h(x_0) − inf_{x∈X} [g(x) + h(x)]
< f(x_0) + h(x_0) − [g(x_0) + h(x_0) − ε]
= f(x_0) − g(x_0) + ε
≤ |f(x_0) − g(x_0)| + ε
≤ sup_{x∈X} |f(x) − g(x)| + ε.

Since the last inequality holds for any ε > 0, the result follows.
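Lemma 1.20 is easy to check numerically on a finite set, where the infima and supremum become plain minima and maxima. A sketch of our own, with randomly chosen f, g and h:

```python
import numpy as np

rng = np.random.default_rng(0)
f, g, h = rng.normal(size=(3, 100))   # arbitrary functions on a 100-point set X

# | min(f + h) - min(g + h) | <= max |f - g|
lhs = abs(np.min(f + h) - np.min(g + h))
rhs = np.max(np.abs(f - g))
assert lhs <= rhs
```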

1.2 Embedding of M(S) into BL(S)^∗

Each µ ∈ M(S) defines I_µ ∈ BL(S)^∗ given by

I_µ(f) = ∫_S f dµ. (1.7)

Note that ∫_S |f| d|µ| < ∞ holds since f is bounded and µ is finite. For any µ ∈ M(S) the functional I_µ is continuous. Namely, for linear maps between normed linear spaces it is enough to find a k ∈ R such that |I_µ(f)| ≤ k for all f ∈ BL(S) with ‖f‖_BL ≤ 1. We find such a k in the following lemma.

Lemma 1.21

For µ ∈ M(S) we have ‖I_µ‖_BL^∗ ≤ ‖µ‖_TV.


Proof. We find for µ ∈ M(S) that

‖I_µ‖_BL^∗ = sup { |∫_S f dµ| : ‖f‖_BL ≤ 1 } ≤ sup { ∫_S |f| d|µ| : ‖f‖_BL ≤ 1 } ≤ sup { ∫_S ‖f‖_∞ d|µ| : ‖f‖_BL ≤ 1 } ≤ ∫_S 1 d|µ| = |µ|(S) = ‖µ‖_TV.

Lemma 1.22

For µ ∈ M+(S) we have ‖I_µ‖_BL^∗ = ‖µ‖_TV.

Proof. From Lemma 1.21 we obtain ‖I_µ‖_BL^∗ ≤ ‖µ‖_TV. Since the constant function 1 is in BL(S) with ‖1‖_BL = 1, we find

‖µ‖_TV = |µ|(S) = µ(S) = ∫_S 1 dµ ≤ ‖I_µ‖_BL^∗,

because µ is a positive measure. This proves the lemma.

We define the map χ : M(S) → BL(S)^∗ by µ ↦ I_µ, where I_µ ∈ BL(S)^∗ is defined by (1.7).

Lemma 1.23

The map χ is injective.

Proof. Let µ, ν ∈ M(S) be such that µ ≠ ν. Take a Borel set A ⊂ S such that µ(A) ≠ ν(A); then A ≠ ∅. We assume that I_µ = I_ν and work towards a contradiction.

Let B ⊂ S be closed. We define f_n(x) := [1 − n d(x, B)]_+ for all n ∈ N. By Lemma 1.11 we find that f_n is Lipschitz continuous with |f_n|_L ≤ n and that the pointwise limit of f_n is 1_B. Now using the assumption that I_µ = I_ν we find that

µ(B) = ∫_S 1_B dµ = ∫_S lim_{n→∞} f_n dµ = lim_{n→∞} ∫_S f_n dµ = lim_{n→∞} I_µ(f_n) = lim_{n→∞} I_ν(f_n) = lim_{n→∞} ∫_S f_n dν = ∫_S lim_{n→∞} f_n dν = ∫_S 1_B dν = ν(B),

by Lebesgue's Dominated Convergence Theorem, using that |f_n| ≤ 1 for all n ∈ N and that the constant function 1 is both µ- and ν-integrable, since µ and ν are finite. By Remark 1.18 we know µ and ν to be inner regular, so we find that

µ(A) = sup { µ(B) : B ⊂ A, B closed } = sup { ν(B) : B ⊂ A, B closed } = ν(A),

giving a contradiction. We conclude that I_µ ≠ I_ν, hence χ is injective.

Similar to χ, we define ξ : M1(S) → Lip_e(S)^∗ given by µ ↦ I_µ, where I_µ ∈ Lip_e(S)^∗ is given by (1.7). In the next lemma we prove that ξ is an embedding of M1(S) into Lip_e(S)^∗. Note that we can only find such an embedding for M1(S), not M(S): when f is Lipschitz we want either f to be bounded (as in the M(S) → BL(S)^∗ case) or µ to have finite first moment, to ensure that I_µ(f) = ∫_S f dµ is finite.


Lemma 1.24

The map ξ is injective and for µ ∈ M1(S) we find that ‖I_µ‖_e^∗ ≤ ‖µ‖_1.

Proof. If for µ ∈ M1(S) we have I_µ(f) = 0 for all f ∈ Lip_e(S), then we also have I_µ(f) = 0 for all f ∈ BL(S), since BL(S) ⊂ Lip_e(S); hence µ = 0 by Lemma 1.23, proving injectivity of ξ.

The second statement follows since for any f ∈ Lip_e(S) we find for all x ∈ S that

|f(x)| ≤ |f(x) − f(e)| + |f(e)| ≤ |f|_L d(x, e) + |f(e)| ≤ (|f|_L + |f(e)|) max(1, d(x, e)) ≤ ‖f‖_e max(1, d(x, e)),

hence we get

|∫_S f dµ| ≤ ∫_S |f| d|µ| ≤ ‖f‖_e ∫_S max(1, d(x, e)) d|µ| = ‖f‖_e ‖µ‖_1,

proving the inequality.

Remark 1.25: In this thesis we are mostly interested in norms and metrics on measures. The embedding ξ of M1(S) into Lip_e(S)^∗ is a valuable result in this respect. Namely, consider the dual norm ‖·‖_e^∗ we defined on Lip_e(S)^∗. This now gives a norm on M1(S): for µ ∈ M1(S) take

‖µ‖_e^∗ := ‖I_µ‖_e^∗.

For this to define a norm on M1(S) we need ξ to be an embedding, since then ‖µ‖_e^∗ = 0 if and only if µ = 0. The other norm properties follow easily.

1.3 An extension of the Kantorovich Norm

We gave the definition of the norm ‖·‖_KR on M01(S) in Definition 1.12. This norm can be extended to the space M1(S). In the following theorem we prove that the norm ‖·‖_e^∗ on M1(S) from Remark 1.25 gives us such an extension.

Theorem 1.26

Let µ ∈ M01(S). Then we find that

‖µ‖_KR = ‖µ‖_e^∗.

Proof. Recall that

‖µ‖_KR = sup { ∫_S f dµ : f ∈ Lip(S), |f|_L ≤ 1 }

and

‖µ‖_e^∗ = sup { ∫_S f dµ : f ∈ Lip_e(S), ‖f‖_e ≤ 1 },

from which it follows directly that ‖µ‖_KR ≥ ‖µ‖_e^∗, since ‖f‖_e = |f(e)| + |f|_L ≥ |f|_L.

To prove the other inequality we consider a sequence (f_n)_{n∈N} ⊂ Lip(S) such that |f_n|_L ≤ 1 and

lim_{n→∞} ∫_S f_n dµ = ‖µ‖_KR.


We define for all n ∈ N functions g_n ∈ Lip_e(S) given by g_n(x) := f_n(x) − f_n(e) for all x ∈ S. Note that both |g_n|_L = |f_n|_L and |g_n(e)| = |f_n(e) − f_n(e)| = 0 hold, hence ‖g_n‖_e ≤ 1.

Since µ ∈ M01(S) we have µ(S) = 0, hence the following holds:

∫_S g_n dµ = ∫_S (f_n − f_n(e)) dµ = ∫_S f_n dµ − ∫_S f_n(e) dµ = ∫_S f_n dµ − f_n(e) µ(S) = ∫_S f_n dµ.

This implies that

‖µ‖_e^∗ ≥ lim_{n→∞} ∫_S g_n dµ = lim_{n→∞} ∫_S f_n dµ = ‖µ‖_KR.

We conclude that ‖µ‖_KR = ‖µ‖_e^∗.

Remark 1.27: Theorem 1.26 not only gives an extension of the norm ‖·‖_KR to the space M1(S); we also get an extension of d_KR, a metric on P1(S), to the space M1(S). Namely, for µ, ν ∈ M1(S), ‖µ − ν‖_e^∗ defines a metric. So for µ, ν ∈ P1(S) we get by Theorem 1.26 that

d_KR(µ, ν) = ‖µ − ν‖_KR = ‖µ − ν‖_e^∗

holds, giving an extension of d_KR to M1(S).


2 A rigorous proof of the Kantorovich-Rubinstein Theorem

In this chapter we will introduce the Kantorovich distance. This is a metric on probability measures, based on the Monge-Kantorovich mass transportation problem. Thereafter we will carefully study the proof of the Kantorovich-Rubinstein Theorem, which shows that this metric is equal to the d_KR metric defined earlier. We will first have a look at the transportation problem.

2.1 The Monge-Kantorovich mass transportation problem

A Polish space is a topological space that is metrizable in such a way that it becomes a complete, separable metric space. Any metric that metrizes the space in this way is called admissible.

Let X and Y be Polish spaces. Let µ ∈ P(X) and ν ∈ P(Y). Let d_X and d_Y be admissible metrics on X and Y respectively. Furthermore, X × Y is equipped with the Borel σ-algebra corresponding to the product (metric) topology.

Definition 2.1

A cost function is a nonnegative, measurable function c : X × Y → R_+ ∪ {∞}.

The Monge-Kantorovich problem can be described as follows. We view µ as representing a distribution of sand on the space X, where µ(A) denotes the amount of sand on the subset A ⊂ X. Similarly, ν represents a hole on Y where sand can be placed; there is room for ν(B) sand in the subset B ⊂ Y. Transporting sand from x ∈ X to y ∈ Y costs c(x, y). Minimizing the cost of transporting all sand from X to Y is known as the Monge-Kantorovich mass transportation problem, see Figure 2.1. To define the minimal cost of transportation we first need a transference plan. This is a measure π on X × Y; now π(A × B) tells us how much sand will be moved from A ⊂ X to B ⊂ Y. We consider an example (Figure 2.2) with the sets X = {x_1, x_2, x_3} and Y = {y_1, y_2, y_3}. We denote the amount of sand moved from x_i to y_j by a_ij. We want all sand from x_i to be moved to somewhere on Y. We express this by µ(x_i) = a_i1 + a_i2 + a_i3. When we translate this into the language of transference plans we write µ(x_i) = π({x_i} × Y). Likewise we want the hole at y_j to be completely filled up, for which we write ν(y_j) = π(X × {y_j}).


Figure 2.1: Monge-Kantorovich's mass transportation problem.

Figure 2.2: A simplified transportation plan.


The continuous case, for arbitrary Polish spaces X and Y, is quite similar. The requirement that the whole pile of sand is to be emptied and the complete hole filled up comes down to π having to satisfy

π(A × Y ) = µ(A) and π(X × B) = ν(B) (2.1)

for all measurable subsets A ⊂ X and B ⊂ Y .

In our example the cost of transportation would be given by

Σ_{i=1}^3 Σ_{j=1}^3 a_ij · c(x_i, y_j),

which we write as

Σ_{i=1}^3 Σ_{j=1}^3 π({x_i} × {y_j}) · c(x_i, y_j).

In the continuous case the cost of transportation by transference plan π is I(π), given by

I(π) := ∫_{X×Y} c(x, y) dπ(x, y).

The Monge-Kantorovich mass transportation problem concerns finding the minimal cost of transporting this mass. This optimal transportation cost T_c(µ, ν) is thus given by

T_c(µ, ν) := inf_{π∈Π(µ,ν)} I(π),

where Π(µ, ν) is the set of all admissible transference plans, i.e. the set of all measures π on X × Y satisfying (2.1). Note that every transference plan is also a probability measure, since (2.1) implies π(X × Y) = µ(X) = 1.
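In the finite setting of Figure 2.2 the optimal transportation cost is a linear program: minimize Σ_ij π_ij c(x_i, y_j) subject to the marginal constraints (2.1). The following is a sketch of our own using scipy (the instance is hypothetical); for two two-point uniform measures on R with c(x, y) = |x − y| every grain of sand must travel distance 1.

```python
import numpy as np
from scipy.optimize import linprog

def transport_cost(mu, nu, C):
    """T_c(mu, nu) = min sum_ij pi_ij C_ij over pi >= 0 with
    row marginals mu and column marginals nu (a finite LP)."""
    m, n = C.shape
    A_eq, b_eq = [], []
    for i in range(m):                        # pi({x_i} x Y) = mu_i
        row = np.zeros((m, n)); row[i, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(mu[i])
    for j in range(n):                        # pi(X x {y_j}) = nu_j
        col = np.zeros((m, n)); col[:, j] = 1.0
        A_eq.append(col.ravel()); b_eq.append(nu[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (m * n), method="highs")
    return res.fun

xs, ys = np.array([0.0, 1.0]), np.array([1.0, 2.0])
C = np.abs(xs[:, None] - ys[None, :])         # c(x, y) = |x - y|
print(transport_cost([0.5, 0.5], [0.5, 0.5], C))   # ≈ 1.0
```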

2.2 Kantorovich Duality Theorem

This minimization problem allows a so-called dual representation. The theorem that describes this is called the Kantorovich Duality Theorem. As a preparation for the Kantorovich Duality Theorem we define

J(f, g) := ∫_X f dµ + ∫_Y g dν

for (f, g) ∈ L1(dµ) × L1(dν), and the set

Φ_c := { (f, g) ∈ L1(dµ) × L1(dν) : f(x) + g(y) ≤ c(x, y) for µ-a.e. x ∈ X and ν-a.e. y ∈ Y }.

Since L1(dµ) denotes the space of all measurable functions f : X → R that are µ-integrable, every f ∈ L1(dµ) is everywhere defined, hence f(x) has a meaning; we do not consider equivalence classes of functions equal µ-almost everywhere.

Theorem 2.2 (Kantorovich Duality Theorem)

Let X and Y be Polish spaces, let µ ∈ P(X), ν ∈ P(Y) and let c : X × Y → R_+ ∪ {∞} be a lower semi-continuous cost function. Then

inf_{π∈Π(µ,ν)} I(π) = sup_{(f,g)∈Φ_c} J(f, g).

A proof of this theorem can be found in [9, p.25]. For the concept of lower semi-continuity we refer to Appendix A.


Villani presents a nice explanation of this theorem in [9, p. 20], originally due to Caffarelli, that we closely reproduce here. You want to minimize the cost of transporting a pile of sand µ on X to a hole ν on Y. You are using trucks to transport the sand. You do this by considering all transference plans π and finding the minimal cost of transportation, inf_{Π(µ,ν)} I(π). Someone else comes along and says: you know what, you do not have to worry about how the sand gets from X to Y, I will take care of that. All I do is set a price for loading sand onto a truck at a point x ∈ X, namely f(x), and a price for unloading at y ∈ Y, namely g(y). It will always be in your financial interest to let me take care of the transportation, because f(x) + g(y) ≤ c(x, y)! In order to achieve this I will even compensate for loading or unloading in certain places, by setting negative prices. Having set these prices determines the price of transportation even if we do not know what happens in between. The cost of loading will be ∫_X f dµ and of unloading ∫_Y g dν, making the total cost of transportation J(f, g). We will always have J(f, g) ≤ I(π) by construction. You will of course accept the deal, and what the theorem tells us is that if the other party is smart enough and sets the prices in a clever enough way, then the cost will be (almost) as much as you were prepared to spend on the other method anyway.

So for us the benefit of this theorem is that we no longer need to handle the infimum over the set of transference plans; instead we have a supremum over the set Φc, which, as we will see in the next section, is much more manageable.
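To make the two sides of the duality concrete, here is a small numerical sketch for three points on the real line with cost c(x, y) = |x − y|. The setup and all names are our own illustration, not part of the thesis: in this one-dimensional discrete setting the monotone (quantile) coupling is known to minimize I(π), and for a metric cost a 1-Lipschitz price function f with g(y) = inf_x [c(x, y) − f(x)] gives an admissible pair in Φc, so both sides of the duality can be computed and compared directly.

```python
# Small discrete illustration of Kantorovich duality on X = Y = {0, 1, 2}
# with cost c(x, y) = |x - y|.  All names are illustrative, not from the thesis.

mu = [0.5, 0.3, 0.2]   # the pile of sand on X
nu = [0.2, 0.3, 0.5]   # the hole on Y
c = lambda x, y: abs(x - y)

def monotone_coupling(mu, nu):
    """Quantile coupling; optimal for cost |x - y| on the sorted line."""
    pi = [[0.0] * len(nu) for _ in mu]
    i = j = 0
    mu, nu = mu[:], nu[:]          # work on copies
    while i < len(mu) and j < len(nu):
        m = min(mu[i], nu[j])      # move as much mass as possible
        pi[i][j] += m
        mu[i] -= m
        nu[j] -= m
        if mu[i] <= 1e-12: i += 1
        if nu[j] <= 1e-12: j += 1
    return pi

# Primal side: I(pi) = sum over (x, y) of pi[x][y] * c(x, y).
pi = monotone_coupling(mu, nu)
I = sum(pi[x][y] * c(x, y) for x in range(3) for y in range(3))

# Dual side: a 1-Lipschitz price f, with g(y) = min_x [c(x, y) - f(x)],
# so that (f, g) lies in Phi_c by construction.
f = [2.0, 1.0, 0.0]
g = [min(c(x, y) - f[x] for x in range(3)) for y in range(3)]
assert all(f[x] + g[y] <= c(x, y) + 1e-12 for x in range(3) for y in range(3))
J = sum(f[x] * mu[x] for x in range(3)) + sum(g[y] * nu[y] for y in range(3))

print(I, J)   # both are 0.6 up to floating-point rounding
```

With these particular measures both values come out to 0.6, exhibiting inf_π I(π) = sup_{(f,g)} J(f, g) on this toy example; for a suboptimal pair (f, g) one only observes J(f, g) ≤ I(π), as in the explanation above.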

2.3 The fc and fcc function

In this section we will give and prove lemmas that will be used in the proof of the Kantorovich-Rubinstein Theorem. Let us first define the functions fc and fcc, as they play an important part in all this. In [9] these functions were simply introduced in the proof of the Kantorovich Duality Theorem, without detailed proof of their properties.

Definition 2.3

For c a cost function we define for any bounded f ∈ L1(dµ) the functions fc: Y → R and fcc: X → R ∪ {−∞} by

fc(y) := inf_{x∈X} [c(x, y) − f(x)] and fcc(x) := inf_{y∈Y} [c(x, y) − fc(y)].

Remark 2.4: We assume for this definition that for each x ∈ X there exists a y ∈ Y such that c(x, y) < ∞. Otherwise, take A ⊂ X to be the set of x ∈ X such that c(x, y) = ∞ for all y ∈ Y. We would have that I(π) = ∞ if µ(A) ≠ 0. In that case it is not interesting to consider this cost function and the results in this chapter.

If µ(A) = 0 then I(π) would not change if we took c(x, y) = 0 for all x ∈ A and y ∈ Y .

That I(π) = ∞ if µ(A) ≠ 0 follows from the fact that π(A × Y) ≠ 0 for all π ∈ Π(µ, ν), whilst c(x, y) = ∞ for all x ∈ A and y ∈ Y; in words, we have to move sand from A to somewhere in Y, but it will always cost us infinitely much to do so. Another reason for making this assumption is that otherwise we would have fcc(x) = ∞ for any x ∈ A.

Of course we also assume for all y ∈ Y that there exists an x ∈ X such that c(x, y) < ∞.

Otherwise we would again have I(π) = ∞ and there would exist some y ∈ Y such that fc(y) = ∞.

Remark 2.5: The infimum in the definition of fc always exists, since c is nonnegative and f is bounded. Note that some authors define fc and fcc for functions f ∈ L1(dµ) that are not necessarily bounded; then the range of fc would be R ∪ {−∞}. The range of fcc is R ∪ {−∞} in any case, since fc does not have to be bounded from above, even if f is bounded. If the cost function c is bounded, then both fc and fcc are bounded for bounded f ∈ L1(dµ). See the discussion below in Remark 2.6.


We may consider Definition 2.3 in the following way. For a pair (f, g) ∈ Φc we know that f(x) + g(y) ≤ c(x, y) holds for µ-a.e. x ∈ X and ν-a.e. y ∈ Y. For proving the Kantorovich-Rubinstein Theorem we will use the Kantorovich Duality Theorem 2.2, which gives us the following expression

sup_{(f,g) ∈ Φc} J(f, g).

To make this expression easier to handle we will use fc and fcc. Namely, fc will act as a replacement for g: it is a function that still satisfies f(x) + fc(y) ≤ c(x, y) and gives a possibly higher value, J(f, fc) ≥ J(f, g). In fact, by definition it has, for each y ∈ Y, the largest value that a function g satisfying f(x) + g(y) ≤ c(x, y) (for µ-a.e. x ∈ X) can have. Therefore it maximizes the value of J(f, fc). Then we do a similar thing with fcc replacing f. Note that the functions fc and fcc remove the need for a pair (f, g) ∈ Φc: we can now take the supremum over bounded f ∈ L1(dµ), where each f generates a pair (fcc, fc) ∈ Φc. That this does not change the value of the supremum will be proven in Lemma 2.14. One might then wonder whether taking fccc would further increase the value of J. It does not, since (fcc)c = fc, which we prove in Lemma 2.11.

Before we may do any of this we will have to prove that fc and fcc are actually measurable and integrable functions, which we do in Corollary 2.10 and Lemma 2.12 respectively.

Remark 2.6 (cX and cY): Some of the upcoming lemmas require fc and fcc to take only values in R, or we might even want them to be bounded. Taking the cost function to be bounded would ensure that both fc and fcc are bounded as well, provided that f is bounded. We can be more general, though. Let us define cX : X → R+ by

x ↦ inf_{y∈Y} c(x, y)

and cY : Y → R+ by

y ↦ inf_{x∈X} c(x, y).

Since c is nonnegative and f is bounded, fc is bounded from below, hence takes only values in R. For bounded f we find that fc is bounded if and only if cY is bounded, because when cY is bounded, fc is bounded from above. With fc being bounded we prevent fcc from ever taking the value −∞. If we also have that cX is bounded, we even get that fcc is bounded. We prove this in Lemma 2.12.

To clarify, see Table 2.1 for an overview. Note that these results hold when f is bounded.

       any cost function      cX bounded             cY bounded     cX and cY bounded   c bounded
fc     values in R            values in R            bounded        bounded             bounded
fcc    values in R ∪ {−∞}     values in R ∪ {−∞}     values in R    bounded             bounded

Table 2.1: The consequences of cX and/or cY being bounded.

Clearly, if c is bounded, both cX and cY are bounded. So requiring that only cX and/or cY are bounded is more general than taking c bounded. What we gain from defining cX and cY becomes clear in Example 2.7.

Note that whenever we only require cY to be bounded, we might as well have required only cX to be bounded instead. In that case we would have to consider gc and gcc for bounded g ∈ L1(dν), instead of fc and fcc for bounded f ∈ L1(dµ), in order to get similar results.

Example 2.7

Let X = Y and let c be a cost function such that c(x, x) = 0 for all x ∈ X. Then we have 0 ≤ inf_{y∈Y} c(x, y) ≤ c(x, x) = 0 for all x ∈ X, and 0 ≤ inf_{x∈X} c(x, y) ≤ c(y, y) = 0 for all y ∈ X. That means that cX = cY = 0 holds, hence cX and cY are bounded. So cX and cY are bounded if we take c = d to be a (not necessarily bounded) metric on X.

We know that for bounded f ∈ L1(dµ), when the cost function c is bounded, both fc and fcc are bounded as well. There are other properties that the functions fc and fcc inherit from the cost function. We will see two such examples in Lemma 2.8 and Lemma 2.9.

Lemma 2.8

Let c be a Lipschitz continuous cost function such that cY is bounded. Then for any bounded f ∈ L1(dµ) we find that fc and fcc are Lipschitz continuous, with |fc|L ≤ |c|L and |fcc|L ≤ |c|L.

Proof. We need cY to be bounded so that fc and fcc take only values in R. Let y, y′ ∈ Y. Then we obtain from Lemma 1.20 that

|fc(y) − fc(y′)| = | inf_{x∈X} [c(x, y) − f(x)] − inf_{x∈X} [c(x, y′) − f(x)] |
                 ≤ sup_{x∈X} |c(x, y) − c(x, y′)|
                 ≤ |c|L dX×Y((x, y), (x, y′))
                 = |c|L (dX(x, x) + dY(y, y′))
                 = |c|L dY(y, y′).

This proves the statement for fc. In a similar way we can prove the result for fcc.

Lemma 2.9

Let c be a lower semi-continuous cost function. Then for any bounded f ∈ L1(dµ) we find that fc and fcc are lower semi-continuous.

Proof. Let (x0, y0) ∈ X × Y and let ε > 0. Since c is lower semi-continuous we can take δ > 0 such that

c(x′, y′) > c(x0, y0) − ε/2

for all (x′, y′) ∈ X × Y that satisfy dX×Y((x′, y′), (x0, y0)) = dX(x′, x0) + dY(y′, y0) < δ. We see that c(x′, y′) > c(x0, y0) − ε/2 gives us that c(x0, y0) − c(x′, y′) < ε/2.

Take y ∈ Y such that dY(y0, y) < δ. We need to show that fc(y) > fc(y0) − ε holds to prove lower semi-continuity for fc.

We take x̂ ∈ X such that fc(y) > c(x̂, y) − f(x̂) − ε/2. We can do this since fc(y) = inf_{x∈X} [c(x, y) − f(x)]. Note that we also have fc(y0) ≤ c(x̂, y0) − f(x̂). Now we find

fc(y0) − fc(y) ≤ c(x̂, y0) − f(x̂) − fc(y)
              < c(x̂, y0) − f(x̂) − [c(x̂, y) − f(x̂) − ε/2]
              = c(x̂, y0) − c(x̂, y) + ε/2
              ≤ ε,

since dX×Y((x̂, y0), (x̂, y)) = dX(x̂, x̂) + dY(y0, y) < δ. What we have is fc(y0) − fc(y) < ε, which implies fc(y) > fc(y0) − ε, proving lower semi-continuity for fc. Similar reasoning applies to fcc.


Corollary 2.10

Let c be a lower semi-continuous cost function on X × Y . Then for any bounded f ∈ L1(dµ) the functions fc and fcc are Borel measurable.

Proof. By definition, a lower semi-continuous function f : X → R has the property that f−1((r, ∞]) is open in X for any r ∈ R. This corresponds to one among various equivalent characterizations of Borel measurability, as proven in [4, Theorem 9.2]. Hence by Lemma 2.9 both fc and fcc are measurable.

Lemma 2.11

Let c be a lower semi-continuous cost function on X × Y such that cY is bounded. Then for bounded f ∈ L1(dµ) we have fccc = fc.

Proof. By definition we have fcc(x) + fc(y) ≤ c(x, y) for all (x, y) ∈ X × Y, so we find fc ≤ fccc for the function fccc defined for every y ∈ Y by

fccc(y) := inf_{x∈X} [c(x, y) − fcc(x)].

For any y ∈ Y we find that

fccc(y) = inf_{x∈X} [c(x, y) − fcc(x)]
        = inf_{x∈X} [ c(x, y) − inf_{y′∈Y} [c(x, y′) − fc(y′)] ]
        = inf_{x∈X} [ c(x, y) + sup_{y′∈Y} [−c(x, y′) + fc(y′)] ]
        = inf_{x∈X} [ c(x, y) + sup_{y′∈Y} [ −c(x, y′) + inf_{x′∈X} [c(x′, y′) − f(x′)] ] ]
        ≤ inf_{x∈X} [ c(x, y) + sup_{y′∈Y} [−c(x, y′) + c(x, y′) − f(x)] ]
        = inf_{x∈X} [ c(x, y) + sup_{y′∈Y} [−f(x)] ]
        = inf_{x∈X} [c(x, y) − f(x)] = fc(y),

where the first four equalities hold by definition and the inequality follows by taking x′ = x in the inner infimum. Hence we also find fc ≥ fccc.
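The inequality fcc ≥ f and the identity fccc = fc of Lemma 2.11 can be observed by brute force on a finite space. The following sketch is our own toy illustration (the finite setting and all names are assumptions, not from the thesis); it computes the transforms of Definition 2.3 for the metric cost c(x, y) = |x − y|, starting from a bounded f that is not 1-Lipschitz.

```python
# Brute-force c-transforms on the finite space X = Y = {0, 1, 2} with the
# metric cost c(x, y) = |x - y|.  A toy check, not the Polish-space setting.

X = Y = range(3)
c = lambda x, y: abs(x - y)
f = [0.0, 2.0, 0.0]   # bounded, but not 1-Lipschitz for this cost

def c_transform(h, src, dst):
    """Return the c-transform of h: d -> min over s in src of [c(s, d) - h(s)]."""
    return [min(c(s, d) - h[s] for s in src) for d in dst]

fc = c_transform(f, X, Y)      # f^c on Y
fcc = c_transform(fc, Y, X)    # f^cc on X (same formula, since c is symmetric)
fccc = c_transform(fcc, X, Y)  # f^ccc on Y

print(fc, fcc, fccc)
assert all(fcc[x] >= f[x] for x in X)                            # f^cc >= f
assert fccc == fc                                                # Lemma 2.11
assert all(abs(fc[y] - fc[z]) <= c(y, z) for y in Y for z in Y)  # Lemma 2.8
```

Here fc = [−1, −2, −1] and fcc = [1, 2, 1]: the double transform replaces f by the largest function still compatible with the constraint f(x) + fc(y) ≤ c(x, y), and transforming a third time reproduces fc, as Lemma 2.11 asserts.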

Lemma 2.12

Let c be a lower semi-continuous cost function on X × Y such that both cX and cY are bounded. If f ∈ L1(dµ) is bounded, then fc and fcc are integrable with respect to any finite signed measure on Y respectively X.

Proof. From Corollary 2.10 we obtain that fc and fcc are measurable.

Let M, m ∈ R be such that cY(y) = inf_{x∈X} c(x, y) ≤ M for all y ∈ Y and |f(x)| ≤ m for all x ∈ X.

We will show that fc and fcc are bounded. For any y ∈ Y we have that

fc(y) = inf_{x∈X} [c(x, y) − f(x)] ≤ inf_{x∈X} [c(x, y) + m] = inf_{x∈X} c(x, y) + m ≤ M + m

and we also find that

fc(y) = inf_{x∈X} [c(x, y) − f(x)] ≥ inf_{x∈X} [c(x, y) − m] ≥ −m,

since c is nonnegative. Since fc is bounded we can now prove in a similar way that fcc is bounded. Hence fc and fcc are integrable with respect to finite signed measures on Y and X respectively.
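The bounds −m ≤ fc ≤ M + m established in this proof can be checked on a toy example (our own illustration; the names are assumptions): for the metric cost on {0, 1, 2} we have cY = 0 by Example 2.7, so M = 0 works.

```python
# Checking the bounds -m <= f^c <= M + m from the proof of Lemma 2.12 on a
# toy finite example with metric cost c(x, y) = |x - y| on {0, 1, 2}.

X = Y = range(3)
c = lambda x, y: abs(x - y)
f = [0.0, 2.0, 0.0]

m = max(abs(v) for v in f)                   # |f(x)| <= m for all x
M = max(min(c(x, y) for x in X) for y in Y)  # bound on c_Y; here 0 (Example 2.7)
fc = [min(c(x, y) - f[x] for x in X) for y in Y]

print(fc, m, M)
assert all(-m <= v <= M + m for v in fc)
```

With this f we get m = 2, M = 0 and fc = [−1, −2, −1], so both bounds are attained up to the sign constraint, consistent with the estimates in the proof.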
