MSc Mathematics Master Thesis

Numerical Methods for Elliptic Partial Differential Equations with Random Coefficients

Author: Chiara Pizzigoni
Supervisors: dr. S.G. Cox, prof. dr. R.P. Stevenson
Examination date: March 9, 2017

Abstract

This thesis analyses the stochastic collocation method for approximating the solution of elliptic partial differential equations with random coefficients. The method consists of a finite element approximation in the spatial domain and a collocation at the zeros of suitable tensor product orthogonal polynomials in the probability space, and naturally leads to the solution of uncoupled deterministic problems. The computational cost of the method depends on the choice of the collocation points, and thus we compare a few possible constructions. Although the random fields describing the coefficients of the problem are in general infinite-dimensional, an approximation with certain optimality properties is obtained by truncating the Karhunen-Loève expansion of these random fields. We estimate the convergence rates of the method, depending on the regularity of the random coefficients. In particular we prove exponential convergence in probability space. Numerical examples illustrate the theoretical results.

Title: Numerical Methods for Elliptic Partial Differential Equations with Random Coefficients
Author: Chiara Pizzigoni, chiarapizzigoni92@gmail.com, 10978801
Supervisors: dr. S.G. Cox, prof. dr. R.P. Stevenson
Examination date: March 9, 2017

Korteweg-de Vries Institute for Mathematics
University of Amsterdam
Science Park 105-107, 1098 XG Amsterdam
http://kdvi.uva.nl


Contents

Acknowledgements

Introduction

1 Preliminaries
1.1 The Karhunen-Loève Expansion
1.1.1 Existence of the Karhunen-Loève Expansion
1.1.2 Truncation of the Karhunen-Loève Expansion
1.2 The Karhunen-Loève Expansion of a Random Field
1.2.1 More on Convergence of the Karhunen-Loève Expansion

2 Problem Setting
2.1 Strong and Weak Formulations
2.2 Finite-Dimensional Stochastic Space

3 The Stochastic Collocation Method
3.1 Full Tensor Grid of Collocation Points
3.2 Sparse Tensor Grid of Collocation Points
3.2.1 Anisotropic Version

4 Convergence of the Method
4.1 Regularity Results
4.2 Convergence Analysis
4.2.1 Error of the Truncation
4.2.2 Finite Element Error
4.2.3 Collocation Error - Full Tensor Method
4.2.4 Convergence Result
4.2.5 Collocation Error - Sparse Tensor Method
4.3 Convergence of Moments
4.4 Anisotropic Sparse Method - Selection of Weights α

5 Numerical Implementation
5.1 Example
5.2 Conclusions

Summary


Acknowledgements

I am more than grateful to Sonja and Rob. Thank you for supporting me and not only supervising. In Italian the word support translates to supporto, and if you change only one letter, getting sopporto, the meaning becomes the slightly more negative tolerate. So I really have to thank you both for supporting me and for putting up with me. Thank you for the time, the energy and the attention that both of you have dedicated to me.

When mathematics is not the biggest problem, I can always rely on them. Thank you Sara for enlightening me when I can see nothing but darkness. Thank you Marco for always being next to me even if you are far away. Thanks to my soul mates, Francesca, Linda and Alessia, loyal mates and beautiful souls.

Silvia, thank you for being simply the best person standing by my side and for giving your all to me.

I personally think that behind great men and women there are always great parents. So thank you mum and dad for being great even if I am not. Grazie!


Introduction

Elliptic second order partial differential equations are well suited to describe a wide variety of phenomena which present a static behaviour, e.g. the stationary solution to a diffusion problem. An entire branch of research is dedicated to the theoretical analysis and numerical implementation of methods which make it possible to approximate the exact solution of these equations. On the other hand, one should always keep in mind that in practice these models are approximations of the physical system and do not describe the given problem exactly. Therefore we aim to study equations which account as much as possible for the uncertainties arising naturally from the mathematical observation of the real world. Sources of such uncertainties can be, for example, errors in the measurements, the intrinsic nature of the system, partially known data sets, or an overall knowledge extrapolated from only a few spatial locations.

The mathematical model describing a phenomenon can be thought of as an input-output machine which takes some functions and returns the solution of the model. Hence we expect the inaccuracies in the inputs to easily propagate to the output.

In order to include in our mathematical examination all these moderately predictable factors, we model them as noises following some probability distribution. Thus we turn our attention to partial differential equations whose coefficients are random fields depending on these uncertain parameters. Namely, instead of focusing on elliptic equations, defined on a spatial domain $D \subset \mathbb{R}^d$, of the form

$$-\nabla\cdot(a(x)\nabla u(x)) = f(x), \quad x \in D,$$

we consider the following model:

$$-\nabla\cdot(a(\omega,x)\nabla u(\omega,x)) = f(\omega,x), \quad (\omega,x)\in\Omega\times D, \tag{0.0.1}$$

where $\Omega$ is the set of possible outcomes. Again the main difference lies in the fact that the functions of the second problem have a stochastic representation, emphasizing the inexactness of the coefficients, of which we suppose to know the law.

In our discussion we consider a particular parametrization of the random coefficients given by the Karhunen-Loève expansion, a linear combination of infinitely many uncorrelated random variables. This choice is motivated by the fact that it guarantees certain optimality results, as proved in [14]. We underline the fact that in general this decomposition is infinite-dimensional. The computational necessity to deal with finite-dimensional objects leads to the need of truncating the Karhunen-Loève expansion and quantifying the consequent error.

To approximate numerically the solution of problem (0.0.1), supplemented with suitable boundary conditions, we will use the stochastic collocation method, which employs standard deterministic techniques in the spatial domain and tensor product polynomial approximation in the random domain. This method is extensively analysed in [11]. Unfortunately tensor product spaces suffer from the so-called curse of dimensionality, as the dimension of the polynomial space grows exponentially fast in the number of terms that we keep in the Karhunen-Loève truncation. In case this number becomes quite large we may switch to sparse tensor product spaces, which greatly reduce the computational complexity of the method while preserving good effectiveness (see [12]).

Concretely, the procedure consists in approximating, using the Galerkin finite element scheme, the solutions of problems which are deterministic in the domain $D$ once we evaluate the probabilistic variables at several collocation points. The final approximation is then recovered by interpolating the semi-discrete finite element approximations. This approach differs from the widely used Monte Carlo method in the selection of the evaluation points. The latter adopts a random pick subject to a given distribution. In the stochastic collocation method, instead, the points are chosen as the roots of (possibly sparse) tensor product polynomials orthogonal with respect to the joint probability density of the random variables appearing in the Karhunen-Loève truncation. This preference benefits from the nice properties of the zeros of orthogonal polynomials (see [15], Chapter III). While keeping the advantage of solving uncoupled deterministic problems as in the Monte Carlo approach, the stochastic collocation achieves a faster convergence rate, as we will prove in Chapter 4.

The main goals of this work are to provide a rigorous investigation of the existence of the Karhunen-Loève parametrization and of how its truncation influences our analysis, to describe the numerical techniques used to solve a stochastic model problem, and to estimate in detail the errors arising from the subsequent approximations that we introduce.

The outline of the thesis is as follows: Chapter 1 focuses on the existence of the Karhunen-Loève expansion of an element in a tensor product Hilbert space and on the estimation of the error, measured in some suitable norm, which arises as a consequence of truncating the expansion. Afterwards this abstract setting is tuned to our specific situation, analysing the results in the special case of random fields.

Chapter 2 describes the infinite-dimensional boundary value problem whose solution we want to approximate. In particular we show that such a solution indeed exists and, in order to give an approximation, we turn to the corresponding finite-dimensional problem where all the random fields are represented by truncations of the Karhunen-Loève decomposition.

In Chapter 3 we present the stochastic collocation method, both in its full and sparse versions, and briefly discuss the anisotropic (direction dependent) variant of the latter.

The aim of Chapter 4 is to provide a rigorous error estimate for the entire method by splitting the error into three parts: the truncation error, the approximation error in the spatial domain and the approximation error in the stochastic space. In particular the latter is analysed both for the full and sparse versions of the method, comparing the convergence rates of the two. This is done under some mild regularity assumptions on the coefficients entering the problem, which we need to impose in order to ensure corresponding regularity of the solution in the random domain. The theoretical result concerning the collocation error, which is obtained by conducting a stepwise one-dimensional analysis, guarantees exponential convergence in the random space for both variants of the stochastic collocation method.

Chapter 5 is devoted to some computational examples, including a numerical comparison with the Monte Carlo method.


1 Preliminaries

1.1 The Karhunen-Loève Expansion

Uncertainties in a physical model are very often modelled as random fields. The aim of this section is to prove that any infinite-dimensional random field can be represented by the so-called Karhunen-Loève expansion, which is, roughly speaking, an infinite linear combination of uncorrelated random variables. Moreover, the best (in terms of mean square error) finite-dimensional approximation of the random field is obtained by truncating this expansion. We will give precise bounds for this error. Following [14], we develop our construction in the general framework of Hilbert spaces. Throughout this chapter all Hilbert spaces are real and separable unless stated otherwise.

1.1.1 Existence of the Karhunen-Loève Expansion

Let $(H_1,\langle\cdot,\cdot\rangle_{H_1})$, $(H_2,\langle\cdot,\cdot\rangle_{H_2})$ and $(S,\langle\cdot,\cdot\rangle_S)$ be separable Hilbert spaces over $\mathbb{R}$. For $i\in\{1,2\}$, let $(e^{(i)}_n)_{n\in I_i}$ and $(s_m)_{m\in J}$ be orthonormal bases of $H_i$ and $S$, respectively, with $I_i$ and $J$ countable sets. For $x\in H_i$ and $y\in S$ define the bilinear form $x\otimes y : H_i\times S\to\mathbb{R}$,

$$[x\otimes y](a,b) := \langle x,a\rangle_{H_i}\langle y,b\rangle_S,\qquad (a,b)\in H_i\times S.$$

Let $E$ be the set of all finite linear combinations of such bilinear forms. We define the inner product $\langle\cdot,\cdot\rangle_{H_i\otimes S}$ on $E$ as follows:

$$\langle x_1\otimes y_1, x_2\otimes y_2\rangle_{H_i\otimes S} := \langle x_1,x_2\rangle_{H_i}\langle y_1,y_2\rangle_S$$

where $x_1,x_2\in H_i$ and $y_1,y_2\in S$.

Definition 1.1.1. The tensor product of $H_i$ and $S$, denoted by $H_i\otimes S$, is defined as the completion of $E$ under the inner product $\langle\cdot,\cdot\rangle_{H_i\otimes S}$.

It can be shown that $(e^{(i)}_n\otimes s_m)_{(n,m)\in I_i\times J}$ is an orthonormal basis for the space $H_i\otimes S$ [1, Section II.4]. Thus we can represent any element $f\in H_i\otimes S$ as

$$f = \sum_{(n,m)\in I_i\times J}c_{n,m}\,e^{(i)}_n\otimes s_m$$

where $c_{n,m} = \langle f, e^{(i)}_n\otimes s_m\rangle_{H_i\otimes S}$ and $\sum_{(n,m)\in I_i\times J}c^2_{n,m} = \|f\|^2_{H_i\otimes S}$. Define $f_m := \sum_{n\in I_i}c_{n,m}e^{(i)}_n\in H_i$. It is useful for later use to observe that $\|f_m\|^2_{H_i} = \sum_{n\in I_i}c^2_{n,m}$. We get the following expansion for any element $f\in H_i\otimes S$:

$$f = \sum_{m\in J}f_m\otimes s_m.$$

We can prove the following.

Proposition 1.1.2. Let $f\in H_1\otimes S$ and $g\in H_2\otimes S$. The map $C_{\cdot,\cdot} : (H_1\otimes S)\times(H_2\otimes S)\to H_1\otimes H_2$ defined as

$$C_{f,g} = \sum_{m\in J}f_m\otimes g_m$$

is bilinear, well defined and independent of the choice of the orthonormal basis in $S$.

Proof. Bilinearity comes directly from the distributive property of the tensor product. We prove that the map is well defined:

$$\|C_{f,g}\|_{H_1\otimes H_2} \le \sum_{m\in J}\|f_m\otimes g_m\|_{H_1\otimes H_2} = \sum_{m\in J}\|f_m\|_{H_1}\|g_m\|_{H_2} \le \Big(\sum_{m\in J}\|f_m\|^2_{H_1}\Big)^{1/2}\Big(\sum_{m\in J}\|g_m\|^2_{H_2}\Big)^{1/2} = \|f\|_{H_1\otimes S}\|g\|_{H_2\otimes S},$$

where we have used the Cauchy-Schwarz inequality.

Finally we show that the map is independent of the choice of the orthonormal basis in $S$. Let $\{s_m\}_{m\in J}$ be an orthonormal basis of $S$ and $\{s'_m\}_{m\in J}$ a second one. Consequently for each $m\in J$ there exist $(\gamma_{n,m})_{n\in J}\subset\mathbb{R}$ such that $s'_m = \sum_{n\in J}\gamma_{n,m}s_n$. By the orthonormality condition it follows that $\sum_{n\in J}\gamma_{n,m}\gamma_{n,k} = \delta_{mk}$. Indeed

$$\delta_{mk} = \langle s'_m,s'_k\rangle_S = \Big\langle\sum_{n\in J}\gamma_{n,m}s_n,\sum_{l\in J}\gamma_{l,k}s_l\Big\rangle_S = \sum_{(n,l)\in J\times J}\gamma_{n,m}\gamma_{l,k}\langle s_n,s_l\rangle_S = \sum_{(n,l)\in J\times J}\gamma_{n,m}\gamma_{l,k}\delta_{nl} = \sum_{n\in J}\gamma_{n,m}\gamma_{n,k}.$$

Let $\{e^{(i)}_n\}_{n\in I_i}$ be an orthonormal basis of $H_i$. As $f\in H_1\otimes S$ there exist $(\alpha_{n,m})_{(n,m)\in I_1\times J}\subset\mathbb{R}$ such that

$$f = \sum_{(n,m)\in I_1\times J}\alpha_{n,m}\,e^{(1)}_n\otimes s_m.$$

Similarly, as $g\in H_2\otimes S$ there exist $(\beta_{n,m})_{(n,m)\in I_2\times J}\subset\mathbb{R}$ such that

$$g = \sum_{(n,m)\in I_2\times J}\beta_{n,m}\,e^{(2)}_n\otimes s_m.$$

Now we expand $f$ and $g$ with respect to the basis $\{s'_m\}_{m\in J}$:

$$f = \sum_{(n,k)\in I_1\times J}\alpha'_{n,k}\,e^{(1)}_n\otimes s'_k = \sum_{k\in J}\sum_{(n,m)\in I_1\times J}(\alpha'_{n,k}\gamma_{m,k})\,e^{(1)}_n\otimes s_m,$$

$$g = \sum_{(n,k)\in I_2\times J}\beta'_{n,k}\,e^{(2)}_n\otimes s'_k = \sum_{k\in J}\sum_{(n,m)\in I_2\times J}(\beta'_{n,k}\gamma_{m,k})\,e^{(2)}_n\otimes s_m.$$

By uniqueness of the expansion it holds that

$$\alpha_{n,m} = \sum_{k\in J}\alpha'_{n,k}\gamma_{m,k},\qquad \beta_{n,m} = \sum_{k\in J}\beta'_{n,k}\gamma_{m,k}.$$

Therefore we have

$$
\begin{aligned}
C^{(s_m)}_{f,g} &= \sum_{m\in J}\Big(\sum_{n\in I_1}\alpha_{n,m}e^{(1)}_n\Big)\otimes\Big(\sum_{l\in I_2}\beta_{l,m}e^{(2)}_l\Big) = \sum_{(n,l,m)\in I_1\times I_2\times J}(\alpha_{n,m}\beta_{l,m})\,e^{(1)}_n\otimes e^{(2)}_l\\
&= \sum_{(n,l,m)\in I_1\times I_2\times J}\Big(\sum_{k\in J}\alpha'_{n,k}\gamma_{m,k}\Big)\Big(\sum_{i\in J}\beta'_{l,i}\gamma_{m,i}\Big)e^{(1)}_n\otimes e^{(2)}_l\\
&= \sum_{(n,l)\in I_1\times I_2}\Big(\sum_{(k,i)\in J\times J}\alpha'_{n,k}\beta'_{l,i}\sum_{m\in J}\gamma_{m,k}\gamma_{m,i}\Big)e^{(1)}_n\otimes e^{(2)}_l = \sum_{(n,l)\in I_1\times I_2}\Big(\sum_{(k,i)\in J\times J}\alpha'_{n,k}\beta'_{l,i}\delta_{ki}\Big)e^{(1)}_n\otimes e^{(2)}_l\\
&= \sum_{(n,l)\in I_1\times I_2}\Big(\sum_{k\in J}\alpha'_{n,k}\beta'_{l,k}\Big)e^{(1)}_n\otimes e^{(2)}_l = \sum_{k\in J}\Big(\sum_{n\in I_1}\alpha'_{n,k}e^{(1)}_n\Big)\otimes\Big(\sum_{l\in I_2}\beta'_{l,k}e^{(2)}_l\Big) = C^{(s'_m)}_{f,g}.
\end{aligned}
$$

As a consequence of the previous result we are allowed to introduce the following

Definition 1.1.3. $C_{f,g}$ defined in Proposition 1.1.2 is called the correlation of $f$ and $g$.

Now we investigate the possible existence of an operator which can be associated to the correlation

$$C_f := C_{f,f}$$

for $f\in H\otimes S$. Before doing this we recall some concepts and results from functional analysis. Let $C : H\to H$ be an operator on the real Hilbert space $H$. We can associate to $C$ the norm

$$\|C\| := \sup_{v\in H,\ \|v\|_H = 1}\|Cv\|_H.$$

We say that a linear and bounded operator $C$ on $H$ is compact if there exists a sequence $(C_n)_n$ of finite rank operators such that $\|C - C_n\|\to 0$.

The Spectral Theorem for compact and symmetric operators on a separable Hilbert space states (see [2], Theorem 5.1):

Theorem 1.1.4. Let $H$ be a separable real Hilbert space. Let $C$ be a symmetric and compact operator on $H$. Then for any $v\in H$

$$Cv = \sum_{m\in J}\lambda_m\langle v,e_m\rangle_H e_m$$

where $(e_m)_m$ form an orthonormal basis for $H$, $(\lambda_m)_m\subset\mathbb{R}$ are the eigenvalues corresponding to the eigenvectors $e_m$ of $C$, and $\lambda_m\downarrow 0$.

The notation $\lambda_m\downarrow 0$ denotes a non-increasing sequence converging to 0.

Define, for any orthonormal basis $(e_m)_{m\in J}$, the trace of the non-negative definite symmetric operator $C$ as

$$\operatorname{Tr}(C) := \sum_{m\in J}\langle Ce_m,e_m\rangle_H.$$

It can be proved that this definition is independent of the choice of the basis in $H$; see for example [1]. We say that a non-negative definite symmetric compact operator is trace class if $\operatorname{Tr}(C)<\infty$. Equivalently, by the characterization of the Spectral Theorem, a non-negative definite symmetric compact operator is trace class if $\sum_m\lambda_m<\infty$.

We can now proceed to prove the following.

Theorem 1.1.5. Let $(H,\langle\cdot,\cdot\rangle_H)$ and $(S,\langle\cdot,\cdot\rangle_S)$ be separable Hilbert spaces of the same dimension and let $(e_m)_{m\in J}$, $(s_m)_{m\in J}$ be orthonormal bases of $H$ and $S$, respectively. The map $\Phi : \{C_f\in H\otimes H : f\in H\otimes S\}\to\{C : C\ \text{non-negative definite trace class operator}\}$ given by

$$\Phi(C_f)(v) = \Phi\Big(\sum_{m\in J}f_m\otimes f_m\Big)(v) := \sum_{m\in J}\langle f_m,v\rangle_Hf_m\in H,\quad v\in H, \tag{1.1.1}$$

is a one-to-one correspondence.

Proof. For $f\in H\otimes S$ we denote $\Phi(C_f)$ again by $C_f$. First we prove that $C_f$ indeed has the required properties. By definition we can immediately conclude that $C_f$ is compact, as it is a norm limit of finite-rank operators. Indeed, if we define $C_n := \sum_{m\le n}\langle f_m,\cdot\rangle_Hf_m$, then

$$\|C_f - C_n\| \le \sum_{m>n}\|f_m\|^2_H \longrightarrow 0.$$

We show that $C_f$ is non-negative definite. Let $v\in H$, $v\ne0$. Then

$$\langle C_fv,v\rangle_H = \sum_{m\in J}\langle f_m,v\rangle_H\langle f_m,v\rangle_H = \sum_{m\in J}\langle f_m,v\rangle^2_H \ge 0.$$

Now we check that $C_f$ is a trace class operator. Plugging the expression for $C_f$ into the definition of the trace,

$$\operatorname{Tr}(C_f) = \sum_{m\in J}\langle C_fe_m,e_m\rangle_H = \sum_{m\in J}\sum_{n\in J}\langle f_n,e_m\rangle_H\langle f_n,e_m\rangle_H = \sum_{n\in J}\Big\langle\sum_{m\in J}\langle f_n,e_m\rangle_He_m,\sum_{m\in J}\langle f_n,e_m\rangle_He_m\Big\rangle_H = \sum_{n\in J}\|f_n\|^2_H = \|f\|^2_{H\otimes S} < \infty.$$

Therefore we can conclude that $C_f$ is a non-negative definite trace class operator.

Now it remains to show that the map $\Phi$ is a one-to-one correspondence. Observe that the following chain of equalities holds for $v,w\in H$:

$$\langle C_fv,w\rangle_H = \sum_{m\in J}\langle f_m,v\rangle_H\langle f_m,w\rangle_H = \sum_{m\in J}\langle f_m\otimes f_m, v\otimes w\rangle_{H\otimes H} = \Big\langle\sum_{m\in J}f_m\otimes f_m, v\otimes w\Big\rangle_{H\otimes H} = \langle C_f, v\otimes w\rangle_{H\otimes H}. \tag{1.1.2}$$

We check that our map is injective, i.e. that equality of the operators associated to $f,g\in H\otimes S$ implies $C_f = C_g$. Assuming the operators coincide, we have by (1.1.2) that for all $v,w\in H$

$$\langle C_f, v\otimes w\rangle_{H\otimes H} = \langle C_g, v\otimes w\rangle_{H\otimes H}.$$

By linearity we have also, for all $n\in\mathbb{N}$ and all $v_i,w_i\in H$,

$$\Big\langle C_f, \sum_{i=1}^nv_i\otimes w_i\Big\rangle_{H\otimes H} = \Big\langle C_g, \sum_{i=1}^nv_i\otimes w_i\Big\rangle_{H\otimes H},$$

or equivalently

$$\Big\langle C_f - C_g, \sum_{i=1}^nv_i\otimes w_i\Big\rangle_{H\otimes H} = 0.$$

The claim is proved after observing that the set $\{\sum_{i=1}^nv_i\otimes w_i : v_i,w_i\in H,\ n\in\mathbb{N}\}$ is a dense subset of $H\otimes H$, and therefore $C_f - C_g = 0$.

It remains to show that the map is also surjective. Let $C$ be a non-negative definite trace class operator. We want to show that there exists $f\in H\otimes S$ such that $C_f = C$. Let $(\varphi_m)_{m\in J}$ be the sequence of eigenvectors of $C$ forming an orthonormal basis for $H$ and $(\lambda_m)_{m\in J}\subset\mathbb{R}_+$ the eigenvalues of $C$, i.e.

$$C\varphi_m = \lambda_m\varphi_m. \tag{1.1.3}$$

Moreover $\sum_m\lambda_m<\infty$ as $C$ is trace class. As a consequence we get convergence of the series

$$f = \sum_{m\in J}\sqrt{\lambda_m}\,\varphi_m\otimes s_m.$$

For this element we have $f_m = \sqrt{\lambda_m}\,\varphi_m$. Hence

$$C_f = \sum_{m\in J}\lambda_m\,\varphi_m\otimes\varphi_m.$$

Thus for all $v\in H$ it holds that

$$C_fv = \sum_{m\in J}\lambda_m\langle\varphi_m,v\rangle_H\varphi_m.$$

From this we can conclude that the spectrum of $C_f$ is given by (1.1.3) and therefore $C_f = C$.

Corollary 1.1.6. Let $(H,\langle\cdot,\cdot\rangle_H)$ be a separable Hilbert space and let $C\in H\otimes H$ be a correlation. Let $\mathcal{C}$ be defined by (1.1.1), with eigenpairs $(\lambda_m,\varphi_m)_{m\in J}$. Then $C$ can be represented as

$$C = \sum_{m\in J}\lambda_m\,\varphi_m\otimes\varphi_m. \tag{1.1.4}$$

The following theorem is crucial for our purpose.

Theorem 1.1.7. Let $(H,\langle\cdot,\cdot\rangle_H)$ and $(S,\langle\cdot,\cdot\rangle_S)$ be separable Hilbert spaces of the same dimension. Let $(\varphi_m)_{m\in J}$ be an orthonormal basis for $H$. Let $C\in H\otimes H$ be a correlation represented as in (1.1.4). Let $f\in H\otimes S$. Then $C_f = C$ if and only if there exists an orthonormal family $(Y_m)_{m\in J}\subset S$ such that

$$f = \sum_{m\in J}\sqrt{\lambda_m}\,\varphi_m\otimes Y_m. \tag{1.1.5}$$

Proof. Assume that there exists an orthonormal family $(Y_m)_{m\in J}\subset S$ such that $f = \sum_m\sqrt{\lambda_m}\,\varphi_m\otimes Y_m$. Then by Proposition 1.1.2 we can conclude the statement, possibly after having completed the family $(Y_m)_{m\in J}$ to a basis for $S$.

Conversely, assume that $C_f = C$. We have already observed at the beginning of the section that we can represent the element $f\in H\otimes S$ as

$$f = \sum_{m\in J}\varphi_m\otimes X_m \tag{1.1.6}$$

where $(X_m)_m\subset S$. Moreover we have the expansion

$$X_m = \sum_{n\in J}\langle X_m,s_n\rangle_S\,s_n$$

for $(s_n)_{n\in J}$ an orthonormal basis of $S$. Thus we get

$$f = \sum_{m\in J}\varphi_m\otimes X_m = \sum_{m\in J}\sum_{n\in J}\langle X_m,s_n\rangle_S\,\varphi_m\otimes s_n.$$

If we set $f_n := \sum_{m\in J}\langle X_m,s_n\rangle_S\,\varphi_m$ we have

$$C_f = \sum_nf_n\otimes f_n = \sum_{(m,m')\in J\times J}\Big(\sum_{n\in J}\langle X_m,s_n\rangle_S\langle X_{m'},s_n\rangle_S\Big)\varphi_m\otimes\varphi_{m'} = \sum_{(m,m')\in J\times J}\langle X_m,X_{m'}\rangle_S\,\varphi_m\otimes\varphi_{m'}.$$

Comparing this with (1.1.4) we conclude that necessarily

$$\langle X_m,X_{m'}\rangle_S = \lambda_m\delta_{mm'}.$$

Therefore $(X_m)_m$ is an orthogonal family with $\|X_m\|^2_S = \lambda_m$. Hence from (1.1.6) we get the statement with $Y_m = \frac{X_m}{\sqrt{\lambda_m}}$.

Definition 1.1.8. The representation (1.1.5) of $f$ given the spectrum of $C_f$ is called the Karhunen-Loève expansion of $f$.

1.1.2 Truncation of the Karhunen-Loève Expansion

For any element $f\in H\otimes S$ with a given correlation we have established the existence of an expansion of the form

$$f = \sum_{m\in J}\sqrt{\lambda_m}\,\varphi_m\otimes Y_m$$

where $(\varphi_m)_{m\in J}$ and $(Y_m)_{m\in J}$ are orthonormal systems and $\lambda_m\downarrow 0$. Now we are interested in investigating the properties of such an expansion. In order to do that we introduce the following standard notation: for $U$ a closed subspace of $H$, $P_U$ will indicate the orthogonal projection of $H$ onto $U$.

Theorem 1.1.9. If $f\in H\otimes S$ has the Karhunen-Loève expansion (1.1.5), then for any $N\in\mathbb{N}$ it holds that

$$\inf_{\substack{U\subset H\ \text{closed}\\ \dim U = N}}\|f - P_{U\otimes S}f\|^2_{H\otimes S} \ge \sum_{m\ge N+1}\lambda_m, \tag{1.1.7}$$

with equality for $U = \operatorname{span}\{\varphi_1,\dots,\varphi_N\}$.

Proof. If $U = \operatorname{span}\{\varphi_1,\dots,\varphi_N\}$ then $P_{U\otimes S}f = \sum_{m=1}^N\sqrt{\lambda_m}\,\varphi_m\otimes Y_m$, which is exactly the $N$th truncation of the Karhunen-Loève expansion. Thus

$$\|f - P_{U\otimes S}f\|^2_{H\otimes S} = \Big\|\sum_{m\ge N+1}\sqrt{\lambda_m}\,\varphi_m\otimes Y_m\Big\|^2_{H\otimes S} = \sum_{m\ge N+1}\lambda_m\|\varphi_m\otimes Y_m\|^2_{H\otimes S} = \sum_{m\ge N+1}\lambda_m,$$

and so we have equality in (1.1.7).

We prove the first statement by induction. If $N = 0$ then (1.1.7) holds. Firstly observe that we are allowed to take $(Y_m)_{m\in J}$ to be an orthonormal basis of $S$, with $J$ finite or countably infinite. Let $U\subset H$ be closed and such that $\dim U = N$. Consider $g\in U\otimes S$. Then we can represent $g$ as

$$g = \sum_{m\in J}u'_m\otimes Y_m$$

where $(u'_m)_m\subset U$. It holds that

$$g = \sum_{m\in J}\sqrt{\lambda_m}\,u_m\otimes Y_m\quad\text{with}\quad u_m := \frac{u'_m}{\sqrt{\lambda_m}}\in U.$$

Using this we get

$$\|f-g\|^2_{H\otimes S} = \Big\langle\sum_{m\in J}\sqrt{\lambda_m}(\varphi_m-u_m)\otimes Y_m,\ \sum_{n\in J}\sqrt{\lambda_n}(\varphi_n-u_n)\otimes Y_n\Big\rangle_{H\otimes S} = \sum_{m\in J}\sum_{n\in J}\sqrt{\lambda_m}\sqrt{\lambda_n}\,\langle\varphi_m-u_m,\varphi_n-u_n\rangle_H\langle Y_m,Y_n\rangle_S = \sum_{m\in J}\lambda_m\|\varphi_m-u_m\|^2_H. \tag{1.1.8}$$

Consider $\operatorname{span}\{u_1,\dots,u_{N-1}\}\subset U$. If $\dim(\operatorname{span}\{u_1,\dots,u_{N-1}\}) = N-1$ then set

$$W := \operatorname{span}\{u_1,\dots,u_{N-1}\}.$$

Otherwise, if $\dim(\operatorname{span}\{u_1,\dots,u_{N-1}\}) < N-1$, enlarge this span with finitely many suitable vectors of $U$ so that $\dim W = N-1$.

We use the notation $W^{\perp_U}$ for the space such that $U = W\oplus(W^{\perp_U})$. Observe that $\dim(W^{\perp_U}) = 1$ and therefore $W^{\perp_U} = \operatorname{span}\{e\}$ for some $e\in U$ with $\|e\|_H = 1$. We have

$$
\begin{aligned}
\|f-g\|^2_{H\otimes S} &= \sum_{m\le N-1}\lambda_m\|\varphi_m-u_m\|^2_H + \sum_{m\ge N}\lambda_m\|\varphi_m-u_m\|^2_H\\
&= \sum_{m\le N-1}\lambda_m\|\varphi_m-P_Wu_m\|^2_H + \sum_{m\ge N}\lambda_m\|\varphi_m-u_m\|^2_H + \sum_{m\ge N}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \sum_{m\ge N}\lambda_m\|\varphi_m-P_Wu_m\|^2_H\\
&= \sum_{m\in J}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \sum_{m\ge N}\lambda_m\big(\|\varphi_m-P_Wu_m\|^2_H - \|\varphi_m-u_m\|^2_H\big)\\
&\overset{(*)}{\ge} \sum_{m\in J}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \sum_{m\ge N}\lambda_m\|P_{W^{\perp_U}}\varphi_m\|^2_H\\
&= \sum_{m\in J}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \sum_{m\ge N}\lambda_m\langle e,\varphi_m\rangle^2_H\\
&\ge \sum_{m\in J}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \sup_{m\ge N}\lambda_m\sum_{m\ge N}\langle e,\varphi_m\rangle^2_H\\
&\ge \sum_{m\in J}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \lambda_N\|e\|^2_H = \sum_{m\in J}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \lambda_N,
\end{aligned}
$$

where in the second equality we used that $P_Wu_m = u_m$ for $m\le N-1$ (since $u_m\in W$), and in the last inequality we have used Bessel's inequality and the assumption that the sequence of eigenvalues is non-increasing. The inequality marked with $(*)$ holds true because

$$\|\varphi_m-P_Wu_m\|^2_H = \|P_W(\varphi_m-u_m) + P_{W^{\perp_U}}\varphi_m\|^2_H = \|P_W(\varphi_m-u_m)\|^2_H + \|P_{W^{\perp_U}}\varphi_m\|^2_H \le \|\varphi_m-u_m\|^2_H + \|P_{W^{\perp_U}}\varphi_m\|^2_H.$$

By (1.1.8), for $w = \sum_{m\in J}\sqrt{\lambda_m}\,P_Wu_m\otimes Y_m\in W\otimes S$ we have

$$\sum_{m\in J}\lambda_m\|\varphi_m-P_Wu_m\|^2_H - \lambda_N = \|f-w\|^2_{H\otimes S} - \lambda_N \ge \inf_{h\in W\otimes S}\|f-h\|^2_{H\otimes S} - \lambda_N.$$

Thus we have shown

$$\|f-g\|^2_{H\otimes S} \ge \inf_{\substack{W\subset H\ \text{closed}\\ \dim W = N-1}}\ \inf_{h\in W\otimes S}\|f-h\|^2_{H\otimes S} - \lambda_N.$$

Applying the inductive hypothesis we finally get

$$\inf_{\substack{U\subset H\ \text{closed}\\ \dim U = N}}\ \inf_{g\in U\otimes S}\|f-g\|^2_{H\otimes S} \ge \inf_{\substack{W\subset H\ \text{closed}\\ \dim W = N-1}}\ \inf_{h\in W\otimes S}\|f-h\|^2_{H\otimes S} - \lambda_N \ge \sum_{m\ge N}\lambda_m - \lambda_N = \sum_{m\ge N+1}\lambda_m.$$

1.2 The Karhunen-Loève Expansion of a Random Field

The main goal of this section is to specialize the results we have previously obtained to the particular case of infinite-dimensional random fields.

In the notation of the previous section, let $H = L^2(D)$ and $S = L^2_P(\Omega)$. Let $a\in L^2_P(\Omega)\otimes L^2(D)$ (i.e. $a$ is a second order random field) with mean

$$E_a(x) := \int_\Omega a(\omega,x)\,dP(\omega)$$

and covariance function

$$V_a(x,x') := \int_\Omega a(\omega,x)a(\omega,x')\,dP(\omega) - E_a(x)E_a(x') = \int_\Omega[a(\omega,x)-E_a(x)][a(\omega,x')-E_a(x')]\,dP(\omega).$$

Observe that $E_a\in L^2(D)$ and $V_a\in L^2(D)\otimes L^2(D)$. Consider

$$\tilde a(\omega,x) := a(\omega,x) - E_a(x)\in L^2_P(\Omega)\otimes L^2(D).$$

Let $(s_m)_{m\in\mathbb{N}}$ and $(e_n)_{n\in\mathbb{N}}$ be orthonormal bases for $L^2_P(\Omega)$ and $L^2(D)$, respectively. According to our discussion at the beginning of Section 1.1.1, we have the following representation:

$$\tilde a = \sum_{n\in\mathbb{N}}\sum_{m\in\mathbb{N}}\tilde a_{n,m}\,e_n\otimes s_m,\qquad (\tilde a_{n,m})\in\ell^2(\mathbb{N}\times\mathbb{N}).$$

Define $\tilde a_m := \sum_{n\in\mathbb{N}}\tilde a_{n,m}e_n\in L^2(D)$. Now we verify that $C_{\tilde a}$, as given in Definition 1.1.3, coincides with $V_a$ defined above. Indeed, if we identify $\tilde a$ with an element of $L^2(\Omega\times D)$,

$$
\begin{aligned}
V_a(x,x') &= \int_\Omega\tilde a(\omega,x)\tilde a(\omega,x')\,dP(\omega)\\
&= \int_\Omega\Big(\sum_{n\in\mathbb{N}}\sum_{m\in\mathbb{N}}\tilde a_{n,m}e_n(x)s_m(\omega)\Big)\Big(\sum_{n'\in\mathbb{N}}\sum_{m'\in\mathbb{N}}\tilde a_{n',m'}e_{n'}(x')s_{m'}(\omega)\Big)dP(\omega)\\
&= \sum_{n,m,n',m'}\tilde a_{n,m}\tilde a_{n',m'}\,e_n(x)e_{n'}(x')\int_\Omega s_m(\omega)s_{m'}(\omega)\,dP(\omega)\\
&= \sum_{n,m,n',m'}\tilde a_{n,m}\tilde a_{n',m'}\,e_n(x)e_{n'}(x')\,\delta_{mm'} = \sum_{m\in\mathbb{N}}\Big(\sum_{n\in\mathbb{N}}\tilde a_{n,m}e_n(x)\Big)\Big(\sum_{n'\in\mathbb{N}}\tilde a_{n',m}e_{n'}(x')\Big)\\
&= \sum_{m\in\mathbb{N}}\tilde a_m(x)\tilde a_m(x') = C_{\tilde a}(x,x').
\end{aligned}
$$

Therefore $V_a$ is indeed the correlation of $\tilde a$.

By Theorem 1.1.5 we can associate to $V_a$ a non-negative definite trace class operator $\mathcal{V}_a : L^2(D)\to L^2(D)$ taking the form

$$(\mathcal{V}_av)(x) = \sum_{m\in\mathbb{N}}\langle\tilde a_m,v\rangle_{L^2(D)}\tilde a_m(x) = \sum_{m\in\mathbb{N}}\Big(\int_D\tilde a_m(x')v(x')\,dx'\Big)\tilde a_m(x) = \int_DV_a(x,x')v(x')\,dx'. \tag{1.2.1}$$

This is called the Carleman operator. In particular $\mathcal{V}_a$ is compact and has an eigenpair sequence $(\lambda_m,\varphi_m)_{m=1}^\infty$ such that

$$\mathcal{V}_a\varphi_m = \lambda_m\varphi_m, \tag{1.2.2}$$

with the properties that the eigenvalues are non-negative reals with $\lambda_m\downarrow 0$ and the eigenvectors $(\varphi_m)_m$ form an orthonormal sequence in $L^2(D)$.

By Corollary 1.1.6 we have a representation for the covariance given by

$$V_a(x,x') = \sum_{m\in\mathbb{N}}\lambda_m\varphi_m(x)\varphi_m(x').$$

Corollary 1.2.1. If $a\in L^2_P(\Omega)\otimes L^2(D)$, then there exists a sequence $(Y_m)_m\subset L^2_P(\Omega)$ such that $EY_m = 0$, $\operatorname{Cov}(Y_m,Y_n) = \delta_{mn}$ for all $m,n\in\mathbb{N}$, and

$$a(\omega,x) = E_a(x) + \sum_{m\in\mathbb{N}}\sqrt{\lambda_m}\,\varphi_m(x)Y_m(\omega) \tag{1.2.3}$$

where $(\lambda_m)_m$ and $(\varphi_m)_m$ are the eigenvalues and eigenvectors of the Carleman operator $\mathcal{V}_a$ defined in (1.2.1). Moreover

$$Y_m(\omega) = \frac{1}{\sqrt{\lambda_m}}\int_D[a(\omega,x)-E_a(x)]\varphi_m(x)\,dx.$$

Definition 1.2.2. Decomposition (1.2.3) is called the Karhunen-Loève expansion of the random field $a$.

In this context the analogue of Theorem 1.1.9 states the following.

Corollary 1.2.3. If $a\in L^2_P(\Omega)\otimes L^2(D)$ has the Karhunen-Loève expansion (1.2.3), then for any $N\in\mathbb{N}$ it holds that

$$\Big\|\tilde a - \sum_{m=1}^N\sqrt{\lambda_m}\,\varphi_mY_m\Big\|^2_{L^2_P(\Omega)\otimes L^2(D)} = \sum_{m\ge N+1}\lambda_m. \tag{1.2.4}$$
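The identity (1.2.4) is easy to check numerically. Below is a minimal numpy sketch; everything specific in it is an illustrative assumption rather than material from the thesis: the domain is $D = [0,1]$ on a uniform grid, the covariance is a Gaussian kernel of the kind appearing in Proposition 1.2.5 below, and the Carleman operator (1.2.1) is discretized so that the eigenvalues of $hC$ approximate the $\lambda_m$.

```python
import numpy as np

# Illustrative check of (1.2.3)-(1.2.4): D = [0,1] on a uniform grid; the
# Gaussian covariance and the values of n, sigma, gamma are assumed choices.
n, sigma, gamma = 200, 1.0, 0.5
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
C = sigma**2 * np.exp(-np.subtract.outer(x, x)**2 / gamma**2)

# Discretized Carleman operator (1.2.1): eigenvalues of h*C approximate the
# lambda_m; columns of V / sqrt(h) approximate phi_m, orthonormal in L2(D).
lam, V = np.linalg.eigh(h * C)
lam, V = np.maximum(lam[::-1], 0.0), V[:, ::-1] / np.sqrt(h)

# By (1.2.4) the mean-square truncation error equals the tail sum of eigenvalues.
for N in (1, 5, 10):
    print(N, lam[N:].sum())

# Monte Carlo confirmation with Gaussian Y_m (EY_m = 0, Cov(Y_m, Y_n) = delta_mn).
rng = np.random.default_rng(0)
Y = rng.standard_normal((n, 20000))
err = V[:, 10:] @ (np.sqrt(lam[10:, None]) * Y[10:])  # a_tilde minus its truncation at N = 10
print((h * (err**2).sum(axis=0)).mean(), "vs", lam[10:].sum())
```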

1.2.1 More on Convergence of the Karhunen-Loève Expansion

Under certain conditions it can be shown that the convergence of the truncated Karhunen-Loève expansion is in $L^\infty(\Omega\times D)$. Namely, assuming that the sequence $(Y_m)_m$ is uniformly bounded in $L^\infty(\Omega)$ and that $\sum_m\sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)}$ converges, we have

$$\Big\|\tilde a - \sum_{m=1}^N\sqrt{\lambda_m}\,\varphi_mY_m\Big\|_{L^\infty(\Omega\times D)} \le C\sum_{m\ge N+1}\sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)} \tag{1.2.5}$$

where the constant $C$ is independent of $N$. We will use this important observation in Section 4.2.1, where we analyse the overall error of the stochastic collocation method. Inspired by the bound (1.2.5), we are therefore interested in studying the eigenvalue decay and point-wise eigenfunction bounds. The results we are going to present are strictly related to the regularity of the covariance $V_a$, as stated in the following definition.

Definition 1.2.4. Let $p,q\in[0,\infty)$. A covariance function $V_a : D\times D\to\mathbb{R}$ is said to be piecewise analytic (respectively smooth and $H^{p,q}$) if there exists a finite partition $\mathcal{D} = \{D_j\}_{j=1}^K$ of $D$ into simplices $D_j$ and a finite family $\mathcal{G} = \{G_j\}_{j=1}^K$ of open sets in $\mathbb{R}^d$ such that

$$\overline D = \bigcup_{j=1}^K\overline{D_j},\qquad \overline{D_j}\subset G_j,\quad\forall j = 1,\dots,K,$$

and such that $V_a|_{D_j\times D_{j'}}$ has an extension to $G_j\times G_{j'}$ which is analytic in $G_j\times G_{j'}$ (respectively smooth and $H^{p,q}$) for all $j,j' = 1,\dots,K$.

We refer the reader to [14, Section 2.2] for a proof of the following result.

Proposition 1.2.5. Let $V_a\in L^2(D\times D)$ and let $\lambda_m$ be as in (1.2.2).

1) If $V_a$ is piecewise analytic, then the eigenvalues satisfy

$$\lambda_m \le C_1e^{-C_2m^{1/d}},\quad\forall m,$$

for some constants $C_1,C_2>0$ depending only on $V_a$.

2) If $V_a$ is piecewise $H^{k,0}$, then the eigenvalues satisfy

$$\lambda_m \le C_3m^{-k/d},\quad\forall m,$$

for a constant $C_3>0$ depending only on $V_a$.

3) If $V_a$ is piecewise smooth, then the eigenvalues satisfy

$$\lambda_m \le C_4m^{-s},\quad\forall m,$$

for any $s>0$, with a constant $C_4>0$ depending only on $V_a$ and $s$.

4) If $V_a$ is a Gaussian covariance function taking the form $V_a(x,x') = \sigma^2e^{-\frac{|x-x'|^2}{\gamma^2\operatorname{diam}(D)^2}}$, with $\sigma,\gamma>0$ called the standard deviation and correlation length, respectively, then the eigenvalues satisfy

$$\lambda_m \le C_5\,\frac{\sigma^2}{\gamma^2}\,\frac{(1/\gamma)^{m^{1/d}}}{\Gamma(0.5\,m^{1/d})},\quad\forall m,$$

where $\Gamma(\cdot)$ denotes the Gamma function and $C_5>0$ is independent of $m$.

The proof of the following proposition can be found in [14, Section 2.3].

Proposition 1.2.6. Let $V_a\in L^2(D\times D)$ and let $\varphi_m$ be as in (1.2.2).

1) If $V_a$ is piecewise $H^{k,0}$, then $\varphi_m\in H^k(D_j)$ for all $D_j\in\mathcal{D}$, and for every $\varepsilon\in(0,k-d/2]$ there exists a constant $C_6 = C_6(\varepsilon,k)>0$ such that

$$\|\varphi_m\|_{L^\infty(D)} \le C_6\,\lambda_m^{-(d/2+\varepsilon)/k},\quad\forall m.$$

2) If $V_a$ is piecewise smooth, then for any $s>0$ there exists a constant $C_7 = C_7(s,d)>0$ such that

$$\|\varphi_m\|_{L^\infty(D)} \le C_7\,\lambda_m^{-s},\quad\forall m.$$

Combining Propositions 1.2.5 and 1.2.6 we finally get control on the series in (1.2.5).

Corollary 1.2.7. If $V_a$ is piecewise $H^{k,0}$, then $\varphi_m\in H^k(D_j)$ for all $D_j\in\mathcal{D}$ and

$$\sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)} \le Km^{-s}$$

where $s = \frac{1}{2d}(k-d-2\varepsilon)$ for an arbitrary $\varepsilon\in(0,\frac12(k-d)]$ and the constant $K$ is independent of $m$.


2 Problem Setting

2.1 Strong and Weak Formulations

For $d\in\mathbb{N}$, let $D\subset\mathbb{R}^d$ be a bounded Lipschitz domain and $(\Omega,\mathcal{F},P)$ a complete probability space. Let $a,f:\Omega\times D\to\mathbb{R}$ be known random functions. Our model problem is an elliptic partial differential equation with random coefficients, and it takes the form: $P$-a.s.,

$$\begin{cases}-\nabla\cdot(a(\omega,x)\nabla u(\omega,x)) = f(\omega,x), & x\in D,\\ u(\omega,x) = 0, & x\in\partial D,\end{cases} \tag{2.1.1}$$

where the gradient operator $\nabla$ is taken with respect to $x$. The main goal is to find a random field $u:\Omega\times D\to\mathbb{R}$ which satisfies the above problem. As we will see, under certain assumptions on $a$ and $f$ it is relatively easy to show that such a solution exists and is unique. On the other hand, no explicit expression for $u$ is known in general. Thus we aim to approximate the solution to (2.1.1) numerically.

Consider the Hilbert space $L^2_P(\Omega)$ of square integrable functions with respect to $P$, with the usual norm, and the Hilbert space $H^1_0(D)$ of $H^1(D)$-functions vanishing at the boundary of $D$ in the trace sense, equipped with the norm

$$\|\phi\|^2_{H^1_0(D)} = \int_D|\nabla\phi(x)|^2\,dx.$$

In what follows we are going to make several assumptions:

(A1) $a\in L^2_P(\Omega)\otimes L^2(D)$ with mean and covariance function defined as

$$E_a(x) := \int_\Omega a(\omega,x)\,dP(\omega),\qquad V_a(x,x') := \int_\Omega a(\omega,x)a(\omega,x')\,dP(\omega) - E_a(x)E_a(x').$$

(A2) $a$ is uniformly bounded from below:

$$\exists\,a_{\min}>0:\ P(\omega\in\Omega : a(\omega,x)>a_{\min}\ \forall x\in D) = 1.$$

(A3) $f\in L^2_P(\Omega)\otimes L^2(D)$ with mean and covariance function defined as

$$E_f(x) := \int_\Omega f(\omega,x)\,dP(\omega),\qquad V_f(x,x') := \int_\Omega f(\omega,x)f(\omega,x')\,dP(\omega) - E_f(x)E_f(x').$$

In this framework we deal with stochastic functions living on a domain which is the Cartesian product of the spatial and probabilistic domains. For this reason the choice of tensor product spaces is natural. In particular we focus on the tensor product Hilbert space $V_P := L^2_P(\Omega)\otimes H^1_0(D)$ endowed with the inner product

$$\langle v,w\rangle_{V_P} = E\int_D\nabla v(\omega,x)\cdot\nabla w(\omega,x)\,dx.$$

Moreover we introduce the subspace

$$V_{P,a} := \Big\{v\in V_P : E\int_Da(\omega,x)|\nabla v(\omega,x)|^2\,dx<\infty\Big\}$$

with the norm $\|v\|^2_{V_{P,a}} = E\int_Da(\omega,x)|\nabla v(\omega,x)|^2\,dx$. Observe that due to (A2) we have the continuous embedding $V_{P,a}\hookrightarrow V_P$ with

$$\|v\|_{V_P} \le \frac{1}{\sqrt{a_{\min}}}\|v\|_{V_{P,a}}. \tag{2.1.2}$$

Define the bilinear form $B:V_P\times V_P\to\mathbb{R}$ and the linear functional $F:V_P\to\mathbb{R}$ as follows:

$$B(w,v) := E\int_Da(\omega,x)\nabla w(\omega,x)\cdot\nabla v(\omega,x)\,dx,\qquad F(v) := E\int_Df(\omega,x)v(\omega,x)\,dx.$$

We can now reformulate problem (2.1.1) in a variational form: find $u\in V_P$ such that

$$B(u,v) = F(v),\quad\forall v\in V_P. \tag{2.1.3}$$

We show in a moment that problem (2.1.3) is well posed, in the sense that a solution exists and is unique. More precisely, if we can prove that $B$ is continuous and coercive and $F$ is continuous, then by the Lax-Milgram Theorem we have the claim (see [9], Theorem 2.7.7). In the proof of this result we will use Poincaré's inequality:

$$\|v\|_{L^2(D)} \le C_P\|\nabla v\|_{L^2(D)},\quad\forall v\in H^1_0(D). \tag{2.1.4}$$

Lemma 2.1.1. Under assumptions (A2) and (A3), problem (2.1.3) admits a unique solution $u\in V_P$ such that

$$\|u\|_{V_P} \le \frac{C_P}{a_{\min}}\|f\|_{L^2_P(\Omega)\otimes L^2(D)}.$$

Proof. Hölder's inequality applied twice gives $|B(u,v)| \le \|u\|_{V_{P,a}}\|v\|_{V_{P,a}}$, and therefore $B$ is continuous with continuity constant equal to 1. It is straightforward to show also that

$$B(v,v) = \|v\|^2_{V_{P,a}},$$

which in turn gives coercivity with constant equal to 1. For what concerns the continuity of $F$ we have

$$
\begin{aligned}
|F(v)| = \Big|E\int_Dfv\,dx\Big| &\le E\int_D|fv|\,dx \le E\big[\|f\|_{L^2(D)}\|v\|_{L^2(D)}\big] \le C_P\,E\big[\|f\|_{L^2(D)}\|\nabla v\|_{L^2(D)}\big]\\
&\le \frac{C_P}{\sqrt{a_{\min}}}E\big[\|f\|_{L^2(D)}\|\sqrt a\,\nabla v\|_{L^2(D)}\big] \le \frac{C_P}{\sqrt{a_{\min}}}\big(E[\|f\|^2_{L^2(D)}]\big)^{1/2}\|v\|_{V_{P,a}} = \frac{C_P}{\sqrt{a_{\min}}}\|f\|_{L^2_P(\Omega)\otimes L^2(D)}\|v\|_{V_{P,a}},
\end{aligned}
$$

where we have used Hölder's inequality again and Poincaré's inequality (2.1.4). Thus, by assumption (A3), $F$ is continuous with continuity constant $\frac{C_P}{\sqrt{a_{\min}}}\|f\|_{L^2_P(\Omega)\otimes L^2(D)}$.

Having shown continuity of $B$ and $F$ and coercivity of $B$, the first part of the lemma is proved by applying the Lax-Milgram Theorem and the fact that $V_{P,a}\hookrightarrow V_P$.

It remains to show that the estimate in the statement holds. We have

$$\|u\|_{V_P} \le \frac{1}{a_{\min}}\Big(\int_DE[a^2|\nabla u|^2]\,dx\Big)^{1/2} \le \frac{C_P}{a_{\min}}\Big(\int_DE[(\nabla\cdot(a\nabla u))^2]\,dx\Big)^{1/2} = \frac{C_P}{a_{\min}}\Big(\int_DE[f^2]\,dx\Big)^{1/2} = \frac{C_P}{a_{\min}}\|f\|_{L^2_P(\Omega)\otimes L^2(D)},$$

where the first inequality comes from (A2) and the second one from (2.1.4).

2.2 Finite-Dimensional Stochastic Space

In order to deal with the infinite-dimensional problem (2.1.3) we aim to replace the coefficients by finite-dimensional counterparts which can be treated numerically. More precisely, we consider a new problem whose coefficients depend only on a finite number $N$ of random variables. We have shown in Chapter 1 that we can decompose a random field with the Karhunen-Loève expansion and consider a finite-dimensional truncation of the latter. Hence we choose the new coefficient

$$a_N(\omega,x) = E_a(x) + \sum_{m=1}^N\sqrt{\lambda_m}\,\varphi_m(x)Y_m(\omega),$$

which converges to $a(\omega,x)$ in the space $L^2_P(\Omega)\otimes L^2(D)$. A similar result holds for $f$. We emphasize the dependence on the random variables $Y_m$ by introducing the notation

$$a_N(\omega,x) = a_N(Y_1(\omega),\dots,Y_N(\omega),x),\qquad f_N(\omega,x) = f_N(Y_1(\omega),\dots,Y_N(\omega),x),$$

where $(Y_m)_{m=1}^N$ are real valued with $EY_m = 0$ and $\operatorname{Cov}(Y_m,Y_n) = \delta_{mn}$ (see Corollary 4.1.4). The new problem is then to find $u_N\in V_P$ such that

$$E\int_Da_N(\omega,x)\nabla u_N(\omega,x)\cdot\nabla v(\omega,x)\,dx = E\int_Df_N(\omega,x)v(\omega,x)\,dx,\quad\forall v\in V_P. \tag{2.2.1}$$

Observe that by the Doob-Dynkin Lemma (see [3], Theorem 20.1) the function $u_N$ also depends on the same random variables, i.e.

$$u_N(\omega,x) = u_N(Y_1(\omega),\dots,Y_N(\omega),x).$$

With this characterization the infinite-dimensional probability space $\Omega$ has been substituted by an $N$-dimensional one. We underline the fact that $a_N$ and $f_N$ are inexact representations of the coefficients appearing in (2.1.3), and therefore the solution $u_N$ will also be an approximation of the exact solution $u$; the truncation error $u-u_N$ has to be estimated (see Section 4.2.1).

We will adopt the following notation. Let $\Gamma_m := Y_m(\Omega)\subset\mathbb{R}$, which can be either bounded or unbounded. Define

$$\Gamma := \prod_{m=1}^N\Gamma_m.$$

Moreover, let us assume that the random variables $Y_1,\dots,Y_N$ have joint probability density function $\rho:\Gamma\to\mathbb{R}_+$.

We now consider the tensor product space

$$V_\rho = L^2_\rho(\Gamma)\otimes H^1_0(D).$$

With the new notation we can reformulate problem (2.2.1) as follows: find $u_N\in V_\rho$ such that

$$\int_\Gamma\int_Da_N(y,x)\nabla u_N(y,x)\cdot\nabla v_N(y,x)\,\rho(y)\,dx\,dy = \int_\Gamma\int_Df_N(y,x)v_N(y,x)\,\rho(y)\,dx\,dy,\quad\forall v_N\in V_\rho. \tag{2.2.2}$$

We can equivalently consider $u_N$, $a_N$ and $f_N$ as functions

$$u_N, a_N, f_N : \Gamma\to H^1_0(D).$$

The analogue of problem (2.2.2) is then: find $u_N\in V_\rho$ such that

$$\int_Da_N(y)\nabla u_N(y)\cdot\nabla\psi(x)\,dx = \int_Df_N(y)\psi(x)\,dx,\quad\forall\psi\in H^1_0(D),\ \rho\text{-a.e. in }\Gamma. \tag{2.2.3}$$

Remark 1. The use of the finite-dimensional truncation of the Karhunen-Loève expansion has turned the stochastic problem (2.1.3) into the deterministic parametric elliptic problem (2.2.3) with an $N$-dimensional parameter.
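To make Remark 1 concrete, the following is a minimal finite-difference sketch of one deterministic solve of (2.2.3) in one spatial dimension. All specifics are hypothetical choices for illustration, not taken from the thesis: $D = (0,1)$, $E_a\equiv1$, $f\equiv1$, a made-up KL spectrum $\lambda_m = 2^{-m}$ with $\varphi_m(x) = \sqrt2\sin(m\pi x)$, and a floor at $a_{\min}$ so that assumption (A2) holds.

```python
import numpy as np

def solve_parametric(y, n=100, a_min=0.1):
    """Solve (2.2.3) for one parameter value y = (y_1, ..., y_N): a sketch.

    1D finite differences on D = (0,1) with u(0) = u(1) = 0 and f = 1, using
    the hypothetical KL data lambda_m = 2**-m, phi_m(x) = sqrt(2) sin(m pi x).
    """
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    m = np.arange(1, len(y) + 1)[:, None]
    # a_N(y, x) = E_a(x) + sum_m sqrt(lambda_m) phi_m(x) y_m, floored at a_min
    a = 1.0 + (np.sqrt(2.0 * 2.0**-m) * np.sin(m * np.pi * x)
               * np.asarray(y, dtype=float)[:, None]).sum(axis=0)
    a = np.maximum(a, a_min)
    am = 0.5 * (a[:-1] + a[1:])                    # coefficient at cell midpoints
    # Tridiagonal stiffness matrix of -(a u')' on the interior nodes
    A = (np.diag(am[:-1] + am[1:]) - np.diag(am[1:-1], 1) - np.diag(am[1:-1], -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, np.ones(n - 1))   # one decoupled deterministic solve
    return x, u

x, u = solve_parametric([0.3, -0.5, 0.1])
print(round(float(u.max()), 6))
```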

3 The Stochastic Collocation Method

The stochastic collocation method aims to approximate numerically the solution $u_N$ to problem (2.2.3). We follow [11] in the description of the method. It is based on standard finite element approximations in the space $H^1_0(D)$, where $D$ is a bounded Lipschitz domain, and a collocation, in the space $L^2_\rho(\Gamma)$, on a tensor grid built upon the zeros of polynomials orthogonal with respect to the joint probability density function of the random variables $Y_1,\dots,Y_N$.

3.1 Full Tensor Grid of Collocation Points

We seek an approximation in the finite-dimensional tensor space $V_{p,h} := \mathcal{P}_p(\Gamma)\otimes S^q_h(D)$ where:

• $S^q_h(D)\subset H^1_0(D)$ is a finite element space of dimension $N_h$, containing piecewise polynomials of degree $q$ on a uniform triangulation $\mathcal{T}_h$ with mesh size $h$.

• $\mathcal{P}_p(\Gamma)\subset L^2_\rho(\Gamma)$ is the span of tensor product polynomials with degree $p = (p_1,\dots,p_N)$, i.e. $\mathcal{P}_p(\Gamma) := \bigotimes_{m=1}^N\mathcal{P}_{p_m}(\Gamma_m)$ where for each $m\in\{1,\dots,N\}$

$$\mathcal{P}_{p_m}(\Gamma_m) := \operatorname{span}\{y^i_m : i = 0,\dots,p_m\}.$$

Hence $N_p := \dim(\mathcal{P}_p(\Gamma)) = \prod_{m=1}^N(p_m+1)$.

The first step of the approximation process consists in choosing a set of collocation points and applying Lagrange interpolation to $u_N$ at those points. One way to do so is by using a full tensor grid interpolation operator. The evaluation points are chosen to be the roots of polynomials which are orthogonal with respect to a certain auxiliary density function. Standard probability density functions, such as Gaussian or uniform, lead to well known Gauss quadrature nodes and weights. Recall that in general the random variables $Y_m$ are uncorrelated but not independent, and therefore the joint probability density function $\rho$ does not factorize. In this case we introduce $\hat\rho:\Gamma\to\mathbb{R}_+$ such that

$$\hat\rho(y) = \prod_{m=1}^N\hat\rho_m(y_m),\qquad \Big\|\frac{\rho}{\hat\rho}\Big\|_{L^\infty(\Gamma)}<\infty. \tag{3.1.1}$$

See Section 4.1 for conditions on the existence of $\hat\rho$. For each $m\in\{1,\dots,N\}$, we denote by $\{y_{m,k_m}\}_{k_m=1}^{p_m+1}\subset\Gamma_m$ the $p_m+1$ roots of the polynomial $q_{p_m+1}\in\mathcal{P}_{p_m+1}(\Gamma_m)$ such that for any $v\in\mathcal{P}_{p_m}(\Gamma_m)$

$$\int_{\Gamma_m}q_{p_m+1}(y_m)v(y_m)\hat\rho_m(y_m)\,dy_m = 0.$$

We consider the tensorized grid of collocation points

$$\mathcal{Y} := \{y_k = (y_{m,k_m})_{m=1}^N : 1\le k_m\le p_m+1\}.$$

We order this set by introducing the following index associated to the vector $k = (k_1,\dots,k_N)$:

$$\bar k := k_1 + \sum_{i=1}^{N-1}(k_{i+1}-1)\prod_{j\le i}(p_j+1).$$

More precisely, we introduce the bijection

$$\Psi : \{k = (k_1,\dots,k_N) : k_m\in\{1,\dots,p_m+1\}\}\to\{1,2,\dots,N_p\},\qquad \Psi(k) = \bar k.$$

Then we denote by $y_k$ the collocation point $y_k = (y_{1,\Psi^{-1}(k)_1},\dots,y_{N,\Psi^{-1}(k)_N})$.

We define the full tensor Lagrange interpolant $\mathcal{I}_{\mathrm{full}} : C^0(\Gamma;H^1_0(D))\to\mathcal{P}_p(\Gamma)\otimes H^1_0(D)$,

$$\mathcal{I}_{\mathrm{full}}v(y,x) = \sum_{y_k\in\mathcal{Y}}v(y_k,x)\,\ell_k(y)$$

where $\ell_k(y) = \prod_{m=1}^N\ell_{m,k_m}(y_m)$ and $\ell_{m,k_m}\in\mathcal{P}_{p_m}(\Gamma_m)$,

$$\ell_{m,k_m}(y_m) = \prod_{\substack{i=1\\ i\ne k_m}}^{p_m+1}\frac{y_m-y_{m,i}}{y_{m,k_m}-y_{m,i}}.$$

Note that $\ell_{m,j}(y_{m,i}) = \delta_{ji}$ for $j,i = 1,\dots,p_m+1$ and consequently $\ell_k(y_j) = \delta_{kj}$.

Equivalently, using the global index notation, we get

$$\mathcal{I}_{\mathrm{full}}v(y,x) = \sum_{k=1}^{N_p}v(y_k,x)\,\ell_k(y) \tag{3.1.2}$$

where $\ell_k(y) = \prod_{m=1}^N\ell_{m,\Psi^{-1}(k)_m}(y_m)$.

By defining the one-dimensional interpolants $\mathcal{I}_{p_m} : C^0(\Gamma_m;H^1_0(D))\to\mathcal{P}_{p_m}(\Gamma_m)\otimes H^1_0(D)$,

$$\mathcal{I}_{p_m}v(y_m,x) = \sum_{k_m=1}^{p_m+1}v(y_{m,k_m},x)\,\ell_{m,k_m}(y_m), \tag{3.1.3}$$

we have the equivalent formulation $\mathcal{I}_{\mathrm{full}} = \bigotimes_{m=1}^N\mathcal{I}_{p_m}$.

The semi-discrete approximation in the stochastic domain is obtained by applying the operator $\mathcal{I}_{\mathrm{full}}$ to $u_N$, the solution to problem (2.2.3): $u_{N,p} = \mathcal{I}_{\mathrm{full}}u_N$.

The second step of the approximation procedure consists in projecting the semi-discrete approximation $u_{N,p}$ onto the finite element space $S^q_h(D)$. We may do this using the Galerkin method, and we get the final approximation $u_{N,p,h}$ satisfying

$$\int_Da_N(y)\nabla u_{N,p,h}(y)\cdot\nabla\psi_h(x)\,dx = \int_Df_N(y)\psi_h(x)\,dx,\quad\forall\psi_h\in S^q_h(D),\ \forall y\in\mathcal{Y}.$$

The main goal of the implementations in Chapter 5 will be to approximate the statistics of the solution $u$ to problem (2.1.3). Therefore we explain how to recover the expected value of the final approximation $u_{N,p,h}$. We define the Gauss quadrature weights

$$\omega_{m,k_m} := \int_{\Gamma_m}\ell^2_{m,k_m}(y_m)\hat\rho_m(y_m)\,dy_m,\qquad \omega_k = \prod_{m=1}^N\omega_{m,\Psi^{-1}(k)_m}. \tag{3.1.4}$$

Given these weights, we introduce the following Gauss quadrature formula (see [11], (2.3)):

$$E_{\hat\rho}(u_{N,p,h})(x) = E_{\hat\rho}\big[\mathcal{I}_{\mathrm{full}}u_{N,h}(y,x)\big] = E_{\hat\rho}\Big[\sum_{k=1}^{N_p}u_{N,h}(y_k,x)\ell_k(y)\Big] = \sum_{k=1}^{N_p}u_{N,h}(y_k,x)\,\omega_k \tag{3.1.5}$$

where $u_{N,h}(y_k,x)\in S^q_h(D)$ denotes the finite element solution of the problem with coefficients evaluated at the point $y_k$.

Remark 2. The stochastic collocation method is equivalent to solving $N_p$ deterministic problems. Note that each of these problems is naturally decoupled. On the other hand, $N_p$ grows exponentially in the number $N$ of random variables, giving rise to a huge computational effort: the so-called curse of dimensionality.
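As a concrete illustration of Remark 2 and of formula (3.1.5), the sketch below runs full tensor collocation for $N = 2$ uniform variables on $\Gamma_m = [-1,1]$ with auxiliary density $\hat\rho_m = 1/2$, so that the collocation points are Gauss-Legendre nodes. It reuses the hypothetical solver `solve_parametric` from the sketch in Section 2.2, and the degrees $p = (4,4)$ are an arbitrary choice.

```python
import numpy as np
from itertools import product

p = (4, 4)                                    # polynomial degree per direction
nodes, weights = zip(*(np.polynomial.legendre.leggauss(pm + 1) for pm in p))
weights = [w / 2.0 for w in weights]          # leggauss integrates dx; the density is 1/2

Eu, n_solves = 0.0, 0
for yk, wk in zip(product(*nodes), product(*weights)):
    _, u = solve_parametric(list(yk))          # decoupled deterministic solve at y_k
    Eu = Eu + np.prod(wk) * u                  # quadrature formula (3.1.5)
    n_solves += 1                              # N_p = (p_1 + 1)(p_2 + 1) solves in total

print("N_p =", n_solves, " E[u](1/2) ~", round(float(Eu[len(Eu) // 2]), 6))
```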

3.2 Sparse Tensor Grid of Collocation Points

We propose an alternative to the approach described in the previous section, based on a different choice of the collocation points. We construct the grid by the Smolyak algorithm as described in [11] and [12]. The main goal is to keep the number of points moderate.

Let $i\in\mathbb{N}_+$ be a positive integer denoting the level of approximation and $t:\mathbb{N}_+\to\mathbb{N}_+$ an increasing function denoting the number of collocation points used to build the approximation at level $i$, with $t(1) = 1$. In this context, for $m = 1,\dots,N$ the one-dimensional interpolation operator $\mathcal{I}^{t(i)}_m : C^0(\Gamma_m;H^1_0(D))\to\mathcal{P}_{t(i)-1}(\Gamma_m)\otimes H^1_0(D)$ takes the form

$$\mathcal{I}^{t(i)}_mv(y_m,x) = \sum_{k_m=1}^{t(i)}v(y_{m,k_m},x)\,\ell_{m,k_m}(y_m). \tag{3.2.1}$$

Here, similarly as before, $\{y_{m,k_m}\}_{k_m=1}^{t(i)}$ are the roots of the polynomial $q_{m,t(i)}\in\mathcal{P}_{t(i)}(\Gamma_m)$ orthogonal to $\mathcal{P}_{t(i)-1}(\Gamma_m)$ with respect to $\hat\rho_m$.

We introduce the difference operators

$$\Delta^{t(i)}_m = \begin{cases}\mathcal{I}^{t(i)}_m - \mathcal{I}^{t(i-1)}_m, & i\ge2,\\ \mathcal{I}^{t(i)}_m, & i = 1.\end{cases}$$

Given a multi-index $i = (i_1,\dots,i_N)\in\mathbb{N}^N_+$ we consider a function $g:\mathbb{N}^N_+\to\mathbb{N}$ strictly increasing in each argument. Let $w\in\mathbb{N}$. Then the isotropic sparse grid approximation $u_{N,p,h}$ is obtained by projecting onto the finite element space $S^q_h(D)$ the semi-discrete approximation $u_{N,p} = S^{t,g}_wu_N$, where

$$S^{t,g}_w = \sum_{\substack{i\in\mathbb{N}^N_+\\ g(i)\le w}}\ \bigotimes_{m=1}^N\Delta^{t(i_m)}_m. \tag{3.2.2}$$

For $|j| = \sum_{m=1}^Nj_m$, the following equivalent formulation can be proved by induction:

$$S^{t,g}_w = \sum_{\substack{i\in\mathbb{N}^N_+\\ g(i)\le w}}\ \sum_{\substack{j\in\{0,1\}^N\\ g(i+j)\le w}}(-1)^{|j|}\bigotimes_{m=1}^N\mathcal{I}^{t(i_m)}_m. \tag{3.2.3}$$

Therefore the sparse grid approximation can be seen as a linear combination of full tensor product interpolations, and the sparse grid $\mathcal{Y}_{\mathrm{sparse}}\subset\Gamma$ is obtained as the superposition of all full tensor grids used in (3.2.3) which correspond to non-zero coefficients. Namely

$$\mathcal{Y}_{\mathrm{sparse}} = \bigcup_{\substack{i\in\mathbb{N}^N_+\\ g(i)\le w}}\ \bigcup_{\substack{j\in\{0,1\}^N\\ g(i+j)\le w}}\mathcal{Y}^{t(i_1)}\times\cdots\times\mathcal{Y}^{t(i_N)} \tag{3.2.4}$$

where $\mathcal{Y}^{t(i_m)}\subset\Gamma_m$ denotes the grid of points used by $\mathcal{I}^{t(i_m)}_m$. Recall that the full tensor product grid is obtained as $\mathcal{Y} = \mathcal{Y}^{p_1}\times\cdots\times\mathcal{Y}^{p_N}$, with $\mathcal{Y}^{p_m}\subset\Gamma_m$ the grid of points used by $\mathcal{I}_{p_m}$. So the choice of $w$ and $g$ in the sparse construction is driven by the idea of reducing the number of non-zero terms in (3.2.4). Figure 3.2.1 shows the significant difference between a full grid and a sparse grid.

The effectiveness of the sparse collocation method relies strongly on the proper selection of the functions $t$ and $g$. A typical choice, which leads to the isotropic Smolyak algorithm, is

$$t(i) = \begin{cases}1, & i = 1,\\ 2^{i-1}+1, & i\ge2,\end{cases}\qquad g(i) = \sum_{m=1}^N(i_m-1). \tag{3.2.5}$$

Figure 3.2.1: A full tensor product grid and a sparse tensor product grid for $\Gamma = [-1,1]^2$, both with maximum polynomial degree in each direction equal to 33. [Figure not reproduced.]

On the other hand it is interesting to notice that the full tensor product method is recovered when we consider

$$t(i) = i,\qquad g(i) = \max_{1\le m\le N}(i_m-1).$$

The advantage of the sparse tensor product grid over the full tensor product grid can already be seen for a 2-dimensional stochastic domain (see Figure 3.2.2).

Since in general the function $t$ is not surjective, we define a left inverse by

$$t^{-1}(k) := \min\{i\in\mathbb{N}_+ : t(i)\ge k\}.$$

Note that $t^{-1}(t(i)) = i$ and $t(t^{-1}(k))\ge k$. Let $t(i) = (t(i_1),\dots,t(i_N))$ and consider the following set:

$$\Theta = \{p\in\mathbb{N}^N : g(t^{-1}(p+\mathbf{1}))\le w\}.$$

It is not hard to see that the Smolyak functions (3.2.5) give rise to

$$\Theta = \Big\{p\in\mathbb{N}^N : \sum_{m=1}^Nf(p_m)\le w\Big\},\qquad f(p_m) = \begin{cases}0, & p_m = 0,\\ 1, & p_m = 1,\\ \lceil\log_2(p_m)\rceil, & p_m\ge2.\end{cases} \tag{3.2.6}$$

Define the polynomial space

$$\mathcal{P}_\Theta(\Gamma) := \operatorname{span}\Big\{\prod_{m=1}^Ny^{p_m}_m : p = (p_1,\dots,p_N)\in\Theta\Big\}.$$

Then $S^{t,g}_wu_N\in\mathcal{P}_\Theta(\Gamma)\otimes H^1_0(D)$.
Figure 3.2.2: A comparison between the number of collocation points in a full grid and a sparse grid for $N = 2$: the log of the number of points versus the maximum number of points $t_{\max}$ employed in each direction. [Figure not reproduced.]

Similarly to the previous section, we describe how to compute the first moment of the final approximation $u_{N,p,h}$. For the Smolyak algorithm it can be shown (see [12], formula (3.9)) that

$$E_{\hat\rho}(u_{N,p,h})(x) = \sum_{\substack{i\in\mathbb{N}^N_+\\ w+1\le|i|\le w+N}}(-1)^{w+N-|i|}\binom{N-1}{w+N-|i|}\,E_{\hat\rho}\Big[\bigotimes_{m=1}^N\mathcal{I}^{t(i_m)}_mu_{N,h}(y,x)\Big] \tag{3.2.7}$$

and, from (3.1.5),

$$E_{\hat\rho}\Big[\bigotimes_{m=1}^N\mathcal{I}^{t(i_m)}_mu_{N,h}(y,x)\Big] = \sum_{k=1}^{N_{t(i)}}u_{N,h}(y_k,x)\,\omega_k,$$

where the $\omega_k$ are the same as in (3.1.4) and again $u_{N,h}(y_k,x)$ is the finite element solution of the problem with coefficients evaluated at the point $y_k$.

3.2.1 Anisotropic Version

It is possible to construct an even more refined algorithm acting differently on each direction $y_m$. This may be useful in situations where the convergence rate is poorer in some directions than in others. This anisotropy can be described by introducing weights $\alpha = (\alpha_1,\dots,\alpha_N)$ in the function $g$. One possible choice is, for instance,

$$g(i;\alpha) := \sum_{m=1}^N\frac{\alpha_m}{\alpha_{\min}}(i_m-1),\qquad \alpha_{\min} := \min_{1\le m\le N}\alpha_m.$$

Consequently the corresponding anisotropic sparse interpolation operator takes the form

$$S^{t,g}_{w,\alpha} = \sum_{\substack{i\in\mathbb{N}^N_+\\ g(i;\alpha)\le w}}\ \bigotimes_{m=1}^N\Delta^{t(i_m)}_m.$$

Observe that the isotropic Smolyak method described in the previous section is the special case of the anisotropic formula obtained by taking all the components of the weight vector equal, i.e. $\alpha_1 = \dots = \alpha_N$.

We will see in the next chapter (cf. Section 4.4) how the selection of weights is related to the analytic dependence of the solution $u_N$ with respect to each of the random variables $Y_m$. The key idea is to place more points in those directions where the convergence is slower.
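A small sketch of the resulting anisotropic index set $\{i\in\mathbb{N}^N_+ : g(i;\alpha)\le w\}$: increasing the weight $\alpha_m$ prunes levels in direction $m$, which is how fewer points end up in the directions where convergence is already fast. The weight values below are arbitrary.

```python
from itertools import product

def anisotropic_indices(alpha, w):
    """Multi-indices i with g(i; alpha) = sum_m (alpha_m/alpha_min)(i_m - 1) <= w."""
    a_min = min(alpha)
    bound = [int(w * a_min / am) + 1 for am in alpha]    # i_m - 1 <= w * a_min / alpha_m
    return [i for i in product(*(range(1, b + 1) for b in bound))
            if sum(am / a_min * (im - 1) for am, im in zip(alpha, i)) <= w]

# Equal weights recover the isotropic Smolyak set; a larger alpha_2 thins direction 2.
print(len(anisotropic_indices([1.0, 1.0], 4)))   # 15 retained indices
print(len(anisotropic_indices([1.0, 3.0], 4)))   # 7 retained indices
```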

4 Convergence of the Method

Before studying the convergence of the stochastic collocation method we need to impose some more requirements on the data of the problem, which in turn, as we will see, imply regularity of the stochastic behaviour of the solution $u_N$ to the problem

$$\int_Da_N(y)\nabla u_N(y)\cdot\nabla\psi(x)\,dx = \int_Df_N(y)\psi(x)\,dx,\quad\forall\psi\in H^1_0(D),\ \rho\text{-a.e. in }\Gamma. \tag{4.0.1}$$

The results presented in this chapter can be partially found in [11].

In the following section, for convenience, we drop the subscript $N$ which indicates the finite-dimensional noise dependence.

4.1 Regularity Results

We introduce the following weight $\sigma:\Gamma\to\mathbb{R}_+$, which allows us to control the exponential growth at infinity of certain functions:

$$\sigma(y) = \prod_{m=1}^N\sigma_m(y_m),\qquad \sigma_m(y_m) = \begin{cases}1, & \Gamma_m\ \text{bounded},\\ e^{-\alpha_m|y_m|}, & \Gamma_m\ \text{unbounded},\end{cases}$$

for some $\alpha_m>0$. We consider the corresponding function space

$$C^0_\sigma(\Gamma;V) = \Big\{v:\Gamma\to V,\ v\ \text{continuous in}\ y,\ \max_{y\in\Gamma}\sigma(y)\|v(y)\|_V<\infty\Big\}$$

where $V$ is a Hilbert space defined on $D$. As announced at the beginning of the chapter, we make the following assumptions on the data of problem (2.2.3):

(A4) $f\in C^0_\sigma(\Gamma;L^2(D))$.

(A5) The joint probability density function $\rho$ is such that

$$\rho(y) \le C_\rho\,e^{-\sum_{m=1}^N(\delta_my_m)^2},\quad\forall y\in\Gamma, \tag{4.1.1}$$

for some $C_\rho>0$, with $\delta_m = 0$ in the case $\Gamma_m$ is bounded and $\delta_m>0$ when $\Gamma_m$ is unbounded.

We remarked in Section 3.1 that we need to introduce an auxiliary probability density function satisfying (3.1.1). The following condition is sufficient:

$$C^{(m)}_{\min}e^{-(\delta_my_m)^2} \le \hat\rho_m(y_m) < C^{(m)}_{\max}e^{-(\delta_my_m)^2},\quad\forall y_m\in\Gamma_m,$$

for some constants $C^{(m)}_{\min}, C^{(m)}_{\max}>0$ which do not depend on $y_m$. Then (3.1.1) holds with

$$\Big\|\frac{\rho}{\hat\rho}\Big\|_{L^\infty(\Gamma)} \le \frac{C_\rho}{C_{\min}},\qquad C_{\min} := \prod_{m=1}^NC^{(m)}_{\min}.$$

Under the previous assumptions the following continuous embeddings hold:

$$C^0_\sigma(\Gamma;V)\hookrightarrow L^2_{\hat\rho}(\Gamma;V)\hookrightarrow L^2_\rho(\Gamma;V).$$

Indeed we have

$$\|v\|_{L^2_\rho(\Gamma;V)} \le \Big\|\frac{\rho}{\hat\rho}\Big\|^{1/2}_{L^\infty(\Gamma)}\|v\|_{L^2_{\hat\rho}(\Gamma;V)} \le \Big(\frac{C_\rho}{C_{\min}}\Big)^{1/2}\|v\|_{L^2_{\hat\rho}(\Gamma;V)}, \tag{4.1.2}$$

proving continuity of the second embedding. For the first one we have

$$\|v\|^2_{L^2_{\hat\rho}(\Gamma;V)} = \int_\Gamma(\sigma(y)\|v(y)\|_V)^2\,\frac{\hat\rho(y)}{\sigma^2(y)}\,dy \le \|v\|^2_{C^0_\sigma(\Gamma;V)}\int_\Gamma\frac{\hat\rho(y)}{\sigma^2(y)}\,dy = \|v\|^2_{C^0_\sigma(\Gamma;V)}\prod_{m=1}^N\int_{\Gamma_m}\frac{\hat\rho_m(y_m)}{\sigma^2_m(y_m)}\,dy_m.$$

Denote $M_m := \int_{\Gamma_m}\frac{\hat\rho_m(y_m)}{\sigma^2_m(y_m)}\,dy_m$. We have two possible cases. If $\Gamma_m$ is bounded then $\sigma_m(y_m) = 1$ and $\delta_m = 0$, and therefore

$$M_m = \int_{\Gamma_m}\hat\rho_m(y_m)\,dy_m \le \int_{\Gamma_m}C^{(m)}_{\max}e^{-(0\cdot y_m)^2}\,dy_m = C^{(m)}_{\max}|\Gamma_m|,$$

where $|\Gamma_m|$ denotes the volume of $\Gamma_m$. On the other hand, if $\Gamma_m$ is unbounded then $\sigma_m(y_m) = e^{-\alpha_m|y_m|}$ and $\delta_m>0$. Thus

$$
\begin{aligned}
M_m &= \int_{\Gamma_m}\hat\rho_m(y_m)e^{2\alpha_m|y_m|}\,dy_m = \int_{\Gamma_m}\hat\rho_m(y_m)\Big(e^{-\delta^2_my^2_m+2\alpha_m|y_m|}e^{\delta^2_my^2_m}\Big)dy_m\\
&\le C^{(m)}_{\max}\int_{\Gamma_m}e^{-\delta^2_my^2_m+2\alpha_m|y_m|}\,dy_m \le C^{(m)}_{\max}\int_{\Gamma_m}\exp\Big(-\delta^2_my^2_m+\frac{2\alpha^2_m}{\delta^2_m}+\frac{\delta^2_my^2_m}{2}\Big)dy_m = C^{(m)}_{\max}\,\frac{\sqrt{2\pi}}{\delta_m}\,e^{\big(\frac{\sqrt2\,\alpha_m}{\delta_m}\big)^2}.
\end{aligned}
$$

Hence $C^0_\sigma(\Gamma;V)$ is continuously embedded in $L^2_{\hat\rho}(\Gamma;V)$ whether $\Gamma_m$ is bounded or unbounded.

Lemma 4.1.1. If $f\in C^0_\sigma(\Gamma;L^2(D))$ and $a$ is uniformly bounded from below by $a_{\min}>0$, then the solution $u$ of (4.0.1) belongs to $C^0_\sigma(\Gamma;H^1_0(D))$ and

$$\|u\|_{C^0_\sigma(\Gamma;H^1_0(D))} \le \frac{C_P}{a_{\min}}\|f\|_{C^0_\sigma(\Gamma;L^2(D))}.$$

Proof. By the definition of the function space $C^0_\sigma(\Gamma;H^1_0(D))$ we have

$$\|u\|_{C^0_\sigma(\Gamma;H^1_0(D))} = \sup_{y\in\Gamma}\sigma(y)\|u(y)\|_{H^1_0(D)} = \sup_{y\in\Gamma}\sigma(y)\|\nabla u(y)\|_{L^2(D)} \le \frac{1}{a_{\min}}\sup_{y\in\Gamma}\sigma(y)\|a(y)\nabla u(y)\|_{L^2(D)} \le \frac{C_P}{a_{\min}}\sup_{y\in\Gamma}\sigma(y)\|f(y)\|_{L^2(D)} = \frac{C_P}{a_{\min}}\|f\|_{C^0_\sigma(\Gamma;L^2(D))},$$

where in the second inequality we have used Poincaré's inequality.

The last result we prove before investigating the convergence of the stochastic collocation method concerns the regularity of the solution $u$ in the random domain. For later convenience we introduce the following notation:

$$\Gamma^*_m := \prod_{\substack{j=1\\ j\ne m}}^N\Gamma_j,\qquad y^*_m := (y_1,\dots,y_{m-1},y_{m+1},\dots,y_N)\in\Gamma^*_m.$$

Similarly we write

$$\hat\rho^*_m := \prod_{\substack{j=1\\ j\ne m}}^N\hat\rho_j,\qquad \sigma^*_m := \prod_{\substack{j=1\\ j\ne m}}^N\sigma_j.$$

With slight abuse of notation we write $v(y,x) = v(y_m,y^*_m,x)$ for any $m = 1,\dots,N$.

Lemma 4.1.2. Assume that for every $y\in\Gamma$ and any $m\in\{1,\dots,N\}$ there exists $\gamma_m<\infty$ such that for all $k\in\mathbb{N}$

$$\Big\|\frac{\partial^k_{y_m}a(y)}{a(y)}\Big\|_{L^\infty(D)} \le \gamma^k_mk!,\qquad \frac{\|\partial^k_{y_m}f(y)\|_{L^2(D)}}{1+\|f(y)\|_{L^2(D)}} \le \gamma^k_mk!.$$

Then the solution $u(y_m,y^*_m,x)$ to problem (4.0.1), as a function $u:\Gamma_m\to C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))$, admits an analytic extension $\tilde u(z,y^*_m,x)$ in the region

$$\Sigma(\Gamma_m;\tau_m) := \{z\in\mathbb{C} : \operatorname{dist}(z,\Gamma_m)\le\tau_m\}\subset\mathbb{C}$$

with $0<\tau_m<\frac{1}{2\gamma_m}$. Moreover, for all $z\in\Sigma(\Gamma_m;\tau_m)$,

$$\sigma_m(\Re z)\,\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} \le \frac{C_Pe^{\alpha_m\tau_m}}{2a_{\min}(1-2\tau_m\gamma_m)}\big(1+2\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\big).$$

Proof. Consider the weak problem (4.0.1) which, after suppressing subscripts, takes the form

$$\int_Da(y)\nabla u(y)\cdot\nabla\psi(x)\,dx = \int_Df(y)\psi(x)\,dx,\quad\forall\psi\in H^1_0(D),\ \rho\text{-a.e. in }\Gamma.$$

Differentiating $k$ times with respect to $y_m$ and using Leibniz's product rule we end up with

$$\sum_{l=0}^k\binom{k}{l}\int_D(\partial^l_{y_m}a(y))\,\nabla\partial^{k-l}_{y_m}u(y)\cdot\nabla\psi(x)\,dx = \int_D(\partial^k_{y_m}f(y))\,\psi(x)\,dx,$$

and reordering terms we obtain

$$\int_Da(y)\nabla\partial^k_{y_m}u(y)\cdot\nabla\psi(x)\,dx = -\sum_{l=1}^k\binom{k}{l}\int_D(\partial^l_{y_m}a(y))\,\nabla\partial^{k-l}_{y_m}u(y)\cdot\nabla\psi(x)\,dx + \int_D(\partial^k_{y_m}f(y))\,\psi(x)\,dx.$$

Setting $\psi = \partial^k_{y_m}u$ and dropping the variable dependence, we get

$$\int_Da|\nabla\partial^k_{y_m}u|^2 = -\sum_{l=1}^k\binom{k}{l}\int_D\frac{\partial^l_{y_m}a}{a}\,(\sqrt a\,\nabla\partial^{k-l}_{y_m}u)\cdot(\sqrt a\,\nabla\partial^k_{y_m}u) + \int_D(\partial^k_{y_m}f)\,\partial^k_{y_m}u.$$

Thus

$$\|\sqrt a\,\nabla\partial^k_{y_m}u\|^2_{L^2(D)} \le \sum_{l=1}^k\binom{k}{l}\Big\|\frac{\partial^l_{y_m}a}{a}\Big\|_{L^\infty(D)}\Big|\int_D(\sqrt a\,\nabla\partial^{k-l}_{y_m}u)\cdot(\sqrt a\,\nabla\partial^k_{y_m}u)\Big| + \Big|\int_D(\partial^k_{y_m}f)\,\partial^k_{y_m}u\Big|.$$

Applying Hölder's inequality to the terms in absolute value, we have

$$\|\sqrt a\,\nabla\partial^k_{y_m}u\|^2_{L^2(D)} \le \sum_{l=1}^k\binom{k}{l}\Big\|\frac{\partial^l_{y_m}a}{a}\Big\|_{L^\infty(D)}\|\sqrt a\,\nabla\partial^{k-l}_{y_m}u\|_{L^2(D)}\|\sqrt a\,\nabla\partial^k_{y_m}u\|_{L^2(D)} + \|\partial^k_{y_m}f\|_{L^2(D)}\|\partial^k_{y_m}u\|_{L^2(D)}.$$

Dividing both sides by $\|\sqrt a\,\nabla\partial^k_{y_m}u\|_{L^2(D)}$, recalling assumption (A2) and applying Poincaré's inequality, it holds that

$$\|\sqrt a\,\nabla\partial^k_{y_m}u\|_{L^2(D)} \le \sum_{l=1}^k\binom{k}{l}\Big\|\frac{\partial^l_{y_m}a}{a}\Big\|_{L^\infty(D)}\|\sqrt a\,\nabla\partial^{k-l}_{y_m}u\|_{L^2(D)} + \frac{C_P}{\sqrt{a_{\min}}}\|\partial^k_{y_m}f\|_{L^2(D)}. \tag{4.1.3}$$

Define $R_k := \frac{\|\sqrt a\,\nabla\partial^k_{y_m}u\|_{L^2(D)}}{k!}$. Using the bounds in the assumption of the lemma it holds that

$$R_k \le \sum_{l=1}^k\gamma^l_mR_{k-l} + \frac{C_P}{\sqrt{a_{\min}}}\gamma^k_m\big(1+\|f\|_{L^2(D)}\big). \tag{4.1.4}$$

By induction we are going to prove that the following relation holds:

$$R_k \le \frac12(2\gamma_m)^k\Big(R_0 + \frac{C_P}{\sqrt{a_{\min}}}\big(1+\|f\|_{L^2(D)}\big)\Big). \tag{4.1.5}$$

For convenience set $C := \frac{C_P}{\sqrt{a_{\min}}}(1+\|f\|_{L^2(D)})$. For $k = 1$, (4.1.5) follows from (4.1.4). Now suppose (4.1.5) holds true up to some $k\ge1$. Then

$$
\begin{aligned}
R_{k+1} &\le \sum_{l=1}^{k+1}\gamma^l_mR_{k+1-l} + \gamma^{k+1}_mC = \sum_{l=1}^k\gamma^l_mR_{k+1-l} + \gamma^{k+1}_mR_0 + \gamma^{k+1}_mC\\
&\le \sum_{l=1}^k\gamma^l_m\,\frac12(2\gamma_m)^{k+1-l}(R_0+C) + \gamma^{k+1}_m(R_0+C)\\
&= \frac12(2\gamma_m)^{k+1}(R_0+C)\sum_{l=1}^k2^{-l} + \gamma^{k+1}_m(R_0+C)\\
&= \frac12(2\gamma_m)^{k+1}(R_0+C)\Big(1-\frac{1}{2^k}\Big) + \gamma^{k+1}_m(R_0+C) = \frac12(2\gamma_m)^{k+1}(R_0+C).
\end{aligned}
$$

Observe that (4.1.3) gives for $k = 0$

$$R_0 = \|\sqrt a\,\nabla u\|_{L^2(D)} \le \frac{C_P}{\sqrt{a_{\min}}}\|f\|_{L^2(D)}.$$

Moreover

$$R_k \ge \frac{\sqrt{a_{\min}}\,\|\nabla\partial^k_{y_m}u\|_{L^2(D)}}{k!}.$$

Therefore we finally achieve

$$\frac{\|\nabla\partial^k_{y_m}u\|_{L^2(D)}}{k!} \le \frac{R_k}{\sqrt{a_{\min}}} \le \frac{1}{\sqrt{a_{\min}}}\,\frac12(2\gamma_m)^k\Big(\frac{C_P}{\sqrt{a_{\min}}}\|f\|_{L^2(D)} + \frac{C_P}{\sqrt{a_{\min}}}\big(1+\|f\|_{L^2(D)}\big)\Big) = \frac{C_P}{2a_{\min}}(2\gamma_m)^k\big(1+2\|f\|_{L^2(D)}\big). \tag{4.1.6}$$

Now fix $y_m\in\Gamma_m$ and define the power series $\tilde u:\mathbb{C}\to C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))$,

$$\tilde u(z,y^*_m,x) := \sum_{k=0}^\infty\frac{(z-y_m)^k}{k!}\,\partial^k_{y_m}u(y_m,y^*_m,x).$$

We aim to prove that the above series converges to the solution $u$ in a complex disc centred at $y_m$. For $z\in\mathbb{C}$ we have

$$\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} \le \sum_{k=0}^\infty\frac{|z-y_m|^k}{k!}\,\|\partial^k_{y_m}u(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} = \sum_{k=0}^\infty\frac{|z-y_m|^k}{k!}\,\|\nabla\partial^k_{y_m}u(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}.$$

From (4.1.6) we get for all $k\in\mathbb{N}_0$

$$
\begin{aligned}
\|\nabla\partial^k_{y_m}u(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))} &= \max_{y^*_m\in\Gamma^*_m}\sigma^*_m(y^*_m)\,\|\nabla\partial^k_{y_m}u(y_m,y^*_m)\|_{L^2(D)}\\
&\le \max_{y^*_m\in\Gamma^*_m}\sigma^*_m(y^*_m)\,\frac{C_P}{2a_{\min}}(2\gamma_m)^kk!\,\big(1+2\|f(y)\|_{L^2(D)}\big)\\
&= \frac{C_P}{2a_{\min}}(2\gamma_m)^kk!\Big(\max_{y^*_m\in\Gamma^*_m}\sigma^*_m(y^*_m) + 2\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\Big)\\
&\le \frac{C_P}{2a_{\min}}(2\gamma_m)^kk!\Big(1+2\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\Big),
\end{aligned}
$$

where the last inequality follows from the definition of the weight $\sigma$. Inserting this bound in the previous display,

$$\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} \le \frac{C_P}{2a_{\min}}\Big(1+2\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\Big)\sum_{k=0}^\infty\big[\,|z-y_m|\,(2\gamma_m)\big]^k. \tag{4.1.7}$$

Therefore we can conclude that the power series converges for all $z\in\mathbb{C}$ such that $\operatorname{dist}(z,y_m)\le\tau_m<\frac{1}{2\gamma_m}$. This reasoning is independent of the choice of $y_m\in\Gamma_m$; hence by a continuation argument the series converges in $\Sigma(\Gamma_m;\tau_m)$ for $\tau_m<\frac{1}{2\gamma_m}$. Finally, observe that the power series converges exactly to $u$. Indeed, for all $z\in\mathbb{C}$,

$$\Big\|u(z) - \sum_{k=0}^n\frac{(z-y_m)^k}{k!}\,\partial^k_{y_m}u(y_m,y^*_m,x)\Big\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} \le \sup_{\operatorname{dist}(s,y_m)\le|z-y_m|}\|\partial^{n+1}_{y_m}u(s,y^*_m,x)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\,\frac{|z-y_m|^{n+1}}{(n+1)!} \le \frac{C_P}{2a_{\min}}\,|z-y_m|^{n+1}(2\gamma_m)^{n+1}\Big(1+2\|f(y_m)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\Big),$$

and this upper bound goes to zero as $n\to\infty$ whenever $|z-y_m|<\frac{1}{2\gamma_m}$.

Note that for $z\in\Sigma(\Gamma_m;\tau_m)$ it holds that $\sigma_m(\Re z)\le e^{\alpha_m\tau_m}\sigma_m(y_m)$ for any $y_m\in\Gamma_m$ with $|\Re z-y_m|\le\tau_m$. Along the same lines as above, the estimate in the statement follows from

$$
\begin{aligned}
\sigma_m(\Re z)\,\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))} &\le e^{\alpha_m\tau_m}\sigma_m(y_m)\,\|\tilde u(z)\|_{C^0_{\sigma^*_m}(\Gamma^*_m;H^1_0(D))}\\
&\le e^{\alpha_m\tau_m}\max_{y_m\in\Gamma_m}\sigma_m(y_m)\,\frac{C_P}{2a_{\min}}\Big(1+2\|f\|_{C^0_{\sigma^*_m}(\Gamma^*_m;L^2(D))}\Big)\sum_{k=0}^\infty\big[\,|z-y_m|\,(2\gamma_m)\big]^k\\
&= e^{\alpha_m\tau_m}\,\frac{C_P}{2a_{\min}}\Big(\max_{y_m\in\Gamma_m}\sigma_m(y_m)+2\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\Big)\sum_{k=0}^\infty\big[\,|z-y_m|\,(2\gamma_m)\big]^k\\
&\le e^{\alpha_m\tau_m}\,\frac{C_P}{2a_{\min}}\big(1+2\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\big)\sum_{k=0}^\infty\big[\,|z-y_m|\,(2\gamma_m)\big]^k \le \frac{C_Pe^{\alpha_m\tau_m}}{2a_{\min}(1-2\tau_m\gamma_m)}\big(1+2\|f\|_{C^0_\sigma(\Gamma;L^2(D))}\big),
\end{aligned}
$$

where in the second-to-last inequality we have again used the fact that $\sigma_m\le1$ on $\Gamma_m$.

Remark 3. The coefficients $a_N$ and $f_N$ in (4.0.1) are decomposed by using truncations of the Karhunen-Loève expansions of the corresponding infinite-dimensional random fields. In this case the assumptions of Lemma 4.1.2 are fulfilled. In particular, for

$$a_N(\omega,x) = E_a(x) + \sum_{m=1}^N\sqrt{\lambda_m}\,\varphi_m(x)Y_m(\omega),$$

provided that $a_N(\omega,x)\ge a_{\min}$ in $D$ and $P$-a.s. in $\Omega$, we have

$$\Big\|\frac{\partial^k_{y_m}a_N(y)}{a_N(y)}\Big\|_{L^\infty(D)} \le \begin{cases}\dfrac{\sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)}}{a_{\min}}, & k = 1,\\[1mm] 0, & k>1,\end{cases}$$

and we can take $\gamma_m = \frac{\sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)}}{a_{\min}}$. Similarly, for

$$f_N(\omega,x) = E_f(x) + \sum_{m=1}^N\sqrt{\mu_m}\,\phi_m(x)Y_m(\omega)$$

we have

$$\frac{\|\partial^k_{y_m}f_N(y)\|_{L^2(D)}}{1+\|f_N(y)\|_{L^2(D)}} \le \begin{cases}\sqrt{\mu_m}\,\|\phi_m\|_{L^2(D)}, & k = 1,\\ 0, & k>1,\end{cases}$$

and we can take $\gamma_m = \sqrt{\mu_m}\,\|\phi_m\|_{L^2(D)}$.

We may also be interested, as we will be in Chapter 5, in a truncated exponential decomposition of the diffusion coefficient. More precisely,

$$\log[a_N(\omega,x)-a_{\min}] = E_a(x) + \sum_{m=1}^N\sqrt{\lambda_m}\,\varphi_m(x)Y_m(\omega).$$

In this case we have

$$\Big\|\frac{\partial^k_{y_m}a_N(y)}{a_N(y)}\Big\|_{L^\infty(D)} \le \big(\sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)}\big)^k,\quad k\ge1,$$

and we can take $\gamma_m = \sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)}$.
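Remark 3 reduces the hypothesis of Lemma 4.1.2 to one computable number per direction. As a small sketch (using the same hypothetical KL data as in the earlier sketches, $\lambda_m = 2^{-m}$ and $\|\varphi_m\|_{L^\infty(D)} = \sqrt2$, with $a_{\min} = 0.1$), the admissible radii $\tau_m<\frac{1}{2\gamma_m}$ grow with $m$; this direction-dependent analyticity is exactly what the anisotropic weights of Section 4.4 exploit.

```python
import numpy as np

# gamma_m and analyticity radii for the affine coefficient of Remark 3, under the
# hypothetical KL data lambda_m = 2**-m, ||phi_m||_inf = sqrt(2), a_min = 0.1.
a_min = 0.1
lam = 2.0 ** -np.arange(1, 6)
gamma = np.sqrt(lam) * np.sqrt(2.0) / a_min   # gamma_m = sqrt(lambda_m)||phi_m||_inf / a_min
tau_max = 1.0 / (2.0 * gamma)                 # u extends analytically for tau_m < tau_max
print(np.round(tau_max, 3))                   # the radii increase with m
```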

4.2 Convergence Analysis

The main goal of this section is to provide an a priori estimate of the total error between the exact solution $u$ of problem (2.1.3) and the approximation $u_{N,p,h}$ we finally recover by applying the stochastic collocation method. Following the subsequent approximations we have introduced in our analysis, we can naturally split the total error as

$$\|u-u_{N,p,h}\|_{V_P} \le \|u-u_N\|_{V_P} + \|u_N-u_{N,p}\|_{V_P} + \|u_{N,p}-u_{N,p,h}\|_{V_P}$$

where $V_P = L^2_P(\Omega)\otimes H^1_0(D)$ as defined in Chapter 2.

4.2.1 Error of the Truncation

We first focus on the term $\|u-u_N\|_{V_P}$. Here $u_N$ denotes the exact solution of the weak problem (2.2.3), in which the random coefficients of (2.1.3) have been substituted by the corresponding Karhunen-Loève truncated expansions. Thus we aim to give an upper bound for the truncation error of $u$ depending on the truncation errors of $a$ and $f$, which have been investigated in Section 1.2. Consider the two variational formulations of finding $u,u_N\in V_P$ such that, respectively, it holds for all $\psi\in H^1_0(D)$, $P$-a.e. in $\Omega$:

$$\int_Da(\omega,x)\nabla u(\omega,x)\cdot\nabla\psi(x)\,dx = \int_Df(\omega,x)\psi(x)\,dx, \tag{4.2.1}$$

$$\int_Da_N(\omega,x)\nabla u_N(\omega,x)\cdot\nabla\psi(x)\,dx = \int_Df_N(\omega,x)\psi(x)\,dx. \tag{4.2.2}$$

Now, dropping the variable dependence for shortness of notation, the following holds $P$-a.e. in $\Omega$ and for all $\psi\in H^1_0(D)$:

$$
\begin{aligned}
\int_Da\nabla(u-u_N)\cdot\nabla\psi\,dx &= \int_Da\nabla u\cdot\nabla\psi\,dx - \int_Da_N\nabla u_N\cdot\nabla\psi\,dx + \int_Da_N\nabla u_N\cdot\nabla\psi\,dx - \int_Da\nabla u_N\cdot\nabla\psi\,dx\\
&= \int_D(f-f_N)\psi\,dx + \int_D(a_N-a)\nabla u_N\cdot\nabla\psi\,dx.
\end{aligned}
$$

Applying Hölder's and Poincaré's inequalities we get

$$\int_Da\nabla(u-u_N)\cdot\nabla\psi\,dx \le C_P\|f-f_N\|_{L^2(D)}\|\psi\|_{H^1_0(D)} + \|a-a_N\|_{L^\infty(D)}\|\nabla u_N\|_{L^2(D)}\|\psi\|_{H^1_0(D)}.$$

Therefore we end up with the $P$-a.e. estimate

$$\|u-u_N\|_{H^1_0(D)} \le \frac{1}{a_{\min}}\sup_{\psi\in H^1_0(D)}\frac{\int_Da\nabla(u-u_N)\cdot\nabla\psi\,dx}{\|\psi\|_{H^1_0(D)}} \lesssim \|f-f_N\|_{L^2(D)} + \|a-a_N\|_{L^\infty(D)}\|\nabla u_N\|_{L^2(D)} \lesssim \|f-f_N\|_{L^2(D)} + \|a-a_N\|_{L^\infty(D)}\|f_N\|_{L^2(D)},$$

where the notation $b\lesssim c$ means that there exists a constant $K$ such that $b\le Kc$. Remarkably, the constants we have omitted in the previous estimate do not depend on $N$. Finally we obtain

$$\|u-u_N\|_{V_P} \lesssim \|f-f_N\|_{L^2_P(\Omega)\otimes L^2(D)} + \|a-a_N\|_{L^2_P(\Omega;L^\infty(D))}\|f_N\|_{L^2_P(\Omega)\otimes L^2(D)}.$$

Given the Karhunen-Loève expansions of $a$ and $f$,

$$a(\omega,x) = E_a(x) + \sum_{m=1}^\infty\sqrt{\lambda_m}\,\varphi_m(x)Y_m(\omega),\qquad f(\omega,x) = E_f(x) + \sum_{m=1}^\infty\sqrt{\mu_m}\,\phi_m(x)Y_m(\omega),$$

we have shown in Section 1.2 that the following estimate holds:

$$\|f-f_N\|^2_{L^2_P(\Omega)\otimes L^2(D)} = \sum_{m\ge N+1}\mu_m.$$

On the other hand, assuming that the sequence $(Y_m)_m$ is uniformly bounded in $L^\infty_P(\Omega)$,

$$\|a-a_N\|_{L^2_P(\Omega;L^\infty(D))} \le \|a-a_N\|_{L^\infty(\Omega\times D)} \le C\sum_{m\ge N+1}\sqrt{\lambda_m}\,\|\varphi_m\|_{L^\infty(D)}.$$

Moreover, depending on the regularity, in the sense of Definition 1.2.4, of the covariance functions $V_a$ and $V_f$ of $a$ and $f$ respectively, we have presented explicit bounds for the eigenvalues and eigenfunctions (see Section 1.2.1).

4.2.2 Finite Element Error

Here we concentrate on the analysis of the error caused by the finite element approximation, as described at the end of Section 3.1. We refer the reader to [9]. We are interested in estimating $\|u_{N,p}-u_{N,p,h}\|_{V_P}$. First of all observe that by passing to the probability space $L^2_\rho(\Gamma)$ we have
