Homework 5

to be handed in: March 30, 2016

Note: always check the web for the latest adaptations in blue in LN and BN.

We give you the option of selecting exercises according to your interest. You should select exercises worth at most 10 points in total. If you hand in exercises worth more than 10 points, the excess will not improve your grade.

Exercise 1 (3 pts) LN Exercise 2.14

Instead of the hint in LN, you might also try to take a route via Theorem 2.2.19.

Exercise 2 (2 pts) LN Exercise 2.14∗.

Exercise 3 (3 pts) LN Exercise 2.16.

Exercise 4 (3 pts)

Let $X_1, X_2, \ldots$ be i.i.d. integrable random variables with $E(X_i) = 0$ and $\alpha = E(X_i^2) < \infty$, $i = 1, 2, \ldots$. Then it holds that

$$M_n = \sum_{i=1}^{n} X_i, \quad n \ge 1, \qquad \text{and} \qquad M_n^2 - n\alpha, \quad n \ge 1,$$

are both martingales adapted to the filtration $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$, $n \ge 1$.
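As a reminder of why both processes are martingales (this computation is not part of the exercise), use that $X_{n+1}$ is independent of $\mathcal{F}_n$ with $E X_{n+1} = 0$ and $E X_{n+1}^2 = \alpha$:

$$E[M_{n+1} \mid \mathcal{F}_n] = M_n + E[X_{n+1}] = M_n,$$

$$E[M_{n+1}^2 - (n+1)\alpha \mid \mathcal{F}_n] = M_n^2 + 2 M_n E[X_{n+1}] + E[X_{n+1}^2] - (n+1)\alpha = M_n^2 - n\alpha.$$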

Let $\tau$ be an adapted stopping time with $E\tau < \infty$. Prove the following two statements:

• $E M_\tau = 0$;

• $E M_\tau^2 = E\tau \cdot \alpha$.

Hint: show that the stopped martingale $\{M_n^\tau\}_n$ is bounded in $L^2$. Use this to show that $(M_n^\tau)^2$, $n = 1, 2, \ldots$, $M_\tau^2$ is a sub-martingale (with 'last' element $M_\tau^2$).

Students who follow the course Stochastic Integration might recognise a similar result for Brownian motion. The proof in that case could use an Itô-integral argument, but the scheme that solves the above problem works as well!
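The two identities can be sanity-checked numerically. The sketch below is an illustration, not part of the hand-in; it assumes the simplest concrete case, $X_i = \pm 1$ fair coin flips (so $\alpha = 1$) with $\tau$ the first exit time of the walk from $(-3, 3)$. Then $E\tau = 9$, and since the walk moves in unit steps, $|M_\tau| = 3$ always, so $E M_\tau^2 = 9 = E\tau \cdot \alpha$ exactly.

```python
import random

# Illustrative check (assumed setup, not the general exercise):
# X_i = +/-1 with equal probability, so E X_i = 0 and alpha = E X_i^2 = 1.
# tau = first n with |M_n| = 3, i.e. exit from (-3, 3); then E tau = 9.
random.seed(0)

def run_one():
    """Run one path of the walk until it exits (-3, 3); return (M_tau, tau)."""
    m, n = 0, 0
    while abs(m) < 3:
        m += random.choice((-1, 1))
        n += 1
    return m, n

trials = 20000
results = [run_one() for _ in range(trials)]
mean_M = sum(m for m, _ in results) / trials       # should be close to 0
mean_tau = sum(n for _, n in results) / trials     # should be close to 9
mean_M2 = sum(m * m for m, _ in results) / trials  # exactly 9: |M_tau| = 3
print(mean_M, mean_tau, mean_M2)
```

The Monte Carlo averages of $M_\tau$ and $\tau$ hover around $0$ and $9$, matching $E M_\tau = 0$ and $E M_\tau^2 = E\tau \cdot \alpha$.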

Exercise 5 (4 pts)

Consider the integrable random variable $X : (\Omega, \mathcal{F}, P) \to (\mathbb{R}, \mathcal{B})$, with $EX = m_0$. We are going to associate a Doob martingale $M = (M_n)_{n=0,1,\ldots}$ with $X$ by a recursive construction of a filtration $\{\mathcal{G}_n\}_n$.

At time n construct inductively:

• a partition of $\mathbb{R}$ into a finite number of sets, denoted $G_n$;


• the $\sigma$-algebra $\mathcal{G}_n = \sigma(X^{-1}(A),\ A \in G_n)$;

• $S_n = \{x \in \mathbb{R} \mid \exists \omega \in \Omega \text{ such that } x = E(X \mid \mathcal{G}_n)(\omega)\}$.

Given the partition $G_n$, $S_n$ is a finite set. The construction of $\mathcal{G}_n$ is immediate from $G_n$. We therefore have to prescribe the initial values and the iterative construction of the partition.

For $n = 0$: $G_0 = \{\mathbb{R}\}$, $\mathcal{G}_0 = \{\emptyset, \Omega\}$, $S_0 = \{m_0\}$. The partition $G_{n+1}$ has the following properties. For all $A \in G_n$, let $x_A^n = S_n \cap A$.

• If $P\{X^{-1}(\{x_A^n\})\} = P\{X^{-1}(A)\}$ then $A \in G_{n+1}$;

• if $P\{X^{-1}(\{x_A^n\})\} < P\{X^{-1}(A)\}$ then $A \cap (-\infty, x_A^n],\ A \cap (x_A^n, \infty) \in G_{n+1}$.

One can also construct an infinite binary tree associated with the above procedure: the nodes are pairs $(n, x_A^n)$, where $x_A^n \in S_n$, $n \ge 0$. Put an arrow $(n, x_A^n) \to (n+1, x_B^{n+1})$, for $A \in G_n$ and $B \in G_{n+1}$, if $B \subset A$.
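To get a feel for the splitting rule, here is a small sketch of the first construction steps, run for a hypothetical $X \sim \mathrm{Uniform}(0,1)$ (deliberately a different distribution than the one in part a) below, so as not to spoil the exercise). For the uniform distribution the conditional mean on an interval is its midpoint, and $P\{X = x_A^n\} = 0 < P\{X \in A\}$, so the second splitting rule always applies.

```python
# Sketch of the first partition-construction steps for a hypothetical
# X ~ Uniform(0, 1) (the uniform keeps the arithmetic transparent).
# On an interval, E(X | X in (lo, hi)) is the midpoint, and
# P{X = x} = 0 < P{X in A}, so every cell A is split at x_A = E(X | A).

def refine(partition):
    """One step G_n -> G_{n+1}: split every cell A at x_A = E(X | A)."""
    new_cells = []
    for lo, hi in partition:
        x_a = (lo + hi) / 2            # conditional mean of U(0,1) on (lo, hi)
        new_cells.extend([(lo, x_a), (x_a, hi)])
    return new_cells

G = [(0.0, 1.0)]                       # G_0, restricted to the support of X
for n in range(3):
    S = [(lo + hi) / 2 for lo, hi in G]  # S_n: the attained conditional means
    print(f"S_{n} = {S}")
    G = refine(G)
```

This prints $S_0 = \{0.5\}$, $S_1 = \{0.25, 0.75\}$, $S_2 = \{0.125, 0.375, 0.625, 0.875\}$: the dyadic midpoints, each lying inside its own cell, in line with what part b) asks you to show in general.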

a) Perform the above construction for $n = 0, 1, 2$, when $X \stackrel{d}{=} \exp(1)$, in other words $P\{X > x\} = e^{-x}$, $x \ge 0$.

b) Provide a formula for $x_A^n \in S_n$ and show that this implies that $x_A^n \in A$.

From now on assume that $X$ has the following property (this may be assumed without loss of generality): if $(n, x_A^n)$ is a vertex of the binary tree with only one out-arrow, then $X^{-1}(\{x_A^n\}) = X^{-1}(A)$.

c) Show that $\mathcal{G}_{n+1} = \sigma(\mathcal{G}_n, \{X > E(X \mid \mathcal{G}_n)\})$;

d) Show for $A = \limsup_n \{X > E(X \mid \mathcal{G}_n)\}$ that $A \in \mathcal{G}$.

e) Show that $X \stackrel{a.s.}{=} E(X \mid \mathcal{G})$. Show that if two random variables $X$, $Y$ have the same associated binary tree (with the same labels), then $X$ and $Y$ have the same distribution.

The Doob martingale $M$ therefore 'recovers' the random variable $X$ through conditional means. It shows that any random variable can be approximated arbitrarily closely (in the $L^1$-sense) by a discrete random variable with the same mean.
