
Memorandum COSOR 83-19

Embedded matrices for finite Markov chains

by

D.A. Overdijk

Eindhoven, the Netherlands December 1983


D.A. Overdijk

Abstract. For an arbitrary subset A of the finite state space S of a Markov chain the so-called embedded matrix P_A is introduced. By use of these matrices, formulas expressing recurrence probabilities can be written down almost automatically and derivations can be given very systematically.

Keywords: finite Markov chain, embedded matrix.

0. Introduction and summary

We consider a Markov chain X_0, X_1, ... on the state space S = {1,2,...,s}. The corresponding matrix of transition probabilities is denoted by P. We do not exclude the case where P is sub-Markov, i.e. the case where the elements of P are nonnegative with row sums less than one. For the Markov chain this means that for some time n ≥ 1 we may have X_k ∉ S for all k ≥ n.

For an arbitrary subset A ⊂ S we introduce the so-called embedded matrix P_A. By use of these embedded matrices calculations can be performed very systematically, and the derivation and interpretation of results become quite transparent. Most of our results are not new and can be found in e.g. the standard reference for calculations in finite Markov chains, Kemeny and Snell (1976). The novelty lies in the ease with which the results are obtained. The idea of the embedded process can be found in e.g. Foguel (1969), Revuz (1975), Simons and Overdijk (1979).

In Section 1 some basic matrices are defined. In Section 2 we introduce the embedded matrix P_A, and explain how calculations can be performed using these matrices. In Section 3 the well-known partition of the state space in transient and nontransient states is derived by means of embedded matrices. As an example, in Section 4 we give detailed calculations for the random walk along the edges of the cube with roof (see Figure 1).

[Figure 1 (not reproduced here): the cube with roof, with vertices numbered 1,...,9. The vertices 8 and 9 are absorbing and in all other vertices edges are chosen with equal probability.]

The following quantities are calculated.

i) The probability that absorption takes place in a given absorbing state (cf. problem 90 in Statistica Neerlandica).

ii) The mean and variance of the first entrance time in one of the absorbing states (cf. problem 59 in Statistica Neerlandica).

iii) The mean and variance of the number of different states of a given transient set visited by the random walk (cf. problem 54 in Statistica Neerlandica).

Except for the variance in iii) these quantities are also calculated in Kemeny and Snell (1976). In Section 4 we give general formulas for such quantities; the calculations for the random walk on the "cube" in Figure 1 are performed with the aid of a computer.

1. Basic matrices

For every j ∈ S the s x 1-matrix (column vector) whose j-th component is one and all other components zero is denoted by e_j, and we define the s x s-matrix

(0)   I_j := e_j e_j'          (e_j' is the transpose of e_j).

For every subset A ⊂ S we define

e_A := Σ_{j∈A} e_j ,
I_A := Σ_{j∈A} I_j ,
I := I_S        (s x s identity matrix),
e := e_S        (column vector with all components one),
0 := e_∅        (zero column vector).

The following relations can easily be verified.

(1)   The matrix element P_{jk} = e_j' P e_k ,     1 ≤ j ≤ s and 1 ≤ k ≤ s ,
(2)   I_j e = e_j ,                                1 ≤ j ≤ s ,
(3)   I_A e = e_A ,                                A ⊂ S ,
(4)   I_A I_B = I_{A∩B} ,                          A, B ⊂ S .

All equalities (inequalities) between matrices have to be interpreted componentwise and convergence of a sequence of matrices is convergence componentwise.

For every subset A ⊂ S we write A* := S \ A.

Example 1. Put S = {1,2,3} and A = {2,3}; then we have e.g.

I_2 =
[0 0 0]
[0 1 0]
[0 0 0] ,

e_A =
[0]
[1]
[1] .
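The definitions above translate directly into a few lines of linear algebra. The following sketch is an added illustration, not part of the original text; it assumes NumPy and the three-state example of Example 1, and checks relations (1)-(3) numerically.

```python
import numpy as np

s = 3                                    # size of the state space S = {1, 2, 3} of Example 1

def e(j):
    """Column vector e_j (states numbered 1, ..., s)."""
    v = np.zeros((s, 1))
    v[j - 1, 0] = 1.0
    return v

def I(j):
    """Matrix I_j := e_j e_j'."""
    return e(j) @ e(j).T

def e_set(A):
    """Vector e_A := sum of e_j over j in A."""
    return sum(e(j) for j in A)

def I_set(A):
    """Matrix I_A := sum of I_j over j in A."""
    return sum(I(j) for j in A)

ones = np.ones((s, 1))                   # the vector e = e_S
P = np.array([[0.0, 0.5, 0.5],           # an arbitrary sub-Markov matrix, only for illustration
              [0.3, 0.0, 0.3],
              [0.0, 1.0, 0.0]])
A = {2, 3}

assert np.isclose((e(1).T @ P @ e(2)).item(), P[0, 1])   # relation (1)
assert np.allclose(I(2) @ ones, e(2))                     # relation (2)
assert np.allclose(I_set(A) @ ones, e_set(A))             # relation (3)
```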

It is well-known that the probability distribution of a Markov chain is determined by its transition matrix P and an initial distribution π of X_0 on S. The corresponding probability distribution of the Markov chain is denoted by ℙ_π. An initial distribution on S is denoted by a column vector π with nonnegative components summing to one, i.e.

π' e_j ≥ 0 for all 1 ≤ j ≤ s ,     π' e = 1 .

If π = e_j, we write ℙ_j := ℙ_{e_j}. Expectations with respect to ℙ_π or ℙ_j are denoted by the symbol 𝔼_π or 𝔼_j.

2. Embedded matrices

We start with a proposition that is the key to calculating probabilities with respect to ℙ_π.

Proposition 1. For every integer n ≥ 0, every initial distribution π on S and every (n+1)-tuple of subsets A_0, A_1, ..., A_n ⊂ S

ℙ_π(X_0 ∈ A_0, X_1 ∈ A_1, ..., X_n ∈ A_n) = π' I_{A_0} P I_{A_1} ⋯ P I_{A_n} e .

Proof. We proceed by induction. Let π be an initial distribution on S and let A_0 ⊂ S. Since ℙ_π(X_0 ∈ A_0) = π' e_{A_0} = π' I_{A_0} e (use (3)), the proposition is true for n = 0.

Suppose the proposition has been proved for 0 ≤ k ≤ n. Then we have

ℙ_π(X_0 ∈ A_0, ..., X_n ∈ A_n, X_{n+1} ∈ A_{n+1})
 = Σ_{j∈A_n} ℙ_π(X_0 ∈ A_0, ..., X_{n-1} ∈ A_{n-1}, X_n = j) ℙ_j(X_1 ∈ A_{n+1})
 = Σ_{j∈A_n} Σ_{k∈A_{n+1}} ℙ_π(X_0 ∈ A_0, ..., X_n = j) ℙ_j(X_1 = k)
 = Σ_{j∈A_n} Σ_{k∈A_{n+1}} π' I_{A_0} P I_{A_1} ⋯ P I_{A_{n-1}} P I_j e p_{jk}
 = Σ_{j∈A_n} Σ_{k∈A_{n+1}} π' I_{A_0} P I_{A_1} ⋯ P I_j e e_j' P e_k          (use (1))
 = Σ_{j∈A_n} Σ_{k∈A_{n+1}} π' I_{A_0} P I_{A_1} ⋯ P e_j e_j' P e_k            (use (2))
 = Σ_{j∈A_n} Σ_{k∈A_{n+1}} π' I_{A_0} P I_{A_1} ⋯ P I_j P I_k e               (use (0) and (2))
 = π' I_{A_0} P I_{A_1} ⋯ P I_{A_n} P I_{A_{n+1}} e .   □

Remark. If π = e_j then

ℙ_j(X_1 ∈ A_1, ..., X_n ∈ A_n) = e_j' I_S P I_{A_1} ⋯ P I_{A_n} e = e_j' P I_{A_1} ⋯ P I_{A_n} e .
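Proposition 1 turns the probability of a "tube" of events {X_0 ∈ A_0, ..., X_n ∈ A_n} into a product of matrices. A minimal sketch of the right-hand side π' I_{A_0} P I_{A_1} ⋯ P I_{A_n} e follows; it is an added illustration, and the three-state chain and the chosen sets are arbitrary assumptions.

```python
import numpy as np

def indicator_matrix(A, s):
    """I_A: diagonal matrix with ones at the positions of the states in A (states 1..s)."""
    I_A = np.zeros((s, s))
    for j in A:
        I_A[j - 1, j - 1] = 1.0
    return I_A

def tube_probability(pi, P, sets):
    """pi' I_{A_0} P I_{A_1} ... P I_{A_n} e  (Proposition 1)."""
    s = P.shape[0]
    v = np.ones(s)                          # start with e and work from the right
    for A in reversed(sets[1:]):
        v = P @ (indicator_matrix(A, s) @ v)
    return float(pi @ (indicator_matrix(sets[0], s) @ v))

P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.3],
              [0.0, 1.0, 0.0]])
pi = np.array([1.0, 0.0, 0.0])
print(tube_probability(pi, P, [{1}, {2, 3}, {2}]))   # P(X_0 = 1, X_1 in {2,3}, X_2 = 2)
```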

Lemma 1. For every subset A ⊂ S the sequence of matrices

{ Σ_{n=0}^{N} (P I_{A*})^n P I_A }_{N≥0}

is convergent.

Proof. For every integer N ≥ 0 define the matrix Q_N := Σ_{n=0}^{N} (P I_{A*})^n P I_A. Using Proposition 1 with π = e_j we get for the entries Q_N(j,k) (1 ≤ j ≤ s and 1 ≤ k ≤ s) of the matrix Q_N

Q_N(j,k) = Σ_{n=1}^{N+1} e_j' (P I_{A*})^{n-1} P I_{A∩{k}} e
         = Σ_{n=1}^{N+1} ℙ_j(X_1 ∈ A*, ..., X_{n-1} ∈ A*, X_n ∈ A ∩ {k}) ,

which is nondecreasing and bounded by one. Hence

lim_{N→∞} Q_N(j,k) = ℙ_j(∃ n ≥ 1 : X_1 ∈ A*, ..., X_{n-1} ∈ A*, X_n ∈ A ∩ {k}) .   □

We are now ready to introduce the class of embedded matrices.

Definition 1. For every subset A ⊂ S the embedded matrix P_A is defined by

P_A := Σ_{n=0}^{∞} (P I_{A*})^n P I_A .

The probabilistic interpretation of the entries of the matrix P_A follows from the proof of Lemma 1:

P_A(j,k) = ℙ_j(∃ n ≥ 1 : X_1 ∈ A*, ..., X_{n-1} ∈ A*, X_n ∈ A ∩ {k}) .

For the actual calculation of embedded matrices the following two lemmas are useful.

Lemma 2. For every subset A ⊂ S

P_A = P I_A + P I_{A*} P_A .

Proof.

P_A = Σ_{n=0}^{∞} (P I_{A*})^n P I_A = P I_A + P I_{A*} Σ_{n=0}^{∞} (P I_{A*})^n P I_A = P I_A + P I_{A*} P_A .   □

If for some subset A ⊂ S the matrix P I_{A*} has no eigenvalue one, i.e. the matrix I - P I_{A*} is regular, then the embedded matrix P_A can easily be calculated. We then have

(5)   P_A = (I - P I_{A*})^{-1} P I_A .

The following lemma supplies a probabilistic characterization of the presence of an eigenvalue one of the matrix P I_A.

Lemma 3. Let A ⊂ S. The matrix P I_A has an eigenvalue one iff there exists a state j ∈ S such that

ℙ_j(X_n ∈ A for all n ≥ 1) > 0 .

Proof. First suppose the matrix P I_A has an eigenvalue one. Then there exists a column vector v ≠ 0 such that P I_A v = v. Consider the vectors v⁺ and v⁻ with components v⁺(i) := max(v(i),0) and v⁻(i) := max(-v(i),0). Then v⁺ - v⁻ = v = P I_A v⁺ - P I_A v⁻. Since the vectors P I_A v⁺ and P I_A v⁻ are nonnegative,

P I_A v⁺ ≥ v⁺   and   P I_A v⁻ ≥ v⁻ .

(Use the fact that v = v⁺ - v⁻ = b - c, where b and c are nonnegative vectors, implies b ≥ v⁺ and c ≥ v⁻.) At least one of the vectors v⁺ and v⁻ is nonzero and therefore a vector w exists such that w ≠ 0, 0 ≤ w ≤ e and P I_A w ≥ w. Let the j-th component of w be positive; then from Proposition 1 with π = e_j it follows that

ℙ_j(X_n ∈ A for all n ≥ 1) = lim_{n→∞} e_j' (P I_A)^n e ≥ e_j' w > 0 .

Conversely, suppose a state j ∈ S exists such that ℙ_j(X_n ∈ A for all n ≥ 1) > 0. Since ℙ_j(X_n ∈ A for all n ≥ 1) = e_j' lim_{n→∞} (P I_A)^n e > 0, the vector v := lim_{n→∞} (P I_A)^n e ≠ 0 and P I_A v = v .   □
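Under the regularity condition of (5), computing an embedded matrix is a single linear solve. The sketch below is an added illustration (NumPy assumed, states numbered 1,...,s; the helper name embedded_matrix is ours, not the memorandum's); it first applies Lemma 3's eigenvalue criterion.

```python
import numpy as np

def embedded_matrix(P, A):
    """Embedded matrix P_A = (I - P I_{A*})^{-1} P I_A  (formula (5)).
    Valid when P I_{A*} has no eigenvalue one (Lemma 3); states are numbered 1..s."""
    s = P.shape[0]
    d = np.zeros(s)
    d[[j - 1 for j in A]] = 1.0
    I_A = np.diag(d)                       # indicator matrix of A
    I_Astar = np.diag(1.0 - d)             # indicator matrix of A* = S \ A
    M = P @ I_Astar
    if np.isclose(np.linalg.eigvals(M), 1.0).any():
        raise ValueError("P I_{A*} has eigenvalue one; formula (5) does not apply")
    return np.linalg.solve(np.eye(s) - M, P @ I_A)
```

For A = S the right-hand side of (5) is simply P, and for a singleton A = {j} the (j,j)-entry of the result is the probability that the chain started in j ever returns to j.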

Recurrence probabilities with respect to a subset A ⊂ S can be calculated using embedded matrices. We give some useful examples.

Proposition 2. For every initial distribution π on S, all subsets A, B ⊂ S and all integers n ≥ 1

(6)    ℙ_π(∃ m ≥ 1 : X_1 ∈ A*, ..., X_{m-1} ∈ A*, X_m ∈ A ∩ B) = π' P_A e_B ,
(7)    ℙ_π(X_k ∈ A for at least n different k ≥ 1) = π' P_A^n e ,
(8)    ℙ_π(X_k ∈ A for exactly n different k ≥ 1) = π' (P_A^n - P_A^{n+1}) e ,
(9)    ℙ_π(X_k ∈ A for some k > n) = π' P^n P_A e ,
(10)   ℙ_π(X_k ∈ A infinitely often) = π' lim_{m→∞} P_A^m e .

Proof. We only prove (6); the other assertions can be proved similarly.

ℙ_π(∃ m ≥ 1 : X_1 ∈ A*, ..., X_{m-1} ∈ A*, X_m ∈ A ∩ B)
 = Σ_{k=0}^{∞} ℙ_π(X_1 ∈ A*, ..., X_k ∈ A*, X_{k+1} ∈ A ∩ B)
 = Σ_{k=0}^{∞} π' (P I_{A*})^k P I_{A∩B} e          (Proposition 1)
 = π' P_A e_B .                                     (use (4) and (3))   □

We conclude this section with a lemma to be used later.

Lemma 4. For every subset A ⊂ S the sequence of column vectors {P_A^n e}_{n≥0} is nonincreasing.

Proof. For the j-th component P_A^n e(j) of the vector P_A^n e we have (use (7))

P_A^n e(j) = e_j' P_A^n e = ℙ_j(X_k ∈ A for at least n different k ≥ 1) .   □
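Each of the formulas (6)-(10) is a single matrix expression once P_A is available. A small sketch of (6), (7) and (9) is given below; it is an added illustration in which pi is a 1-D probability vector and P_A an embedded matrix computed, for instance, as in the previous sketch.

```python
import numpy as np

def indicator(A, s):
    """Diagonal indicator matrix I_A (states numbered 1..s)."""
    d = np.zeros(s)
    d[[j - 1 for j in A]] = 1.0
    return np.diag(d)

def recurrence_quantities(P, P_A, B, pi, n):
    """Evaluate (6), (7) and (9) of Proposition 2 for a given embedded matrix P_A of a set A."""
    s = P.shape[0]
    e = np.ones(s)
    e_B = indicator(B, s) @ e
    first_entrance_in_B = pi @ P_A @ e_B                                  # (6)
    at_least_n_visits = pi @ np.linalg.matrix_power(P_A, n) @ e           # (7)
    in_A_after_time_n = pi @ np.linalg.matrix_power(P, n) @ P_A @ e       # (9)
    return first_entrance_in_B, at_least_n_visits, in_A_after_time_n
```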

3. Transient sets

We start with the definition of a transient state. Our definition yields the usual partition of the state space S in transient and nontransient states.

Definition 2. A state j ∈ S is called transient if the probability that the chain, when started in j, will ever return to j is less than one, i.e. if (see (6) with B = S and A = {j})

e_j' P_{j} e < 1 .

Furthermore, a subset A ⊂ S consisting of transient states only is called transient, and the set of all transient states of S is called its transient part and denoted by T.

Lemma 5. For every transient state j ∈ S there exists a number 0 ≤ q < 1 such that for all n ≥ 1

P_{j}^n e ≤ q^{n-1} e .

Proof. Suppose j ∈ S is transient; then by definition q := e_j' P_{j} e < 1. Hence I_j P_{j} e ≤ q e.

We proceed by induction. For n = 1 the proposition is trivial. Suppose the proposition has been proved for 1 ≤ k ≤ n. We have

P_{j}^{n+1} e = P_{j}^n P_{j} e = P_{j}^n I_j P_{j} e       (use P_{j}^n = P_{j}^n I_j for n ≥ 1)
             ≤ q P_{j}^n e ≤ q q^{n-1} e = q^n e .          (use I_j P_{j} e ≤ q e)   □

Proposition 3. For every nontransient state j ∈ S

e_j' P_{j}^n e = 1   for all n ≥ 1 .

Hence ℙ_j(X_n = j infinitely often) = 1 (cf. (10)).

Proof. We proceed by induction. Let j ∈ S be nontransient; then the proposition is true for n = 1 (use Definition 2). Suppose the proposition has been proved for 1 ≤ k ≤ n. We have

e_j' P_{j}^{n+1} e = e_j' P_{j}^n P_{j} e = e_j' P_{j}^n I_j P_{j} e       (use P_{j}^n = P_{j}^n I_j for n ≥ 1)
                  = e_j' P_{j}^n e_j e_j' P_{j} e = e_j' P_{j}^n e_j = e_j' P_{j}^n e = 1 .   □

Proposition 4. A subset A ⊂ S is transient iff the probability that the chain is in the set A infinitely often is zero for every starting position j ∈ S, i.e. iff (cf. (10))

e_j' lim_{n→∞} P_A^n e = 0   for all j ∈ S .

Proof. First suppose A ⊂ S is transient. If the chain is infinitely often in the set A then, since the set A is finite, there exists k ∈ A such that X_n = k infinitely often. Hence

ℙ_j(X_n ∈ A infinitely often) ≤ Σ_{k∈A} ℙ_j(X_n = k infinitely often) .

From (10) we obtain

e_j' lim_{n→∞} P_A^n e ≤ Σ_{k∈A} e_j' lim_{n→∞} P_{k}^n e .

Since every k ∈ A is transient we conclude from Lemma 5 that lim_{n→∞} e_j' P_A^n e = 0.

Conversely, suppose A ⊂ S is not transient; then there exists j ∈ A such that e_j' P_{j}^n e = 1 for all n ≥ 1 (Proposition 3). Since ℙ_j(X_n = j infinitely often) ≤ ℙ_j(X_n ∈ A infinitely often) we have

lim_{n→∞} e_j' P_A^n e ≥ lim_{n→∞} e_j' P_{j}^n e = 1 .

Hence lim_{n→∞} e_j' P_A^n e ≠ 0 .   □

If the subset A ⊂ S is transient, Proposition 4 states that lim_{n→∞} P_A^n e = 0. The following lemma expresses the fact that this convergence is exponential.

Lemma 6. If the subset A ⊂ S is transient then there exist an integer n_0 and a number 0 ≤ r < 1 such that

P_A^n e ≤ r^n e   for all n ≥ 2 n_0 .

Proof. From Proposition 4 and Lemma 4 we know: P_A^n e ↓ 0. Hence there exist an integer n_0 and a number 0 ≤ w < 1 such that

P_A^n e ≤ w e   for all n ≥ n_0 .

Suppose n ≥ 2 n_0; then n ≥ n_0 [n/n_0]   ([a] := largest integer not exceeding a), and by Lemma 4

P_A^n e ≤ P_A^{n_0 [n/n_0]} e ≤ w^{[n/n_0]} e ≤ w^{n/(2 n_0)} e .

Put r := w^{1/(2 n_0)}; then 0 ≤ r < 1 and P_A^n e ≤ r^n e for all n ≥ 2 n_0 .   □

If A ⊂ S is transient then we conclude from Lemma 6 that

(11)   e_j' P_A^n e_k ≤ e_j' P_A^n e ≤ r^n   for n sufficiently large.

Hence the sequence of matrices {P_A^n}_{n≥0} converges exponentially to the zero matrix if the set A is transient.

Proposition 5. A subset A ⊂ S is transient iff the embedded matrix P_A does not have an eigenvalue one.

Proof. If A ⊂ S is transient then P_A^n v → 0 (n → ∞) for every vector v (use (11)), and therefore the matrix P_A does not have an eigenvalue one.

Conversely, suppose A ⊂ S is not transient; then it follows from Proposition 4 that v := lim_{n→∞} P_A^n e ≠ 0. Since P_A v = v we conclude that P_A has an eigenvalue one.   □

Proposition 6. If A ⊂ S is transient then

Σ_{n=0}^{∞} P_A^n = (I - P_A)^{-1}   and   Σ_{n=1}^{∞} n P_A^n = (I - P_A)^{-2} P_A .

Proof. The following identities are easily verified by induction:

(I - P_A) Σ_{n=0}^{k} P_A^n = I - P_A^{k+1} ,                               k ≥ 0 ,
(I - P_A)^2 Σ_{n=0}^{k} n P_A^n = k P_A^{k+2} - (k+1) P_A^{k+1} + P_A ,     k ≥ 0 .

Using Proposition 5 and (11) and taking k → ∞ completes the proof.   □
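Proposition 6 is what makes expected visit counts finite and computable: the two series collapse to resolvents of P_A. A quick numerical check on an arbitrary matrix with spectral radius below one (an added illustration, not data from the memorandum):

```python
import numpy as np

P_A = np.array([[0.2, 0.3],      # any matrix with spectral radius < 1 will do
                [0.1, 0.4]])
I = np.eye(2)

S0 = sum(np.linalg.matrix_power(P_A, n) for n in range(200))          # ~ sum_{n>=0} P_A^n
S1 = sum(n * np.linalg.matrix_power(P_A, n) for n in range(1, 200))   # ~ sum_{n>=1} n P_A^n

assert np.allclose(S0, np.linalg.inv(I - P_A))
assert np.allclose(S1, np.linalg.inv(I - P_A) @ np.linalg.inv(I - P_A) @ P_A)
```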

The last proposition of this section states that the chain cannot enter the transient part T from a nontransient state.

Proposition 7. For every nontransient state j ∈ S the probability that the chain moves in one transition from j into the transient part T is zero, i.e.

e_j' P e_T = 0 .

Proof. Suppose j ∈ S is nontransient and e_j' P e_T = r > 0. It follows from Proposition 3 that the chain, when started in j, will return to j infinitely many times. For every integer n ≥ 1 consider the event A_n that after the n-th visit to j the chain moves immediately into T. We have ℙ_j(A_n) = r for all n ≥ 1. It follows from the Markov property that the events A_n are independent, and using the Borel-Cantelli lemma we conclude from

Σ_{n=1}^{∞} ℙ_j(A_n) = ∞   that   ℙ_j(A_n infinitely often) = 1 .

This contradiction with Proposition 4 completes the proof.   □

Suppose we have a Markov chain with a nonempty transient part T. Since it is impossible for the chain to enter the transient part T from outside (Proposition 7), it is sometimes convenient to consider the Markov chain restricted to the transient part T. The corresponding transition matrix is sub-Markov and its entries are the transition probabilities between the transient states in T. Following the terminology of Kemeny and Snell we denote this matrix by Q (see Kemeny and Snell (1976) p. 44). This chain with state space T and transition matrix Q has only transient states and therefore I - Q is regular (Proposition 5). As in Kemeny and Snell (1976) p. 46 we define the fundamental matrix N of the original chain

(12)   N := (I - Q)^{-1} .
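Numerically, the fundamental matrix is one linear solve. A minimal sketch (an added illustration; Q is assumed to be given as a NumPy array indexed by the transient states):

```python
import numpy as np

def fundamental_matrix(Q):
    """N = (I - Q)^{-1}  (formula (12)); Q is the restriction of P to the transient part T."""
    return np.linalg.inv(np.eye(Q.shape[0]) - Q)

def expected_time_in_T(Q):
    """(N e)_j: expected number of visits to T when starting in the j-th transient state,
    i.e. the expected absorption time (cf. Section 4)."""
    return fundamental_matrix(Q) @ np.ones(Q.shape[0])
```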

4. A random walk

We start this section with three propositions that answer the questions mentioned in the introduction.

Proposition 8. For every transient starting position j ∈ T and every nontransient state k ∈ T* we have

ℙ_j(first entrance in T* takes place in k) = e_j' P_{T*} e_k = e_j' (I - P I_T)^{-1} P I_{T*} e_k .

Proof. Let j be a transient and k a nontransient state. From (6) we obtain

ℙ_j(first entrance in T* takes place in k) = e_j' P_{T*} e_k .

Using Lemma 3 and (5) we get P_{T*} = (I - P I_T)^{-1} P I_{T*} .   □

Proposition 9. For every transient subset A ⊂ S let U_A be the number of visits to the set A at times n ≥ 0.

For every j ∈ S we have    𝔼_j U_A   = e_j' (I - P_A)^{-1} e_A .
For every j ∈ A we have    𝔼_j U_A^2 = e_j' (2(I - P_A)^{-1} - I)(I - P_A)^{-1} e .
For every j ∈ A* we have   𝔼_j U_A^2 = e_j' (2(I - P_A)^{-1} - 2I - P_A)(I - P_A)^{-1} e .

Proof.

𝔼_j U_A = ℙ_j(X_0 ∈ A) + Σ_{n=1}^{∞} n ℙ_j(X_k ∈ A for exactly n different k ≥ 1)        (use (8))
        = e_j'(e_A + Σ_{n=1}^{∞} n (P_A^n e - P_A^{n+1} e))                               (use (11))
        = e_j'(e_A + Σ_{n=1}^{∞} n P_A^n e - Σ_{n=1}^{∞} (n-1) P_A^n e)
        = e_j'(e_A + Σ_{n=1}^{∞} P_A^n e_A)                                               (use P_A^n e = P_A^n e_A for n ≥ 1)
        = e_j' (I - P_A)^{-1} e_A .                                                       (use Proposition 6)

For every j ∈ A

𝔼_j U_A^2 = Σ_{n=0}^{∞} (n+1)^2 ℙ_j(X_k ∈ A for exactly n different k ≥ 1)               (use (8))
          = e_j'(2 Σ_{n=1}^{∞} n P_A^n e + Σ_{n=0}^{∞} P_A^n e)                           (use (11))
          = e_j' (2(I - P_A)^{-1} - I)(I - P_A)^{-1} e .                                  (use Proposition 6)

For every j ∈ A* we have

𝔼_j U_A^2 = Σ_{n=1}^{∞} n^2 ℙ_j(X_k ∈ A for exactly n different k ≥ 1)

and the proof is similar.   □
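In code, Proposition 9 reads as follows (an added illustration; P_A is an embedded matrix of the transient set A, for instance obtained as in the Section 2 sketch, and states are numbered 1,...,s):

```python
import numpy as np

def visit_count_moments(P_A, A, j):
    """Mean and variance of U_A (visits to A at times n >= 0) when starting in state j (1-based),
    using Proposition 9; P_A is the embedded matrix of the transient set A."""
    s = P_A.shape[0]
    e = np.ones(s)
    e_A = np.zeros(s)
    e_A[[i - 1 for i in A]] = 1.0
    R = np.linalg.inv(np.eye(s) - P_A)            # (I - P_A)^{-1}
    mean = R[j - 1] @ e_A                         # E_j U_A, any starting state j
    if j in A:                                    # second-moment formula for j in A
        second = ((2.0 * R - np.eye(s)) @ R @ e)[j - 1]
    else:                                         # second-moment formula for j in A*
        second = ((2.0 * R - 2.0 * np.eye(s) - P_A) @ R @ e)[j - 1]
    return mean, second - mean ** 2
```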

Corollary 1. For U_T, the number of visits to the transient part T at times n ≥ 0, we have for every j ∈ T

𝔼_j U_T = e_j' N e   and   𝔼_j U_T^2 = e_j' (2N - I) N e

(cf. Kemeny and Snell (1976) p. 51).

Proof. Consider the chain restricted to the transient part T with transition matrix Q (see (12)). In this case we have in Proposition 9: A = T, P_A = Q and e_A = e_T = e .   □

In the last proposition we consider transient chains, i.e. chains where T = S.

Proposition 10. For a transient chain and an arbitrary subset A ⊂ S let V_A be the number of different states in A visited by the chain at times n ≥ 1 and W_A the number of different states in A visited by the chain at times n ≥ 0.

For every j ∈ S we have

𝔼_j V_A = e_j' Σ_{i∈A} P_{i} e ,

𝔼_j V_A^2 = e_j' { Σ_{i∈A} P_{i} e + 2 Σ_{i,k∈A, i<k} (I - P I_{{i,k}*})^{-1} (P I_i P_{k} e + P I_k P_{i} e) } .

If j ∈ A* then 𝔼_j W_A = 𝔼_j V_A and 𝔼_j W_A^2 = 𝔼_j V_A^2. For j ∈ A we have

𝔼_j W_A = 1 + e_j' Σ_{i∈A\{j}} P_{i} e ,

𝔼_j W_A^2 = 1 + e_j' { 3 Σ_{i∈A\{j}} P_{i} e + 2 Σ_{i,k∈A\{j}, i<k} (I - P I_{{i,k}*})^{-1} (P I_i P_{k} e + P I_k P_{i} e) } .

Proof. We only prove the formulas for V_A; the formulas for W_A can be proved similarly.

For i ∈ A consider the random variable

M_i := 1 if X_n = i for some n ≥ 1 ,     M_i := 0 if X_n ≠ i for all n ≥ 1 .

Evidently V_A = Σ_{i∈A} M_i. For every starting position j ∈ S and every i ∈ A we have (use (6))

𝔼_j M_i = ℙ_j(X_n = i for some n ≥ 1) = e_j' P_{i} e .

Hence 𝔼_j V_A = e_j' Σ_{i∈A} P_{i} e .   □

We now apply these results to the random walk along the edges of the cube with roof of Figure 1. The matrix of transition probabilities is

P = (1/12) ·
[ 0  4  4  4  0  0  0  0  0]
[ 4  0  0  0  4  0  0  0  4]
[ 4  0  0  0  0  4  0  0  4]
[ 3  0  0  0  3  3  3  0  0]
[ 0  3  0  3  0  0  3  3  0]
[ 0  0  3  3  0  0  3  3  0]
[ 0  0  0  3  3  3  0  3  0]
[ 0  0  0  0  0  0  0 12  0]
[ 0  0  0  0  0  0  0  0 12] .

The set T = {1,2,...,7} is the transient part of the chain. For the embedded matrix P_{T*} we obtain (use (5) and Lemma 3) a matrix whose only nonzero columns are the columns 8 and 9; rows 8 and 9 of P_{T*} are e_8' and e_9', and for the rows 1,...,7 the columns 8 and 9 are

(1/1947) ·
[  957   990]
[  781  1166]
[  781  1166]
[ 1309   638]
[ 1386   561]
[ 1386   561]
[ 1507   440] .

Using Proposition 8 we can calculate all absorption probabilities, e.g.

ℙ_3(absorption takes place in vertex 8) = e_3' P_{T*} e_8 = 781/1947 = 0.401 .

For the fundamental matrix N = (I - Q)^{-1} (see (12)) we obtain

N = (1/1947) ·
[3465 1485 1485 2112 1320 1320 1188]
[1485 2811  687 1276 1476  768  880]
[1485  687 2811 1276  768 1476  880]
[1584  957  957 3784 1716 1716 1804]
[ 990 1107  576 1716 3108  984 1452]
[ 990  576 1107 1716  984 3108 1452]
[ 891  660  660 1804 1452 1452 3124] .

Hence

(Ne)' = (1/1947) (12375, 9383, 9383, 12518, 9933, 9933, 10043)

and

((2N - I)Ne)' = (1/1947²) (246584085, 178329437, 178329437, 249901058, 190164447, 190164447, 192874121) .

Since the random variable U_T in Corollary 1 equals the absorption time, we can calculate, using Corollary 1, the mean and variance of the absorption time starting in an arbitrary transient state, e.g.

𝔼_1(absorption time) = e_1' N e = 12375/1947 = 6.356 ,

second moment of the absorption time starting in 1 = e_1' (2N - I) N e = 246584085/1947² = 65.048 ,

var_1(absorption time) = 65.048 - (6.356)² = 24.65 .
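The numbers above can be reproduced with a few lines of linear algebra. The script below is an added illustration based on the transition matrix as reconstructed above (vertices 8 and 9 absorbing); it prints one absorption probability and the mean and variance of the absorption time from vertex 1.

```python
import numpy as np

# Transition matrix of the random walk on the cube with roof (vertices 1..9).
M = np.array([
    [0, 4, 4, 4, 0, 0, 0, 0, 0],
    [4, 0, 0, 0, 4, 0, 0, 0, 4],
    [4, 0, 0, 0, 0, 4, 0, 0, 4],
    [3, 0, 0, 0, 3, 3, 3, 0, 0],
    [0, 3, 0, 3, 0, 0, 3, 3, 0],
    [0, 0, 3, 3, 0, 0, 3, 3, 0],
    [0, 0, 0, 3, 3, 3, 0, 3, 0],
    [0, 0, 0, 0, 0, 0, 0, 12, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 12]], dtype=float) / 12.0

T = list(range(7))                  # transient states 1..7 (0-based indices)
Tstar = [7, 8]                      # absorbing states 8 and 9
Q = M[np.ix_(T, T)]                 # restriction to the transient part
R = M[np.ix_(T, Tstar)]             # one-step transitions from T into T*

N = np.linalg.inv(np.eye(7) - Q)    # fundamental matrix (12)
absorption = N @ R                  # rows T, columns {8, 9} of P_{T*} (Proposition 8)
print(absorption[2, 0])             # P_3(absorption in vertex 8) = 781/1947 = 0.401...

mean_time = N @ np.ones(7)                           # E_j(absorption time), Corollary 1
second_moment = (2.0 * N - np.eye(7)) @ mean_time    # E_j(absorption time squared)
variance = second_moment - mean_time ** 2
print(mean_time[0], variance[0])    # starting in vertex 1: mean 6.356... and its variance
```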

From now on we shall restrict the random walk to its transient part, i.e. we consider the transient chain on the state space {1,2,...,7} with transition matrix

P = (1/12) ·
[0 4 4 4 0 0 0]
[4 0 0 0 4 0 0]
[4 0 0 0 0 4 0]
[3 0 0 0 3 3 3]
[0 3 0 3 0 0 3]
[0 0 3 3 0 0 3]
[0 0 0 3 3 3 0] .

Let A = {5,6,7}. We calculate the embedded matrix P_A (use (5) and Lemma 3):

P_A = (I - P I_{A*})^{-1} P I_A ,   where   (I - P I_{A*})^{-1} = (1/100) ·
[144  48  48  48   0   0   0]
[ 48 116  16  16   0   0   0]
[ 48  16 116  16   0   0   0]
[ 36  12  12 112   0   0   0]
[ 21  32   7  32 100   0   0]
[ 21   7  32  32   0 100   0]
[  9   3   3  28   0   0 100] ,

so that

P_A = (1/300) ·
[0 0 0 0  84  84  36]
[0 0 0 0 128  28  12]
[0 0 0 0  28 128  12]
[0 0 0 0  96  96  84]
[0 0 0 0  56  31  99]
[0 0 0 0  31  56  99]
[0 0 0 0  99  99  21] .

Using the embedded matrix P_A we can calculate all recurrence probabilities for the set A, e.g.

ℙ_1(first entrance in the set A takes place at vertex 7) = (use (6)) = e_1' P_A e_7 = 36/300 = 0.12 ,

ℙ_1(the chain will ever visit the set A) = (use (7)) = e_1' P_A e = 204/300 = 0.68 .

From Proposition 5 we know that I - P_A is regular. We find

(I - P_A)^{-1} =
[1 0 0 0 0.6780 0.6780 0.6102]
[0 1 0 0 0.7581 0.3945 0.4520]
[0 0 1 0 0.3945 0.7581 0.4520]
[0 0 0 1 0.8814 0.8814 0.9266]
[0 0 0 0 1.5963 0.5054 0.7458]
[0 0 0 0 0.5054 1.5963 0.7458]
[0 0 0 0 0.7458 0.7458 1.6045] .

Hence (see Proposition 9)

{(I - P_A)^{-1} e_A}' = (1.966, 1.605, 1.605, 2.689, 2.8475, 2.8475, 3.0961)

and

{(2(I - P_A)^{-1} - I)(I - P_A)^{-1} e}' = (5.750, 7.286, 7.286, 11.577, 9.992, 9.992, 11.087) .

Using Proposition 9 we can calculate the mean and variance of the number of visits to the set A. We obtain e.g.

𝔼_7(number of visits to the set A, starting position included) = e_7' (I - P_A)^{-1} e_A = 3.0961 ,

var_7(number of visits to the set A, starting position included) = 11.087 - (3.0961)² = 1.50 .

We now calculate the embedded matrices P_{5}, P_{6} and P_{7}. Each of these matrices has a single nonzero column (column 5, 6 and 7 respectively). Using (5) and Lemma 3, e.g. P_{5} = (I - P I_{{5}*})^{-1} P I_{5}, we obtain for these columns

(column 5 of P_{5})' = (1/1036) (440, 492, 256, 572, 387, 328, 484) ,
(column 6 of P_{6})' = (1/1036) (440, 256, 492, 572, 328, 387, 484) ,
(column 7 of P_{7})' = (1/3124) (1188, 880, 880, 1804, 1452, 1452, 1177) .

Hence e.g.

ℙ_5(the chain will ever return to vertex 5) = (use (7)) = e_5' P_{5} e = 387/1036 = 0.374 ,

𝔼_1(number of different states in A visited by the chain) = 880/1036 + 1188/3124 = 1.230 .
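Both of these values follow from the singleton embedded matrices via (5). An added illustrative sketch (the helper name embedded is ours, not the memorandum's):

```python
import numpy as np

P = np.array([                      # restricted walk on the transient part {1,...,7}
    [0, 4, 4, 4, 0, 0, 0],
    [4, 0, 0, 0, 4, 0, 0],
    [4, 0, 0, 0, 0, 4, 0],
    [3, 0, 0, 0, 3, 3, 3],
    [0, 3, 0, 3, 0, 0, 3],
    [0, 0, 3, 3, 0, 0, 3],
    [0, 0, 0, 3, 3, 3, 0]], dtype=float) / 12.0

def embedded(P, A):
    """P_A = (I - P I_{A*})^{-1} P I_A, formula (5); states numbered 1..s."""
    s = P.shape[0]
    d = np.zeros(s)
    d[[i - 1 for i in A]] = 1.0
    I_A, I_Astar = np.diag(d), np.diag(1.0 - d)
    return np.linalg.solve(np.eye(s) - P @ I_Astar, P @ I_A)

e = np.ones(7)
print(embedded(P, {5})[4] @ e)      # P_5(ever return to vertex 5) = 387/1036 = 0.374...

A = {5, 6, 7}
print(sum(embedded(P, {i})[0] @ e for i in A))   # E_1 V_A = 880/1036 + 1188/3124 = 1.230...
```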

References

Foguel, S.R., The ergodic theory of Markov processes. Van Nostrand Mathematical Studies #21, New York, Van Nostrand Reinhold, 1969.

Kemeny, J.G. and J.L. Snell, Finite Markov Chains, Berlin etc., Springer Verlag, 1976.

Revuz, D., Markov Chains, North-Holland Publishing Company, 1975.

Simons, F.R. and D.A. Overdijk, Recurrent and sweep-out sets for Markov processes. Mh. Math. 86, 305-326 (1979).
