Capacity and Efficiency Analysis of Multiserver-Queuing Systems



MSc STOCHASTICS AND FINANCIAL MATHEMATICS

Master Thesis

Capacity and Efficiency Analysis of Multiserver-Queuing Systems

Author: Hui Zhang
Supervisor: prof. dr. R. Nunez Queija
Examination date: October 30, 2018


Abstract

This thesis studies the mean performance of various queuing systems with different numbers of servers and service rates. Specifically, we investigate how the mean queue length or mean waiting time compares between a system with one server and a fast service rate and systems with two or more servers and slow service rates. We address this problem first for simple M/M/c queuing systems and then for more general phase-type systems. Analytical analyses of the mean performance and corresponding numerical analyses are presented.

For the analysis of phase-type systems we discuss the theory of quasi-birth-and-death processes with matrix-geometric stationary distributions. In particular, we present a detailed analysis of the waiting time distribution in multi-server systems with phase-type service distributions. As a canonical example we focus on the two-server model with a two-phase service distribution and discuss how to generalize to multiple servers and more phases in the service distribution.

The main conclusion of this thesis is that a fast single-server queuing system is not always the best (compared with a slow multi-server system) when the service distribution is more general than exponential. The result is of practical interest for making dimensioning decisions.

Title: Capacity and Efficiency Analysis of Multiserver-Queuing Systems
Author: Hui Zhang, zhang.gabriella@yahoo.com, 11388609
Supervisor: prof. dr. R. Nunez Queija
Second Examiner: dr. J.L. Dorsman
Examination date: October 30, 2018

Korteweg-de Vries Institute for Mathematics University of Amsterdam

Science Park 105-107, 1098 XG Amsterdam http://kdvi.uva.nl


Acknowledgement

Sitting in front of my laptop and reviewing my thesis journey, I would especially like to thank my supervisor, (Sindo) Nunez Queija, for contributing his valuable time and giving me his encouraging, patient guidance. His sincere and clear suggestions have helped me achieve this work.

To my parents, my brother QiQi, and my uncle: because I owe it all to you. I am grateful for your unconditional love and trust.

And to my gym buddies: we have shared very valuable, enthusiastic time together. You motivate me to build a strong and healthy me, both physically and mentally. And I wish to thank all my friends, who provide unending inspiration.


Contents

1 Introduction
2 Queuing Models
  2.1 Basic Theory
  2.2 M/M/c System and M/M/c/N System
  2.3 M/Er/1 System
  2.4 M/G/1 System
3 Performance Evaluation of M/M/· Systems
  3.1 Direct comparison between M/M/1 and M/M/c Systems
  3.2 Numerical Analysis between M/M/1 and M/M/c Systems
  3.3 Numerical Analysis between M/M/1/N and M/M/c/N Systems
4 Phase-type Queuing System
  4.1 Basic Concepts
  4.2 Some Phase-type Distributions
5 Matrix-Geometric Method for Phase-type Models
  5.1 Quasi-birth-and-death Process
  5.2 Matrix-Geometric Method
  5.3 Mean Waiting Time in QBD Process
  5.4 Waiting Time Distributions in GI/PH/1 System
6 Performance Evaluation of Phase-type Models
  6.1 Model Description
  6.2 Mean Waiting Time in M/PH/1
  6.3 Mean Waiting Time in M/PH/2
7 Concluding Remarks
References


1 Introduction

The following situation may be familiar: you are going to open a bank account and two banks are available in the neighborhood. One, called Bank A, offers a single server with a fast service rate, while the other, called Bank B, offers two servers with a slow service rate. Which one do you prefer? In fact, these are queue optimization problems. Very few explicit expressions are known for queues with multiple homogeneous servers. This, however, can be resolved by using matrix-geometric methods, as we will show in the main body of the thesis. We intend to address this type of problem for several queuing systems, including the fundamental M/M/c system and the phase-type M/PH/c system, which can be formulated as continuous-time Markov chains (CTMC). Some previous research on queuing theory is briefly introduced here before we go into details.

Early in 1908, the first study of a queuing problem in telephone engineering was conducted by Erlang [1]. Pollaczek [2] is best known for his 1929 publications on the Pollaczek-Khinchine formula, which derives the queue length and waiting time distributions in a queue with an arbitrary service time distribution. The result was also found by the Russian mathematician Yakovlevich Khinchin in 1932. The Journal of the Royal Statistical Society first used the term "queuing system" in 1951.

In 1953, Kendall [3] gave the standard queuing notation a/b/c, which is commonly used by scholars. Figure 1.1 shows a simple M/M/1 queuing system, where the first letter indicates that the arrival process is Poisson with rate λ, the second letter specifies that the service time is exponentially distributed with rate µ, and the third letter says that there is one server in the system. This is the most basic queue. The queue can become more complex by changing the distributions of the arrival process and service time, as well as the number of servers.


Figure 1.1: M/M/1 queuing system (customers arrive at rate λ to the waiting area and are served at the service node at rate µ)

A fourth entry is sometimes appended to this notation for finite-capacity queues. It indicates the capacity of the queuing system, i.e. the waiting room can contain only N customers. For instance, M/M/1/5 describes a queuing system in which the arrival process is Poisson, the service time is exponentially distributed, and at most 5 customers are allowed in the system. The distributions of queuing systems change somewhat when this capacity limitation is added to the models. Broader studies in queuing have been conducted by many scientists. Haight [7] described parallel servers in 1958. Considering priorities in queuing systems, Cobham [8] solved the problem of priorities in waiting-line problems. Two important results in queuing theory have been proved rigorously: Little's Law by Little [4] in 1961, and the PASTA principle by Wolff [5] in 1982. Lindley further developed the theory of single-queue, single-server systems.

Grassmann studied numerical problems in queuing models. In 1981, an important method called matrix-geometric analysis was introduced by Neuts [6]. It plays a vital role in numerically solving a large variety of queuing models such as phase-type queues. He also discussed structured block-partitioned stochastic matrices (of M/G/1 type) and their applications. Moreover, Neuts gave approximate performance measures for GI/PH/1 queues and GI/PH/c models with 2 servers. In 2006, Bolch and co-authors [39] introduced the matrix-geometric method to analyze models with block matrices in the M/G/1 system. Takayuki [10] discussed analytic tools for analyzing the performance of multiple-server queuing systems. Van Dijk and van der Sluis (2006) [13] showed that there is no single answer to the question of pooling or not pooling servers; for a single queue type, however, the waiting-time effect of pooling is positive. This thesis builds on these results and discusses one-queue situations with identical servers.

Let us go back to our question from the beginning: do you prefer Bank A with a fast single server or Bank B with slow multiple servers? The answer most likely is the system in which you spend less time. It could be Bank A or Bank B, depending on the distributions of the service times, the arrival rates and so on. Our interest is to figure out this problem.


We will see how queuing models behave given various service time distributions. We assume that customers arrive one by one (not in batches) and are served in order of arrival, the first-come-first-served discipline. The arrival processes of all systems are Poisson, and we also assume that customers are patient enough to wait in the queue until being served, without abandonment behaviour.

This thesis is divided into five parts besides the introduction and the concluding remarks. In chapter 2, two main types of queuing models are presented, including the M/M/c and M/Er/1 queuing systems. Their structures and mean performance are discussed in detail. Chapter 2 also compares two methods of computing waiting time distributions: the direct approach and the transform method (Pollaczek-Khinchine formula).

In chapter 3, given the above queue systems, a mathematical comparison of the mean queue length between M/M/1 and M/M/c systems is completed. Besides, numerical analyses of the mean performance are shown for both M/M/c and M/M/c/N queuing systems. Chapter 4 introduces phase-type distributions, which can be used to approximate any distribution in multiserver queues. We analyze the structures of some examples in order to get insight into phase-type queuing systems.

In chapter 5, we introduce the matrix-geometric method, which allows one to find the mean waiting time in phase-type queues. Some corresponding definitions and coefficients are introduced and proved, and the algorithms for computing them are given in this chapter.

Chapter 6 gives the model descriptions of phase-type queuing systems with one and with two servers, respectively. Numerical analyses for both of them are conducted. We compare their mean waiting times and give our conclusions.


2 Queuing Models

Before starting to compare the queuing performance of various systems, we introduce some common queuing models in this section, which one can find in any queuing theory textbook; see Adan and Resing [11]. They include the M/M/1, M/M/c and M/Er/1 queuing models, and the M/M/c/N queuing system as an extension. Below we present their structures and important mean performance measures, mainly the mean waiting time and the mean queue length in the system.

We also give two methods to compute the waiting time distribution and their mathematical comparison. The two methods are: a direct approach derived using probability theory, and the transform method (Pollaczek-Khinchine formula), which gives a closed-form analytic expression for the mean performance using conditional expectation.

2.1 Basic Theory

2.1.1 Markov Chain

Queuing theory is one of the main applications of Markov chains. We describe Markov chains as follows (see [15]).

Definition 1. We say that a stochastic process (X_n)_{n≥0} is a Markov chain with initial distribution x and transition matrix P = (p_ij, i, j ∈ E), with p_ij ≥ 0 for all i, j, if for all n ≥ 0 and i_0, · · · , i_{n+1} ∈ E, where E is a countable set,

(i) P(X_0 = i_0) = x_{i_0};
(ii) P(X_{n+1} = i_{n+1} | X_0 = i_0, · · · , X_n = i_n) = P(X_{n+1} = i_{n+1} | X_n = i_n) = p_{i_n i_{n+1}}.

Also, each i ∈ E is called a state and E is called the state space.

Each move in the Markov chain is called a step, and the probabilities p_ij are called the transition probabilities from state i to state j. We call condition (ii) the Markovian property. Note that the transition matrix P is a stochastic matrix: Σ_{j∈E} p_ij = 1.

A Markov chain is called time-homogeneous if the p_ij are independent of n, i.e. P(X_{n+1} = j | X_n = i) = P(X_{n+m+1} = j | X_{n+m} = i) = p_ij for all m ∈ N. We assume all Markov chains in our work are time-homogeneous.

Theorem 1. A Markov chain is irreducible if

p_ij > 0, ∀i, j ∈ E.

That is, all the states communicate with each other. Irreducibility is detected by checking p_ij for all i, j ∈ E.

A state is aperiodic if it has period 1. A Markov chain is aperiodic if all states have period 1.

Theorem 2. Let P be the transition matrix of an irreducible, aperiodic Markov chain. Then there is a unique invariant probability vector π such that

πP = π.

Definition 2. An absorbing Markov chain is a Markov chain in which every state can reach an absorbing state. An absorbing state is a state that, once entered, cannot be left.

A state i is called absorbing if the one-step probability from i to i equals 1, i.e. {i} is closed. A single absorbing state is of particular interest in this thesis. We call a state i ∈ E recurrent if the Markov chain revisits it within finitely many transition steps given that it starts in state i, i.e. P(X_n = i for infinitely many n | X_0 = i) = 1. Note that an absorbing state must be recurrent by these definitions.

The problems that we will study in this thesis are in continuous time. Continuous-time Markov chains (CTMC) can be constructed from discrete-time Markov chains in the following way: if P is the transition matrix of a discrete-time Markov chain, then Q := P − I is the generator of a continuous-time Markov chain. The entries of the generator may be interpreted as follows.


• −q_ii is the parameter of the stochastic residence time of the process in state i; to be specific, when the CTMC visits state i, it remains in that state for an exponentially distributed time with expectation 1/(−q_ii).

Writing πP = π now leads to πQ = 0. Note that for a given generator Q and a scalar β > 0, the product βQ is also the generator of a CTMC; multiplication by β simply represents a change of time scale. This is useful when doing the opposite translation: if Q has diagonal elements q_ii < −1, then Q must first be re-scaled with a parameter β that makes the largest −q_ii smaller than or equal to 1, so that I + βQ is a probability transition matrix. See chapter 5.

The CTMC so constructed with generator Q is a stochastic process having the Markovian (memoryless) property. The times between transitions are continuous random variables with exponential distributions. In our study we use continuous-time Markov chains to analyze queuing models that can be characterized by exponential distributions, or so-called phase-type distributions, which are composed of exponential distributions, as we will describe in chapters 4 and 5.
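The translation between P and Q described above can be sketched numerically. The 3-state generator below is an assumed illustrative example, not a model from this thesis:

```python
import numpy as np

# Sketch of the DTMC <-> CTMC translation described above.  The 3-state
# generator Q is an assumed illustrative example (rows sum to 0).
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])

# Re-scale so the largest -q_ii is at most 1; then I + beta*Q is stochastic.
beta = 1.0 / max(-np.diag(Q))
P = np.eye(3) + beta * Q
assert np.allclose(P.sum(axis=1), 1.0) and (P >= 0).all()

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

assert np.allclose(pi @ Q, 0.0)   # pi is invariant for the CTMC ...
assert np.allclose(pi @ P, pi)    # ... and for the uniformized DTMC
```

The same invariant vector π satisfies both πQ = 0 and π(I + βQ) = π, illustrating that the time-scale change does not affect the stationary distribution.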

2.1.2 Useful Theories

The Laplace-Stieltjes transform is an integral transform. It is defined as

X̃(s) = E(e^{−sX}) = ∫_{x=0}^{∞} e^{−sx} dF(x), s ≥ 0,

where X is a nonnegative random variable with distribution F(·). Further,

X̃(0) = 1, X̃'(0) = −EX.

There is a result that plays an important role in queuing theory: Little's law. It states that

EL = λEW,

where EL is the mean queue length in the system and EW is the mean waiting time in the system; that is, the mean waiting time multiplied by the arrival rate is exactly the mean queue length. This applies to virtually any queuing system.


2.2 M/M/c System and M/M/c/N System

2.2.1 M/M/1 System

The M/M/1 system is a model in which the arrival process is Poisson with arrival rate λ and the service time is exponentially distributed with service rate µ. A single server handles the jobs one at a time. Figure 2.1 shows the flow diagram of the M/M/1 queue model.

Figure 2.1: Transition diagram of M/M/1 queuing system

We first introduce and analyze some valuable quantities and then formulate our interest, the mean performance.

• Occupation rate ρ: the fraction of time that the server is busy, ρ = λ/µ.

Since the mean service time, say ES, equals 1/µ in the M/M/1 queuing system, λES = ρ is the amount of work arriving per unit time. A stable process requires jobs to leave faster than they arrive at the queue. We therefore assume ρ < 1 to avoid the queue growing without bound, i.e. explosion; note that ρ ≥ 1 yields an infinite queue.

• Recall ρ < 1. The probability p0 that the system is idle, namely that the server is not occupied, is

p0 = 1 − ρ = 1 − λ/µ.

• Recall ρ < 1. Let p_n denote the probability that there are n customers in the system; we then say that the system is in state n. Assume all our queuing systems are in equilibrium, which means that the average numbers of customers leaving and entering the system are equal. Thus, according to Figure 2.1, we have the equilibrium equations

λ p_0 = µ p_1,
(λ + µ) p_n = λ p_{n−1} + µ p_{n+1}, n = 1, 2, · · · .

Together with the normalization equation Σ_{n=0}^{∞} p_n = 1, the probability that there are n customers in the system is solved as

p_n = ρ^n (1 − ρ).

• Recall ρ < 1. The expected number of customers in the system, E(L) (in the waiting line and being served), is

E(L) = Σ_{n=0}^{∞} n p_n = ρ/(1 − ρ) = λ/(µ − λ).

• Recall ρ < 1. The expected time a customer spends in the system (in the waiting line and being served), E(W), follows by Little's Law:

E(W) = E(L)/λ = 1/(µ − λ).
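These M/M/1 formulas can be checked numerically; the rates below are illustrative choices, not values prescribed by the thesis:

```python
# Quick numerical check of the M/M/1 formulas above, with illustrative
# rates lam = 0.6 and mu = 0.8 (so rho = 0.75 < 1).
lam, mu = 0.6, 0.8
rho = lam / mu

p0 = 1 - rho                       # probability the system is idle
EL = rho / (1 - rho)               # mean number in system
EW = EL / lam                      # Little's law: E(W) = E(L)/lam

assert abs(EL - lam / (mu - lam)) < 1e-12   # equals lam/(mu - lam)
assert abs(EW - 1 / (mu - lam)) < 1e-12     # equals 1/(mu - lam)

# p_n = rho^n (1-rho) is a proper distribution reproducing E(L):
EL_series = sum(n * rho ** n * (1 - rho) for n in range(5000))
assert abs(EL_series - EL) < 1e-9
```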

Note that the mean queue length and the mean waiting time in the queue grow to infinity as ρ approaches 1.

2.2.2 M/M/c System

Figure 2.2: Transition diagram of M/M/c queuing system

The system we compare with is the c-parallel-server system, the M/M/c system. The other assumptions still hold, but each server i now has service rate µ_i, i = 1, · · · , c. We take identical servers with µ_i = µ_c = µ/c for each i; that is, the single server in the M/M/1 system works c times faster than each server in the c-server system M/M/c. We give a direct comparison of the mean queue lengths for c = 1, 2 in chapter 3. The corresponding formulas are given below.

• The mean service time at each of the c parallel identical servers is 1/µ_c, so the amount of work arriving per unit time per server is λ/(cµ_c). Thus, define the occupation rate ρ_c per server:

ρ_c = λ/(cµ_c).

For the same reason as in the M/M/1 system, we require ρ_c < 1.

• Between two neighboring states in the transition diagram of the M/M/c queuing system (Figure 2.2), the equilibrium relations are λ p_{n−1} = min(n, c) µ_c p_n, n = 1, 2, · · · . Combined with the normalization equation Σ_{n=0}^{∞} p_n = 1, we have the probability p0 that the system is empty:

p0 = [ Σ_{n=0}^{c−1} (cρ_c)^n/n! + ((cρ_c)^c/c!) · 1/(1 − ρ_c) ]^{−1}.

• Iterating the above relations gives the probability p_n that there are n customers in the system:

p_n = ((cρ_c)^n/n!) p0, n = 0, · · · , c, and p_{c+n} = ρ_c^n p_c, n = 0, 1, · · · .

• The mean queue length E(L) in the system is obtained from the above formulas:

E(L) = Σ_{n=0}^{∞} n p_{c+n} + cρ_c = ρ_c p_c/(1 − ρ_c)^2 + cρ_c.

• The mean waiting time in the system, E(W), follows by Little's Law:

E(W) = E(L)/λ = p_c/(cµ_c (1 − ρ_c)^2) + 1/µ_c.
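The formulas above can be collected into a small script; the helper name mmc_metrics and the example rates are assumptions made for illustration:

```python
from math import factorial

# Sketch evaluating the M/M/c mean performance formulas above.  The helper
# name mmc_metrics and the example rates are illustrative assumptions.
def mmc_metrics(lam, mu_c, c):
    rho = lam / (c * mu_c)              # occupation rate per server
    assert rho < 1, "stability requires rho < 1"
    p0 = 1.0 / (sum((c * rho) ** n / factorial(n) for n in range(c))
                + (c * rho) ** c / factorial(c) / (1 - rho))
    pc = (c * rho) ** c / factorial(c) * p0
    EL = rho * pc / (1 - rho) ** 2 + c * rho   # mean number in system
    return EL, EL / lam                        # E(L) and, via Little's law, E(W)

EL1, EW1 = mmc_metrics(0.6, 0.8, 1)   # one fast server
EL2, EW2 = mmc_metrics(0.6, 0.4, 2)   # two servers at half the speed
assert EL1 < EL2 and EW1 < EW2        # the comparison studied in chapter 3
```

For c = 1 the formulas collapse to the M/M/1 results of the previous subsection.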


Now we formulate the waiting time distribution for the M/M/c system in detail. Later we will also discuss the waiting time distribution for another special queuing system (phase-type). Denote by S_k the inter-departure times seen by a customer who must wait; while all servers are busy these are independent and exponentially distributed with mean 1/(cµ_c). A customer who finds c + n customers in the system has waiting time

W = Σ_{k=1}^{n+1} S_k, with P(Σ_{i=1}^{n+1} S_i > t) = Σ_{i=0}^{n} ((cµ_c t)^i/i!) e^{−cµ_c t}.

Since S_k is independent of n + 1, the waiting time distribution is obtained by conditioning on the state seen on arrival:

P(W > t) = Σ_{n=0}^{∞} P(Σ_{i=1}^{n+1} S_i > t) p_{c+n}
         = Σ_{n=0}^{∞} Σ_{i=0}^{n} ((cµ_c t)^i/i!) e^{−cµ_c t} p_c ρ_c^n
         = Σ_{i=0}^{∞} Σ_{n=i}^{∞} ((cµ_c t)^i/i!) e^{−cµ_c t} p_c ρ_c^n
         = (p_c/(1 − ρ_c)) Σ_{i=0}^{∞} ((cµ_c ρ_c t)^i/i!) e^{−cµ_c t}
         = (p_c/(1 − ρ_c)) e^{−cµ_c (1−ρ_c) t}. (2.1)

2.2.3 M/M/c/N System

In some cases, only a limited number of customers is allowed to enter the queue. The corresponding queuing model is M/M/c/N, obtained by limiting the queue length of the M/M/c model; the previous M/M/c model's other assumptions hold. Denote by N the maximum number of customers allowed in the service system; customers arriving when the system is full are rejected and lost. Notice that the capacity N is equal to or greater than the number of servers c. Below we give the relevant mean performance measures without going into detail (see [40]).

• Solving the equilibrium and normalization equations yields the probability p0 that the system is empty:

p0 = [ Σ_{n=0}^{c} (cρ_c)^n/n! + ((cρ_c)^c/c!) Σ_{j=c+1}^{N} ρ_c^{j−c} ]^{−1},

where ρ_c = λ/(cµ_c).

• The probability p_n that there are n customers in the system is given by:

p_n = (cρ_c)^n p0 / n! for n = 0, 1, . . . , c,
p_n = (cρ_c)^n p0 / (c^{n−c} c!) for n = c + 1, c + 2, . . . , N,
p_n = 0 for n > N.

• Mean queue length E(Lq):

E(Lq) = Σ_{n=0}^{N−c} n p_{c+n}
      = p0 (cρ_c)^c ρ_c / (c!(1 − ρ_c)^2) · [1 − ρ_c^{N−c} − (N − c) ρ_c^{N−c} (1 − ρ_c)] for ρ_c ≠ 1,
      = p0 (cρ_c)^c (N − c)(N − c + 1) / (2c!) for ρ_c = 1.

• We know that p_N represents the probability that an arriving customer finds the system full, so that no further customers join, and λp_N represents the average rate of customers rejected by the system due to the capacity limit. We therefore define the effective arrival rate of the queue, λ_e = λ(1 − p_N), and obtain the waiting time in the queue, E(Wq), by Little's Law:

E(Wq) = E(Lq)/λ_e.
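A sketch evaluating these M/M/c/N quantities (the function name and parameter values are illustrative assumptions). Because the state space is finite, direct summation works for any ρ_c, so no case split on ρ_c = 1 is needed in code:

```python
from math import factorial

# Sketch of the M/M/c/N formulas above; the helper name and the example
# parameters are illustrative assumptions, not values from the thesis.
def mmcn_metrics(lam, mu_c, c, N):
    rho = lam / (c * mu_c)
    p0 = 1.0 / (sum((c * rho) ** n / factorial(n) for n in range(c + 1))
                + (c * rho) ** c / factorial(c)
                * sum(rho ** (j - c) for j in range(c + 1, N + 1)))

    def p(n):                      # stationary probability of n in system
        if n <= c:
            return (c * rho) ** n / factorial(n) * p0
        return (c * rho) ** n / (c ** (n - c) * factorial(c)) * p0 if n <= N else 0.0

    assert abs(sum(p(n) for n in range(N + 1)) - 1.0) < 1e-12
    ELq = sum((n - c) * p(n) for n in range(c + 1, N + 1))  # mean queue length
    lam_e = lam * (1 - p(N))       # effective arrival rate: rejections excluded
    return ELq, ELq / lam_e, p(N)  # E(Lq), E(Wq) by Little's law, blocking prob.

ELq, EWq, pN = mmcn_metrics(0.6, 0.4, 2, 5)
assert 0 < pN < 1 and ELq > 0 and EWq > 0
```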

We will numerically analyze the mean queue lengths in M/M/c/N queues in chapter 3 to see how many servers are best.

2.3 M/Er/1 System

This section focuses on the single-server case. In reality, the service process may contain more than one stage. M/Er/1 is the queuing system in which the service of each customer consists of a series of r exponential phases, each with rate µ, so the Erlang-r mean service time is r/µ; the service time is represented as the sum of r exponential phases with the same intensity. Thus, the occupation rate equals ρ = λ · r/µ, and for stability, ρ < 1. We show the mean performance of the M/Er/1 queuing system below, and the Pollaczek-Khinchine formula for its waiting time distribution. In order to find the waiting time distribution, solving the equilibrium equations is necessary. The equilibrium equations in the M/Er/1 system, with the state counting the number of uncompleted phases, are given by:

λ p_0 = µ p_1, (2.2)
(λ + µ) p_n = µ p_{n+1}, n = 1, 2, · · · , r − 1, (2.3)
(λ + µ) p_n = λ p_{n−r} + µ p_{n+1}, n = r, r + 1, · · · . (2.4)

These equations are dependent.

Following Adan and Resing [11], we first consider a solution of (2.4) of the form

p_n = x^n, n = 0, 1, 2, · · · . (2.5)

Plugging this into the equilibrium equations and dividing by the common power x^{n−r} yields

(λ + µ) x^r = λ + µ x^{r+1}. (2.6)

The root x = 1 is useless, since the solution of (2.2), (2.3) and (2.4) must be normalizable. Recall that ρ = λ · r/µ < 1. It can be shown that (2.6) has r distinct roots of modulus less than 1, say x_1, · · · , x_r.

We can then consider solutions of the form

p_n = Σ_{k=1}^{r} c_k x_k^n, n = 0, 1, 2, · · · , (2.7)

such that they satisfy the above equilibrium equations and the normalization equation. Substituting (2.7) into these yields a set of r linear equations for the r unknown coefficients. The unique solution of this set of equations is given by

c_k = (1 − ρ) / Π_{j≠k} (1 − x_j/x_k), k = 1, · · · , r.


We now derive the waiting time distribution of M/Er/1. Define the random variable L_f as the number of phases of work in the system and S_i as the amount of work in the ith phase, where the S_i are independent and exponentially distributed with rate µ. The waiting time is just the total amount of phase work in the system, i.e. W = Σ_{i=1}^{L_f} S_i. Since L_f and the S_i are independent, conditioning on L_f we obtain

P(W > t) = Σ_{n=1}^{∞} P(Σ_{i=1}^{n} S_i > t | L_f = n) P(L_f = n)
         = Σ_{n=1}^{∞} P(Σ_{i=1}^{n} S_i > t) p_n
         = Σ_{n=1}^{∞} Σ_{i=0}^{n−1} ((µt)^i/i!) e^{−µt} Σ_{k=1}^{r} c_k x_k^n
         = Σ_{k=1}^{r} c_k Σ_{i=0}^{∞} Σ_{n=i+1}^{∞} ((µt)^i/i!) e^{−µt} x_k^n
         = Σ_{k=1}^{r} (c_k x_k/(1 − x_k)) Σ_{i=0}^{∞} ((µ x_k t)^i/i!) e^{−µt}
         = Σ_{k=1}^{r} (c_k x_k/(1 − x_k)) e^{−µ(1−x_k) t}, t ≥ 0. (2.9)

The waiting time distribution in the M/Er/1 queue is thus obtained by calculating a conditional expectation. The Pollaczek-Khinchine formula is another way to calculate waiting time distributions for single-server queuing systems; surely, the two methods should produce the same results. The comparison will be shown in section 2.4.
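The recipe above — find the roots of (2.6) of modulus less than 1, form the coefficients c_k, and sum the exponential terms of (2.9) — can be sketched numerically. The example below uses r = 2, λ = 1, µ = 6 (ρ = 1/3), the same values as the worked example in section 2.4.3:

```python
import numpy as np

# Sketch of the M/Er/1 recipe above, with r = 2, lam = 1, mu = 6 as in the
# worked example of section 2.4.3 (rho = 1/3).
lam, mu, r = 1.0, 6.0, 2
rho = lam * r / mu

# (2.6) rearranged: mu*x^(r+1) - (lam+mu)*x^r + lam = 0, descending powers.
coeffs = [mu, -(lam + mu)] + [0.0] * (r - 1) + [lam]
xs = sorted(x.real for x in np.roots(coeffs) if abs(x - 1) > 1e-8)  # drop x = 1

# Coefficients c_k = (1-rho) / prod_{j != k} (1 - x_j/x_k).
cs = [(1 - rho) / np.prod([1 - xj / xk for xj in xs if xj != xk]) for xk in xs]

# Waiting time tail (2.9): sum_k c_k x_k/(1-x_k) exp(-mu (1-x_k) t).
def tail(t):
    return sum(ck * xk / (1 - xk) * np.exp(-mu * (1 - xk) * t)
               for ck, xk in zip(cs, xs))

assert np.allclose(xs, [-1/3, 1/2])
assert abs(tail(0.0) - rho) < 1e-9   # P(W > 0) equals the occupation rate
```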

2.4 M/G/1 System

As an extension of the M/M/1 model, the M/G/1 queuing model is a system in which the arrival process is Poisson with rate λ and the service time follows a general distribution. In this section the Pollaczek-Khinchine (P-K) formula for the waiting time distribution will be given without going into detail.

In queuing theory, the Pollaczek-Khinchine formula gives a relationship between the service time distribution and the queue length for M/G/1 queue models, using the Laplace transform. We are interested in comparing the waiting time distributions obtained by applying the P-K formula and by the direct approach shown in the previous sections for the M/M/1 and M/Er/1 queues.

2.4.1 Pollaczek-Khinchine Formula of Waiting Time Distribution

The Pollaczek-Khinchine formula can be derived from the LST and the probability generating function. Following Adan and Resing [11], the P-K formula of the waiting time distribution for a single-server queue is given by

W̃(s) = (1 − ρ)s / (λS̃(s) + s − λ), (2.10)

where S̃(s) is the Laplace-Stieltjes transform of the service time. One can compare its application to the M/M/1 and M/E2/1 queuing systems with the results obtained by direct computation in the previous sections; surely, the two methods should produce the same results.

2.4.2 Application in M/M/1 System

In the M/M/1 system, with the service time exponentially distributed with mean 1/µ, we have the Laplace-Stieltjes transform of the service time,

S̃(s) = µ/(µ + s).

Then by applying the P-K formula (2.10), the waiting time distribution is obtained:

W̃(s) = (1 − ρ)s / (λS̃(s) + s − λ) = (1 − ρ) + ρ(1 − ρ)µ / (µ(1 − ρ) + s),

P(W > t) = ρ e^{−(1−ρ)µt}, t ≥ 0,
P(W = 0) = lim_{s→∞} W̃(s) = 1 − ρ.

It is observed that, given that the waiting time is positive (i.e. the system is not empty upon arrival), the waiting time is exponentially distributed with parameter (1 − ρ)µ. Recalling the waiting time distribution (2.1) derived by the conditioning method and substituting c = 1, µ_c = µ, ρ_c = ρ, we obtain the same result as above. One can observe that both methods give the same waiting time distribution for the M/M/1 queuing system.

2.4.3 Application in M/E2/1 System

Recalling the waiting time result in (2.12), we would like to compare it with the following result, computed according to the Pollaczek-Khinchine formula (2.10) using a partial fraction expansion. Suppose the service time is Erlang-2 distributed with mean 2/µ. Then

W̃(s) = (1 − ρ)s / (λ(µ/(µ + s))^2 + s − λ)
      = (1 − ρ)(µ + s)^2 / ((µ^2 − 2λµ) + (2µ − λ)s + s^2)
      = (1 − ρ)(µ + s)^2 / ((s + r1)(s + r2))
      = (1 − ρ) + K1/(r1 + s) + K2/(r2 + s), (2.11)

where

r1 = ((2µ − λ) − √(4µλ + λ^2)) / 2, r2 = ((2µ − λ) + √(4µλ + λ^2)) / 2.

One can get K1 and K2 by letting s → −r1 and s → −r2, respectively; then

K1 = (1 − ρ)(2µλ + λ^2 + λ√(4µλ + λ^2)) / (2√(4µλ + λ^2)),
K2 = −(1 − ρ)(2µλ + λ^2 − λ√(4µλ + λ^2)) / (2√(4µλ + λ^2)).

Inverting the transform term by term, we have

P(W > t) = (K1/r1) e^{−r1 t} + (K2/r2) e^{−r2 t}, t ≥ 0,
P(W = 0) = W̃(∞) = 1 − ρ.
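This partial-fraction step can be checked numerically; the script below is a sketch using the example values λ = 1, µ = 6 of this section. Note that matching the original transform forces a negative sign on K2 in this parametrization:

```python
from math import sqrt

# Numeric sketch of the partial-fraction step for M/E2/1, using the
# example values lam = 1, mu = 6 (rho = 1/3) from this section.
lam, mu = 1.0, 6.0
rho = 2 * lam / mu
d = sqrt(4 * mu * lam + lam ** 2)        # the square root sqrt(4*mu*lam + lam^2)

r1 = ((2 * mu - lam) - d) / 2
r2 = ((2 * mu - lam) + d) / 2
K1 = (1 - rho) * (2 * mu * lam + lam ** 2 + lam * d) / (2 * d)
K2 = -(1 - rho) * (2 * mu * lam + lam ** 2 - lam * d) / (2 * d)

# The decomposition (1-rho) + K1/(s+r1) + K2/(s+r2) must reproduce the
# original transform (1-rho)*s / (lam*(mu/(mu+s))**2 + s - lam).
for s in (0.5, 1.0, 3.5):
    lhs = (1 - rho) * s / (lam * (mu / (mu + s)) ** 2 + s - lam)
    rhs = (1 - rho) + K1 / (s + r1) + K2 / (s + r2)
    assert abs(lhs - rhs) < 1e-9

assert (r1, r2) == (3.0, 8.0)
assert abs(K1 / r1 - 2 / 5) < 1e-12 and abs(K2 / r2 + 1 / 15) < 1e-12
```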

Now we check the result by taking λ = 1, µ = 6 (ρ = 2/6 = 1/3) in the M/E2/1 model, which yields

r1 = 3, r2 = 8, K1/r1 = 2/5, K2/r2 = −1/15.

Hence, the waiting time distribution is

P(W > t) = (2/5) e^{−3t} − (1/15) e^{−8t}, t ≥ 0.

Recall that the solution form for the M/Er/1 queuing model is p_n = Σ_{k=1}^{r} c_k x_k^n, n = 0, 1, 2, · · · . We find that the solution for p_n with µ = 6, λ = 1 has

c1 = 2/5, c2 = 4/15, x1 = 1/2, x2 = −1/3.

Hence, substituting them into (2.9), derived by the direct method, we have the waiting time distribution

P(W > t) = (2/5) e^{−3t} − (1/15) e^{−8t}, t ≥ 0. (2.12)

The direct approach and the P-K formula give the same waiting time distribution for the M/Er/1 queuing system. Therefore, one can apply either of them to calculate the waiting time distribution for Erlang queues with a single server.

The P-K formula gives a closed-form expression for the waiting time distribution. It, however, only works for single-server queues. The goal of this thesis is to study multiserver queues (and compare them with single-server queues). For multiserver queues, we have the direct approach if service times are exponential, and we have Neuts' method (the matrix-geometric method, [12]) when service time distributions are more complicated. This method will be specified in chapter 5.


3 Performance Evaluation of M/M/· Systems

How many servers are best for reducing queue lengths and waiting times in M/M/· systems? The answer is shown by analytical computation and by numerical analysis, respectively. Moreover, since direct computation for the M/M/c system with capacity N is not tractable, we give a numerical analysis for it. All models are limited to the FCFS service discipline.

3.1 Direct comparison between M/M/1 and M/M/c Systems

Notice that all servers within a system are identical and mutually independent, and the arrival rate for all queuing systems is λ. Denote by µ the service rate of the server in the M/M/1 queuing system and by µ_c, c ≥ 2, the service rate of each server in the M/M/c queuing system. Let µ = cµ_c; the single server's service speed is then c times the speed of each server in the multiple-server system, so the total service speeds are equal. This yields a relationship between the occupation rates:

ρ_1 = λ/µ = λ/(cµ_c) = ρ_c < 1,

where ρ_1 and ρ_c represent the occupation rates per server in the M/M/1 and M/M/c systems, respectively.

Denote by iE(L) the mean queue length in the system with i servers. Recall from section 2.2.2 the mean queue lengths in M/M/c models. The direct quotient between 1E(L) and cE(L) is given in the following.


1E(L)/cE(L) = (ρ_1/(1 − ρ_1)) [ (c^c ρ_1^{c+1}/(c!(1 − ρ_1)^2)) ( Σ_{i=0}^{c−1} c^i ρ_1^i/i! + (c^c ρ_1^c/c!)(1/(1 − ρ_1)) )^{−1} + cρ_1 ]^{−1}
            = x / (c^c ρ_1^c + (c − cρ_1) x)
            < x / ((cρ_1)^c + x) < 1, c ≥ 2, (3.1)

where

x := c!(1 − ρ_1)^3 Σ_{i=0}^{c−1} c^i ρ_1^i/i! + c^c ρ_1^c (1 − ρ_1)^2,

the intermediate algebra consists of clearing the normalization sum from the denominator, and the last two inequalities follow from c − cρ_1 ≥ 2 − cρ_1 > 1 (valid when cρ_1 < 1).

In particular, take c = 2 with µ = 2µ_2, so that ρ_1 = λ/µ = λ/(2µ_2) = ρ_2 < 1, and assume 2ρ_2 < 1. We have

1E(W)/2E(W) = (1/(2µ_2 − λ)) · ((2µ_2 − λ)(2µ_2 + λ)/(4µ_2)) (3.2)
            = 1/2 + (1/4)(λ/µ_2) < 3/4 < 1. (3.3)

Clearly, the computation results show that the single-server queuing system is more efficient with respect to mean queue length and mean waiting time in the system, for any service speed. Customers spend less time in the M/M/1 system with its fast service speed: the single server minimizes the mean waiting time in the system. Thus, the M/M/1 system has a big advantage in reducing queue lengths. Recall that we take more servers but each with a smaller speed; therefore, increasing the number of servers does not improve the performance of M/M/· queuing systems. Another question is: does a system with c − 1 fast servers perform better than a system with c slow servers in terms of mean waiting time? This will be illustrated in the following.

3.2 Numerical Analysis between M/M/1 and M/M/c Systems

We illustrate the results of the previous section with numerical examples. Throughout our numerical analysis, mean queue lengths in the system (shown on the vertical axis) are studied for a range of numbers of servers in the M/M/c system, against arrival rates (shown on the horizontal axis). We expect the mean queue length of the single-server system to be the shortest. Another expectation is that the (c − 1)-server model performs better than the c-server model in M/M/· queuing systems.

For illustration, our analysis focuses on c = 1, 2 servers. Let µ = 2µ_c = 0.6, 0.8 (corresponding to mean service time 1/µ). To avoid explosion, the arrival rates satisfy λ (≤ 0.6) < µ in all cases. In Figure 3.1, solid lines represent the mean queue length with 1 server while dotted lines represent the system with c = 2 servers. We see that mean queue lengths gradually build up as the arrival rate λ grows, and all lines show increasing trends with rising arrival rates. This is easy to understand: the system fills with growing numbers of customers, causing a long queue, which is reflected in the sharply increasing tails. Figure 3.1 also shows that for both µ = 2µ_c = 0.6 and 0.8, the fast single-server system produces a shorter queue length than the 2-server system. The system with the larger mean service time (the upper two lines, with µ = 2µ_c = 0.6) causes longer queues.

In terms of M/M/c queuing systems with multiple servers c = 1, 2, 4, 8 and service rates µ = cµ_c = 0.8, Figure (3.2) shows that the optimal number of servers with respect to mean queue length is 1 (the bottom line). Fixing a value of λ, we see that E[L_8] > E[L_4] > E[L_2] > E[L_1]. The single-server system minimizes the mean queue length and the mean waiting time in the system; our guess is proved to be true. Customers spend less time in the M/M/c system with 4 fast servers than in the system with 8 slow servers. Similarly, the fast 2-server system performs better than the slow 4-server system.


Figure 3.1: Expected number of customers in the system (c = 1, 2; µ = 0.6, 0.8)

There are increasing trends in all lines. This means the queue lengths grow with rising arrival rates, increasing steadily at the beginning and rapidly towards the tail. The reason is that queue lengths are likely to explode once the arrival rate hits a certain point under fixed service rates; in our case, around λ = 0.8. The FCFS jobs in M/M/c queuing systems prefer a single fast server over multiple slow servers.

Note that as the arrival rate λ increases to its maximum value, the mean queue lengths of the single-server and multi-server systems get closer to each other. The reason is that the more work arrives at the system per unit time, the larger the fraction of time the servers are busy. For both the single- and multi-server systems, the queues are then close to exploding, and the advantage of the single-server system in minimizing the mean queue length and mean waiting time becomes less significant.
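The comparisons of Figures 3.1 and 3.2 can be reproduced with the standard Erlang-C formula for the M/M/c queue. The following sketch (the function name is my own) computes the mean number in the system when the total capacity µ is split over c servers:

```python
from math import factorial

def mmc_mean_number(lam, mu_total, c):
    """Mean number in an M/M/c system in which the total capacity mu_total
    is split over c servers, each working at rate mu_total / c."""
    mu_c = mu_total / c            # per-server rate
    a = lam / mu_c                 # offered load in Erlangs
    rho = a / c                    # utilization; equals lam / mu_total
    assert rho < 1, "stability requires lam < mu_total"
    # Erlang-C: probability that an arriving customer must wait
    p_wait = (a**c / factorial(c)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(c))
        + a**c / factorial(c))
    lq = p_wait * rho / (1 - rho)  # mean number waiting in queue
    return lq + a                  # plus mean number in service

# one fast server vs. two half-speed servers (lam = 0.6, mu = 0.8)
print(mmc_mean_number(0.6, 0.8, 1))  # 3.0, the M/M/1 value rho/(1-rho)
print(mmc_mean_number(0.6, 0.8, 2))  # larger: two slow servers do worse
```

For c = 1 the formula collapses to the familiar ρ/(1 − ρ), which provides a quick consistency check.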


3.3 Numerical Analysis between M/M/1/N and M/M/c/N Systems

Let the service rate be µ = 0.8, the number of servers c = 1, 2, 4, and λ ∈ [0.3, 0.8]. The finite capacity N means that customers arriving when N customers are already present are rejected. Taking N = 5, 15 in the M/M/c/N queuing systems, Figure (3.3) shows exactly the same pattern as for the M/M/c systems: the single server is the best for both capacities and produces the shortest queue lengths. The more (and slower) servers there are in the M/M/c/N system, the longer the queues.

Another significant observation is that by restricting the capacity to N, the mean queue length never explodes and stays below level N.

Figure 3.3: Expected number of customers in the system (c = 1, 2, 4; N = 5, 15; µ = 0.8)

According to the above analysis, there exists an optimal number of servers, which is a single server in all cases, for both the M/M/c and M/M/c/N queuing systems.
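The finite-capacity comparison can be checked directly from the stationary distribution of the M/M/c/N birth-death chain; a minimal sketch (the function name is my own):

```python
from math import factorial

def mmcN_mean_number(lam, mu_total, c, N):
    """Mean number in an M/M/c/N system: c servers of rate mu_total / c,
    and arrivals finding N customers present are rejected."""
    a = lam / (mu_total / c)       # offered load in Erlangs
    # unnormalized stationary probabilities p_0, ..., p_N
    p = [a**n / factorial(n) if n <= c
         else (a**c / factorial(c)) * (a / c)**(n - c)
         for n in range(N + 1)]
    z = sum(p)
    return sum(n * pn for n, pn in enumerate(p)) / z

for c in (1, 2, 4):
    print(c, mmcN_mean_number(0.6, 0.8, c, 5))  # grows with c, stays below N = 5
```

Because the state space is truncated at N, no stability condition is needed and the mean queue length is bounded by N, in line with the observation above.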


4 Phase-type Queuing System

Having exponentially distributed interarrival and service times is a severely restrictive requirement. In reality, the service time distribution can be more complicated than a single exponentially distributed phase in a single-server system. Fortunately, any distribution in multi-server queues can be approximated by phase-type distributions, introduced by Erlang and generalized by Neuts (see [12]). A phase-type distribution is constructed as a convolution or mixture of exponential distributions in sequence, called phases. The notation is PH.

Phase-type distributions can be considered a generalization of exponential distributions in phases and can be completely characterized by their rate parameters. This simplifies the further study of Markov chains; the distributions treated earlier are all special cases. Related material, such as absorbing continuous-time Markov chains and some typical examples of continuous phase-type distributions, is introduced below.

4.1 Basic Concepts

Let 0 denote the vector of zeros of appropriate dimension. Correspondingly, e denotes a column vector of ones of appropriate dimension, i.e. e = [1, · · · , 1]^T. Matrices are represented primarily by capital roman letters such as T and A_k. The symbol I is used for an identity matrix of appropriate dimension.

In general, the infinitesimal generator matrix of the Markov chain in the continuous case, for a given representation, is given by (see Neuts [12])

$$\tilde{Q} = \begin{bmatrix} T & t \\ \mathbf{0} & 0 \end{bmatrix},$$


where the column vector t contains the rates from the transient states into the absorbing state. Since no transitions occur from the absorbing state to the transient states, the infinitesimal generator matrix Q contains a 1 × m zero row vector. The remaining element 0 indicates that no transitions occur out of the absorbing state.

A phase-type distribution is represented by a pair (α, T). The vector (α, α_{m+1}) gives the initial probabilities of the generator, and αe + α_{m+1} = 1. We assume that the states 1, · · · , m are transient; the initial probability α_{m+1} of starting in the absorbing state m + 1 is then determined. The m × m matrix T can be interpreted as the infinitesimal generator among the transient states in the continuous case.

By the property of the transition rate matrix Q, the rows sum to zero, i.e. Te + t = 0, where e is the column m-vector of ones and 0 is the column m-vector of zeros. Note that t and α_{m+1} are implicit, which is why we do not include them in the phase-type representation.

We now look at how PH distributions are associated with Markov processes. Recall the absorbing continuous-time Markov chain: the passage through the phases of a phase-type queue is a stochastic process, and a random variable describing the time to absorption of a CTMC with one absorbing state has a phase-type distribution. Each state in the state space of the absorbing CTMC represents one of the phases. Phase-type distributions provide a simple framework, and one can apply many simple results on exponential distributions to more complex models without losing computational tractability.
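The association above gives the phase-type CDF F(x) = 1 − α e^{Tx} e. A small sketch (the function name is my own; the matrix exponential is evaluated with a plain Taylor series, which is adequate for these small, well-scaled generators) checks it against the known Erlang-2 CDF:

```python
import numpy as np

def ph_cdf(alpha, T, x, terms=60):
    """CDF of a phase-type distribution PH(alpha, T): F(x) = 1 - alpha exp(Tx) e."""
    m = T.shape[0]
    expTx = np.zeros((m, m))
    term = np.eye(m)
    for k in range(terms):        # Taylor series of exp(T x)
        expTx += term
        term = term @ (T * x) / (k + 1)
    return 1.0 - alpha @ expTx @ np.ones(m)

# Erlang-2 with rate lam: alpha = [1, 0], T = [[-lam, lam], [0, -lam]]
lam = 1.0
alpha = np.array([1.0, 0.0])
T = np.array([[-lam, lam], [0.0, -lam]])
x = 2.0
print(ph_cdf(alpha, T, x))                     # matches the closed form below
print(1 - np.exp(-lam * x) * (1 + lam * x))    # Erlang-2 CDF
```

For production use a robust matrix exponential (e.g. scaling and squaring) would replace the Taylor loop; here the simple series keeps the sketch self-contained.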

4.2 Some Phase-type Distributions

Depending on the structure of the transition matrix T and the initial probability α, phase-type distributions can be divided into classes; we give an overview of these examples in this section.

Exponential Distribution

The exponential distribution is the simplest phase-type distribution, with one transient state,

Figure 4.1: Markov chain of the exponential distribution with absorbing state 2


as shown in Figure (4.1): there is only one transient phase before entering the absorbing state 2, and the transition rate is λ. Obviously, the initial probability is α = [1] and T = [−λ]. The infinitesimal generator matrix of the absorbing CTMC is

$$Q = \begin{bmatrix} -\lambda & \lambda \\ 0 & 0 \end{bmatrix}. \tag{4.1}$$

Erlang Distribution

As we showed in chapter 2, the Erlang distribution is the sum of n exponential phases

Figure 4.2: Markov chain of the Erlang distribution with absorbing state n + 1

with the same rate λ; the Markov process starts from phase 1 and reaches the absorbing state n + 1, as shown in Figure (4.2). All n phases are independent. The initial probability vector of the absorbing Markov chain is α_{1×n} = [1, 0, · · · , 0], and the

infinitesimal generator matrix of the absorbing CTMC is

$$T = \begin{bmatrix} -\lambda & \lambda & 0 & \cdots & 0 & 0 \\ 0 & -\lambda & \lambda & \cdots & 0 & 0 \\ & & & \ddots & & \\ 0 & 0 & 0 & \cdots & -\lambda & \lambda \\ 0 & 0 & 0 & \cdots & 0 & -\lambda \end{bmatrix}. \tag{4.2}$$

It is observed that the transition matrix T of the Erlang distribution is upper bidiagonal (a special case of a tridiagonal matrix).

Hypo-exponential Distribution

The hypo-exponential distribution is simply a generalized Erlang distribution with n parameters λ_1, λ_2, · · · , λ_n, which are not necessarily identical. The initial probability

Figure 4.3: Markov chain of the hypo-exponential distribution with absorbing state n + 1

vector is α_{1×n} = [1, 0, · · · , 0], and the infinitesimal generator matrix of the absorbing CTMC is

$$T = \begin{bmatrix} -\lambda_1 & \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & -\lambda_2 & \lambda_2 & \cdots & 0 & 0 \\ & & & \ddots & & \\ 0 & 0 & 0 & \cdots & -\lambda_{n-1} & \lambda_{n-1} \\ 0 & 0 & 0 & \cdots & 0 & -\lambda_n \end{bmatrix}. \tag{4.3}$$

This matrix is also upper bidiagonal.

Hyper-exponential Distribution

The next example of a PH distribution is the hyper-exponential distribution. It is a mixture of n exponential distributions (i.e. n phases), and the Markov chain can start in any of its phases. The initial probability vector shown in Figure (4.4) is α_{1×n} = [α_1, α_2, · · · , α_n].

Figure 4.4: Markov chain of the hyper-exponential distribution with absorbing state n + 1

The infinitesimal generator matrix of the absorbing CTMC is

$$T = \begin{bmatrix} -\lambda_1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & -\lambda_2 & 0 & \cdots & 0 & 0 \\ & & & \ddots & & \\ 0 & 0 & 0 & \cdots & -\lambda_{n-1} & 0 \\ 0 & 0 & 0 & \cdots & 0 & -\lambda_n \end{bmatrix}. \tag{4.4}$$


Hyper-Erlang Distribution

A hyper-Erlang distribution is considered a mixture of m mutually independent Erlang distributions, weighted by the initial probability vector α = [α_1, · · · , α_m], where α_i ≥ 0 and α is stochastic (i.e. $\sum_{i=1}^{m} \alpha_i = 1$). The distribution has one absorbing state. Recall that the Erlang distribution is a sum of independent identical exponential distributions, so the hyper-Erlang distribution can be recognized as a mixture of sums of exponential distributions.

Figure 4.5: Markov chain of the hyper-Erlang distribution with an absorbing state

The initial probability vector is α = [α_1, 0, · · · , 0, α_2, 0, · · · , 0, α_m, 0, · · · , 0], and the infinitesimal generator matrix of the absorbing CTMC is

$$Q = \begin{bmatrix} Q_1 & & \\ & \ddots & \\ & & Q_m \end{bmatrix}, \tag{4.5}$$

where the matrix Q_i on the diagonal represents the infinitesimal generator of the ith Erlang branch; it has the form (4.3).

Coxian Distributions

A mixture of hypo- and hyper-exponential distributions is called a Coxian distribution. Starting in phase i with probability α_i, the Markov process goes through the n phases, continuing from phase i with probability f_i and rate λ_i, until it is absorbed in state n + 1; from phase i it is absorbed with probability 1 − f_i.


Figure 4.6: Markov chain of the Coxian distribution with absorbing state n + 1

The infinitesimal generator matrix of the absorbing CTMC is

$$T = \begin{bmatrix} -\lambda_1 & f_1\lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & -\lambda_2 & f_2\lambda_2 & \cdots & 0 & 0 \\ & & & \ddots & & \\ 0 & 0 & 0 & \cdots & -\lambda_{n-1} & f_{n-1}\lambda_{n-1} \\ 0 & 0 & 0 & \cdots & 0 & -\lambda_n \end{bmatrix}. \tag{4.6}$$

We find that all these transition matrices T are highly structured (diagonal or bidiagonal). This special character of phase-type distributions simplifies the numerical computation of complex queuing models.
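The structured sub-generators of this section are straightforward to build programmatically; the helper names below are my own:

```python
import numpy as np

def erlang_T(n, lam):
    """Erlang-n sub-generator: -lam on the diagonal, lam on the superdiagonal."""
    return -lam * np.eye(n) + lam * np.eye(n, k=1)

def hyper_T(rates):
    """Hyper-exponential sub-generator: a diagonal of -lambda_i."""
    return -np.diag(rates)

def coxian_T(rates, f):
    """Coxian sub-generator: from phase i continue to i+1 with probability f[i]."""
    n = len(rates)
    T = -np.diag(rates)
    for i in range(n - 1):
        T[i, i + 1] = f[i] * rates[i]
    return T

for T in (erlang_T(3, 2.0), hyper_T([1.0, 3.0]), coxian_T([1.0, 2.0], [0.4])):
    t = -T.sum(axis=1)           # exit-rate vector, chosen so that T e + t = 0
    assert np.all(t >= -1e-12)   # exit rates are nonnegative, as required
```

The implicit exit vector t = −Te recovers the absorption rates, matching the row-sum property Te + t = 0 from Section 4.1.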


5 Matrix-Geometric Method for Phase-type Models

In this chapter, we introduce a special class of continuous-time Markov processes known as quasi-birth-and-death (QBD) processes and describe an associated absorbing Markov chain that will be needed for the analysis of waiting times. We then discuss a matrix-geometric method for analyzing QBD processes (see [12]). Our study then focuses on the QBD process of a CTMC with one absorbing state.

5.1 Quasi-birth-and-death Process

Definition 3. A quasi-birth-and-death process is a Markov process with state space E = {(i, j), i ≥ 0, 1 ≤ j ≤ m} and canonical infinitesimal generator $\tilde{Q}$ given by

$$\tilde{Q} = \begin{bmatrix} B_0 & A_0 & & & \\ B_1 & A_1 & A_0 & & \\ & A_2 & A_1 & A_0 & \\ & & A_2 & A_1 & \ddots \\ & & & \ddots & \ddots \end{bmatrix},$$

where B_k and A_l (k = 0, 1; l = 0, 1, 2) are square matrices of order m satisfying B_0e + A_0e = B_1e + A_1e + A_0e = (A_0 + A_1 + A_2)e = 0, and e is a column vector of ones of appropriate dimension.

The m × m nonnegative submatrix A_0 contains the transition rates with which the process moves one level forward, i.e. from level k to level k + 1 for k ≥ 0. Similarly, the m × m nonnegative submatrix A_2 contains the transition rates with which the process moves one level backward, and the m × m submatrix A_1, with strictly negative diagonal and nonnegative off-diagonal elements, contains the transition rates within the same level k ≥ 0. The matrix A_0 + A_1 + A_2 has zero row sums. Also notice that the one-step transition rate matrix $\tilde{Q}$ of a QBD process possesses a block tridiagonal form.

Recalling the generator matrices of the phase-type examples, many queuing models such as M/PH/1 and GI/M/c are recognized as having repetitive block-tridiagonal forms like the QBD matrix: the scalar entries of the previous examples are replaced by entire matrices in a QBD process. These special structures make numerical solutions tractable.

5.2 Matrix-Geometric Method

It is seen that closed-form solutions for the mean performance are hard to obtain, and one needs to seek efficient, numerically stable solutions. Exploiting the special structure of the QBD matrix, a corresponding method called the matrix-geometric method gives a way to solve such Markov chains. This method bridges algebraic manipulations of mathematical expressions and the probabilistic interpretation of these expressions, and phase-type distributions are its starting point. In the matrix-geometric method, the distribution of a random variable is defined through a matrix.

The matrix-geometric method is introduced below. Specifically, we show how the stationary probability vector of a QBD process can be represented. It corresponds to mean performance measures such as the mean waiting time in the queue. We also give the related theorems needed to build up our models and conduct the numerical analysis in the next chapter.

5.2.1 Stationary Probability

To find the distribution of the waiting time, we characterize the steady-state probability vector x of $\tilde{Q}$. Let x = [x_0, x_1, x_2, · · · ] be the stationary probability vector of the transition rate matrix $\tilde{Q}$ of the canonical QBD process. The x_k, k ≥ 0, are m-dimensional row vectors (their dimension may change according to the boundary behavior), and their components give the probabilities that the QBD process is in a given phase of level k. We know that $\sum_{k=0}^{\infty} x_k e = 1$, and x satisfies

$$x\tilde{Q} = 0, \tag{5.1}$$

where 0 denotes the zero vector. In general, the x_k have a particular relation in a QBD process.

For instance, in the M/M/1 system (number of phases m = 1), let λ and µ be the arrival and service rate, respectively. The transition matrix $\tilde{Q}$ is then given by

$$\tilde{Q} = \begin{bmatrix} -\lambda & \lambda & & & \\ \mu & -(\lambda+\mu) & \lambda & & \\ & \mu & -(\lambda+\mu) & \lambda & \\ & & \mu & -(\lambda+\mu) & \ddots \\ & & & \ddots & \ddots \end{bmatrix}.$$

We see that the invariant probabilities satisfy $x_k = \frac{\lambda}{\mu}x_{k-1}$ for k ≥ 1, where the factor λ/µ is a real number and stands for the amount of work arriving at level k, given that the chain starts from level k − 1. We call x a matrix-geometric probability vector if it has a similar property. The definition of this type of vector is given in [12]; see below.

Definition 4. A steady-state probability vector x = [x_0, x_1, · · · ] is called a matrix-geometric probability vector if

$$x_k = x_0 R^k, \quad k \ge 0, \tag{5.2}$$

where R is a nonnegative square matrix.

On the basis of the above argument, the rate matrix R must represent the amount of work arriving per unit time.

One of our goals is to evaluate the mean performance of a stable QBD process, so we need a condition to check the stability of a QBD process. This condition is known as Neuts' mean drift condition (see Neuts [12]) and is given as follows.

Theorem 3. The Markov process is stable if and only if

$$\pi A_2 e > \pi A_0 e, \tag{5.3}$$


where π is the stationary probability vector of the generator A = A_0 + A_1 + A_2. Here πA_2e is the mean drift from level i + 1 to i and πA_0e is the mean drift from level i to i + 1. Clearly, the queue process is stable if and only if jobs depart faster than they arrive.

In the M/M/1 system, A_0 = λ, A_1 = −(λ + µ), A_2 = µ, and the above inequality yields λ < µ, i.e. λ/µ = ρ < 1, which ensures that the queue does not explode and the system is stable. It remains to solve for the stationary probability vector x of $\tilde{Q}$. To do so, an explicit expression for R is necessary; it will be specified in Section 5.2.2.
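To make the drift condition concrete beyond the scalar case, the following sketch checks (5.3) numerically for a small QBD. The blocks are my own illustrative choice, an M/E2/1 queue with phase = current service stage, not a model analyzed in the thesis:

```python
import numpy as np

lam, mu = 0.3, 1.0
A0 = lam * np.eye(2)                    # arrival: one level up, phase unchanged
A1 = np.array([[-(lam + mu), mu],
               [0.0, -(lam + mu)]])     # stage-1 completion stays in the level
A2 = np.array([[0.0, 0.0],
               [mu, 0.0]])              # stage-2 completion: one level down

A = A0 + A1 + A2                        # generator of the phase process
# stationary pi: solve pi A = 0 together with pi e = 1
M = np.vstack([A.T, np.ones(2)])
pi = np.linalg.lstsq(M, np.array([0.0, 0.0, 1.0]), rcond=None)[0]

e = np.ones(2)
print(pi @ A2 @ e, ">", pi @ A0 @ e)    # downward drift vs. upward drift
assert pi @ A2 @ e > pi @ A0 @ e        # Neuts' condition: the queue is stable
```

Here π = [0.5, 0.5], so the condition reads µ/2 > λ, which is exactly λ E[S] < 1 for an Erlang-2 service time with mean 2/µ.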

5.2.2 The Main Theorems

Since $\tilde{Q}$ is positive recurrent, the total expected time the process spends in higher levels before returning to the starting level must be finite; hence the matrix R is finite. Moreover, to construct an algorithm for R, we need an explicit expression for it. We verify these claims by studying some important lemmas. These useful results are proved in Neuts's book ([12], Lemma 1.2.1, Lemma 1.2.3); we add more details to the original proofs.

A conditional transition probability called the taboo probability is introduced, denoted by ${}_HP^{(n)}_{h,k}$, where H is an arbitrary set of states called the taboo set. The taboo probability is the probability that the Markov chain starts from state h at time 0 and reaches state k at time n without visiting the set H in between. For our Markov chain, we use the notation ${}_iP^{(n)}_{i,j;i+k,v}$ for the probability that, given the chain starts from state (i, j), it reaches (i + k, v) at time n without going back to level i, for n ≥ 0, i ≥ 0, 1 ≤ j, v ≤ m, k ≥ 1. Notice that ${}_iP^{(n)}_{i,j;i+k,v} = 0$ for n < k. Moreover, this taboo probability is independent of the level i, due to the special structure of $\tilde{Q}$.

The taboo probability ${}_iP^{(n)}_{i,j;i+k,v}$ helps to characterize the matrix R(k), whose elements are defined by

$$R^{(k)}_{jv} = \sum_{n=0}^{\infty} {}_{i}P^{(n)}_{i,j;i+k,v}, \quad k \ge 1,\ i \ge 0,\ 1 \le j \le m,\ 1 \le v \le m, \tag{5.4}$$

which is the expected number of visits to (i + k, v), starting from state (i, j), before the first return to level i. As before, the summand equals zero when n < k. Clearly, $R^{(0)} = I$. We call $R = R^{(1)}$ the rate matrix of $\tilde{Q}$. This R is exactly the same R as in $x_k = x_{k-1}R$; both stand for the expected amount of work arriving at the states of level i + k before the first return to level i, given that the Markov chain starts in level i + k − 1. The very good news is that R is


tractable, as is shown in Lemma 2 below.

Before giving the lemmas, we introduce some corresponding definitions. Let $X = \{X_t\}_{t\in\mathbb{R}_+}$ and $Y = \{Y_t\}_{t\in\mathbb{R}_+}$ denote the level and phase processes, respectively. We consider the two-dimensional Markov process $\{(X_t, Y_t), t \in \mathbb{R}_+\}$ with values in the state space E = {(i, j), i ≥ 0, 1 ≤ j ≤ m}.

Definition 5. For the Markov chain, we define $\tilde{\gamma}_{(i,j)(i+k,\nu)}$ as the expected number of visits to (i + k, ν) before the first return to (i, j), given that the chain starts in (i, j), for i ≥ 0, 1 ≤ j ≤ m, 1 ≤ ν ≤ m, k ≥ 1, i.e.

$$\tilde{\gamma}_{(i,j)(i+k,\nu)} = E_{(i,j)}\left[\sum_{n=0}^{T_{(i,j)}-1} \mathbf{1}\{X_n = i+k,\ Y_n = \nu\}\right], \tag{5.5}$$

where $T_{(i,j)}$ denotes the first passage time to state (i, j), defined by

$$T_{(i,j)} = \inf\{n \ge 1 : X_n = i,\ Y_n = j\}. \tag{5.6}$$

Notice that $T_{(i,j)} \ge T_{(i,\cdot)}$, where $T_{(i,\cdot)}$ is the first passage time to any state of level i.

Lemma 1. If the Markov chain is positive recurrent, the matrices R(k), k ≥ 1, are finite. ([12], p. 8-9)

Proof. Notice that the Markov chain $\tilde{Q}$ is irreducible. We first show the finiteness of the expected number of visits to level i + k before the first return to level i, i.e. $\tilde{\gamma}_{(i,\cdot)(i+k,\cdot)} < \infty$:

$$\begin{aligned}
\tilde{\gamma}_{(i,\cdot)(i+k,\cdot)} &= \sum_{j=1}^{m}\left[\sum_{\nu=1}^{m} \tilde{\gamma}_{(i,\cdot)(i+k,\nu)}\cdot P(Y_0=j\,|\,X_0=i)\right] \\
&= \sum_{j=1}^{m}\left[\sum_{\nu=1}^{m} E_{(i,\cdot)}\left[\sum_{n=0}^{T_{(i,\cdot)}-1}\mathbf{1}\{X_n=i+k,\ Y_n=\nu\}\right]\cdot P(Y_0=j\,|\,X_0=i)\right] \\
&\le \sum_{j=1}^{m}\left[\sum_{\nu=1}^{m} E_{(i,j)}\left[\sum_{n=0}^{T_{(i,j)}-1}\mathbf{1}\{X_n=i+k,\ Y_n=\nu\}\right]\cdot P(Y_0=j\,|\,X_0=i)\right] \\
&= \sum_{j=1}^{m}\left[\sum_{\nu=1}^{m} \tilde{\gamma}_{(i,j)(i+k,\nu)}\cdot P(Y_0=j\,|\,X_0=i)\right] < \infty, \tag{5.7}
\end{aligned}$$


where the inequality is based on the fact that $T_{(i,j)} \ge T_{(i,\cdot)}$, and the final bound follows from the fact that the Markov chain $\tilde{Q}$ is positive recurrent: as in ([15], Theorem 1.75), $\tilde{\gamma}_{(i,j)(i+k,\nu)} < \infty$. Moreover, we have the following relation between $\tilde{\gamma}_{(i,\cdot)(i+k,\cdot)}$ and $R^{(k)}_{jv}$:

$$\begin{aligned}
\tilde{\gamma}_{(i,\cdot)(i+k,\cdot)} &= E[\#\text{ visits to level } i+k \text{ before return to level } i \mid \text{start in level } i] \\
&= \sum_{j}\underbrace{E[\#\text{ visits to level } i+k \text{ before return to level } i \mid \text{start in } (i,j)]}_{=\ \sum_{v} R^{(k)}_{jv}}\cdot P(Y_0=j\,|\,X_0=i) \\
&= \sum_{j}\left[\sum_{v} R^{(k)}_{jv}\cdot P(Y_0=j\,|\,X_0=i)\right]. \tag{5.8}
\end{aligned}$$

Since $\tilde{\gamma}_{(i,\cdot)(i+k,\cdot)} < \infty$, we conclude that the matrices R(k), k ≥ 1, are finite. □

Lemma 2. If the Markov chain $\tilde{Q}$ is positive recurrent, the matrix R satisfies the equation

$$R = \sum_{k=0}^{\infty} R^k A_k \tag{5.9}$$

and is the minimal nonnegative solution of the matrix equation

$$X = \sum_{k=0}^{\infty} X^k A_k. \tag{5.10}$$

Proof. I. First of all we prove that the matrix R satisfies equation (5.9).

For n = 1, clearly ${}_iP^{(1)}_{i,j;i+1,v} = (A_0)_{jv}$. For n ≥ 2, we introduce a new variable $\tau^{(a,b)}_{(l,\cdot)}$, where $\tau^{(a,b)}_{(l,\cdot)} = r$ indicates that the last visit to level l in the time window (a, b] occurs at time r (in particular, $\tau^{(0,n)}_{(i,\cdot)} = 0$ means that level i is not visited after time 0), in order to


partition according to the last visit to level i + k:

$$\begin{aligned}
{}_iP^{(n)}_{i,j;i+k+1,\nu} &= P\big((X_n,Y_n)=(i+k+1,\nu),\ \tau^{(0,n)}_{(i,\cdot)}=0 \,\big|\, (X_0,Y_0)=(i,j)\big) \\
&= \sum_{r=1}^{n-1}\sum_{h=1}^{m} P\big((X_n,Y_n)=(i+k+1,\nu),\ \tau^{(0,n)}_{(i,\cdot)}=0,\ \tau^{(0,n)}_{(i+k,\cdot)}=r,\ Y_r=h \,\big|\, (X_0,Y_0)=(i,j)\big) \\
&= \sum_{r=1}^{n-1}\sum_{h=1}^{m} P\big((X_r,Y_r)=(i+k,h),\ \tau^{(0,r)}_{(i,\cdot)}=0 \,\big|\, (X_0,Y_0)=(i,j)\big) \\
&\qquad\qquad\cdot P\big((X_{n-r},Y_{n-r})=(i+k+1,\nu),\ \tau^{(0,n-r)}_{(i+k,\cdot)}=0 \,\big|\, (X_0,Y_0)=(i+k,h)\big) \\
&= \sum_{r=1}^{n-1}\sum_{h=1}^{m} {}_iP^{(r)}_{i,j;i+k,h}\cdot {}_{i+k}P^{(n-r)}_{i+k,h;i+k+1,\nu}
 = \sum_{r=1}^{n-1}\sum_{h=1}^{m} {}_iP^{(r)}_{i,j;i+k,h}\cdot {}_iP^{(n-r)}_{i,h;i+1,\nu}, \tag{5.11}
\end{aligned}$$

where $\tau^{(r,n)}_{(i+k,\cdot)}=r$ describes the event that the path does not visit level i + k between times r and n, and the last equality uses the fact that the taboo probability is independent of the level. Conditioning instead on the state at time n − 1, it then follows that

$$ {}_iP^{(n)}_{i,j;i+1,v} = \sum_{h=1}^{m}\sum_{k=0}^{\infty} {}_iP^{(n-1)}_{i,j;i+k,h}\cdot (A_k)_{hv}, \tag{5.12}$$

with the convention ${}_iP^{(0)}_{i,j;i,h} = \delta_{jh}$ (so that the k = 0 term covers the case n = 1), where $(A_k)_{hv} = {}_{i+k}P^{(1)}_{i+k,h;i+1,v}$ is the one-step probability of moving from level i + k to level i + 1.


Summing over n yields

$$R^{(1)}_{jv} = \sum_{n=1}^{\infty} {}_iP^{(n)}_{i,j;i+1,v} = \sum_{n=1}^{\infty}\sum_{h=1}^{m}\sum_{k=0}^{\infty} {}_iP^{(n-1)}_{i,j;i+k,h}(A_k)_{hv} = \sum_{k=0}^{\infty}\sum_{h=1}^{m}\left(\sum_{n=0}^{\infty}{}_iP^{(n)}_{i,j;i+k,h}\right)(A_k)_{hv} = \sum_{k=0}^{\infty}\big(R^{(k)}A_k\big)_{jv} = \left(\sum_{k=0}^{\infty}R^kA_k\right)_{jv}, \tag{5.13}$$

where the last equality follows from Lemma 1.2.2 of [12] (i.e. $R^{(k)} = R^k$). Since the matrix on the left equals $R^{(1)} = R$, equation (5.9) is satisfied.

II. We now show that the matrix R is the minimal nonnegative solution to the matrix equation (5.10).

Define the matrices R(N, k) with elements

$$R_{jv}(N,k) = \sum_{n=1}^{N} {}_iP^{(n)}_{i,j;i+k,v}, \quad k \ge 1,\ i \ge 0,\ 1 \le j \le m,\ 1 \le v \le m. \tag{5.14}$$

Hence, adding equation (5.12) for n ranging from 1 to N,

$$R_{jv}(N,1) = \sum_{n=1}^{N} {}_iP^{(n)}_{i,j;i+1,v} = \sum_{n=1}^{N}\sum_{h=1}^{m}\sum_{k=0}^{\infty} {}_iP^{(n-1)}_{i,j;i+k,h}\cdot (A_k)_{hv} = \sum_{k=0}^{\infty}\sum_{h=1}^{m}\left(\sum_{n=0}^{N-1} {}_iP^{(n)}_{i,j;i+k,h}\right)(A_k)_{hv} = \sum_{k=0}^{\infty}\big(R(N-1,k)A_k\big)_{jv}. \tag{5.15}$$


In matrix form,

$$R(N,1) = \sum_{k=0}^{\infty} R(N-1,k)A_k. \tag{5.16}$$

Suppose that the matrix Y is any nonnegative solution of (5.10). Consider the sequence {Y(N), N ≥ 0} of matrices obtained by successive substitution in (5.10), starting with Y(0) = 0. Notice that Y(1) = A_0 ≥ 0 = Y(0). Based on the fact that the A_k and the Y(N) are nonnegative,

$$Y(2) = \sum_{k=0}^{\infty}[Y(1)]^k A_k = A_0 + Y(1)A_1 + [Y(1)]^2A_2 + \cdots \ge A_0 = Y(1), \tag{5.17}$$

$$Y(3) = \sum_{k=0}^{\infty}[Y(2)]^k A_k \ge \sum_{k=0}^{\infty}[Y(1)]^k A_k = Y(2), \tag{5.18}$$

since the map $X \mapsto \sum_{k} X^k A_k$ is monotone on nonnegative matrices.

By repeated substitution we obtain, for every N,

$$Y(N) = \sum_{k=0}^{\infty}[Y(N-1)]^k A_k = A_0 + Y(N-1)A_1 + [Y(N-1)]^2A_2 + \cdots \ge \underbrace{A_0 + Y(N-2)A_1 + [Y(N-2)]^2A_2 + \cdots}_{=\,Y(N-1)} = Y(N-1). \tag{5.19}$$

The above process shows that the sequence $(Y(N))_{N\ge 0}$ is increasing everywhere. Then $\lim_{N\to\infty} Y(N)$ exists and, by induction, $Y(N) \le Y$ for all N. (5.20)


For n ≥ k + 1, we have

$$ {}_iP^{(n)}_{i,j;i+k+1,v} = \sum_{h=1}^{m}\sum_{r=0}^{n} {}_iP^{(r)}_{i,j;i+k,h}\cdot {}_{i+k}P^{(n-r)}_{i+k,h;i+k+1,v} = \sum_{h=1}^{m}\sum_{r=0}^{n} {}_iP^{(r)}_{i,j;i+k,h}\cdot {}_iP^{(n-r)}_{i,h;i+1,v}. \tag{5.21}$$

Adding the above equation for n ranging from 1 to N − 1, we obtain

$$\begin{aligned}
[R(N-1,k+1)]_{jv} = \sum_{n=1}^{N-1} {}_iP^{(n)}_{i,j;i+k+1,v} &= \sum_{h=1}^{m}\sum_{n=1}^{N-1}\sum_{r=0}^{n} {}_iP^{(r)}_{i,j;i+k,h}\cdot {}_iP^{(n-r)}_{i,h;i+1,v} \\
&= \sum_{h=1}^{m}\sum_{r=0}^{N-1}\sum_{n=0}^{N-1-r} {}_iP^{(r)}_{i,j;i+k,h}\cdot {}_iP^{(n)}_{i,h;i+1,v} \\
&\le \sum_{h=1}^{m}\left(\sum_{r=0}^{N-1} {}_iP^{(r)}_{i,j;i+k,h}\right)\left(\sum_{n=0}^{N-1} {}_iP^{(n)}_{i,h;i+1,v}\right) \\
&= \sum_{h=1}^{m} [R(N-1,k)]_{jh}\,[R(N-1,1)]_{hv}, \tag{5.22}
\end{aligned}$$

where the second equality uses the fact that the taboo probability is independent of i. We write the above inequality in matrix form,

$$R(N-1,k+1) \le R(N-1,k)\,R(N-1,1). \tag{5.23}$$

By induction, we obtain for k ≥ 1

$$R(N-1,k+1) \le R(N-1,k)\,R(N-1,1) \le R(N-1,k-1)\,[R(N-1,1)]^2 \le \cdots \le [R(N-1,1)]^{k+1}. \tag{5.24}$$

Substituting (5.24) into (5.16) then gives

$$R(N,1) \le \sum_{k=0}^{\infty} [R(N-1,1)]^k A_k, \quad N \ge 2. \tag{5.25}$$

We now have R(1, 1) = A_0 = Y(1) and

$$R(2,1) \le \sum_{k=0}^{\infty} [R(1,1)]^k A_k = \sum_{k=0}^{\infty} [Y(1)]^k A_k = Y(2). \tag{5.26}$$

By induction, it follows that R(N, 1) ≤ Y(N) for N ≥ 1. The sequence of matrices R(N, 1) is nondecreasing and nonnegative by its definition, so R(N, 1) → R as N → ∞ by the monotone convergence theorem. Therefore R ≤ Y, i.e. R is the minimal nonnegative solution of the matrix equation

$$X = \sum_{k=0}^{\infty} X^k A_k. \qquad\square$$

Lemma 1 shows that the nonnegative matrix R is finite, and by Lemma 2, for a QBD process (where A_k = 0 for k ≥ 3), R is the minimal nonnegative solution of

$$R^2A_2 + RA_1 + A_0 = 0. \tag{5.27}$$

Another useful theorem relating R and x is given here (see [12], Theorem 3.1.1).

Theorem 4. The irreducible Markov process $\tilde{Q}$ is positive recurrent if and only if the minimal nonnegative solution R of the matrix-quadratic equation

$$R^2A_2 + RA_1 + A_0 = 0 \tag{5.28}$$

has all its eigenvalues inside the unit disk and the finite system of equations

$$x_0(B_0 + RB_1) = 0, \qquad x_0(I-R)^{-1}e = 1 \tag{5.29}$$

has a (unique) positive solution x_0.

So far we have proved that the continuous-time QBD process has the matrix-geometric stationary probability $x_k = x_0R^k$ for k ≥ 0. We now construct a simple iterative algorithm to find R by approximating the CTMC by a discrete-time Markov chain. Although the QBD process has an infinite number of levels, an iterative method is possible. In fact, there are other methods for finding R; we give one that is simple to program.

• Let ν be the largest diagonal element of −A_1 (recall that A_1 has negative diagonal elements). It is then the largest diagonal element, in absolute value, of the generator $\tilde{Q}$ of the continuous-time QBD process.

• By Jensen's uniformization, the discrete-time process $\tilde{P}$ is determined by $\tilde{P} = I + \frac{1}{\nu}\tilde{Q}$. Note that all the elements outside the diagonal of the generator $\tilde{Q}$ become probabilities and, after adding the identity matrix, the diagonal elements become probabilities as well; each row sums to 1. We thus have a discrete-time QBD which retains the block tridiagonal form and produces the same matrix-geometric solution as $\tilde{Q}$. Clearly, a subprobability matrix corresponding to A_1 is determined (see Asmussen [19]),

$$L = I + \frac{1}{\nu}A_1.$$

Then equation (5.28) becomes

$$R^2A_2 + R\big(\nu(L-I)\big) + A_0 = 0. \tag{5.30}$$

• We now start the iterative algorithm for R. Take the initial value R(0) = 0 and substitute it into (5.30); solving for the linear term yields R(1):

$$R(1) = \frac{1}{\nu}\Big(R^2(0)A_2 + \nu R(0)L + A_0\Big) = \frac{1}{\nu}A_0. \tag{5.31}$$

• Let R(0) = R(1) be the new iterate and substitute it into (5.31), yielding R(2):

$$R(2) = \frac{1}{\nu}\Big(R^2(1)A_2 + \nu R(1)L + A_0\Big). \tag{5.32}$$

• Continuing the iteration we get

$$R(k+1) = \frac{1}{\nu}\Big(R^2(k)A_2 + \nu R(k)L + A_0\Big), \tag{5.33}$$

and, iterating until a prescribed accuracy is reached (say, 10^{-20}), the final value of R is obtained. Since B_0e + A_0e = B_1e + A_1e + A_0e = (A_0 + A_1 + A_2)e = 0 and x_0(B_0 + RB_1) = 0, we have

$$RA_2e = A_0e, \tag{5.34}$$

which shows how the transition rates from level i to i + 1 are related to those from level i + 1 to i. Note that A_0 and A_2 are nonnegative and R is finite (refer to Lemma 1). After each iteration, R increases monotonically; thus R converges. In the end, the stationary probability x corresponding to R can be found by solving (5.29).
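The iteration (5.33) is easy to program. A minimal sketch (the function name is my own), verified on the scalar M/M/1 case, where the minimal nonnegative solution of (5.28) is known to be R = ρ = λ/µ:

```python
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Iterate R(k+1) = (1/nu)(R(k)^2 A2 + nu R(k) L + A0) with L = I + A1/nu,
    starting from R(0) = 0; converges monotonically to the minimal nonnegative
    solution of R^2 A2 + R A1 + A0 = 0."""
    m = A0.shape[0]
    nu = np.max(-np.diag(A1))          # uniformization rate
    L = np.eye(m) + A1 / nu            # subprobability matrix for A1
    R = np.zeros((m, m))
    for _ in range(max_iter):
        R_new = (R @ R @ A2 + nu * R @ L + A0) / nu
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    return R

# sanity check with M/M/1 (1x1 blocks): R must equal rho = lam / mu
lam, mu = 0.6, 0.8
R = solve_R(np.array([[lam]]), np.array([[-(lam + mu)]]), np.array([[mu]]))
print(R[0, 0])   # ≈ 0.75
```

Given R, the boundary vector x_0 follows from the linear system (5.29), after which $x_k = x_0R^k$ gives the full stationary distribution.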

5.3 Mean Waiting Time in QBD Process

In most situations, customers leave the system after completing their service; thus the study of an absorbing Markov chain is necessary. In terms of the rate matrix R, stationary waiting time distributions in the GI/PH/1 queue can be derived from highly structured matrices, as proved by Neuts in [12]. In this section, we give our formulation for the GI/PH/1 system. The formulation for the two-server system is specified in the next chapter.


Consider the absorbing Markov chain with infinitesimal generator $\tilde{Q}_0$ (arrivals are not considered in our models),

$$\tilde{Q}_0 = \begin{bmatrix} 0 & & & & \\ A_2 & D & & & \\ & A_2 & D & & \\ & & A_2 & D & \\ & & & \ddots & \ddots \end{bmatrix},$$

where the m states in level 0 are absorbing. Once entering the absorbing states, customers start receiving service. If there are two servers in the system, a customer receives service at server 1 or server 2, depending on which one is not occupied; we assume server 1 always offers service first if both servers are idle.

The state space of this absorbing Markov chain is E = {0 ∪ (i, j), i ≥ 0, 1 ≤ j ≤ m}. All matrices in $\tilde{Q}_0$ are square matrices of order m. D has nonnegative off-diagonal but negative diagonal elements; $D^{-1}$ exists because of its triangular form. Define the m × m matrix $K = -D^{-1}A_2$, which is stochastic because $De + A_2e = 0$.

5.3.1 Mean Waiting Time in Queue

Denote the initial probability vector of $\tilde{Q}_0$ by y(0) = [y_0(0), y_1(0), · · · ]. The quantity $y_k(0)$ is the probability that the process $\tilde{Q}_0$ is in some phase of level k, k ≥ 0, at time 0. Similarly, $y_{kj}(x)$ is the probability that the process is in level k, phase j, at time x, for k ≥ 0, 1 ≤ j ≤ m. Recalling the steady-state probability x of $\tilde{Q}$, it is clear that $y_k(0) = x_k = x_0R^k$.

Denote by W(x) the probability vector that absorption (in level 0) occurs before time x. Clearly, $W(x) = y_0(x)$, x ≥ 0. (5.35)

To find the distribution of the waiting time, we consider the differential equations $y'(x) = y(x)\tilde{Q}_0$ with initial probability y(0), derived from the Chapman-Kolmogorov forward equations. They reduce to

$$y_0'(x) = y_1(x)A_2, \tag{5.36}$$
$$y_k'(x) = y_k(x)D + y_{k+1}(x)A_2, \quad k \ge 1,\ x \ge 0. \tag{5.37}$$

It follows that $W'(x) = y_0'(x) = y_1(x)A_2$.

Thus, the Laplace-Stieltjes transform (LST) of W(x) is given by

$$w(s) = \int_{0}^{\infty} e^{-sx}\,W'(x)\,dx = \int_{0}^{\infty} e^{-sx}\,y_1(x)A_2\,dx = \sum_{k=0}^{\infty} y_k(0)\big[(sI-D)^{-1}A_2\big]^k, \quad s \ge 0,$$

where the last equality follows by repeatedly substituting (5.37) and integrating by parts.

It is a row vector of order m, and its components give the probabilities that absorption occurs in state (0, j), 1 ≤ j ≤ m. Setting s = 0,

$$w(0) = \sum_{k=0}^{\infty} y_k(0)\big[(-D)^{-1}A_2\big]^k. \tag{5.38}$$

Note that for s = 0, w(0)e = 1, since $K = -D^{-1}A_2$ is stochastic, as we mentioned.

For later use, we introduce a matrix $\tilde{K}$ such that $I - K + \tilde{K}$ is nonsingular. For an irreducible matrix K it can (in most cases) be chosen as $\tilde{K} = e \cdot \kappa$, where κ is the invariant probability vector of K.


The mean time customers spend in the queue is then given by

$$\begin{aligned}
EW_q = -w'(0)e &= \sum_{k=1}^{\infty} y_k(0)\sum_{v=0}^{k-1} K^v(-D)^{-1}K^{k-1-v}e = \sum_{k=1}^{\infty} y_k(0)\sum_{v=0}^{k-1} K^v(-D)^{-1}e \\
&= \left[\sum_{k=1}^{\infty} y_k(0)\sum_{v=0}^{k-1} K^v\right](I-K+\tilde{K})(I-K+\tilde{K})^{-1}(-D)^{-1}e \\
&= \left[\sum_{k=1}^{\infty} y_k(0) - \sum_{k=1}^{\infty} y_k(0)K^k + \sum_{k=1}^{\infty} y_k(0)\sum_{v=0}^{k-1} K^v\tilde{K}\right](I-K+\tilde{K})^{-1}(-D)^{-1}e. \tag{5.39}
\end{aligned}$$

Letting

$$Y = \sum_{k=0}^{\infty} y_k(0), \qquad Y^{*} = \sum_{k=1}^{\infty} y_k(0)\sum_{v=0}^{k-1} K^v,$$

we obtain

$$EW_q = -w'(0)e = Y^{*}(-D)^{-1}e = \big[Y - w(0) + Y^{*}\tilde{K}\big](I-K+\tilde{K})^{-1}(-D)^{-1}e.$$

This is the explicit formula for the mean waiting time of the GI/PH/1 queuing system ([12], p. 134-135). Notice that the term $\sum_{k=1}^{\infty} y_k(0)$ collects the probabilities that the absorbing Markov chain is in level k ≥ 1 at time 0; including the k = 0 term, the total probability mass equals 1.

So far, an explicit expression for the mean waiting time in the queue of the M/PH/1 queuing model has been specified. With efficient programming, numerical solutions can be obtained; these are given in the next chapter.
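As a sanity check of the final formula: in the M/M/1 special case (m = 1, D = [−µ], A_2 = [µ]), the matrix $K = -D^{-1}A_2 = [1]$ is stochastic, $Y^*$ reduces to $\sum_k k\,y_k(0) = E[L]$, and the formula $EW_q = Y^*(-D)^{-1}e$ collapses to the textbook value ρ/(µ − λ). A quick numerical confirmation:

```python
# M/M/1 sanity check of EWq = Y* (-D)^{-1} e with y_k(0) = (1 - rho) rho^k:
lam, mu = 0.6, 0.8
rho = lam / mu
Y_star = rho / (1 - rho)       # sum_{k>=1} k * y_k(0) = E[L], since K = [1]
EWq = Y_star * (1.0 / mu)      # Y* (-D)^{-1} e with D = [-mu]
print(EWq)                     # equals rho / (mu - lam) = 3.75
assert abs(EWq - rho / (mu - lam)) < 1e-12
```

Agreement with the classical M/M/1 waiting-time formula gives some confidence in the matrix expression before applying it to genuine phase-type services.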

5.4 Waiting Time Distributions in the GI/PH/1 System

An explicit expression for the waiting time distribution in the GI/PH/1 queue is given in ([12], Theorem 3.9.2). We will develop computational algorithms for all of its coefficients.

Theorem 5. The vector W(·), with transform given in formula (5.38), may be found by implementing the following algorithmic steps.
